# Locality sensitive hashing via mechanical behavior

Emma Lejeune
Department of Mechanical Engineering, Boston University, Boston, MA
<EMAIL_ADDRESS>

Peerasait Prachaseree
Department of Mechanical Engineering, Boston University, Boston, MA 02215
<EMAIL_ADDRESS>

###### Abstract

From healing wounds to maintaining homeostasis in cyclically loaded tissue, living systems have a phenomenal ability to sense, store, and respond to mechanical stimuli. Broadly speaking, there is significant interest in designing engineered systems to recapitulate this incredible functionality. In engineered systems, we have seen significant recent computationally driven advances in sensing and control. And, there has been a growing interest — inspired in part by the incredible distributed and emergent functionality observed in the natural world — in exploring the ability of engineered systems to perform computation through mechanisms that are fundamentally driven by physical laws. In this work, we focus on a small segment of this broad and evolving field: locality sensitive hashing via mechanical behavior. Specifically, we will address the question: can mechanical information (i.e., loads) be transformed by mechanical systems (i.e., converted into sensor readouts) such that the mechanical system meets the requirements for a locality sensitive hash function? Overall, we not only find that mechanical systems are able to perform this function, but also that different mechanical systems vary widely in their efficacy at this task. Looking forward, we view this work as a starting point for significant future investigation into the design and optimization of mechanical systems for conveying mechanical information for downstream computing.
Corresponding author: Emma Lejeune.

_Keywords_ physical computing $\cdot$ morphological computing $\cdot$ programmable matter $\cdot$ mechanical hashing

## 1 Introduction

From the cells embedded in our skin deciding if they should activate to heal a wound [1, 2], to robotic systems dexterously manipulating delicate objects [3, 4, 5], the ability to effectively transmit and interpret mechanical signals can lead to incredible functionality [6, 7, 8]. In natural systems, this ability leads to complex emergent behavior such as the maintenance of homeostasis in mechanically loaded tissue [9, 10]. And, in engineered systems, we can design for responsiveness by controlling the transmission of mechanical signals through material selection and structural form [11, 12]. Transmission and interpretation of mechanical signals is especially relevant to growing interest in “morphological computing” [13], “physical learning” [14], and “programmable matter” [15, 16]. Broadly speaking, these are all paradigms where a physical system is either programmed, or used to perform, some form of “computation.” For example, researchers have experimentally realized physical logic gates [17, 18, 19], as well as responsive mechanisms that trigger functional behavior when activated [20, 21, 22]. And, within the scope of dynamical systems, researchers have used physical bodies to perform “reservoir computing,” where a higher dimensional computational space is created by multiple non-linear responses to an input signal [23, 24], and cryptographic hashing, where researchers have shown that chaotic hydrodynamics can be used to store and manipulate information in a fluid system [25]. In this paper, we will focus on a small segment of this broad and emerging field: locality sensitive hashing via mechanical behavior. Here, our goal is to explore this specific type of computation in the context of mechanical systems.
Hashing, the process of converting arbitrarily sized inputs to outputs of a fixed size, is schematically illustrated in Fig. 1a [26]. In most popular applications of hashing (e.g., storing sensitive information), it is desirable to minimize collisions (i.e., the occurrence of different inputs mapping to the same output) and obfuscate the relationship between inputs and outputs. However, there has been a growing interest in alternative types of hashing algorithms – specifically hashing algorithms for applications such as similarity search, see Fig. 1b [27]. In these algorithms, the goal is to compress input data while preserving essential aspects of the relationship between input data points. In Section 2.1, we lay out the mathematical definition for “locality sensitive hashing” [28, 29]. In this paper, we will focus on the concept schematically illustrated in Fig. 1c: can mechanical information (i.e., loads) be transformed by mechanical systems (i.e., converted into sensor readouts) such that the mechanical system meets the requirements for a locality sensitive hash function? In exploring this specific type of computing in mechanical systems, our goal is to lay a solid conceptual foundation for future applications of physical computing where mechanical systems are tailored to act as a “physical computing layer” that transforms mechanical information to enable downstream responsiveness and control. The remainder of the paper is organized as follows. In Section 2, we further define locality sensitive hashing, elaborate on the concept of mechanical systems as locality sensitive hash functions, and define an example problem to explore the performance of different mechanical systems for locality sensitive hashing. Then, in Section 3, we show the results of our investigation of our example problem, and conclude in Section 4.
Overall, our goal is threefold: (1) to introduce the concept of locality sensitive hashing in the context of mechanical systems, (2) to provide a straightforward “proof of concept” that mechanical systems can be used to perform locality sensitive hashing, and (3) to lay the foundation for future investigations on optimizing mechanical behavior to perform hashing for similarity search.

Figure 1: a) Schematic illustration of a generally defined hash function; b) Schematic illustration of the requirements for locality sensitive hashing where $p_{\mathrm{collision}}$ is the probability of a hash collision that is larger for inputs that are closer together; c) Schematic illustration of a mechanical system performing locality sensitive hashing.

## 2 Methods

We will begin in Section 2.1 by defining Locality Sensitive Hashing (LSH), then in Section 2.2 we will demonstrate how the concept behind LSH can be applied to mechanical systems as a “proof of concept.” Finally, in Section 2.3 we define an example problem that will set up the main investigation presented in this paper.

### 2.1 Locality sensitive hashing

In simple terms, a “hash function” is a function that maps input data of an arbitrary size to a fixed size output, referred to as a “hash value” [30]. This is schematically illustrated in Fig. 1a. Hash functions have broad societal applications ranging from storing passwords, to checking if files match, to enabling data structures (e.g., dictionaries in Python) [31]. For these applications, hash functions are designed to minimize “hash collisions” (i.e., different inputs mapping to the same hash value), typically by converting similar yet different inputs to drastically different hash values [32]. Therefore, for typical hash algorithms, it would not make sense to perform downstream applications that rely on the distance between hash values.
However, there has been recent interest in an alternative type of hash algorithm referred to as “locality sensitive hashing” where the goal is to create hash functions that encourage collisions between similar inputs [27]. For these Locality Sensitive Hash (LSH) approaches, similar inputs should lead to similar or identical hash values. To date, these techniques have primarily been used for dimensionality reduction prior to nearest neighbor search [33]. More formally, we can introduce LSH through the following definition [27]. First, we describe our input data as points in a $N$ dimensional metric space $\mathcal{M}$ with distance function $d$. Here we will choose $d$ as the $L_{\infty}$ norm, i.e., $||\mathbf{x}||_{\infty}=\max\{|x_{1}|,|x_{2}|,...,|x_{N}|\}$ where $\mathbf{x}$ is the difference between two points in $\mathcal{M}$. (Alternative choices of norm would also be acceptable; here we choose $||L||_{\infty}$ to simplify future calculations, see Appendix A.1 and A.2. We note briefly that our GitHub page hosts the code necessary to re-implement the numerical portions of our study with $||L||_{2}$, which leads to very similar results and identical conclusions to what we find using $||L||_{\infty}$.) We define a family of hash functions $\mathcal{F}$ where, for any two points $q_{i}$ and $q_{j}$ in $\mathcal{M}$ and any hash function $h$ chosen uniformly at random from $\mathcal{F}$, the following conditions hold:

$$\begin{aligned} \mathrm{if}\,\,\,d(q_{i},\,q_{j})\leq R &:\; Pr[h(q_{i})=h(q_{j})]\geq p_{1} \\ \mathrm{if}\,\,\,d(q_{i},\,q_{j})>cR &:\; Pr[h(q_{i})=h(q_{j})]\leq p_{2} \end{aligned} \quad (1)$$

where threshold $R>0$, approximation factor $c>1$, $Pr[\cdot]$ denotes probability, and $0\leq p_{1},p_{2}\leq 1$. In this work, we will store hash values $h(q_{i})$ with the numpy.float64 data type, thus $Pr[h(q_{i})=h(q_{j})]\approx 0$.
In physical implementations of these systems, the precision of $h(q_{i})$ will depend on the choice of sensor. Therefore, in establishing our mechanical analogue to a locality sensitive hashing algorithm, we re-write eqn. 1 in terms of a positive tolerance $S$ as:

$$\begin{aligned} \mathrm{if}\,\,\,d(q_{i},\,q_{j})\leq R &:\; Pr[d\big(h(q_{i}),\,h(q_{j})\big)<S]\geq p_{1} \\ \mathrm{if}\,\,\,d(q_{i},\,q_{j})>cR &:\; Pr[d\big(h(q_{i}),\,h(q_{j})\big)<S]\leq p_{2} \end{aligned} \quad (2)$$

which is elaborated on in Appendix A.1. With this definition, a family $\mathcal{F}$ is referred to as a “Locality Sensitive Hash” family, or alternatively as ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive, if $p_{1}>p_{2}$. In simple and functional terms, illustrated schematically in Fig. 1b, a family of hash functions will exhibit LSH behavior if the probability of a hash collision is higher for points that are closer together in the input space.

### 2.2 Introduction to mechanical systems as locality sensitive hash functions

In this paper, we will explore the idea of using mechanical behavior to perform locality sensitive hashing, where $\mathcal{F}$ will define a class of mechanical systems. In Fig. 1c, we schematically illustrate our approach to defining this problem. Specifically, we will consider a vertical distributed load $w(x)$ applied on the surface of a mechanical system. The mechanical system will be drawn from a family $\mathcal{F}$, where the mechanical behavior of the system will lead to multiple force sensor readouts at discrete locations, treated as hash values. For this setup, we define the input continuous distributed load $w(x)$ as an evenly spaced $N\times 1$ dimensional vector (i.e., $w(x)$ is a continuous interpolation of $N$ points) and the process of “hashing” entails converting this $N$ dimensional vector into $ns$ (number of sensors) force sensor readouts.
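As a minimal sketch of the collision criterion in eqn. 2, with $d$ taken as the $L_{\infty}$ norm (the tolerance value $S=0.01$ is the one used later in Section 3.1):

```python
import numpy as np

def collide(h_i, h_j, S=0.01):
    """Eqn (2) collision test: two hash values collide when every
    sensor readout agrees to within the sensor tolerance S
    (i.e., their L-infinity distance is below S)."""
    h_i, h_j = np.asarray(h_i, dtype=float), np.asarray(h_j, dtype=float)
    return bool(np.max(np.abs(h_i - h_j)) < S)

# three-sensor readouts that agree to within S = 0.01 at every sensor
print(collide([0.5, 0.25, 0.25], [0.495, 0.2525, 0.2525]))  # True
# a readout that differs by more than S at the first sensor
print(collide([0.5, 0.25, 0.25], [0.6, 0.20, 0.20]))        # False
```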
In future work the distributed load could be conceptualized as either multi-dimensional, i.e., $w(x,y)$, or displacement driven, and the sensor readouts could capture alternative forms of behavior (e.g., strain).

Figure 2: a) Schematic illustration of $\mathcal{F_{\mathrm{ss}}}$, a family of simply supported beams with two supports ($A$ and $B$); b) Schematic illustration of $\mathcal{F_{\mathrm{ss-c3}}}$, a family of simply supported composite beams with three supports ($A$, $B$, and $C$).

Here we will define our first family of mechanical hash functions $\mathcal{F}_{ss}$ as a family of simply supported beams, illustrated in Fig. 2a, where supports $A$ and $B$ are randomly placed at positions $l_{a}$ and $l_{b}$ such that they are separated by a minimum distance $mL$ where $0<m<1$ and $L$ is the length of the beam. Here, $w(x)$ will be the input distributed load, and the force at each of the $ns$ sensors is the hash value output. Following eqn. 2, we will treat two hash values as a collision if the readouts at all sensors are within a tolerance $S$. We can then assess if the conditions defined in eqn. 1-2 hold for $\mathcal{F}_{ss}$. In Appendix A.1, we expand on this in detail and explicitly define $R$, $cR$, $p_{1}$, and $p_{2}$ for $\mathcal{F}_{ss}$. However, it only requires a simple thought experiment to demonstrate that $\mathcal{F}_{ss}$ is not ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive. In brief, we can choose two points in our input space ($w_{1}$, $w_{2}$) that are arbitrarily far apart ($d(w_{1},w_{2})>cR$ for an arbitrarily large $cR$) yet still lead to a hash collision for all possible mechanical hash functions in $\mathcal{F}_{ss}$ ($p_{2}=1$).
In the context of mechanics, because all distributed loads with the same resultant force and centroid lead to the same reaction forces, simply supported beams do not meet the definition for locality sensitive mechanical hash functions. However, if we consider even a slightly more complicated family of mechanical systems, simply supported composite beams with $3$ supports, referred to as $\mathcal{F}_{ss-c3}$ and illustrated in Fig. 2b, the situation changes. For $\mathcal{F}_{ss-c3}$, we consider supports $A$, $B$, and $C$ that are randomly placed at positions $l_{a}$, $l_{b}$, and $l_{c}$ such that they are each separated by a minimum distance $mL$ where $0<m<0.5$, and segments $AB$ and $BC$ are connected through a roller support. Because $l_{a}$, $l_{b}$, and $l_{c}$ change for each hash function in $\mathcal{F}_{ss-c3}$, two far apart distributed loads will collide with $p_{2}<1$. Thus, if we define $R$ as small enough that the readout at each sensor will be within tolerance $S$, such that $p_{1}=1$, we can show that $\mathcal{F}_{ss-c3}$ is ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive. An explicit computation of $R$, $c$, $p_{1}$, and $p_{2}$ for $\mathcal{F}_{ss-c3}$ is expanded on in Appendix A.2. Overall, this simple demonstration is a proof of concept for mechanical systems as locality sensitive hash functions. Though this framework is straightforward, it is important to define, as this work will lay the foundation for addressing more complex problems where we can consider broader definitions of mechanical hash functions, inputs, outputs, and system structure. In Section 3, we will examine the functional performance of these simple beams alongside more complicated mechanical systems that may lead to more desirable functional LSH behavior. In addition, the straightforward LSH approach defined here will serve as a baseline for novel strategies to optimize mechanical systems for downstream signal processing.
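The thought experiment for $\mathcal{F}_{ss}$ can be checked numerically. Below is a minimal sketch (the support positions and load shapes are hypothetical examples, not values from the study): a simply supported beam “hashes” a load into its two support reactions, so any two loads with the same resultant force and centroid collide, no matter how far apart they are in the input space.

```python
import numpy as np

L = 10.0             # beam length (the paper's top-surface length)
la, lb = 2.0, 8.0    # example support positions for one member of F_ss
x = np.linspace(0.0, L, 1000)
dx = x[1] - x[0]

def hash_ss(w):
    """'Hash' a distributed load with a simply supported beam:
    the two support reactions, from static equilibrium alone."""
    W = np.sum(w) * dx                 # resultant force (Riemann sum)
    xbar = np.sum(w * x) * dx / W      # centroid of the load
    Rb = W * (xbar - la) / (lb - la)   # moment balance about support A
    Ra = W - Rb                        # vertical force balance
    return np.array([Ra, Rb])

# two qualitatively different loads, both symmetric about x = L/2
# (same centroid) and normalized to the same unit resultant
w_uniform = np.ones_like(x)
w_parab = (x / L) * (1.0 - x / L)
w_uniform /= np.sum(w_uniform) * dx
w_parab /= np.sum(w_parab) * dx

print(np.max(np.abs(w_uniform - w_parab)))                # far apart inputs
print(np.allclose(hash_ss(w_uniform), hash_ss(w_parab)))  # True: collision
```

Because the reactions depend only on the resultant and centroid, every member of $\mathcal{F}_{ss}$ collides on this pair, which is the $p_{2}=1$ argument above.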
### 2.3 Example problem definition

Beyond satisfying the criteria defined in eqn. 1, we are interested in assessing the functional utility of mechanical systems for performing locality sensitive hashing. To this end, we now define an example problem consisting of example loading, defined in Section 2.3.1, example mechanical systems, defined in Section 2.3.2, and evaluation metrics, defined in Section 2.3.3. In brief, we will investigate how different mechanical systems transform mechanical signals in the context of LSH.

Figure 3: a) Illustration of the $20$ classes of applied loads; b) Schematic illustration of the simply supported and ensemble beam mechanical systems (note an example ensemble is explicitly illustrated in Appendix Fig. 8); c) Schematic illustration of the rectangular domains with different depths ($depth=[1.0,2.5,5.0,10.0,20.0]$) and number of sensors ($ns=[2,3,4,5]$). For all rectangular domains, we simulate the bottom fixed only at sensors ($\square$) and the whole bottom fixed ($\diamond$); d) Schematic illustration of the lattice domains ($L3$, $L4$, $L5$); e) Schematic illustration of the custom domains ($C1$, $C2$, $C3$). Note that all lattice and custom domains have $depth=10.0$. In b-e, the markers next to each mechanical system match the markers used in Fig. 4-5.

#### 2.3.1 Example loads

In Fig. 3a, we show our $400$ randomly generated applied loads across $20$ different categorical classes. In brief, we consider: constant loads, linear loads, piecewise linear loads, absolute value sinusoidal loads, and kernel density estimate loads based on randomly generated point densities. Note that, as illustrated in Fig. 1c, we apply all loads pointing downwards. Building on the framework laid out in Section 2.2, these loads are chosen to: (1) pose a risk of unwanted hash collisions across categories (i.e., identical centroids and resultant forces), while (2) being qualitatively different.
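As a minimal sketch of this kind of load construction (the base profiles and the smooth random-Fourier noise below are hypothetical stand-ins for the study's actual generators, which use correlated Perlin noise), each load is a non-negative profile normalized to a fixed resultant:

```python
import numpy as np

N = 1000
x = np.linspace(0.0, 1.0, N)

def smooth_noise(rng, n_modes=8, amp=0.1):
    """Smooth correlated 1D noise from a few random low-frequency
    Fourier modes (a stand-in for the Perlin noise used in the study)."""
    k = np.arange(1, n_modes + 1)
    coeffs = rng.normal(size=n_modes) / k  # decaying amplitudes -> smoothness
    return amp * (coeffs[:, None] * np.sin(np.pi * np.outer(k, x))).sum(axis=0)

def make_load(base, rng):
    """One example load: base profile plus correlated noise, clipped to be
    non-negative (loads point downward) and normalized to unit resultant."""
    w = np.clip(base + smooth_noise(rng), 0.0, None)
    return w / w.sum()

rng = np.random.default_rng(0)
loads = {
    "constant": make_load(np.ones(N), rng),
    "linear": make_load(2.0 * x, rng),
    "abs_sine": make_load(np.abs(np.sin(2.0 * np.pi * x)), rng),
}
```

Normalizing every load to the same resultant is what makes cross-category collisions a genuine risk for the simple beam systems above.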
For each of the $20$ categorical classes, we generate $20$ examples with different selections of correlated Perlin noise [34, 35]. Specifically, we add Perlin noise with a randomly selected seed and octave (random integer with range $[2,10]$) – this is the source of the variation across each category in the curves illustrated in Fig. 3a. Additional details describing all categories of load are given in Appendix B.2, and the link to the code for re-generating these loads, including selected random seeds, is given in Section 5.

#### 2.3.2 Example mechanical systems

In Fig. 3b-e, we show the mechanical systems investigated in this study. For all examples, we set the top surface length $L=10$ (length units). In brief, we consider the following classes of mechanical systems:

* Simply supported and simply supported composite beams defined in Section 2.2. We will consider composite beams with up to $10$ supports, and we will report the performance of both single beam instances and hard voting based ensemble behavior for a total of $17$ different scenarios (see Fig. 3b). For the beam ensembles, each of the $100$ composite beams will have different randomly generated support locations. The final performance of each ensemble will then be represented as a single value that combines information from all $100$ randomly generated composite beams. An illustration of a representative beam ensemble is included in Appendix B.1.
* Homogeneous rectangular domains with variable depth, number of sensors, and fixity. We will consider rectangular domains with $2$, $3$, $4$, and $5$ force sensors, depths $1.0$, $2.5$, $5.0$, $10.0$, and $20.0$, and both with and without bottom fixity. For reference, a rectangular domain with $depth=1.0$ will have dimension $1.0\times 10.0$, and a rectangular domain with $depth=10.0$ will be a square.
  The combination of $4$ sensor options, $5$ depths, and $2$ bottom fixities leads to a total of $4\times 5\times 2=40$ different rectangular domains (see Fig. 3c, where all potential sensor placements and depths are illustrated).
* Lattice domains with $3$, $4$, and $5$ sensors and corresponding $2\times 2$, $3\times 3$, and $4\times 4$ window grids (see Fig. 3d). All lattice domains have $depth=10.0$.
* Custom domains with three different geometries and variable sensor numbers $ns$ (see Fig. 3e). All custom domains have $depth=10.0$.

For each mechanical system, we run $400$ simulations corresponding to each of the applied loads illustrated in Fig. 3a and report the $y$ direction force at each sensor location. In the context of LSH, the applied loads are the input, and these forces are the hash function output. In all cases, sensor locations are fixed in both the $x$ and $y$ direction, but only the $y$ direction force is used in subsequent analysis. Simply supported beams are simulated in a Python [36] script that we link to in Section 5. All finite element simulations are conducted using the open source finite element software FEniCS [37, 38] and the built-in FEniCS mesh generation software mshr. To avoid numerical artifacts, we simulate all domains as Neo-Hookean materials with $\nu=0.3$ and a fine mesh (mshr mesh parameter set to $200$) of quadratic triangular elements. The link to the code for re-generating all simulation results is given in Section 5. In addition, our code contains a tutorial for designing and simulating user defined architected domains to make it straightforward to expand on the initial results presented in this paper.

#### 2.3.3 Evaluation metrics

Based on the suite of $400$ loads defined in Section 2.3.1 and the $63$ mechanical systems defined in Section 2.3.2 (see Fig. 3), we will have over $25,000$ simulation results to analyze.
To draw conclusions from these results, we will examine the relationship between three different quantities of interest: (1) the probability of a hash collision with respect to the $L^{\infty}$ distance between loads, (2) the Spearman’s rank correlation coefficient $\rho$ between the $L^{\infty}$ distance between loads and the $L^{\infty}$ distance between hash values [39, 40], and (3) the classification accuracy based on a single nearest neighbor (note that there are $20$ categories of loads, with $20$ examples in each category, illustrated in Fig. 3a) [41]. To begin, we will report the probability of hash collision $p_{collision}$ as a function of the $L^{\infty}$ distance between input loads for all $40$ rectangular domains and compare to a baseline simply supported beam from $\mathcal{F}_{ss}$. To approximate $p_{collision}$ vs. $L^{\infty}$ distance for our selection of example loads, we divide all load pairs into five equally sized bins based on the distance between input loads. Then, we compute $p_{collision}$ as the fraction of hash value pairs in each bin with $L^{\infty}<0.01$ (input loads are normalized to sum up to $1$). In Fig. 4, we plot $p_{collision}$ with respect to the average input distance value associated with each equally sized bin. Thus, for each device we will have a curve that shows binned $p_{collision}$ vs. $L^{\infty}$ distance between loads. In Section 3, we visualize this behavior in Fig. 4 as one (limited) approach to observing LSH behavior. Beyond $p_{collision}$ vs. $L^{\infty}$ distance, we will also compute Spearman’s $\rho$, designed to capture the rank correlation between the $L^{\infty}$ distance between loads and the $L^{\infty}$ distance between hash values.
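The binning procedure just described can be sketched as follows (plain numpy; `loads` and `hashes` are assumed to be arrays with one row per simulation):

```python
import numpy as np

def p_collision_curve(loads, hashes, n_bins=5, S=0.01):
    """Approximate p_collision vs. input distance: sort all pairs by the
    L-inf distance between loads, split them into n_bins equally sized
    bins, and report the fraction of hash-value pairs closer than S."""
    loads, hashes = np.asarray(loads), np.asarray(hashes)
    i, j = np.triu_indices(len(loads), k=1)          # all distinct pairs
    d_in = np.max(np.abs(loads[i] - loads[j]), axis=1)
    d_out = np.max(np.abs(hashes[i] - hashes[j]), axis=1)
    bins = np.array_split(np.argsort(d_in), n_bins)  # equally sized bins
    centers = np.array([d_in[b].mean() for b in bins])
    p = np.array([(d_out[b] < S).mean() for b in bins])
    return centers, p
```

As a sanity check, using the identity map as the “hash” (`hashes = loads`) yields a non-increasing curve by construction, since each bin's collision fraction is the share of its own sorted input distances below $S$.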
For all $79,800$ load pairs ($400\times 399/2=79,800$ pairs) we rank order the distances of both the loads and the hash values and then compute $\rho$ as:

$$\rho=1-\frac{6\sum\limits_{i=1}^{n}r_{i}^{2}}{n\,(n^{2}-1)} \quad (3)$$

where $r_{i}$ is the difference between the ranks of the load and hash value distances and $n$ is the number of load pairs. For perfect monotonic correlation $\rho=1$ and for no correlation $\rho=0$. We perform this operation with the Python function scipy.stats.spearmanr() [40]. Though visualizing $p_{collision}$ vs. $L^{\infty}$ is a more intuitive match to the LSH definitions in eqn. 1-2, the quantitative comparison of how well input relationships are preserved is perhaps more interpretable via Spearman’s $\rho$. In Section 3, we visualize Spearman’s $\rho$ for each device both with respect to classification accuracy (Fig. 5), and mechanical system properties (Fig. 6, see also Appendix Fig. 12). The third evaluation quantity is chosen as a set up for future applications in “learning to hash.” In future “learning to hash” applications, the hashing efficacy of the mechanical system will be measured via the performance on a functional task. Here we choose classification accuracy as an example functional task. In brief, each of the $400$ loads belongs to one of $20$ applied load classes ($20$ classes, $20$ loads per class). This is illustrated in Fig. 3a. Here, we compute classification accuracy using a simple nearest neighbor algorithm where we predict the class of a given load based on its hash value. For each of the $400$ loads, we implement a new k-nearest neighbor classifier based on the $399$ other loads and then see what class is predicted for the held out load with $k=1$ [42].
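As a small numerical check of eqn. 3 (valid when there are no tied distances), the closed form can be compared against the equivalent definition, the Pearson correlation of the ranks, here in plain numpy on synthetic stand-in distances:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
d_loads = rng.random(n)                       # stand-in pairwise load distances
d_hash = d_loads ** 2 + 0.05 * rng.random(n)  # noisy monotone transform

def ranks(a):
    """0-based ranks of a 1D array (no ties for continuous samples)."""
    return np.argsort(np.argsort(a))

# eqn (3): rho from squared rank differences
r = ranks(d_loads).astype(float) - ranks(d_hash).astype(float)
rho_eqn3 = 1.0 - 6.0 * np.sum(r ** 2) / (n * (n ** 2 - 1))

# equivalent definition: Pearson correlation of the ranks
rho_ranks = np.corrcoef(ranks(d_loads), ranks(d_hash))[0, 1]

print(np.isclose(rho_eqn3, rho_ranks))  # True (untied ranks)
```

In practice, `scipy.stats.spearmanr` (used in the study) handles tied values via average ranks, which the simple closed form above does not.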
Classification accuracy is then defined based on the ability of this algorithm to predict the class of the held out load:

$$\mathrm{accuracy}=\frac{\mathrm{correct\,predictions}}{\mathrm{total\,predictions}} \quad (4)$$

where the “correct” prediction is which of the $20$ applied load classes (see Fig. 3a) the input load belongs to. We implement this algorithm with scikit-learn [41] and report the average prediction accuracy across all $400$ individual hold out cases. Because there are $20$ identically sized labeled classes according to the problem definition, the baseline prediction accuracy that represents random guessing is $0.05$. In Section 3, classification accuracy is reported in Fig. 5. As a brief note, the link to the code for computing all of these quantities of interest is given in Section 5.

## 3 Results and Discussion

In this Section, we will summarize the results of the investigation detailed in Section 2.3. On one hand, the results of this study are largely intuitive – applied loads influence mechanical response, and different applied loads lead to different mechanical responses. On the other hand, this initial investigation is an important step because it lays the groundwork for significant future investigation in designing mechanical systems that outperform the baseline proof of concept results shown here. Multiple future directions are explicitly stated in Section 4.

Figure 4: Plot of $p_{\mathrm{collision}}$ of hash values (defined as the probability that $||h_{i}-h_{j}||_{\infty}<0.01$) vs. distance between input loads $||w_{i}-w_{j}||_{\infty}$ for all rectangular domains. For context, we also include the curve for a simply supported beam with two supports, and the curve for the mean of all rectangular samples (see inset, where $p_{\mathrm{collision}}=[0.14,0.099,0.075,0.070,0.047]$).

### 3.1 Probability of hash collision decreases with increasing distance between input loads

The first major result of this investigation is shown in Fig.
4 where we visualize the probability of a hash collision $p_{\mathrm{collision}}$ vs. the $L^{\infty}$ distance between the normalized input loads (sampled with $N=1000$) for all rectangular domains. Following Section 2.3.2 and Section 2.3, we define a collision as the circumstance where every component of the hash value is within distance $S=0.01$ (with all input loads normalized to have the same total resultant force). For example, $[0.5,0.25,0.25]$ and $[0.495,0.2525,0.2525]$ would “collide.” The critical outcome shown in this plot is that $p_{\mathrm{collision}}$ decreases as the $L^{\infty}$ distance increases, which is desirable behavior for locality sensitive hashing and hashing for similarity search in general. We note briefly that the inset plot of Fig. 4 shows the mean $p_{\mathrm{collision}}$ curve for all rectangular domains where this decrease is readily visible. However, in Fig. 4, we also plot a baseline $p_{\mathrm{collision}}$ curve for a simply supported beam with two supports which shows that observing a decrease in $p_{\mathrm{collision}}$ vs. the $L^{\infty}$ input distance for this selection of input loads (see Fig. 3a) is not sufficient to claim LSH behavior. Therefore, we turn to the other metrics defined in Section 2.3.3, Spearman’s $\rho$ and classification accuracy, to add needed context to our investigation.

Figure 5: Plot of classification accuracy vs. Spearman’s $\rho$ for all mechanical systems explored in this study. Note the inset plot which contains $S2$ (simply supported beam with two supports) and similarly performing mechanical systems. Fig. 12 also presents Spearman’s $\rho$ and classification accuracy plotted with respect to number of sensors and domain depth.

### 3.2 Spearman’s $\rho$ and classification accuracy vary across mechanical systems

In Fig. 5, we explore two key components of the functional behavior of our mechanical hashing systems, Spearman’s $\rho$ and classification accuracy, and visualize their relationship.
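The classification metric defined in Section 2.3.3 can be sketched with a plain-numpy leave-one-out nearest neighbor classifier (a stand-in for the scikit-learn implementation used in the study; the choice of the $L_{\infty}$ metric here is an assumption):

```python
import numpy as np

def loo_1nn_accuracy(hashes, labels):
    """Leave-one-out 1-nearest-neighbor accuracy on hash values, using the
    L-inf distance (a plain-numpy stand-in for the scikit-learn classifier;
    the metric choice is an assumption, not taken from the study)."""
    H = np.asarray(hashes, dtype=float)
    y = np.asarray(labels)
    # pairwise L-inf distance matrix between all hash values
    D = np.max(np.abs(H[:, None, :] - H[None, :, :]), axis=2)
    np.fill_diagonal(D, np.inf)            # exclude each held-out query itself
    predictions = y[np.argmin(D, axis=1)]  # label of the nearest neighbor
    return float(np.mean(predictions == y))

# toy example: two well-separated clusters of hash values
hashes = [[0.0, 0.0], [0.02, 0.0], [1.0, 1.0], [1.0, 1.02]]
labels = [0, 0, 1, 1]
print(loo_1nn_accuracy(hashes, labels))  # 1.0
```

When the hash values of same-class loads cluster together, this accuracy approaches $1$; when the hash discards class-relevant structure, it falls toward the $0.05$ random-guessing baseline.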
From a functional perspective, Spearman’s $\rho$ captures a picture of the potentially non-linear distance preservation between the input loads and the hash values. In turn, assessing classification accuracy sets up a toy problem and a baseline functional performance for future work in optimizing mechanical domains to perform hashing functions. Specifically, we anticipate future work in designing application specific mechanical systems that target specific functionality (e.g., high classification accuracy) where the results shown here will serve as baselines at these tasks. Overall, we observe that accuracy ranges from $\approx 0.17$, the $accuracy$ of a simply supported beam with two supports, to $\approx 0.8$, the accuracy for simply supported ensembles with $>5$ sensors (note that force sensors are all located at the supports). For reference, $accuracy=0.05$ corresponds to random guessing, and $accuracy=0.77$ corresponds to the prediction accuracy that is obtained by performing classification with the input loads directly rather than with the hash values. And, overall, Spearman’s $\rho$ and classification accuracy appear to be correlated, which is consistent with LSH behavior for scenarios where the hash function is not specifically learned to perform a non-linear transformation on the distribution of inputs. In addition, it is worth mentioning that the inset plot in Fig. 5 contains multiple “poorly performing” domains with both low Spearman’s $\rho$ and low classification accuracy. Notably, the domains in this region are rectangular domains that are deep and/or have only two sensors. And, as demonstrated by the inset plot, these domains all perform similarly to the simply supported beam with two supports.
For each of the mechanical systems introduced in Section 2.3.2, key observations are as follows:

* For the simply supported composite beams and beam ensembles, increasing $ns$ leads to both higher Spearman’s $\rho$ and higher $accuracy$. And, the hard voting based ensemble predictions tend to outperform the single mechanical system results. Notably, the transition between $ns=2$ and $ns=3$ leads to the largest incremental improvement in performance for the simply supported beams ($accuracy=0.175$ to $accuracy=0.42$), which is also consistent with our introduction to the concept of LSH in Section 2.2.
* For the rectangular domains, $accuracy$ and Spearman’s $\rho$ vary widely. As stated previously, some designs perform poorly – similar to the simply supported beam with $ns=2$ – whereas other designs exceed the simply supported composite beam performance for the equivalent number of sensors. In general, increasing $ns$ and decreasing $depth$ correspond to increases in $\rho$ and improvements in $accuracy$. From a mechanics perspective, this is a logical result, as increasing $ns$ will provide more information about mechanical behavior while increasing $depth$ will lead to diminishing differentiation between applied tractions with the same resultant force and centroid, following Saint-Venant’s principle [43].
* In comparison to the rectangular domains, the lattice domains lead to consistent performance improvements ($accuracy_{L3}=0.43$, $accuracy_{L4}=0.51$, $accuracy_{L5}=0.61$).
* In comparison to the rectangular domains and the lattice domains, the custom domains selected offer little performance improvement ($accuracy_{C1}=0.42$, $accuracy_{C2}=0.37$, $accuracy_{C3}=0.37$).

Further visualizations are shown in Fig. 12, where Spearman’s $\rho$ and classification accuracy are plotted with respect to the number of sensors and domain depth. And, in Table 1, we also list Spearman’s $\rho$ and classification accuracy for each domain directly.
Finally, it is worth re-emphasizing that even when classification is performed on the input signals directly, we only achieve $accuracy=0.77$. This is because we defined an example problem with loads that can be difficult to disaggregate in the presence of noise. In Appendix C Fig. 6, we provide the confusion matrix for the unaltered input signals to highlight which loads are leading to overlapping predictions. This quantitative outcome is consistent with the qualitative comparison that can be made by examining the plots in Fig. 3a.

Figure 6: Visualization of Spearman’s $\rho$ (upper) and classification accuracy (lower) with respect to domain depth and number of sensors. In the left column, results from individual rectangular domains are indicated by the $\square$ and $\diamond$ markers, and the background shading is based on the rectangular domain with a fixed bottom. In the right column, results from the lattice and custom domains are superimposed on the same background shading, thus comparing the lattice and custom domains to the rectangular domain baseline. Note that the fill color of all markers is dictated by Spearman’s $\rho$ (upper) and classification accuracy (lower). Figure 12 also presents Spearman’s $\rho$ and classification accuracy plotted with respect to number of sensors and domain depth.

### 3.3 Architected domains change, and can be used to enhance, task specific LSH performance

In Fig. 6, we re-organize the rectangular, lattice, and custom domain data shown in Fig. 5 to better visualize the influence of architected domains on Spearman’s $\rho$ and classification accuracy with respect to both domain depth and number of sensors. Critically, this figure demonstrates that even for a fixed number of sensors, there is a large variation in domain performance.
And, by comparing Spearman’s $\rho$ and classification accuracy of the lattice domains to the rectangular domains, we can see that architecting these domains can help overcome the drop-off in performance with respect to domain depth. Finally, the spread in performance between the lattice and custom domains also indicates that there is potential richness to this problem where selecting domains that will perform well is non-trivial. Taken together, these observations point towards a strong future opportunity to engineer systems that, for a given suite of possible loads and allowable number of sensors, are specifically designed to maximize Spearman’s $\rho$, maximize classification accuracy, and/or achieve an alternative type of engineered relationship between mechanical inputs and sensor readouts. Though the notion that architected domains can alter force transmission is a straightforward result, the framework that we have established here will directly enable future work in the design and optimization of architected domains to perform desirable application specific signal transformations.

## 4 Conclusion

In this paper, we began with a brief introduction to the concept of hashing and hashing for similarity search. Then, we defined locality sensitive hashing and laid the foundation for considering mechanical systems as locality sensitive hash functions. From both our analytical and computational investigations, we find that mechanical systems can exhibit the properties required for locality sensitive hashing, and we find that by tuning the mechanical systems themselves (e.g., boundary conditions, domain architecture) we can change their functional efficacy for this task. Based on our observations, and the very general scope of “mechanical systems,” we anticipate that there will be a significantly broader potential range of behavior than what we captured in the systems selected for this study.
Overall, the main contributions of this work are to: (1) introduce the concept of locality sensitive hashing via mechanical behavior, (2) define a numerical approach for readily assessing metrics that indicate functional locality sensitive hashing behavior, and (3) establish a baseline performance for future comparison where mechanical systems are optimized to perform hashing for similarity search related tasks. Following this thread, it is worth highlighting that we view this investigation as a starting point for significant further study of hashing performed by mechanical systems. Looking forward, we anticipate four key endeavors that will build on this work. First, future work is required to demonstrate whether mechanical systems besides simply supported and simply supported composite beams meet the formal definition of locality sensitive hash functions. We anticipate that this work will be conducted by beginning with our straightforward-to-compute proxies for locality sensitive behavior, and then showing formally that a given mechanical system is able to meet the definition laid out in eqn. 2 by anticipating extreme load pairs following the procedure in Appendix A.1. Second, an important next step is the physical realization of mechanical systems for locality sensitive hashing. Constraints imposed by the need for ready constructability and current sensing capabilities will pose a challenge [44], and it will be important to determine if our findings remain consistent in the equivalent experimentally realized systems. Third, it will be interesting to explore the efficacy of a learning to hash approach in mechanical systems where the input distribution is known and the hash function (i.e., the mechanical system) is designed specifically to perform a desired task [45, 46].
The framework and metrics defined here will allow us to construct an optimization problem where both system mechanical behavior and sensor placement can be jointly tailored to serve a specific function. In future learning to hash applications, the mechanical systems explored in this work can serve as a baseline for comparison, where optimized systems should lead to better performance on a desired task. Because this is a challenging optimization problem, we anticipate that there will be a need to implement efficient modeling and optimization strategies [47, 48, 49, 50]. Finally, we anticipate that the structural form of architected materials that are highly effective at hashing for similarity search may exist in nature, and identifying relevant motifs may help us better understand force transmission in biological cells and tissue. Though this initial study is quite straightforward, we anticipate that it will directly enable a highly novel approach to physical computing.

## 5 Additional Information

The data and code to reproduce and build on the results in this paper are provided on our GitHub page (https://github.com/elejeune11/mechHS).

## 6 Acknowledgements

This work was made possible with funding through the Boston University David R. Dalton Career Development Professorship, the Hariri Institute Junior Faculty Fellowship, the Haythornthwaite Foundation Research Initiation Grant, and the National Science Foundation Grant CMMI-2127864. This support is gratefully acknowledged.

## Appendix A Simply Supported Composite Beams as Locality Sensitive Hash Functions

Here we provide supporting information for the work presented in Section 2.1. As stated in the main text, we are exploring the problem schematically illustrated in Fig. 1c. Following the definition in eqn. 1, our goal is to determine if a given family of mechanical hash functions $\mathcal{F}$ is ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive.
### A.1 Simply Supported Beams

Figure 7: a) Schematic illustration of $\mathcal{F_{\mathrm{ss}}}$; b) Visualization of the amplification of the $\pm R\,\mathbf{1}$ term defined in eqn. 6 for different support positions $l_{a}$ and $l_{b}$ (here $m=0.1$, the $\times$ markers indicate choices where $|l_{a}-l_{b}|<mL$, and we discretize $11$ evenly spaced potential support positions $[0,L]$); c) Schematic illustration of an example of two loads that will always collide for $\mathcal{F_{\mathrm{ss}}}$.

As our first exploration, we define a family of mechanical hash functions $\mathcal{F}_{ss}$ as a family of simply supported beam reaction forces. This family, illustrated in Fig. 7a, is defined by the randomly generated placement of reaction supports $A$ and $B$, placed at $l_{a}$ and $l_{b}$ respectively. These supports can have any location as long as they are separated by at least distance $mL$, where $0<m<1$ and $L$ is the length of the beam. To begin, we will first establish $R$, the threshold distance, and $p_{1}$, the probability of a collision for two loads within the threshold distance. For simplicity, we choose to define $R$ for $p_{1}=1$. To do this, we need to define the threshold distance between two loads $w_{1}(x)$ and $w_{2}(x)$ that will always hash to the same value, defined as $h[w_{1}(x)]=h[w_{2}(x)]$ and alternatively written as $h_{1}=h_{2}$. To explicitly define distance $R$, we need to define the size of our hash buckets $S$. Because our outputs will come in the form of continuous numerical values, we will conceptualize our hash buckets as discrete bins that break up this continuous space into bins of size $S$. To mitigate the influence of the placement of the bin boundaries, we will simply consider any two numbers within distance $S$ as identical.
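This within-distance-$S$ convention can be written as a small helper function; a minimal sketch of the comparison just described (not code from the study):

```python
def collides(h1, h2, S):
    """Treat two hash outputs (tuples of continuous sensor readouts)
    as colliding when every pair of corresponding entries is within
    bin size S of each other."""
    return all(abs(a - b) < S for a, b in zip(h1, h2))

# Two readout pairs within S = 0.1 of each other collide ...
print(collides((1.00, 2.00), (1.05, 1.98), 0.1))  # True
# ... while a difference of S or more in any entry breaks the collision.
print(collides((1.00, 2.00), (1.20, 1.98), 0.1))  # False
```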
Therefore, for a simply supported beam, two loads $w_{1}(x)$ and $w_{2}(x)$ will experience a hash collision when both reaction forces collide, defined as: $|A_{y1}-A_{y2}|<S\qquad\mathrm{and}\qquad|B_{y1}-B_{y2}|<S\,.$ (5) For $\mathcal{F}_{ss}$, we can define $w_{1}$ and $w_{2}$ in terms of $R$ as: $w_{2}(x)=w_{1}(x)\pm R\,\mathbf{1}$ (6) where $\mathbf{1}$ is a vector with the same length $N$ as $w_{1}(x)$ and $w_{2}(x)$ such that $R$ is the $L^{\infty}$ norm of the distance between $w_{1}(x)$ and $w_{2}(x)$. For $\mathcal{F}_{ss}$, as illustrated in Fig. 7b, the most extreme amplification of the $\pm R\,\mathbf{1}$ term will occur when $l_{a}=0$ and $l_{b}=mL$ (alternatively $l_{a}=L-mL$, $l_{b}$ = L). For $l_{a}=0$ and $l_{b}=mL$, we perform a simple equilibrium calculation to compute: $\displaystyle A_{y1}$ $\displaystyle=\int_{0}^{L}w_{1}(x)\,\mathrm{d}x-\frac{1}{mL}\int_{0}^{L}w_{1}(x)\,x\,\mathrm{d}x$ $\displaystyle B_{y1}$ $\displaystyle=\frac{1}{mL}\int_{0}^{L}w_{1}(x)\,x\,\mathrm{d}x$ (7) $\displaystyle A_{y2}$ $\displaystyle=A_{y1}\pm\bigg{(}1-\frac{1}{2m}\bigg{)}RL$ $\displaystyle B_{y2}$ $\displaystyle=B_{y1}\pm\bigg{(}\frac{1}{2m}\bigg{)}RL$ which allows us to compute: $\displaystyle\bigg{|}A_{y1}-A_{y2}\bigg{|}$ $\displaystyle=\bigg{|}\bigg{(}1-\frac{1}{2m}\bigg{)}RL\bigg{|}$ (8) $\displaystyle\bigg{|}B_{y1}-B_{y2}\bigg{|}$ $\displaystyle=\bigg{|}\bigg{(}\frac{1}{2m}\bigg{)}RL\bigg{|}$ where: $\displaystyle S$ $\displaystyle=\max\bigg{(}\,\big{|}\big{(}1-\frac{1}{2m}\big{)}RL\big{|},\,\big{|}\big{(}\frac{1}{2m}\big{)}RL\big{|}\,\bigg{)}$ (9) $\displaystyle=\big{|}\big{(}\frac{1}{2m}\big{)}RL\big{|}$ for $0<m<1$ which allows us to compute: $R=2mS/L\,.$ (10) Therefore, for $d(p,q)<2mS/L$, $p_{1}=1$. Following our identification of $R$ for $p_{1}=1$, we need to see if it is possible to specify $c$ such that $p_{2}<p_{1}$. 
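Before moving on, the amplifications in eqns. 7-8 can be verified numerically for any discretized load. The sketch below uses the composite trapezoidal rule and illustrative values of $L$, $m$, and $R$; it is a check of the algebra above, not code from the study.

```python
import numpy as np

# Illustrative values for beam length, support spacing, and perturbation.
L, m, R = 1.0, 0.1, 0.05
x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]
rng = np.random.default_rng(1)
w1 = rng.uniform(0.5, 1.5, size=x.size)  # arbitrary discretized load w_1(x)
w2 = w1 + R                              # w_2(x) = w_1(x) + R * 1 (eqn. 6)

def integrate(f):
    # Composite trapezoidal rule on the uniform grid x.
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

def reactions(w):
    # Supports at l_a = 0 and l_b = m * L, following eqn. 7.
    B = integrate(w * x) / (m * L)
    A = integrate(w) - B
    return A, B

A1, B1 = reactions(w1)
A2, B2 = reactions(w2)
# eqn. 8: |A_y1 - A_y2| = |(1 - 1/(2m)) R L| and |B_y1 - B_y2| = |R L / (2m)|
print(abs(A1 - A2), abs((1.0 - 1.0 / (2.0 * m)) * R * L))
print(abs(B1 - B2), abs(R * L / (2.0 * m)))
```

Because the perturbation is constant and $x$ is linear, the trapezoidal rule evaluates both integrals of the shift exactly, so the numerical differences match the closed-form amplifications to floating-point precision.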
Here is where we encounter the fundamental limitation of $\mathcal{F}_{ss}$ as a ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive hash function. Of course, for $\mathcal{F}_{ss}$, we can define two loads $w_{1}(x)$ and $w_{2}(x)$ that are arbitrarily far apart (i.e., no upper limit on $c$) that will always lead to a hash collision. Specifically, we just need to choose two different loads with the same resultant force and centroid. For example, as illustrated in Fig. 7c, we can define one load as: $w_{1}(x)=q\mathbf{1}$ (11) where $q$ is a constant and $\mathbf{1}$ is a length $N$ vector of ones, and another load as a central spike, written as: $\displaystyle w_{2}(x)=\begin{cases}0,&x<\frac{L(N-1)}{2N}\\\ qN,&\frac{L(N-1)}{2N}\leq x\leq\frac{L(N+1)}{2N}\\\ 0,&\frac{L(N+1)}{2N}<x\\\ \end{cases}$ (12) For this choice of $w_{1}(x)$ and $w_{2}(x)$, we can compute $||w_{1}(x)-w_{2}(x)||_{\infty}$ as: $||w_{1}(x)-w_{2}(x)||_{\infty}=q(N-1)$ (13) which can become arbitrarily large (note: $||w_{1}(x)-w_{2}(x)||_{\infty}=q(N-1)$ if $N$ is odd, and $||w_{1}(x)-w_{2}(x)||_{\infty}=q(N/2-1)$ if $N$ is even). Because $p_{2}=1$ for any value of $c$, we can formally say that $\mathcal{F}_{ss}$ is not ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive.

### A.2 Simply Supported Composite Beams with $>2$ Supports

Figure 8: a) Schematic illustration of $\mathcal{F_{\mathrm{ss-c3}}}$; b) Visualization of the amplification of the $\pm R\,\mathbf{1}$ term defined in eqn. 6 for different support positions $l_{b}$ and $l_{c}$ (here $m=0.1$, the $\bullet$ marker indicates the selection of $l_{a}=0$, the $\times$ markers indicate choices where $|l_{a}-l_{b}|<mL$ or $|l_{a}-l_{c}|<mL$ or $|l_{b}-l_{c}|<mL$, and we discretize $11$ evenly spaced potential support positions $[0,L]$); c) Schematic illustration of a pair of loads that we believe will have the highest probability of a hash collision for mechanical systems in $\mathcal{F_{\mathrm{ss-c3}}}$.
If we instead consider a slightly more complicated family of mechanical systems, simply supported composite beams with 3 supports, referred to as $\mathcal{F}_{ss-c3}$ and illustrated in Fig. 8a, this picture changes. Here, we consider a composite beam where segments $AB$ and $BC$ are connected via a roller support with supports $A$, $B$, and $C$ located at $l_{a}$, $l_{b}$, and $l_{c}$ respectively. In this case, a hash collision between two loads $w_{1}(x)$ and $w_{2}(x)$ requires all three support reactions to collide, written as: $|A_{y1}-A_{y2}|<S\qquad\mathrm{and}\qquad|B_{y1}-B_{y2}|<S\qquad\mathrm{and}\qquad|C_{y1}-C_{y2}|<S\,.$ (14) We can follow the same logic to compute $R$ as in the prior $\mathcal{F}_{ss}$ example. Specifically, for $\mathcal{F}_{ss-c3}$ we identify the most extreme amplification of the $\pm R\mathbf{1}$ term introduced in eqn. 6 for $l_{a}=0$, $l_{b}=mL$, and $l_{c}=2mL$, where $m$ controls the minimum allowable distance between supports (see Fig. 8b for a visualization of this amplification).
Again, we perform a simple equilibrium calculation to compute the reaction supports $A_{y}$, $B_{y}$, $C_{y}$ for $w_{1}(x)$ and $w_{2}(x)$ as: $\displaystyle A_{y1}$ $\displaystyle=\frac{mL\int_{0}^{mL}w_{1}(x)\,\mathrm{d}x\,-\int_{0}^{mL}w_{1}(x)\,x\,\mathrm{d}x}{mL}$ (15) $\displaystyle B_{y1}$ $\displaystyle=\int_{0}^{L}w_{1}(x)\,\mathrm{d}x-\frac{mL\int_{0}^{mL}w_{1}(x)\,\mathrm{d}x\,-\int_{0}^{mL}w_{1}(x)\,x\,\mathrm{d}x}{mL}-\frac{\int_{mL}^{L}w_{1}(x)\,x\,\mathrm{d}x-mL\int_{mL}^{L}w_{1}(x)\,\mathrm{d}x}{mL}$ $\displaystyle C_{y1}$ $\displaystyle=\frac{\int_{mL}^{L}w_{1}(x)\,x\,\mathrm{d}x-mL\int_{mL}^{L}w_{1}(x)\,\mathrm{d}x}{mL}$ $\displaystyle A_{y2}$ $\displaystyle=A_{y1}\pm\frac{RmL}{2}$ $\displaystyle B_{y2}$ $\displaystyle=B_{y1}\pm\bigg{(}RL-\frac{RmL}{2}-\frac{R(L-mL)^{2}}{2mL}\bigg{)}$ $\displaystyle C_{y2}$ $\displaystyle=C_{y1}\pm\frac{(L-mL)^{2}R}{2mL}$ which allows us to compute: $\displaystyle\bigg{|}A_{y1}-A_{y2}\bigg{|}$ $\displaystyle=\bigg{|}\frac{RmL}{2}\bigg{|}$ (17) $\displaystyle\bigg{|}B_{y1}-B_{y2}\bigg{|}$ $\displaystyle=\bigg{|}RL-\frac{RmL}{2}-\frac{R(L-mL)^{2}}{2mL}\bigg{|}$ $\displaystyle\bigg{|}C_{y1}-C_{y2}\bigg{|}$ $\displaystyle=\bigg{|}\frac{(L-mL)^{2}R}{2mL}\bigg{|}$ which can be manipulated, following the example in eqn. 9, as: $\displaystyle S$ $\displaystyle=\max\bigg{(}\bigg{|}\frac{RmL}{2}\bigg{|},\,\bigg{|}RL-\frac{RmL}{2}-\frac{R(L-mL)^{2}}{2mL}\bigg{|},\,\bigg{|}\frac{(L-mL)^{2}R}{2mL}\bigg{|}\bigg{)}$ (19) $\displaystyle S$ $\displaystyle=\begin{cases}\big{|}\frac{(L-mL)^{2}R}{2mL}\big{|},&0<m<1-\frac{\sqrt{12}}{6}\\\ \big{|}RL-\frac{RmL}{2}-\frac{R(L-mL)^{2}}{2mL}\big{|},&1-\frac{\sqrt{12}}{6}\leq m\leq\frac{1}{2}\\\ \end{cases}$ (20) which allows us to determine: $R=\frac{2SmL}{(L-mL)^{2}}$ (21) where $0<m\leq\frac{1}{3}$. 
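The case split in eqn. 20, and the governing term used in eqn. 21, can be checked numerically by sweeping $m$; a sketch with the illustrative choice $R=L=1$:

```python
import numpy as np

R = L = 1.0
m_crit = 1.0 - np.sqrt(12.0) / 6.0  # switch point in eqn. 20, ~0.4226

for m in np.linspace(0.01, 0.5, 200):
    # The three candidate terms inside the max of eqn. 19.
    term_A = abs(R * m * L / 2.0)
    term_B = abs(R * L - R * m * L / 2.0 - R * (L - m * L) ** 2 / (2.0 * m * L))
    term_C = abs((L - m * L) ** 2 * R / (2.0 * m * L))
    S = max(term_A, term_B, term_C)
    # eqn. 20: the C-support term governs below m_crit, the B term above.
    expected = term_C if m < m_crit else term_B
    assert np.isclose(S, expected)
print("eqn. 20 case split holds on the sampled range of m")
```

In particular, for the allowed range $0<m\leq\frac{1}{3}$ the $C$-support term always governs, which is the term inverted to obtain eqn. 21.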
As a brief note, we define $0<m\leq\frac{1}{3}$ for the general case of $3$ supports in order to accommodate all supports within total length $L$ while simultaneously allowing realizations that place supports at any location throughout the domain. Equation 21 will hold for $0<m<1-\frac{\sqrt{12}}{6}$. After we have shown that we can compute $R$ for $p_{1}=1$, we then need to show that for some value of $c$ greater than $1$, $p_{1}$ will be greater than $p_{2}$. To do this, we conceptualize a worst case example of $w_{1}(x)$ and $w_{2}(x)$ where substantially different loads will lead to hash collisions by considering the two loads illustrated in Fig. 8c. Here, $w_{1}(x)=\mathbf{0}$ and we can define $w_{2}(x)$ as a piecewise function: $\displaystyle w_{2}(x)$ $\displaystyle=\begin{cases}0,&x<\frac{tL}{N},\\\ -cR/2,&\frac{tL}{N}\leq x<\frac{(t+1)L}{N},\\\ cR,&\frac{(t+1)L}{N}\leq x<\frac{(t+2)L}{N},\\\ -cR/2,&\frac{(t+2)L}{N}\leq x<\frac{(t+3)L}{N},\\\ 0,&\frac{(t+3)L}{N}\leq x\\\ \end{cases}$ (22) where $t$ is an integer, $L/N$ with $N\geq 3$ represents the discretization of the load, and $c>1$ following the definition introduced with eqn. 1. In this case, a hash collision will occur when either: (1) $c$ is small enough that $w_{2}(x)$ will always lead to a change in support force $<S$ regardless of where the support positions are located, or (2) the support positions defined by distances $l_{a}$, $l_{b}$, and $l_{c}$ (see Fig. 8a) all fall outside the range $[tL/N,(t+3)L/N]$. To satisfy the conditions for locality sensitive hashing laid out in eqns. 1-2, we need to show that arbitrarily far apart functions (i.e., large $c$ and thus large distance between functions $d(w_{1},w_{2})=cR$) experience a hash collision with $p_{2}<p_{1}$. In other words, we need to determine if there is an upper bound on $p_{2}$ as values of $c$ become arbitrarily large.
To do this, we consider scenario (2) and compute $p_{2}$ as the probability that distances $l_{a}$, $l_{b}$, and $l_{c}$ all fall outside the range $[tL/N,(t+3)L/N]$ as a function of our discretization $N$ and our minimum distance between supports $m$. In Fig. 9, we plot $p_{2}$ vs. $N$ for multiple values of $m$. Note that for $m\to 0$, we can readily compute: $p_{2}=\bigg{[}\frac{N-3}{N}\bigg{]}^{3}$ (23) as the probability that all three supports will be placed outside of the $tL/N\leq x\leq(t+3)L/N$ zone if their locations are randomly generated. From Fig. 9, we also see that higher values of $N$ lead to higher values of $p_{2}$. However, it is clear that even for this worst case of comparison points with large $c$, $p_{2}<p_{1}$ and thus the system is ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive. Notably, this simplest case example paves the way for the investigation of more complex mechanical systems as ($R$, $cR$, $p_{1}$, $p_{2}$)-sensitive hash functions.

Figure 9: Simulated probability of a collision ($p_{\mathrm{collision}}$) with respect to the discretization of the input load ($N$) for different values of $m$. For $m=0$ the curve plotted matches eqn. 23.

## Appendix B Example Problem Additional Details

In Section 2.3, we define the example problem that leads to the results presented in Section 3. Here we provide additional information to ensure that our problem definition is clear.

### B.1 Simply Supported Ensemble

In Fig. 3b-d, we illustrate the mechanical systems that we explore as hash functions. Here, in Fig. 10, we explicitly illustrate what we mean by an “ensemble” of simply supported beams. Namely, each ensemble contains $100$ simply supported beams with randomly generated support locations. Each one of these devices may lead to a different load class prediction, and the final prediction of the ensemble is the hard voting based outcome of combining all $100$ of these predictions.
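A minimal sketch of this hard-voting step, using only the Python standard library (the class labels below are illustrative, not the study's predictions):

```python
from collections import Counter

def hard_vote(predictions):
    """Return the class label predicted most often across the ensemble.
    Ties are broken by first occurrence, one simple convention."""
    return Counter(predictions).most_common(1)[0][0]

# 5 of the 100 hypothetical per-beam predictions, for illustration:
ensemble_predictions = [3, 7, 3, 3, 12]
print(hard_vote(ensemble_predictions))  # 3
```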
In hard voting, the class label with the highest frequency among the ensemble predictions becomes the final prediction.

Figure 10: Explicit illustration of the difference between simply supported “S” and simply supported ensemble “E” mechanical systems for the case with $5$ supports.

### B.2 Applied Load Categories

The applied loads introduced in Section 2.3.1 are described in more detail as follows:

* • Class 1, constant load ($c=1/L$): $w_{1}(x)=-c$ (24)
* • Class 2, piecewise linear load ($c=2/L$): $\displaystyle w_{2}(x)$ $\displaystyle=\begin{cases}-c+\frac{2c}{L}x,&0\leq x<\frac{L}{2},\\\ (\frac{L}{2}-x)\frac{2c}{L},&\frac{L}{2}\leq x\leq L\\\ \end{cases}$ (25)
* • Class 3, piecewise linear load ($c=2/L$): $\displaystyle w_{3}(x)$ $\displaystyle=\begin{cases}-\frac{2c}{L}x,&0\leq x<\frac{L}{2},\\\ \frac{2c}{L}x-2c,&\frac{L}{2}\leq x\leq L\\\ \end{cases}$ (26)
* • Class 4, linear load ($c=2/L$): $\ w_{4}(x)=\frac{c}{L}x-c$ (27)
* • Class 5, linear load ($c=2/L$): $\ w_{5}(x)=-\frac{c}{L}x$ (28)
* • Class 6, sine wave with wave number $k=0.5$ and offset $\varphi=0$: $\ w_{6}(x)=-\frac{\pi}{2L}\bigg{|}\sin\bigg{(}\frac{2\pi k}{L}x-2\pi\varphi\bigg{)}\bigg{|}$ (29)
* • Class 7, sine wave with wave number $k=1.0$ and offset $\varphi=0$, see eqn. 29.
* • Class 8, sine wave with wave number $k=1.0$ and offset $\varphi=0.25$, see eqn. 29.
* • Class 9, sine wave with wave number $k=1.5$ and offset $\varphi=0$, see eqn. 29.
* • Class 10, sine wave with wave number $k=1.5$ and offset $\varphi=0.25$, see eqn. 29.
* • Class 11, sine wave with wave number $k=2.0$ and offset $\varphi=0$, see eqn. 29.
* • Class 12, negative kernel density estimate (kde) based on $n=2$ points $p_{i}$ with uniform random location on the x-axis.
The negative kde for a Gaussian kernel $\rho_{K}(x)$ with bandwidth $h=0.1L$ is written as: $\ w_{12}(x)=-\frac{1}{nh}\sum\limits_{i=1}^{i=n}\exp\bigg{(}\frac{-(x-p_{i})^{2}}{2h^{2}}\bigg{)}$ (30) * • Class 13, negative kernel density estimate (kde) based on $n=2$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 14, negative kernel density estimate (kde) based on $n=2$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 15, negative kernel density estimate (kde) based on $n=5$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 16, negative kernel density estimate (kde) based on $n=5$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 17, negative kernel density estimate (kde) based on $n=5$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 18, negative kernel density estimate (kde) based on $n=25$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 19, negative kernel density estimate (kde) based on $n=25$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. * • Class 20, negative kernel density estimate (kde) based on $n=25$ points $p_{i}$ with uniform random location on the x-axis, see eqn. 30. Each load is set up so that the area under the curve is equal to $1$. There are $20$ examples for each of the $20$ classes of loads. The $20$ examples are all differentiated from each other through the addition of Perlin noise with randomly selected initial seed and integer octave in range $[2-10]$. Details for accessing the code to exactly reproduce these loads including the randomly generated Perlin noise are given in Section 5. ## Appendix C Results Additional Details This Appendix contains additional supporting results to supplement the information presented in Section 3. 
In Section 3, we compare the classification accuracy of our mechanical systems to both random guessing $accuracy=0.05$ and direct analysis of the original input data $accuracy=0.77$. Here, in Fig. 11, we show the confusion matrix for nearest neighbor load classification based on the original input data. The purpose of showing this graphic is to demonstrate that we have chosen a challenging suite of applied loads that are non-trivial to distinguish. Next, for completeness and as an alternative view of the relationship between Spearman’s $\rho$, classification accuracy, and mechanical domain properties, we provide Fig. 12 as a supplement to Fig. 5 and Fig. 6. Finally, we provide Table 1 which contains Spearman’s $\rho$ and classification accuracy for every system investigated in this study. These data are directly visualized in Fig. 5, Fig. 6, and Fig. 12. Figure 11: Confusion matrix based on the original input loads. Note that this confusion matrix corresponds to a classification accuracy of $0.77$ across $20$ classes. Figure 12: As a supplement to Fig. 5, we provide Spearman’s $\rho$ and classification accuracy plotted with respect to both number of sensors and domain depth. 
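The nearest-neighbor baseline on the raw inputs can be reproduced in outline with scikit-learn. The synthetic three-class signals below are placeholders for the 20-class load suite, and the even/odd reference-query split is one simple convention, not the exact evaluation protocol of the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(2)

# Placeholder data: 3 classes of noisy discretized "loads" standing in
# for the 20 load classes used in the study, 20 examples per class.
x = np.linspace(0.0, 1.0, 50)
templates = [np.ones_like(x), x, np.sin(np.pi * x)]
X = np.vstack([t + 0.05 * rng.normal(size=x.size)
               for t in templates for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

# 1-nearest-neighbor classification: even rows form the reference set,
# odd rows are the queries.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X[::2], y[::2])
y_pred = clf.predict(X[1::2])
acc = accuracy_score(y[1::2], y_pred)
cm = confusion_matrix(y[1::2], y_pred)
print(acc)
print(cm)
```

Off-diagonal entries of the confusion matrix identify which classes bleed into each other, which is exactly how Fig. 11 is read.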
system type | number of sensors $ns$ | domain depth $d$ | Spearman’s $\rho$ | classification accuracy
---|---|---|---|---
simply supported | 2 | n/a | 0.36 | 0.17
simply supported | 3 | n/a | 0.55 | 0.42
simply supported | 4 | n/a | 0.59 | 0.5
simply supported | 5 | n/a | 0.59 | 0.5
simply supported | 6 | n/a | 0.67 | 0.65
simply supported | 7 | n/a | 0.75 | 0.75
simply supported | 8 | n/a | 0.74 | 0.76
simply supported | 9 | n/a | 0.8 | 0.81
simply supported | 10 | n/a | 0.79 | 0.8
ss ensemble | 3 | n/a | 0.49 | 0.47
ss ensemble | 4 | n/a | 0.54 | 0.65
ss ensemble | 5 | n/a | 0.55 | 0.72
ss ensemble | 6 | n/a | 0.59 | 0.81
ss ensemble | 7 | n/a | 0.63 | 0.82
ss ensemble | 8 | n/a | 0.65 | 0.83
ss ensemble | 9 | n/a | 0.68 | 0.82
ss ensemble | 10 | n/a | 0.7 | 0.82
rect fixed btm | 2 | 1 | 0.41 | 0.41
rect fixed btm | 3 | 1 | 0.52 | 0.65
rect fixed btm | 4 | 1 | 0.65 | 0.61
rect fixed btm | 5 | 1 | 0.69 | 0.77
rect fixed btm | 2 | 2.5 | 0.47 | 0.41
rect fixed btm | 3 | 2.5 | 0.49 | 0.56
rect fixed btm | 4 | 2.5 | 0.55 | 0.56
rect fixed btm | 5 | 2.5 | 0.55 | 0.63
rect fixed btm | 2 | 5 | 0.46 | 0.41
rect fixed btm | 3 | 5 | 0.46 | 0.42
rect fixed btm | 4 | 5 | 0.46 | 0.47
rect fixed btm | 5 | 5 | 0.46 | 0.47
rect fixed btm | 2 | 10 | 0.37 | 0.31
rect fixed btm | 3 | 10 | 0.37 | 0.39
rect fixed btm | 4 | 10 | 0.37 | 0.3
rect fixed btm | 5 | 10 | 0.37 | 0.37
rect fixed btm | 2 | 20 | 0.36 | 0.2
rect fixed btm | 3 | 20 | 0.36 | 0.44
rect fixed btm | 4 | 20 | 0.36 | 0.2
rect fixed btm | 5 | 20 | 0.36 | 0.42
rect | 2 | 1 | 0.37 | 0.17
rect | 3 | 1 | 0.55 | 0.44
rect | 4 | 1 | 0.59 | 0.52
rect | 5 | 1 | 0.69 | 0.63
rect | 2 | 2.5 | 0.36 | 0.18
rect | 3 | 2.5 | 0.55 | 0.41
rect | 4 | 2.5 | 0.59 | 0.51
rect | 5 | 2.5 | 0.65 | 0.61
rect | 2 | 5 | 0.36 | 0.2
rect | 3 | 5 | 0.47 | 0.42
rect | 4 | 5 | 0.46 | 0.47
rect | 5 | 5 | 0.48 | 0.46
rect | 2 | 10 | 0.36 | 0.2
rect | 3 | 10 | 0.36 | 0.38
rect | 4 | 10 | 0.36 | 0.2
rect | 5 | 10 | 0.36 | 0.38
rect | 2 | 20 | 0.36 | 0.21
rect | 3 | 20 | 0.36 | 0.44
rect | 4 | 20 | 0.36 | 0.2
rect | 5 | 20 | 0.36 | 0.42
lattice | 3 | 10 | 0.53 | 0.43
lattice | 4 | 10 | 0.58 | 0.51
lattice | 5 | 10 | 0.58 | 0.61
custom 1 | 3 | 10 | 0.53 | 0.42
custom 2 | 3 | 10 | 0.38 | 0.37
custom 3 | 5 | 10 | 0.44 | 0.37

Table 1: Summary of results, supporting information for Fig. 5.

## References

* [1] Shoshana L Das, Prasenjit Bose, Emma Lejeune, Daniel H Reich, Christopher Chen, and Jeroen Eyckmans. Extracellular matrix alignment directs provisional matrix assembly and three dimensional fibrous tissue closure. Tissue Engineering Part A, 27(23-24):1447–1457, 2021.
* [2] Vivek D Sree and Adrian B Tepole. Computational systems mechanobiology of growth and remodeling: Integration of tissue mechanics and cell regulatory network dynamics. Current Opinion in Biomedical Engineering, 15:75–80, 2020.
* [3] Tomas Amadeo, Daniel Van Lewen, Taylor Janke, Tommaso Ranzani, Anand Devaiah, Urvashi Upadhyay, and Sheila Russo. Soft robotic deployable origami actuators for neurosurgical brain retraction. Frontiers in Robotics and AI, 8:437, 2022.
* [4] Arincheyan Gerald, Max McCandless, Avani Sheth, Hiroyuki Aihara, and Sheila Russo. A soft sensor for bleeding detection in colonoscopies. Advanced Intelligent Systems, 4(4):2100254, 2022.
* [5] Yi Yang, Katherine Vella, and Douglas P Holmes. Grasping with kirigami shells. Science Robotics, 6(54):eabd6426, 2021.
* [6] Lillian Chin, Jeffrey Lipton, Michelle C Yuen, Rebecca Kramer-Bottiglio, and Daniela Rus. Automated recycling separation enabled by soft robotic material classification. In 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), pages 102–107. IEEE, 2019.
* [7] Ryan L Truby, Robert K Katzschmann, Jennifer A Lewis, and Daniela Rus. Soft robotic fingers with embedded ionogel sensors and discrete actuation modes for somatosensitive manipulation. In 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), pages 322–329. IEEE, 2019.
* [8] Andrew Spielberg, Alexander Amini, Lillian Chin, Wojciech Matusik, and Daniela Rus. Co-learning of task and sensor placement for soft robotics. IEEE Robotics and Automation Letters, 6(2):1208–1215, 2021. * [9] William D Meador, Mrudang Mathur, Gabriella P Sugerman, Marcin Malinowski, Tomasz Jazwiec, Xinmei Wang, Carla MR Lacerda, Tomasz A Timek, and Manuel K Rausch. The tricuspid valve also maladapts as shown in sheep with biventricular heart failure. Elife, 9:e63855, 2020. * [10] Tianhong Han, Taeksang Lee, Joanna Ledwon, Elbert Vaca, Sergey Turin, Aaron Kearney, Arun K Gosain, and Adrian B Tepole. Bayesian calibration of a computational model of tissue expansion based on a porcine animal model. Acta biomaterialia, 137:136–146, 2022. * [11] Luke E Osborn, Andrei Dragomir, Joseph L Betthauser, Christopher L Hunt, Harrison H Nguyen, Rahul R Kaliki, and Nitish V Thakor. Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain. Science robotics, 3(19):eaat3818, 2018. * [12] Ryan L Truby. Designing soft robots as robotic materials. Accounts of Materials Research, 2(10):854–857, 2021. * [13] Rudolf M Füchslin, Andrej Dzyakanchuk, Dandolo Flumini, Helmut Hauser, Kenneth J Hunt, Rolf H Luchsinger, Benedikt Reller, Stephan Scheidegger, and Richard Walker. Morphological computation and morphological control: steps toward a formal theory and applications. Artificial life, 19(1):9–34, 2013. * [14] Menachem Stern and Arvind Murugan. Learning without neurons in physical systems. arXiv preprint arXiv:2206.05831, 2022. * [15] Elliot Hawkes, B An, Nadia M Benbernou, H Tanaka, Sangbae Kim, Erik D Demaine, D Rus, and Robert J Wood. Programmable matter by folding. Proceedings of the National Academy of Sciences, 107(28):12441–12445, 2010. * [16] Tian Chen, Mark Pauly, and Pedro M Reis. A reprogrammable mechanical metamaterial with stable memory. Nature, 589(7842):386–390, 2021. 
* [17] Yuanping Song, Robert M Panas, Samira Chizari, Lucas A Shaw, Julie A Jackson, Jonathan B Hopkins, and Andrew J Pascall. Additively manufacturable micro-mechanical logic gates. Nature communications, 10(1):882, 2019. * [18] Charles El Helou, Philip R Buskohl, Christopher E Tabor, and Ryan L Harne. Digital logic gates in soft, conductive mechanical metamaterials. Nature communications, 12(1):1633, 2021. * [19] Zhiqiang Meng, Weitong Chen, Tie Mei, Yuchen Lai, Yixiao Li, and CQ Chen. Bistability-based foldable origami mechanical logic gates. Extreme Mechanics Letters, 43:101180, 2021. * [20] Tian Chen, Osama R Bilal, Kristina Shea, and Chiara Daraio. Harnessing bistability for directional propulsion of soft, untethered robots. Proceedings of the National Academy of Sciences, 115(22):5698–5702, 2018. * [21] Yi Zhu, Mayur Birla, Kenn R Oldham, and Evgueni T Filipov. Elastically and plastically foldable electrothermal micro-origami for controllable and rapid shape morphing. Advanced Functional Materials, 30(40):2003741, 2020. * [22] Zhengxuan Wei and Ruobing Bai. Temperature-modulated photomechanical actuation of photoactive liquid crystal elastomers. Extreme Mechanics Letters, 51:101614, 2022. * [23] Helmut Hauser. Physical reservoir computing in robotics. Reservoir Computing: Theory, Physical Implementations, and Applications, pages 169–190, 2021. * [24] Kohei Nakajima, Helmut Hauser, Tao Li, and Rolf Pfeifer. Exploiting the dynamics of soft materials for machine learning. Soft robotics, 5(3):339–347, 2018. * [25] William Gilpin. Cryptographic hashing using chaotic hydrodynamics. Proceedings of the National Academy of Sciences, 115(19):4869–4874, 2018. * [26] Johannes Buchmann. Introduction to cryptography, volume 335. Springer, 2004. * [27] Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014. 
* [28] Omid Jafari, Preeti Maurya, Parth Nagarkar, Khandker Mushfiqul Islam, and Chidambaram Crushev. A survey on locality sensitive hashing algorithms and their applications. arXiv preprint arXiv:2102.08942, 2021. * [29] Loïc Paulevé, Hervé Jégou, and Laurent Amsaleg. Locality sensitive hashing: A comparison of hash function types and querying mechanisms. Pattern recognition letters, 31(11):1348–1358, 2010. * [30] Lianhua Chi and Xingquan Zhu. Hashing techniques: A survey and taxonomy. ACM Computing Surveys (CSUR), 50(1):1–36, 2017. * [31] Guido Van Rossum and Fred L. Drake. Python 3 Reference Manual. CreateSpace, Scotts Valley, CA, 2009. * [32] Ron Rivest. The md5 message-digest algorithm, april 1992, 1992. * [33] Malcolm Slaney and Michael Casey. Locality-sensitive hashing for finding nearest neighbors [lecture notes]. IEEE Signal processing magazine, 25(2):128–131, 2008. * [34] Ken Perlin. An image synthesizer. ACM Siggraph Computer Graphics, 19(3):287–296, 1985. * [35] Perlin-noise python package. https://pypi.org/project/perlin-noise/. * [36] Charles R. Harris, K. Jarrod Millman, Stéfan J van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature, 585:357–362, 2020. * [37] Anders Logg, Kent-Andre Mardal, and Garth Wells. Automated solution of differential equations by the finite element method: The FEniCS book, volume 84. Springer Science & Business Media, 2012. * [38] Martin Alnæs, Jan Blechta, Johan Hake, August Johansson, Benjamin Kehlet, Anders Logg, Chris Richardson, Johannes Ring, Marie E Rognes, and Garth N Wells. The fenics project version 1.5. 
Archive of Numerical Software, 3(100), 2015. * [39] Jerome L Myers, Arnold D Well, and Robert F Lorch. Research design and statistical analysis. Routledge, 2013. * [40] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272, 2020. * [41] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825–2830, 2011. * [42] Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An introduction to statistical learning, volume 112. Springer, 2013. * [43] Allan F Bower. Applied mechanics of solids. CRC Press, 2009. * [44] Javier Tapia, Espen Knoop, Mojmir Mutnỳ, Miguel A Otaduy, and Moritz Bächer. Makesense: Automated sensor design for proprioceptive soft robots. Soft Robotics, 7(3):332–345, 2020. * [45] Jingdong Wang, Ting Zhang, Nicu Sebe, Heng Tao Shen, et al. A survey on learning to hash. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):769–790, 2017. * [46] Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data—a survey. Proceedings of the IEEE, 104(1):34–57, 2015. * [47] Peerasait Prachaseree and Emma Lejeune. 
Learning mechanically driven emergent behavior with message passing neural networks. Computers & Structures, 270:106825, 2022. * [48] Saeed Mohammadzadeh and Emma Lejeune. Predicting mechanically driven full-field quantities of interest with deep learning-based metamodels. Extreme Mechanics Letters, 50:101566, 2022. * [49] Miguel A Bessa, Piotr Glowacki, and Michael Houlder. Bayesian machine learning in metamaterial design: Fragile becomes supercompressible. Advanced Materials, 31(48):1904845, 2019. * [50] Fernando V Senhora, Heng Chi, Yuyu Zhang, Lucia Mirabella, Tsz Ling Elaine Tang, and Glaucio H Paulino. Machine learning for topology optimization: Physics-based learning through an independent training strategy. Computer Methods in Applied Mechanics and Engineering, 398:115116, 2022.
X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ is an equivalence of triangulated categories; (c) for any symbol $\star={\mathsf{b}}$, $+$, $-$, $\empt$, ${\mathsf{abs}}+$, ${\mathsf{abs}}-$, or ${\mathsf{abs}}$, the triangulated functor ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})\longrightarrow{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}})$ induced by the embedding of exact categories $X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}\longrightarrow X{\operatorname{\mathsf{--lcth}}}$ is an equivalence of triangulated categories. ###### Proof. Part (a) is provided by Lemma 5.2.5(b) together with the dual version of Proposition A.5.6. Parts (b-c) follow from part (a) and Corollary 5.3.2(b-c). Alternatively, in the case of a semi-separated Noetherian scheme $X$ of finite Krull dimension, the assertions (b-c) can be obtained directly from Lemma 5.2.1(a). ∎ The following corollary is another restricted version of Theorem 4.6.6; it is to be compared with Corollary 4.6.10. ###### Corollary 5.4.5. Let $X$ be a semi-separated Noetherian scheme of finite Krull dimension. Then for any symbol $\star={\mathsf{b}}$, $+$, $-$, $\empt$, ${\mathsf{abs}}+$, ${\mathsf{abs}}-$, ${\mathsf{co}}$, or ${\mathsf{abs}}$ there is a natural equivalence of triangulated categories ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$. ###### Proof. Assuming $\star\neq{\mathsf{co}}$, by Lemma 5.4.1(d) together with the dual version of Proposition A.5.6 the triangulated functor $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cot}}\cap X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ is an equivalence of categories. In view of Corollary 5.4.2, the same assertion holds for $\star={\mathsf{co}}$. 
Hence it remains to recall that the equivalence of categories from Lemma 4.6.7 identifies $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cot}}\cap X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}}$ with $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ (see the proof of Corollary 4.6.10(c)). ∎ ###### Corollary 5.4.6. Let $f\colon Y\longrightarrow X$ be a morphism of finite flat dimension between semi-separated Noetherian schemes. Then for any symbol $\star={\mathsf{b}}$, $+$, $-$, $\empt$, ${\mathsf{abs}}+$, ${\mathsf{abs}}-$, ${\mathsf{co}}$, or ${\mathsf{abs}}$ the equivalences of triangulated categories ${\mathsf{D}}^{\star}(Y{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq\mathsf{Hot}^{\star}(Y\allowbreak{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ and ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ from Corollary 5.4.5 transform the right derived functor ${\mathbb{R}}f_{*}$ (65) into the left derived functor ${\mathbb{L}}f_{!}$ (69). ###### Proof. Can be either deduced from Corollary 4.11.6(c), or proven directly in the similar way using Lemma 4.11.3(d). ∎ Let $X$ be a locally Noetherian scheme with an open covering ${\mathbf{W}}$. As in Section 4.11, we denote by $X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d}$ the full subcategory of objects of injective dimension $\le d$ in $X{\operatorname{\mathsf{--qcoh}}}$ and by $X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}$ the full subcategory of objects of projective dimension $\le d$ in $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$. 
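For orientation (this is just the standard reformulation of the finite homological dimension conditions, not an additional hypothesis): a quasi-coherent sheaf ${\mathcal{M}}$ belongs to $X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d}$ if and only if it admits an injective resolution of length at most $d$ in $X{\operatorname{\mathsf{--qcoh}}}$,

```latex
0 \longrightarrow \mathcal{M} \longrightarrow \mathcal{I}^{0}
  \longrightarrow \mathcal{I}^{1} \longrightarrow \dotsb
  \longrightarrow \mathcal{I}^{d} \longrightarrow 0,
\qquad \mathcal{I}^{i} \in X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}},
```

and dually an object of $X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}$ admits a projective resolution of length at most $d$ in the exact category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$.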
For a Noetherian scheme $X$ of finite Krull dimension, let $X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}$ denote the full subcategory of objects of projective dimension $\le d$ in $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$. We set $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{{\operatorname{\mathsf{fpd--}}}d}=X{\operatorname{\mathsf{--lcth}}}_{\\{X\\},{{\operatorname{\mathsf{fpd--}}}d}}^{\mathsf{lct}}$ and $X{\operatorname{\mathsf{--ctrh}}}_{{\operatorname{\mathsf{fpd--}}}d}=X{\operatorname{\mathsf{--lcth}}}_{\\{X\\},{{\operatorname{\mathsf{fpd--}}}d}}$. Clearly, the projective dimension of an object of $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ or $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ does not change when the open covering ${\mathbf{W}}$ is replaced by its refinement. The full subcategory $X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d}\subset X{\operatorname{\mathsf{--qcoh}}}$ is closed under extensions, cokernels of admissible monomorphisms, and infinite direct sums. The full subcategory $X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ is closed under extensions, kernels of admissible epimorphisms, and infinite products (see Corollary 5.1.5). ###### Corollary 5.4.7. (a) Let $X$ be a locally Noetherian scheme. 
Then the natural triangulated functors $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\longrightarrow{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d})$, $\mathsf{Hot}^{\pm}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\longrightarrow{\mathsf{D}}^{{\mathsf{abs}}\pm}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d})\longrightarrow{\mathsf{D}}^{\pm}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d})$, and $\mathsf{Hot}^{\mathsf{b}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\longrightarrow{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}d})$ are equivalences of categories. (b) Let $X$ be a locally Noetherian scheme. Then the natural triangulated functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})$, $\mathsf{Hot}^{\pm}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{{\mathsf{abs}}\pm}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})\longrightarrow{\mathsf{D}}^{\pm}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})$, and 
$\mathsf{Hot}^{\mathsf{b}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})$ are equivalences of categories. (c) Let $X$ be a Noetherian scheme of finite Krull dimension. Then the natural triangulated functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})$, $\mathsf{Hot}^{\pm}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{{\mathsf{abs}}\pm}(X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})\longrightarrow{\mathsf{D}}^{\pm}(X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})$, and $\mathsf{Hot}^{\mathsf{b}}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\allowbreak\longrightarrow{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}})$ are equivalences of categories. ###### Proof. Part (a) follows from Corollary 4.11.1(a) and [50, Remark 2.1], while parts (b-c) follow from Proposition A.5.6 and the same Remark. ∎ A cosheaf of ${\mathcal{O}}_{X}$-modules ${\mathfrak{G}}$ on a scheme $X$ is said to have _${\mathbf{W}}$ -flat dimension not exceeding $d$_ if the flat dimension of the ${\mathcal{O}}_{X}(U)$-module ${\mathfrak{G}}[U]$ does not exceed $d$ for any affine open subscheme $U\subset X$ subordinate to ${\mathbf{W}}$. The flat dimension of a cosheaf of ${\mathcal{O}}_{X}$-modules is defined as its $\\{X\\}$-flat dimension. 
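In symbols, the ${\mathbf{W}}$-flat dimension condition on a cosheaf of ${\mathcal{O}}_{X}$-modules ${\mathfrak{G}}$ reads

```latex
\operatorname{fd}_{\mathcal{O}_{X}(U)} \mathfrak{G}[U] \,\le\, d
\quad \text{for every affine open subscheme } U \subset X
\text{ subordinate to } \mathbf{W},
```

where $\operatorname{fd}$ denotes the flat dimension of a module; taking ${\mathbf{W}}=\\{X\\}$ recovers the flat dimension of ${\mathfrak{G}}$.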
${\mathbf{W}}$-locally contraherent cosheaves of ${\mathbf{W}}$-flat dimension not exceeding $d$ on a locally Noetherian scheme $X$ form a full subcategory $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ closed under extensions, kernels of admissible epimorphisms, and infinite products. We set $X{\operatorname{\mathsf{--ctrh}}}^{{\operatorname{\mathsf{ffd--}}}d}=X{\operatorname{\mathsf{--lcth}}}_{\\{X\\}}^{{\operatorname{\mathsf{ffd--}}}d}$. The flat dimension of a contraherent cosheaf ${\mathfrak{G}}$ on an affine Noetherian scheme $U$ is equal to the flat dimension of the ${\mathcal{O}}_{X}(U)$-module ${\mathfrak{G}}[U]$ (see Section 3.7). Over a semi-separated Noetherian scheme $X$, a ${\mathbf{W}}$-locally contraherent cosheaf has ${\mathbf{W}}$-flat dimension $\le d$ if and only if it admits a left resolution of length $\le d$ by ${\mathbf{W}}$-flat ${\mathbf{W}}$-locally contraherent cosheaves (see Corollary 4.4.5(a)). Hence it follows from Corollary 5.2.2(b) (applied to affine open subschemes $U\subset X$) that the ${\mathbf{W}}$-flat dimension of a ${\mathbf{W}}$-locally contraherent cosheaf on a locally Noetherian scheme $X$ of finite Krull dimension does not change when the covering ${\mathbf{W}}$ is replaced by its refinement. According to part (a) of the same Corollary, on a semi-separated Noetherian scheme of finite Krull dimension the ${\mathbf{W}}$-flat dimension of a ${\mathbf{W}}$-locally contraherent cosheaf is equal to its colocally flat dimension; so $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d}=X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{clfd--}}}d}}$. 
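Spelled out, the resolution criterion just cited (Corollary 4.4.5(a)) asserts that, over a semi-separated Noetherian scheme $X$, a ${\mathbf{W}}$-locally contraherent cosheaf ${\mathfrak{G}}$ has ${\mathbf{W}}$-flat dimension $\le d$ if and only if there is an exact sequence in $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$

```latex
0 \longrightarrow \mathfrak{F}_{d} \longrightarrow \dotsb
  \longrightarrow \mathfrak{F}_{1} \longrightarrow \mathfrak{F}_{0}
  \longrightarrow \mathfrak{G} \longrightarrow 0
```

with all the ${\mathfrak{F}}_{i}$ being ${\mathbf{W}}$-flat ${\mathbf{W}}$-locally contraherent cosheaves.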
By Corollary 5.1.4, the ${\mathbf{W}}$-flat dimension of a locally cotorsion ${\mathbf{W}}$-locally contraherent cosheaf on a locally Noetherian scheme $X$ coincides with its projective dimension in $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ (and also does not depend on ${\mathbf{W}}$). So one has $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d}\cap X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}=X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}$. ###### Lemma 5.4.8. Let $X$ be a Noetherian scheme of finite Krull dimension $D$. Then a ${\mathbf{W}}$-locally contraherent cosheaf on $X$ has finite projective dimension in the exact category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ if and only if it has finite ${\mathbf{W}}$-flat dimension. More precisely, the inclusions of full subcategories $X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d}\subset X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}(d+D)}}$ hold in the category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$. ###### Proof. The inclusion $X{\operatorname{\mathsf{--lcth}}}_{{\mathbf{W}},\,{{\operatorname{\mathsf{fpd--}}}d}}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d}$ holds due to Corollary 5.2.4. Conversely, by the same Corollary any ${\mathbf{W}}$-locally contraherent cosheaf ${\mathfrak{M}}$ on $X$ has a left resolution by flat contraherent cosheaves, so the ${\mathbf{W}}$-flat dimension of ${\mathfrak{M}}$ is equal to its left homological dimension with respect to $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ (see Corollary 5.2.2(b)). It remains to apply the last assertion of Corollary 5.2.6(b). 
∎ ###### Corollary 5.4.9. For any Noetherian scheme $X$ of finite Krull dimension and any (finite) integer $d\ge 0$, the natural triangulated functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d})$, $\mathsf{Hot}^{\pm}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{{\mathsf{abs}}\pm}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d})\longrightarrow{\mathsf{D}}^{\pm}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d})$, and $\mathsf{Hot}^{\mathsf{b}}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d})$ are equivalences of triangulated categories. ###### Proof. It is clear from Lemma 5.4.8 that the homological dimension of the exact category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}d}$ is finite, so it remains to apply [50, Remark 2.1] (to obtain the equivalences between various derived categories of this exact category) and Proposition A.5.6 (to identify the absolute derived categories with the homotopy categories of projective objects). Alternatively, one can use [51, Theorem 3.6 and Remark 3.6]. ∎ The following theorem is the main result of this section. ###### Theorem 5.4.10. (a) For any locally Noetherian scheme $X$, the natural functor $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\allowbreak\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ is an equivalence of triangulated categories. 
(b) For any locally Noetherian scheme $X$, the natural functor $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}}^{\mathsf{lct}})\allowbreak\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ is an equivalence of triangulated categories. (c) For any semi-separated Noetherian scheme $X$, the natural functor ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})\allowbreak\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is an equivalence of triangulated categories. (d) For any Noetherian scheme $X$ of finite Krull dimension, the natural functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ are equivalences of triangulated categories. ###### Proof. Part (a) is a standard result (see, e. g., [53, Lemma 1.7(b)]) which is a particular case of [51, Theorem 3.7 and Remark 3.7] and can be also obtained from the dual version of Proposition A.3.1(b). The key observation is that there are enough injectives in $X{\operatorname{\mathsf{--qcoh}}}$ and the full subcategory $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ they form is closed under infinite direct sums. Similarly, part (b) can be obtained either from Proposition A.3.1(b), or from the dual version of [51, Theorem 3.7 and Remark 3.7] (see also [51, Section 3.8]). In any case the argument is based on Theorem 5.1.1(a) and Corollary 5.1.5. Part (c) holds by Proposition A.3.1(b) together with Lemma 4.3.3, 4.4.1(a), or 4.4.3(a). 
Finally, in part (d) the functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X\allowbreak{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})$ are equivalences of categories by Corollary 5.4.9, and the functor ${\mathsf{D}}^{\mathsf{ctr}}(X\allowbreak{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is an equivalence of categories by Proposition A.3.1(b) together with Corollary 5.2.4. A direct proof of the equivalence $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is also possible; it proceeds along the following lines. One has to use the more advanced features of the results of [51, Sections 3.7–3.8] involving the full generality of the conditions (${*}$)–(${*}{*}$). Alternatively, one can apply the more general Corollary A.6.2. Specifically, let $X=\bigcup_{\alpha}U_{\alpha}$ be a finite affine open covering; then it follows from Corollary 5.2.4(b) that an infinite product of projective contraherent cosheaves on $X$ is a direct summand of a direct sum over $\alpha$ of the direct images of contraherent cosheaves on $U_{\alpha}$ corresponding to infinite products of very flat contraadjusted ${\mathcal{O}}(U_{\alpha})$-modules. Infinite products of such modules may not be very flat, but they are certainly flat and contraadjusted. By the last assertion of Corollary 5.2.6(b), one can conclude that the projective dimensions of infinite products of projective objects in $X{\operatorname{\mathsf{--ctrh}}}$ do not exceed the Krull dimension $D$ of the scheme $X$. 
So the contraherent cosheaf analogue of the condition (${*}{*}$) holds for $X{\operatorname{\mathsf{--ctrh}}}$, or in other words, the assumption of Corollary A.6.2 is satisfied by the pair of exact categories $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}}\subset X{\operatorname{\mathsf{--ctrh}}}$. ∎ The following corollary is to be compared with Corollaries 5.2.8(b) and 5.3.3(b). ###### Corollary 5.4.11. For any locally Noetherian scheme $X$ with an open covering ${\mathbf{W}}$, the triangulated functor ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ induced by the embedding of exact categories $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}\longrightarrow X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ is an equivalence of triangulated categories. ###### Proof. Follows from Theorem 5.4.10(b) applied to the coverings $\\{X\\}$ and ${\mathbf{W}}$ of the scheme $X$. Alternatively, one can apply directly Proposition A.3.1(b) together with Theorem 5.1.1(a). ∎ ### 5.5. Co-contra correspondence over a regular scheme Let $X$ be a regular semi-separated Noetherian scheme of finite Krull dimension. ###### Theorem 5.5.1. (a) The triangulated functor ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ induced by the embedding of exact categories $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}}\longrightarrow X{\operatorname{\mathsf{--qcoh}}}$ is an equivalence of triangulated categories. 
(b) The triangulated functor ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}})$ induced by the embedding of exact categories $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}}\longrightarrow X{\operatorname{\mathsf{--ctrh}}}$ is an equivalence of triangulated categories. (c) There is a natural equivalence of triangulated categories ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}})$ provided by the derived functors ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{O}}_{X},{-})$ and ${\mathcal{O}}_{X}\odot_{X}^{\mathbb{L}}{-}$. ###### Proof. Part (a) actually holds for any symbol $\star\neq{\mathsf{ctr}}$ in the upper indices of the derived category signs, and is a particular case of Corollary 4.9.1(a). Indeed, one has $X{\operatorname{\mathsf{--qcoh}}}=X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{ffd--}}}d}$ provided that $d$ is greater than or equal to the Krull dimension of $X$. Similarly, part (b) actually holds for any symbol $\star\neq{\mathsf{co}}$ in the upper indices, and is a particular case of Corollary 4.9.1(c). Indeed, one has $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}=X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{flid--}}}d}$ provided that $d$ is greater than or equal to the Krull dimension of $X$. 
To prove part (c), notice that all the triangulated functors ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ and ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ are equivalences of categories by Corollary 4.9.5 (since one also has $X{\operatorname{\mathsf{--qcoh}}}=X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fvfd--}}}d}$ provided that $d$ is greater than or equal to the Krull dimension of $X$). So it remains to apply Theorem 4.6.6. ∎ ### 5.6. Co-contra correspondence over a Gorenstein scheme Let $X$ be a Gorenstein semi-separated Noetherian scheme of finite Krull dimension. We will use the following formulation of the Gorenstein condition: for any affine open subscheme $U\subset X$, the classes of ${\mathcal{O}}_{X}(U)$-modules of finite flat dimension, of finite projective dimension, and of finite injective dimension coincide. Notice that neither of these dimensions can exceed the Krull dimension $D$ of the scheme $X$. Accordingly, the class of ${\mathcal{O}}_{X}(U)$-modules defined by the above finite homological dimension conditions is closed under both infinite direct sums and infinite products. It is also closed under extensions and the passages to the cokernels of embeddings and the kernels of surjections. Moreover, since the injectivity of a quasi-coherent sheaf on a Noetherian scheme is a local property, the full subcategories of quasi-coherent sheaves of finite flat dimension and of finite injective dimension coincide in $X{\operatorname{\mathsf{--qcoh}}}$. 
Similarly, the full subcategories of locally contraherent cosheaves of finite flat dimension and of finite locally injective dimension coincide in $X{\operatorname{\mathsf{--lcth}}}$. Neither of these dimensions can exceed $D$. ###### Theorem 5.6.1. (a) The triangulated functors ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})\allowbreak\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ induced by the embeddings of exact categories $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}}\longrightarrow X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}}\longrightarrow X{\operatorname{\mathsf{--qcoh}}}$ are equivalences of triangulated categories. (b) The triangulated functors ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{flid}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}})$ induced by the embeddings of exact categories $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}}\longrightarrow X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{flid}}\longrightarrow X{\operatorname{\mathsf{--ctrh}}}$ are equivalences of triangulated categories. (c) There is a natural equivalence of triangulated categories ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{flid}})$ provided by the derived functors ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{O}}_{X},{-})$ and ${\mathcal{O}}_{X}\odot_{X}^{\mathbb{L}}{-}$. ###### Proof. 
Parts (a-b): by Corollary 4.9.1(a,c), the functors ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}^{\star}(X\allowbreak{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})$ are equivalences of categories for any symbol $\star\neq{\mathsf{ctr}}$ and the functors ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})\longrightarrow{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})$ are equivalences of categories for any symbol $\star\neq{\mathsf{co}}$. To prove that the functor ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ is an equivalence of categories, notice that one has $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}\subset X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}D}=X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}}$ and the functor $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}D})$ is an equivalence of categories by Corollary 5.4.7(a), while the composition $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{{\operatorname{\mathsf{fid--}}}D})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ is an equivalence of categories by Theorem 5.4.10(a). 
Similarly, to prove that the functor ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is an equivalence of categories, notice that one has $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}D}=X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}}$ and the functor $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}D})$ is an equivalence of categories by Corollary 5.4.9, while the composition $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{{\operatorname{\mathsf{ffd--}}}D})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is an equivalence of categories by Theorem 5.4.10(d). To prove part (c), notice that the functors ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})$ are equivalences of categories by Corollary 5.4.2, while the functors ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})$ are equivalences of categories by Corollary 4.9.5(b). 
Furthermore, consider the intersections $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cta}}\cap X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}}$ and $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}\cap X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}}$. As was explained in Section 4.10, the functor ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cta}}\cap X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})\allowbreak\longrightarrow{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}})$ is an equivalence of triangulated categories for any $\star\neq{\mathsf{ctr}}$, while the functor ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}\cap X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})\longrightarrow{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{flid}})$ is an equivalence of triangulated categories for any $\star\neq{\mathsf{co}}$. Finally, it is clear from Lemma 4.10.2(a,d) (see also Lemma 4.11.3) that the equivalence of exact categories $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cta}}\simeq X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}$ of Lemma 4.6.7 identifies their full exact subcategories $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cta}}\cap X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{ffd}}$ and $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}\cap X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{flid}}$. So the induced equivalence of the derived categories ${\mathsf{D}}^{\mathsf{abs}}$ or ${\mathsf{D}}$ provides the desired equivalence of triangulated categories in part (c). ∎ ### 5.7. Co-contra correspondence over a scheme with a dualizing complex Let $X$ be a semi-separated Noetherian scheme with a dualizing complex ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ [29], which we will view as a finite complex of injective quasi-coherent sheaves on $X$. 
The following result complements the covariant Serre–Grothendieck duality theory as developed in the papers and the thesis [33, 47, 42, 53]. ###### Theorem 5.7.1. There are natural equivalences between the four triangulated categories ${\mathsf{D}}^{{\mathsf{abs}}={\mathsf{co}}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$, ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$, ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}})$, and ${\mathsf{D}}^{{\mathsf{abs}}={\mathsf{ctr}}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}})$. (Here the notation ${\mathsf{abs}}={\mathsf{co}}$ and ${\mathsf{abs}}={\mathsf{ctr}}$ presumes the assertions that the corresponding derived categories of the second kind coincide for the exact category in question.) Among these, the equivalences ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}})$ and ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}})$ do not require a dualizing complex and do not depend on it; all the remaining equivalences do. ###### Proof. For any quasi-compact semi-separated scheme $X$ with an open covering ${\mathbf{W}}$, one has ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})={\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})$ by Corollary 4.9.5(b). For any Noetherian scheme $X$ of finite Krull dimension, one has ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})={\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ by Corollary 5.4.2. 
For any semi-separated Noetherian scheme $X$, one has ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\simeq\mathsf{Hot}(X\allowbreak{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})$ by Theorem 5.4.10(a) and $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\simeq{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})$ by Corollary 4.6.8(b). Hence the desired equivalence ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})$, which is provided by the derived functors ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{O}}_{X},{-})\colon{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})$ and ${\mathcal{O}}_{X}\odot_{X}^{\mathbb{L}}{-}\colon{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}).$ For any semi-separated Noetherian scheme $X$ of finite Krull dimension, one has ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ by Corollary 5.4.5, $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ by Theorem 5.4.10(b), and ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ by Corollary 5.4.4(b). 
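The two chains of equivalences just cited compose as follows; this display is only a schematic recap of the references above, not an additional argument:

```latex
% For a semi-separated Noetherian scheme X
% (Theorem 5.4.10(a) and Corollary 4.6.8(b)):
{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})
  \,\simeq\, \mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})
  \,\simeq\, {\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})
% For X additionally of finite Krull dimension
% (Corollary 5.4.5, Theorem 5.4.10(b), and Corollary 5.4.4(b)):
{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})
  \,\simeq\, \mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})
  \,\simeq\, {\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})
  \,\simeq\, {\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})
```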
Alternatively, one can refer to the equivalence ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{vfl}})$ holding by Corollary 5.4.3, ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{vfl}})\simeq\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})$ by Corollary 4.6.10(a), and $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ by Theorem 5.4.10(d). Either way, one gets the same desired equivalence ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$, which is provided by the derived functors ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{O}}_{X},{-})\colon{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ and ${\mathcal{O}}_{X}\odot_{X}^{\mathbb{L}}{-}\colon{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu{\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}}).$ Now we are going to construct a commutative diagram of equivalences of triangulated categories for any symbol $\star={\mathsf{b}}$, ${\mathsf{abs}}+$, ${\mathsf{abs}}-$, or ${\mathsf{abs}}$. The exterior vertical functors are constructed by applying the additive functors ${\mathcal{O}}_{X}\odot_{X}{-}$ and $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{O}}_{X},{-})$ to the given complexes termwise. The interior (derived) vertical functors have been defined in Corollaries 4.6.8(b) and 5.4.5. 
All the functors invoking the dualizing complex ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ are constructed by applying the respective exact functors of two arguments to ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ and the given unbounded complex termwise and totalizing the bicomplexes so obtained. First of all, one notices that the functors in the interior upper triangle are right adjoint to the ones in the exterior. This follows from the adjunction (20) together with the adjunction of the tensor product of quasi-coherent sheaves and the quasi-coherent internal Hom. The upper horizontal functors ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{{\mathcal{O}}_{X}}{-}$ and $\operatorname{\mathcal{H}\mskip-0.90001mu\text{om}}_{X{\operatorname{\mathrm{-qc}}}}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ are mutually inverse for the reasons explained in [42, Theorem 8.4 and Proposition 8.9] and [53, Theorem 2.5]. 
The argument in [53] is based on the observations that the morphism of finite complexes of flat quasi-coherent sheaves ${\mathcal{F}}\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu\operatorname{\mathcal{H}\mskip-0.90001mu\text{om}}_{X{\operatorname{\mathrm{-qc}}}}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{{\mathcal{O}}_{X}}{\mathcal{F}})$ is a quasi-isomorphism for any sheaf ${\mathcal{F}}\in X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}}$ and the morphism of finite complexes of injective quasi-coherent sheaves ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{{\mathcal{O}}_{X}}\operatorname{\mathcal{H}\mskip-0.90001mu\text{om}}_{X{\operatorname{\mathrm{-qc}}}}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}})\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu{\mathcal{J}}$ is a quasi-isomorphism for any sheaf ${\mathcal{J}}\in X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$. Let us additionally point out that, according to Lemma 2.5.3(c) and [42, Lemma 8.7], the complex $\operatorname{\mathcal{H}\mskip-0.90001mu\text{om}}_{X{\operatorname{\mathrm{-qc}}}}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is a complex of flat cotorsion quasi-coherent sheaves for any complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$. 
So the functor $\operatorname{\mathcal{H}\mskip-0.90001mu\text{om}}_{X{\operatorname{\mathrm{-qc}}}}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ actually lands in $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cot}}\cap X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ (as does the functor ${\mathcal{O}}_{X}\odot_{X}{-}$ on the diagram, according to the proof of Corollary 5.4.5). The interior upper triangle is commutative due to the natural isomorphism (17). The exterior upper triangle is commutative due to the natural isomorphism (21). In order to discuss the equivalence of categories in the lower horizontal line, we will need the following lemma. It is based on the definitions of the $\operatorname{\mathfrak{Cohom}}$ functor in Section 3.6 and the contraherent tensor product functor $\otimes_{X{\operatorname{\mathrm{-ct}}}}$ in Section 3.7. ###### Lemma 5.7.2. Let ${\mathcal{J}}$ be an injective quasi-coherent sheaf on a semi-separated Noetherian scheme $X$ with an open covering ${\mathbf{W}}$. Then there are two well-defined exact functors $\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{J}},{-})\colon X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5muX{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clf}}$ and ${\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{-}\colon X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clf}}\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5muX{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}$ between the exact categories $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}$ and $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clf}}$ of locally injective ${\mathbf{W}}$-locally contraherent cosheaves and colocally flat contraherent cosheaves on $X$. 
The functor ${\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{-}$ is left adjoint to the functor $\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{J}},{-})$. Besides, the functor $\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{J}},{-})$ takes values in the additive subcategory $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}\subset X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clf}}$, while the functor ${\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{-}$ takes values in the additive subcategory $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lin}}_{\mathsf{clp}}\subset X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}$. For any quasi-coherent sheaf ${\mathcal{M}}$ and any colocally flat contraherent cosheaf ${\mathfrak{F}}$ on $X$ there is a natural isomorphism (84) ${\mathcal{M}}\odot_{X}({\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}})\simeq({\mathcal{M}}\otimes_{{\mathcal{O}}_{X}}{\mathcal{J}})\odot_{X}{\mathfrak{F}}$ of quasi-coherent sheaves on $X$. ###### Proof. Let us show that the locally cotorsion ${\mathbf{W}}$-locally contraherent cosheaf $\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{J}},{\mathfrak{K}})$ is projective for any locally injective ${\mathbf{W}}$-locally contraherent cosheaf ${\mathfrak{K}}$ on $X$. Indeed, ${\mathcal{J}}$ is a direct summand of a finite direct sum of the direct images of injective quasi-coherent sheaves ${\mathcal{I}}$ from the embeddings of affine open subschemes $j\colon U\longrightarrow X$ subordinate to ${\mathbf{W}}$. So it suffices to consider the case of ${\mathcal{J}}=j_{*}{\mathcal{I}}$. According to (40), there is a natural isomorphism of locally cotorsion (${\mathbf{W}}$-locally) contraherent cosheaves $\operatorname{\mathfrak{Cohom}}_{X}(j_{*}{\mathcal{I}},{\mathfrak{K}})\simeq j_{!}\operatorname{\mathfrak{Cohom}}_{U}({\mathcal{I}},j^{!}{\mathfrak{K}})$ on $X$. 
The ${\mathcal{O}}(U)$-modules ${\mathcal{I}}(U)$ and ${\mathfrak{K}}[U]$ are injective, so $\operatorname{Hom}_{{\mathcal{O}}(U)}({\mathcal{I}}(U),{\mathfrak{K}}[U])$ is a flat cotorsion ${\mathcal{O}}(U)$-module. In other words, the locally cotorsion contraherent cosheaf $\operatorname{\mathfrak{Cohom}}_{U}({\mathcal{I}},j^{!}{\mathfrak{K}})$ is projective on $U$, and therefore its direct image with respect to $j$ is projective on $X$ (see Lemma 4.4.3(b) or Corollary 4.4.7(b)). Now let ${\mathfrak{F}}$ be a colocally flat contraherent cosheaf on $X$. Then, in particular, ${\mathfrak{F}}$ is a flat contraherent cosheaf (Corollary 4.3.6), so the tensor product ${\mathcal{J}}\otimes_{X}{\mathfrak{F}}$ is a locally injective derived contrahereable cosheaf on $X$. Moreover, by Corollary 4.3.4(c), ${\mathfrak{F}}$ is a direct summand of a finitely iterated extension of the direct images of flat contraherent cosheaves from affine open subschemes of $X$. It was explained in Section 3.5 that derived contrahereable cosheaves on affine schemes are contrahereable and the direct images of cosheaves with respect to affine morphisms preserve contrahereability. Besides, the full subcategory of contrahereable cosheaves on $X$ is closed under extensions in the exact category of derived contrahereable cosheaves, and the functor ${\mathcal{J}}\otimes_{X}{-}$ takes short exact sequences of flat contraherent cosheaves to short exact sequences of derived contrahereable cosheaves on $X$ (see Section 3.7). So it follows from the isomorphism (48) that ${\mathcal{J}}\otimes_{X}{\mathfrak{F}}$ is a locally injective contrahereable cosheaf. Its contraherator ${\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}}=\operatorname{\mathfrak{C}}({\mathcal{J}}\otimes_{X}{\mathfrak{F}})$ is consequently a locally injective contraherent cosheaf on $X$. 
Furthermore, according to Section 3.5 the (global) contraherator construction is an exact functor commuting with the direct images with respect to affine morphisms. Hence the contraherent cosheaf ${\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}}$ is a direct summand of a finitely iterated extension of the direct images of (locally) injective contraherent cosheaves from affine open subschemes of $X$, i. e., ${\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}}$ is a colocally projective locally injective contraherent cosheaf. We have constructed the desired exact functors. A combination of the adjunction isomorphisms (35) and (30) makes them adjoint to each other. Finally, for any ${\mathcal{M}}\in X{\operatorname{\mathsf{--qcoh}}}$ and ${\mathfrak{F}}\in X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clf}}$ one has ${\mathcal{M}}\odot_{X}({\mathcal{J}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}})={\mathcal{M}}\odot_{X}\operatorname{\mathfrak{C}}({\mathcal{J}}\otimes_{X}{\mathfrak{F}})\simeq{\mathcal{M}}\odot_{X}({\mathcal{J}}\otimes_{X}{\mathfrak{F}})\simeq({\mathcal{M}}\otimes_{{\mathcal{O}}_{X}}{\mathcal{J}})\odot_{X}{\mathfrak{F}}$ according to the isomorphisms (32) and (36). ∎ Now we can return to the proof of Theorem 5.7.1. The functors in the interior lower triangle are left adjoint to the ones in the exterior, as it follows from the adjunction (20) and Lemma 5.7.2. Let us show that the lower horizontal functors are mutually inverse. According to the proof of Corollary 4.6.8(b), the functor $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}^{\mathsf{lin}})\longrightarrow{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}})$ induced by the embedding $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}^{\mathsf{lin}}\longrightarrow X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}$ is an equivalence of triangulated categories. 
Therefore, it suffices to show that for any cosheaf ${\mathfrak{J}}\in X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}^{\mathsf{lin}}$ the morphism of complexes of contraherent cosheaves ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{J}})\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu{\mathfrak{J}}$ is a homotopy equivalence (or just a quasi-isomorphism in $X{\operatorname{\mathsf{--ctrh}}}$), and for any cosheaf ${\mathfrak{P}}\in X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ the morphism of complexes of contraherent cosheaves ${\mathfrak{P}}\mskip 1.5mu\relbar\joinrel\relbar\joinrel\rightarrow\mskip 1.5mu\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{P}})$ is a homotopy equivalence (or just a quasi-isomorphism in $X{\operatorname{\mathsf{--ctrh}}}$). According to Corollaries 4.2.8 and 4.4.3, any object of $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}^{\mathsf{lin}}$ or $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ is a direct summand of a finite direct sum of direct images of objects in the similar categories on affine open subschemes of $X$. According to (43) and (48) together with the results of Section 3.5, both functors $\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ and ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{-}$ commute with such direct images. 
So the question reduces to the case of an affine scheme $U$, for which the distinction between quasi-coherent sheaves and contraherent cosheaves mostly loses its significance, as both are identified with (appropriate classes of) ${\mathcal{O}}(U)$-modules. For this reason, the desired quasi-isomorphisms follow from the similar quasi-isomorphisms for quasi-coherent sheaves obtained in [53, proof of Theorem 2.5] (as quoted above). Alternatively, one can argue in a way similar to the proof in [53]. Essentially, this means using an “inverse image localization” procedure in place of the “direct image localization” above. The argument proceeds as follows. Let ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ be a quasi-isomorphism between a finite complex ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of coherent sheaves over $X$ and the complex of injective quasi-coherent sheaves ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$. Then the tensor product ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}}$ is a finite complex of contraherent cosheaves for any flat contraherent cosheaf ${\mathfrak{F}}$ on $X$. 
Furthermore, the morphism ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is a quasi-isomorphism of finite complexes over the exact category of coadjusted quasi-coherent sheaves $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{coa}}$ on $X$, hence the induced morphism ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}}\longrightarrow{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}}$ is a quasi-isomorphism of finite complexes over the exact category of derived contrahereable cosheaves on $X$. It follows that the morphism ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}}\simeq{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}}\longrightarrow{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}}$ is a quasi-isomorphism of finite complexes of contraherent cosheaves on $X$ for any flat contraherent cosheaf ${\mathfrak{F}}$. Let ${\mathfrak{J}}$ be a cosheaf from $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}$. 
In order to show that the morphism of finite complexes ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{J}})\longrightarrow{\mathfrak{J}}$ is a quasi-isomorphism over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lin}}$, it suffices to check that the composition ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{J}})\longrightarrow{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{J}})\longrightarrow{\mathfrak{J}}$ is a quasi-isomorphism over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$. The latter assertion can be checked locally, i. e., it simply means that for any affine open subscheme $U\subset X$ subordinate to ${\mathbf{W}}$ the morphism ${}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U)\otimes_{{\mathcal{O}}_{X}(U)}\operatorname{Hom}_{{\mathcal{O}}_{X}(U)}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U),{\mathfrak{J}}[U])\longrightarrow{\mathfrak{J}}[U]$ is a quasi-isomorphism of complexes of ${\mathcal{O}}_{X}(U)$-modules. This can be deduced from the condition that the morphism ${\mathcal{O}}_{X}(U)\longrightarrow\operatorname{Hom}_{{\mathcal{O}}_{X}(U)}({}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U),{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U))$ is a quasi-isomorphism, as explained in the proof in [53]. Let ${\mathfrak{F}}$ be a flat contraherent cosheaf on $X$. 
Pick a bounded above complex of very flat quasi-coherent sheaves ${}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X$ together with a quasi-isomorphism ${}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$. Then the bounded below complex of contraherent cosheaves $\operatorname{\mathfrak{Cohom}}_{X}({}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}})$ is well-defined. The morphisms of bounded below complexes $\operatorname{\mathfrak{Cohom}}_{X}({}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}})\longrightarrow\operatorname{\mathfrak{Cohom}}_{X}({}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}})$ and $\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>\allowbreak{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}})\longrightarrow\operatorname{\mathfrak{Cohom}}_{X}({}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}})$ are quasi-isomorphisms over $X{\operatorname{\mathsf{--ctrh}}}$. 
Thus in order to show that the morphism ${\mathfrak{F}}\longrightarrow\operatorname{\mathfrak{Cohom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{\mathfrak{F}})$ is a quasi-isomorphism in $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$, it suffices to check that the morphism ${\mathfrak{F}}\longrightarrow\operatorname{\mathfrak{Cohom}}_{X}({}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>\allowbreak{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X}{\mathfrak{F}})$ is a quasi-isomorphism of bounded below complexes over $X{\operatorname{\mathsf{--ctrh}}}$. The latter is again a local assertion, meaning simply that the morphism ${\mathfrak{F}}[U]\longrightarrow\operatorname{Hom}_{{\mathcal{O}}_{X}(U)}({}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U),\>{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U)\otimes_{{\mathcal{O}}_{X}(U)}{\mathfrak{F}}[U])$ is a quasi-isomorphism of complexes of ${\mathcal{O}}_{X}(U)$-modules for any affine open subscheme $U\subset X$. One proves it by replacing ${}^{\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U)$ by a quasi-isomorphic bounded above complex ${}^{\prime\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U)$ of finitely generated projective ${\mathcal{O}}_{X}(U)$-modules, and reducing again to the condition that the morphism ${\mathcal{O}}_{X}(U)\longrightarrow\operatorname{Hom}_{{\mathcal{O}}_{X}(U)}({}^{\prime\prime\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U),{}^{\prime}{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}(U))$ is a quasi-isomorphism (cf. [53]). 
According to the proof of Corollary 4.6.8(b), the functor $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{O}}_{X},{-})$ on the diagram actually lands in $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{clp}}^{\mathsf{lin}})$ (as does the functor ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\otimes_{X{\operatorname{\mathrm{-ct}}}}{-}$, according to Lemma 5.7.2). The exterior lower triangle is commutative due to the natural isomorphism (19). The interior lower triangle is commutative due to the natural isomorphism (84). The assertion that the two diagonal functors on the diagram are mutually inverse follows from the above. It can also be proven directly, in the manner of the former of the two proofs above of the assertion that the two lower horizontal functors are mutually inverse. One needs to use the natural isomorphisms (45) and (46) for commutation with the direct images. ∎ ### 5.8. Co-contra correspondence over a non-semi-separated scheme The goal of this section is to obtain partial generalizations of Theorems 4.6.6 and 5.7.1 to the case of a non-semi-separated Noetherian scheme. ###### Theorem 5.8.1. Let $X$ be a Noetherian scheme of finite Krull dimension. Then for any symbol $\star={\mathsf{b}}$, $+$, $-$, or $\varnothing$ there is a natural equivalence of triangulated categories ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--ctrh}}})$. ###### Proof. 
According to Corollaries 5.3.3 and 5.4.4, one has ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--ctrh}}})\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\allowbreak\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}})\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}})$ for any open covering ${\mathbf{W}}$ of the scheme $X$. We will construct an equivalence of triangulated categories ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})$ and then show that it takes the full subcategories ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--qcoh}}})\subset{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ into the full subcategories ${\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})\subset{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})$ and back for all symbols $\star={\mathsf{b}}$, $+$, or $-$. By Lemma 3.4.7(a), the sheaf ${\mathcal{O}}_{X}$ has a finite right resolution by flasque quasi-coherent sheaves. We fix such a resolution ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ for the time being. Given a complex ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}$, we pick a complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ quasi-isomorphic to ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}$ (see Theorem 5.4.10(a), cf. 
Theorem 5.10.2 below) and assign to ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ the total complex of the bicomplex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$. Given a complex ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}$, we pick a complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ quasi-isomorphic to ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}$ (see Theorem 5.4.10(b), cf. Theorem 5.10.3(a) below) and assign to ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ the total complex of the bicomplex ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}$. Let us first show that the complex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ is acyclic whenever a complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ is. For any scheme point $x\in X$, let ${\mathfrak{m}}_{x,X}$ denote the maximal ideal of the local ring ${\mathcal{O}}_{x,X}$. 
By [29, Proposition II.7.17], any injective quasi- coherent sheaf ${\mathcal{I}}$ on $X$ can be presented as an infinite direct sum ${\mathcal{I}}=\bigoplus_{x\in X}\iota_{x}{}_{*}\widetilde{I}_{x}$, where $\iota_{x}\colon\operatorname{Spec}{\mathcal{O}}_{x,X}\longrightarrow X$ are the natural morphisms and $\widetilde{I}_{x}$ are the quasi-coherent sheaves on $\operatorname{Spec}{\mathcal{O}}_{x,X}$ corresponding to infinite direct sums of copies of the injective envelopes of the ${\mathcal{O}}_{x,X}$-modules ${\mathcal{O}}_{x,X}/{\mathfrak{m}}_{x,X}$. Let $X=\bigcup_{\alpha=1}^{N}U_{\alpha}$ be a finite affine open covering. Set $S_{\beta}\subset X$ to be the set-theoretic complement to $\bigcup_{\alpha<\beta}U_{\alpha}$ in $U_{\beta}$, and consider the direct sum decomposition ${\mathcal{I}}=\bigoplus_{\alpha=1}^{N}{\mathcal{I}}_{\alpha}$ with ${\mathcal{I}}_{\alpha}=\bigoplus_{z\in S_{\alpha}}\iota_{z}{}_{*}\widetilde{I}_{z}$. The associated decreasing filtration ${\mathcal{I}}_{\ge\alpha}=\bigoplus_{\beta\ge\alpha}{\mathcal{I}}_{\beta}$ is preserved by all morphisms of injective quasi-coherent sheaves ${\mathcal{I}}$ on $X$ (cf. Theorem 5.1.1 and Lemma 5.1.2). We obtain a termwise split filtration ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}_{\ge\alpha}$ on the complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ with the associated quotient complexes ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}_{\alpha}$ isomorphic to the direct images $j_{\alpha}{}_{*}{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of complexes of injective quasi-coherent sheaves ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ from the open embeddings $j_{\alpha}\colon U_{\alpha}\longrightarrow X$. 
Moreover, for $\alpha=1$ the complex of quasi-coherent sheaves ${\mathcal{K}}_{1}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\simeq j_{1}^{*}{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is acyclic, since the complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is; and the complex $j_{1}{}_{*}{\mathcal{K}}_{1}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is acyclic by Corollary 3.4.9(a) or Lemma 3.4.7(a). It follows by induction that all the complexes ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ are acyclic over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}$. Now one has $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},j_{\alpha}{}_{*}{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\simeq j_{\alpha}{}_{!}\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ by (45). 
The complex $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ is quasi-isomorphic to $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}({\mathcal{O}}_{U_{\alpha}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$, since ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is a complex over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$, while the complex $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}({\mathcal{O}}_{U_{\alpha}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$, since the complex ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is acyclic over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cot}}$ (see Corollary 1.5.7 or Lemma 5.4.1(c)). So the complex $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$; by Lemma 3.4.6(c), it is also a complex of coflasque contraherent cosheaves. By Corollary 3.4.9(c), or alternatively by Lemma 3.4.7(b) together with Corollary 3.4.8(b), it follows that the complex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},j_{\alpha}{}_{*}{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$. Therefore, so is the complex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$. 
Similarly one proves that the complex ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is acyclic over $X{\operatorname{\mathsf{--qcoh}}}$ whenever a complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ is acyclic over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$. One has to use Theorem 5.1.1 and Lemma 5.1.2 (see the proof of Theorem 5.9.1(c) below), the isomorphism (47), and Lemma 3.4.6(d). We have shown that the derived functors ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ and ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}^{\mathbb{L}}{-}$ are well defined by the above rules ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longmapsto\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ and ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longmapsto{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$. It is a standard fact that the adjunction (20) makes these two triangulated functors adjoint to each other (cf. [50, Lemma 8.3]). Let us check that the adjunction morphism ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}^{\mathbb{L}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\longrightarrow{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is an isomorphism in ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ for any complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$. 
For the reasons explained above, one can assume ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}=j_{\alpha}{}_{*}{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ for some complex ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}$. Then $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\simeq j_{\alpha}{}_{!}\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$. Let ${\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ be a complex over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ endowed with a quasi-isomorphism ${\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$. Then $j_{\alpha}{}_{!}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is a complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$, and the morphism $j_{\alpha}{}_{!}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow j_{\alpha}{}_{!}\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is a quasi-isomorphism over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$. 
So one has ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}^{\mathbb{L}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\simeq{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}j_{\alpha}{}_{!}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\simeq j_{\alpha}{}_{*}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{U_{\alpha}}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$. Both ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ and $j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{U_{\alpha}}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ being complexes of flasque quasi-coherent sheaves on $U_{\alpha}$, it remains to show that the natural morphism $j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{U_{\alpha}}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is a quasi-isomorphism over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}$. 
Now the morphisms ${\mathcal{O}}_{U_{\alpha}}\odot_{U_{\alpha}}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{U_{\alpha}}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ and ${\mathcal{O}}_{U_{\alpha}}\odot_{U_{\alpha}}{\mathfrak{G}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathcal{O}}_{U_{\alpha}}\odot_{U_{\alpha}}\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\longrightarrow{\mathcal{O}}_{U_{\alpha}}\odot_{U_{\alpha}}\operatorname{\mathfrak{Hom}}_{U_{\alpha}}({\mathcal{O}}_{U_{\alpha}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\longrightarrow{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ are quasi-isomorphisms, and the desired assertion follows. Similarly one shows that the adjunction morphism ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{E}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\longrightarrow{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is an isomorphism in ${\mathsf{D}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}})$ for any complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$. This finishes the construction of the equivalence of categories ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})$. 
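To keep track of the stage reached at this point, the equivalence just constructed is realized by the pair of adjoint triangulated functors defined above; the display below merely restates these rules and introduces nothing new:

```latex
\mathbb{R}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\bullet},{-})
  \,\colon\,
  {\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})
  \;\simeq\;
  {\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})
  \,:\!
  {\mathcal{E}}^{\bullet}\odot_{X}^{\mathbb{L}}{-},
```

where the functor ${\mathbb{R}}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\bullet},{-})$ is computed on a complex of injective quasi-coherent sheaves and the functor ${\mathcal{E}}^{\bullet}\odot_{X}^{\mathbb{L}}{-}$ on a complex of projective locally cotorsion contraherent cosheaves.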
To show that it does not depend on the choice of a flasque resolution ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of the sheaf ${\mathcal{O}}_{X}$, consider an acyclic finite complex ${\mathcal{L}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of flasque quasi-coherent sheaves on $X$. Then for any injective quasi-coherent sheaf ${\mathcal{J}}$ on $X$ the complex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{L}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}})$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ is acyclic by construction. To show that the complex ${\mathcal{L}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}$ is acyclic over $X{\operatorname{\mathsf{--qcoh}}}$ for any cosheaf ${\mathfrak{F}}\in X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$, one reduces the question to the case of an affine scheme $X$ using Theorem 5.1.1(b) and Lemma 3.4.6(d). Finally, it remains to show that the equivalence of categories ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}})$ that we have constructed takes bounded above (resp., below) complexes to bounded above (resp., below) complexes and vice versa (up to quasi-isomorphism). If a complex ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}$ is bounded below, it has a bounded below injective resolution ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ and the complex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ is also bounded below. Now assume that a complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ has bounded above cohomology. 
Arguing as above, consider its decreasing filtration ${\mathcal{J}}_{\ge\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ with the associated quotient complexes ${\mathcal{J}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\simeq j_{\alpha}{}_{*}{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$. Using Lemma 3.4.7(a), one shows that the complexes ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ also have bounded above cohomology sheaves. By Corollary 1.5.7, the right homological dimension of ${\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ with respect to $U_{\alpha}{\operatorname{\mathsf{--qcoh}}}^{\mathsf{cot}}\subset U_{\alpha}{\operatorname{\mathsf{--qcoh}}}$ is finite, and it follows that the complex $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}({\mathcal{O}}_{U_{\alpha}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is quasi-isomorphic to a bounded above complex over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$. The complex $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ is quasi-isomorphic to $\operatorname{\mathfrak{Hom}}_{U_{\alpha}}({\mathcal{O}}_{U_{\alpha}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$. 
Finally, the complex $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\allowbreak\simeq j_{\alpha}{}_{!}\operatorname{\mathfrak{Hom}}_{U_{\alpha}}(j_{\alpha}^{*}{\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{K}}_{\alpha}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is quasi-isomorphic to a bounded above complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ by Lemma 3.4.6(c) and the other results of Section 3.4. Similarly one can show that for any complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ quasi-isomorphic to a bounded below complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$ the complex ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}$ has bounded below cohomology sheaves. ∎ Now let $X$ be a Noetherian scheme with a dualizing complex ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ [29, Chapter 5]. As above, we will consider ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ as a finite complex of injective quasi-coherent sheaves on $X$. The following partial version of the covariant Serre–Grothendieck duality holds without the semi-separatedness assumption on $X$. ###### Theorem 5.8.2. The choice of a dualizing complex ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ induces a natural equivalence of triangulated categories ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--ctrh}}})$. ###### Proof. 
According to Corollary 5.4.4(b), one has ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ for any open covering ${\mathbf{W}}$ of the scheme $X$. By Theorem 5.4.10(a-b), one has $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\simeq{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ and $\mathsf{Hot}(X\allowbreak{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$. We will show that the functors $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ and ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{-}$ induce an equivalence of the homotopy categories $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\simeq\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ for any symbol $\star={\mathsf{b}}$, $+$, $-$, or $\varnothing$. Let ${\mathcal{I}}$ be an injective quasi-coherent sheaf on $X$ and $j\colon U\longrightarrow X$ be the embedding of an affine open subscheme. Then the results of Section 3.8 provide a natural isomorphism of contraherent cosheaves $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{I}},j_{*}{\mathcal{J}})\simeq j_{!}\operatorname{\mathfrak{Hom}}_{U}(j^{*}{\mathcal{I}},{\mathcal{J}})$ on $X$ for any injective quasi-coherent sheaf ${\mathcal{J}}$ on $U$ and a natural isomorphism of quasi-coherent sheaves ${\mathcal{I}}\odot_{X}j_{!}{\mathfrak{G}}\simeq j_{*}(j^{*}{\mathcal{I}}\odot_{U}{\mathfrak{G}})$ on $X$ for any flat cosheaf of ${\mathcal{O}}_{U}$-modules ${\mathfrak{G}}$ on $U$. 
Notice that the functor $j_{*}$ takes injective quasi-coherent sheaves to injective quasi-coherent sheaves and the functor $j_{!}$ takes projective locally cotorsion contraherent cosheaves to projective locally cotorsion contraherent cosheaves (Corollary 5.1.6(b)). Furthermore, let $X=\bigcup_{\alpha}U_{\alpha}$ be a finite affine open covering. It is clear from the classification theorems (see Theorem 5.1.1(b)) that any injective quasi-coherent sheaf or projective locally cotorsion contraherent cosheaf on $X$ is a finite direct sum of the direct images of similar (co)sheaves from $U_{\alpha}$. It follows that the functors $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{I}},{-})$ and ${\mathcal{I}}\odot_{X}{-}$ take injective quasi-coherent sheaves to projective locally cotorsion contraherent cosheaves on $X$ and back. By (20), these are two adjoint functors between the additive categories $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ and $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$. Substituting ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ in place of ${\mathcal{I}}$ and totalizing the finite complexes of complexes of (co)sheaves, we obtain two adjoint functors $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ and ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{-}$ between the homotopy categories $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})$ and $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$. 
In order to show that these are mutually inverse equivalences, it suffices to check that the adjunction morphisms ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}})\longrightarrow{\mathcal{J}}$ and ${\mathfrak{P}}\longrightarrow\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>\allowbreak{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{P}})$ are quasi-isomorphisms/homotopy equivalences of finite complexes for any ${\mathcal{J}}\in X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ and ${\mathfrak{P}}\in X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$. Presenting ${\mathcal{J}}$ and ${\mathfrak{P}}$ as finite direct sums of the direct images of similar (co)sheaves from affine open subschemes of $X$ and taking again into account the isomorphisms (45), (47) reduces the question to the case of an affine scheme, where the assertion is already known. Alternatively, one can work directly in the greater generality of arbitrary (not necessarily locally cotorsion) and flat contraherent cosheaves. According to Theorem 5.4.10(d), one has ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})\simeq{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$. Let us show that the functors $\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{-})$ and ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{-}$ induce an equivalence of triangulated categories $\mathsf{Hot}^{\star}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})\simeq{\mathsf{D}}^{\star}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})$ for any symbol $\star={\mathsf{b}}$, ${\mathsf{abs}}+$, ${\mathsf{abs}}-$, or ${\mathsf{abs}}$. 
Given an injective quasi-coherent sheaf ${\mathcal{I}}$ on $X$, let us first check that the functor ${\mathfrak{F}}\longmapsto{\mathcal{I}}\odot_{X}{\mathfrak{F}}$ takes short exact sequences of flat contraherent cosheaves to short exact sequences of quasi-coherent sheaves on $X$. By the adjunction isomorphism (20), for any injective quasi-coherent sheaf ${\mathcal{J}}$ on $X$ one has $\operatorname{Hom}_{X}({\mathcal{I}}\odot_{X}{\mathfrak{F}},\>{\mathcal{J}})\simeq\operatorname{Hom}^{X}({\mathfrak{F}},\operatorname{\mathfrak{Hom}}_{X}({\mathcal{I}},{\mathcal{J}}))$. The contraherent cosheaf ${\mathfrak{Q}}=\operatorname{\mathfrak{Hom}}_{X}({\mathcal{I}},{\mathcal{J}})$ being locally cotorsion, the functor ${\mathfrak{F}}\longmapsto\operatorname{Hom}^{X}({\mathfrak{F}},{\mathfrak{Q}})$ is exact on $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ by Corollary 5.2.9(a). Furthermore, by part (b) of the same Corollary any flat contraherent cosheaf ${\mathfrak{F}}$ on $X$ is a direct summand of a finitely iterated extension of the direct images $j_{!}{\mathfrak{G}}$ of flat contraherent cosheaves ${\mathfrak{G}}$ on affine open subschemes $U\subset X$. Using the isomorphism (47), we conclude that the quasi-coherent sheaf ${\mathcal{I}}\odot_{X}{\mathfrak{F}}$ is injective. It follows, in particular, that the complex of quasi-coherent sheaves ${\mathcal{I}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is contractible for any acyclic complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over the exact category $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$. Therefore, the same applies to the complex ${\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$. 
Finally, to prove that the map ${\mathfrak{F}}\longrightarrow\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}})$ is a quasi-isomorphism for any flat contraherent cosheaf ${\mathfrak{F}}$, it suffices again to consider the case ${\mathfrak{F}}=j_{!}{\mathfrak{G}}$, when the assertion follows from the isomorphisms (45), (47). Hence the morphism ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow\operatorname{\mathfrak{Hom}}_{X}({\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}},\>{\mathcal{D}}_{X}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\odot_{X}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ has a cone absolutely acyclic with respect to $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ for any complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$. ∎ ### 5.9. Compact generators Let ${\mathsf{D}}$ be a triangulated category where arbitrary infinite direct sums exist. We recall that an object $C\in{\mathsf{D}}$ is called _compact_ if the functor $\operatorname{Hom}_{\mathsf{D}}(C,{-})$ takes infinite direct sums in ${\mathsf{D}}$ to infinite direct sums of abelian groups [45]. A set of compact objects ${\mathsf{C}}\subset{\mathsf{D}}$ is said to _generate_ ${\mathsf{D}}$ if any object $X\in{\mathsf{D}}$ such that $\operatorname{Hom}_{\mathsf{D}}(C,X[*])=0$ for all $C\in{\mathsf{C}}$ vanishes in ${\mathsf{D}}$. Equivalently, this means that any full triangulated subcategory of ${\mathsf{D}}$ containing ${\mathsf{C}}$ and closed under infinite direct sums coincides with ${\mathsf{D}}$. If ${\mathsf{C}}$ is a set of compact generators for ${\mathsf{D}}$, then an object of ${\mathsf{D}}$ is compact if and only if it belongs to the minimal thick subcategory of ${\mathsf{D}}$ containing ${\mathsf{C}}$. 
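For reference, the compactness condition recalled above amounts to a single canonical isomorphism; the display below is only a restatement of the definition, not an extra hypothesis. An object $C\in{\mathsf{D}}$ is compact if and only if, for every family of objects $X_{i}\in{\mathsf{D}}$, the natural map

```latex
\bigoplus\nolimits_{i}\operatorname{Hom}_{\mathsf{D}}(C,\,X_{i})
  \;\longrightarrow\;
  \operatorname{Hom}_{\mathsf{D}}\Bigl(C,\;\bigoplus\nolimits_{i}X_{i}\Bigr)
```

is an isomorphism of abelian groups.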
Let $X{\operatorname{\mathsf{--coh}}}$ denote the abelian category of coherent sheaves on a Noetherian scheme $X$. ###### Theorem 5.9.1. (a) For any scheme $X$, the coderived category ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ admits arbitrary infinite direct sums, while the contraderived categories ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ admit infinite products. (b) For any Noetherian scheme $X$, the coderived category ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ is compactly generated. The triangulated functor ${\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--coh}}})\longrightarrow{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$ induced by the embedding of abelian categories $X{\operatorname{\mathsf{--coh}}}\longrightarrow X{\operatorname{\mathsf{--qcoh}}}$ is fully faithful, and its image is the full subcategory of compact objects in ${\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}})$. (c) For any Noetherian scheme $X$ of finite Krull dimension, the contraderived categories ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ admit arbitrary infinite direct sums and are compactly generated. ###### Proof. Part (a) holds, because the abelian category $X{\operatorname{\mathsf{--qcoh}}}$ admits arbitrary infinite direct sums and the full subcategory of coacyclic complexes in $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}})$ is closed under infinite direct sums (see [46, Proposition 1.2.1 and Lemma 3.2.10]). 
Analogously, the exact categories $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ and $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ admit arbitrary infinite products and the full subcategories of contraacyclic complexes in $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ are closed under infinite products. In the assumption of part (b), the abelian category $X{\operatorname{\mathsf{--qcoh}}}$ is a locally Noetherian Grothendieck category, so the assertions hold by Theorem 5.4.10(a) and [37, Proposition 2.3] (see also Lemma A.1.2). A more generally applicable assertion/argument can be found in [53, Proposition 1.5(d)] and/or [51, Section 3.11]. Part (c): notice first of all that all the categories ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ are equivalent to each other by Corollaries 5.3.3 and 5.4.4(b). Furthermore, if the scheme $X$ admits a dualizing complex, the assertion of part (c) follows from Theorem 5.8.2 and part (b). The following more complicated argument allows one to prove the desired assertion in the stated generality. By Theorem 5.4.10(b), the triangulated category in question is equivalent to the homotopy category $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$. Let us first consider the case when the scheme $X$ is semi-separated. Then Corollary 5.4.5 identifies our triangulated category with ${\mathsf{D}}^{\mathsf{abs}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq{\mathsf{D}}^{\mathsf{co}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\simeq{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$. It follows immediately that this triangulated category admits arbitrary infinite direct sums. 
In the case of an affine Noetherian scheme $U$ of finite Krull dimension, another application of Proposition A.5.6 allows one to identify ${\mathsf{D}}^{\mathsf{co}}(U{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ with the homotopy category of complexes of projective ${\mathcal{O}}(U)$-modules, which is compactly generated by [34, Theorem 2.4]. More generally, the category ${\mathsf{D}}(U{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ is equivalent to the homotopy category of projective ${\mathcal{O}}(U)$-modules for any affine scheme $U$ by [47, Section 8] and is compactly generated for any affine Noetherian scheme $U$ by [47, Proposition 7.14] (see also [48]). Finally, for any semi-separated Noetherian scheme $X$ the triangulated category ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ is compactly generated by [42, Theorem 4.10]. Now let us turn to the general case. First we have to show that the category $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ admits arbitrary infinite direct sums. Let $X=\bigcup_{\alpha=1}^{N}U_{\alpha}$ be a finite affine open covering, and let $S_{\beta}\subset X$ denote the set-theoretic complement to $\bigcup_{\alpha<\beta}U_{\alpha}$ in $U_{\beta}$. Let $j_{\alpha}\colon U_{\alpha}\longrightarrow X$ denote the open embedding morphisms; then the direct image functor $j_{\alpha}{}_{!}\colon\mathsf{Hot}(U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\longrightarrow\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ is left adjoint to the inverse image functor $j_{\alpha}^{!}\colon\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\longrightarrow\mathsf{Hot}(U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ (see Corollaries 5.1.3(a) and 5.1.6(b), and the adjunction (26)). Hence the functor $j_{\alpha}{}_{!}$ preserves infinite direct sums. 
As explained in the proof of Theorem 5.1.1, any projective locally cotorsion contraherent cosheaf ${\mathfrak{F}}$ on $X$ decomposes into a direct sum ${\mathfrak{F}}=\bigoplus_{\alpha=1}^{N}{\mathfrak{F}}_{\alpha}$, where each direct summand ${\mathfrak{F}}_{\alpha}$ is an infinite product over the points $z\in S_{\alpha}$ of the direct images of contraherent cosheaves on $\operatorname{Spec}{\mathcal{O}}_{z,X}$ corresponding to free contramodules over $\widehat{\mathcal{O}}_{z,X}$. According to Lemma 5.1.2, the associated increasing filtration ${\mathfrak{F}}_{\le\alpha}=\bigoplus_{\beta\le\alpha}{\mathfrak{F}}_{\beta}$ on ${\mathfrak{F}}$ is preserved by all morphisms of cosheaves ${\mathfrak{F}}\in X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$. Given a family ${}^{(i)}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of complexes over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$, we now see that every complex ${}^{(i)}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is endowed with a finite termwise split filtration ${}^{(i)}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}_{\le\alpha}$ such that the family of associated quotient complexes ${}^{(i)}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}_{\alpha}$ can be obtained by applying the direct image functor $j_{\alpha}{}_{!}$ to a family of complexes over $U_{\alpha}{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$. It follows that the object $\bigoplus_{i}{}^{(i)}{\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}_{\alpha}$ exists in $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$, and it remains to apply the following lemma (which is slightly stronger than [46, Proposition 1.2.1]). ###### Lemma 5.9.2. Let $A_{i}\longrightarrow B_{i}\longrightarrow C_{i}\longrightarrow A_{i}[1]$ be a family of distinguished triangles in a triangulated category ${\mathsf{D}}$. 
Suppose that the infinite direct sums $\bigoplus_{i}A_{i}$ and $\bigoplus_{i}B_{i}$ exist in ${\mathsf{D}}$. Then a cone $C$ of the natural morphism $\bigoplus_{i}A_{i}\longrightarrow\bigoplus_{i}B_{i}$ is the infinite direct sum of the family of objects $C_{i}$ in ${\mathsf{D}}$. ###### Proof. Set $A=\bigoplus_{i}A_{i}$ and $B=\bigoplus_{i}B_{i}$. By one of the triangulated category axioms, there exist morphisms of distinguished triangles $(A_{i}\to B_{i}\to C_{i}\to A_{i}[1])\longrightarrow(A\to B\to C\to A[1])$ whose components $A_{i}\longrightarrow A$ and $B_{i}\longrightarrow B$ are the natural embeddings. For any object $E\in{\mathsf{D}}$, apply the functor $\operatorname{Hom}_{\mathsf{D}}({-},E)$ to this family of morphisms of triangles and pass to the infinite product (of abelian groups) over $i$. The resulting morphism from the long exact sequence $\dotsb\longrightarrow\operatorname{Hom}_{\mathsf{D}}(A[1],E)\longrightarrow\operatorname{Hom}_{\mathsf{D}}(C,E)\longrightarrow\operatorname{Hom}_{\mathsf{D}}(B,E)\longrightarrow\operatorname{Hom}_{\mathsf{D}}(A,E)\longrightarrow\dotsb$ to the long exact sequence $\dotsb\longrightarrow\prod_{i}\operatorname{Hom}_{\mathsf{D}}(A_{i}[1],E)\longrightarrow\prod_{i}\operatorname{Hom}_{\mathsf{D}}(C_{i},E)\longrightarrow\prod_{i}\operatorname{Hom}_{\mathsf{D}}(B_{i},E)\longrightarrow\prod_{i}\operatorname{Hom}_{\mathsf{D}}(A_{i},E)\longrightarrow\dotsb$ is an isomorphism at two thirds of the terms, and consequently, by the five lemma, an isomorphism at the remaining terms, too. ∎ Denote temporarily the homotopy category $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ by ${\mathsf{D}}(X)$. To show that the category ${\mathsf{D}}(X)$ is compactly generated, we will use the result of [56, Theorem 5.15]. Let $Y\subset X$ be an open subscheme such that the category ${\mathsf{D}}(Y)$ is compactly generated (e. g., we already know this to hold when $Y$ is semi-separated).
Let $j\colon Y\longrightarrow X$ denote the open embedding morphism. The composition $j^{!}j_{!}$ of the direct image and inverse image functors $j_{!}\colon{\mathsf{D}}(Y)\longrightarrow{\mathsf{D}}(X)$ and $j^{!}\colon{\mathsf{D}}(X)\longrightarrow{\mathsf{D}}(Y)$ is isomorphic to the identity endofunctor of ${\mathsf{D}}(Y)$, so the functor $j_{!}$ is fully faithful and the functor $j^{!}$ is a Verdier localization functor. Applying again Lemma 5.1.2, we conclude that the kernel of $j^{!}$ is the homotopy category of projective locally cotorsion contraherent cosheaves on $X$ with vanishing restrictions to $Y$. Denote this homotopy category by ${\mathsf{D}}(Z,X)$, where $Z=X\setminus Y$, and its identity embedding functor by $i_{!}\colon{\mathsf{D}}(Z,X)\longrightarrow{\mathsf{D}}(X)$. The functor $j_{!}$ is known to preserve infinite products, and the triangulated category ${\mathsf{D}}(Y)$ is assumed to be compactly generated; so it follows that there exists a triangulated functor $j^{*}\colon{\mathsf{D}}(X)\longrightarrow{\mathsf{D}}(Y)$ left adjoint to $j_{!}$ (see [46, Remark 6.4.5 and Theorem 8.6.1] and [37, Proposition 3.3(2)]). The existence of the functor $j_{!}$ left adjoint to $j^{!}$ implies existence of a functor $i^{*}\colon{\mathsf{D}}(X)\longrightarrow{\mathsf{D}}(Z,X)$ left adjoint to $i_{!}$; and the existence of the functor $j^{*}$ left adjoint to $j_{!}$ implies existence of a functor $i_{+}\colon{\mathsf{D}}(Z,X)\longrightarrow{\mathsf{D}}(X)$ left adjoint to $i^{*}$. The functors $j^{*}$ and $i_{+}$ have double right adjoints (i. e., the right adjoints and the right adjoints to the right adjoints), hence they not only preserve infinite direct sums, but also take compact objects to compact objects. 
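For the reader's convenience, the adjunctions obtained so far can be summarized as two adjoint triples, each functor being left adjoint to the one following it: $j^{*}\dashv j_{!}\dashv j^{!}$, relating the categories ${\mathsf{D}}(X)$ and ${\mathsf{D}}(Y)$, and $i_{+}\dashv i^{*}\dashv i_{!}$, relating the categories ${\mathsf{D}}(X)$ and ${\mathsf{D}}(Z,X)$. This is a pattern of adjoint functors familiar from the theory of recollements of triangulated categories (shifted one step to the left of the classical convention).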
Furthermore, for any open subscheme $W\subset X$ with the embedding morphism $h\colon W\longrightarrow X$ one has the base change isomorphism $h^{!}j_{!}\simeq j^{\prime}_{!}h^{\prime}{}^{!}$, where $j^{\prime}$ and $h^{\prime}$ denote the open embeddings $W\cap Y\longrightarrow W$ and $W\cap Y\longrightarrow Y$. If the triangulated category ${\mathsf{D}}(W\cap Y)$ is compactly generated, one can pass to the left adjoint functors, obtaining an isomorphism of triangulated functors $j^{*}h_{!}\simeq h^{\prime}_{!}\mskip 1.5muj^{\prime}{}^{*}$. Let $X=\bigcup_{\alpha}U_{\alpha}$ be a finite affine (or, more generally, semi-separated) open covering, $Z_{\alpha}=X\setminus U_{\alpha}$ be the corresponding closed complements, and $i_{\alpha}{}_{+}\colon{\mathsf{D}}(Z_{\alpha},X)\longrightarrow{\mathsf{D}}(X)$ be the related fully faithful triangulated functors. It follows from the above that the images of the functors $i_{\alpha}{}_{+}$ form a collection of Bousfield subcategories in ${\mathsf{D}}(X)$ pairwise intersecting properly in the sense of [56, Lemma 5.7(2)]. Furthermore, the category ${\mathsf{D}}(X)$ being generated by the images of the functors $j_{\alpha}{}_{!}$, the intersection of the kernels of the functors $j_{\alpha}^{*}\colon{\mathsf{D}}(X)\longrightarrow{\mathsf{D}}(U_{\alpha})$ is zero. These coincide with the images of the functors $i_{\alpha}{}_{+}$. Thus the triangulated subcategories $i_{\alpha}{}_{+}{\mathsf{D}}(Z_{\alpha},X)\subset{\mathsf{D}}(X)$ form a _cocovering_ (in the sense of [56]). It remains to check that intersections of the images of $i_{\beta}{}_{+}{\mathsf{D}}(Z_{\beta},X)$ under the localization morphism ${\mathsf{D}}(X)\longrightarrow{\mathsf{D}}(U_{\alpha})$ are compactly generated in ${\mathsf{D}}(U_{\alpha})$. Let $Y$ be a semi-separated Noetherian scheme of finite Krull dimension and $V\subset Y$ be an open subscheme with the closed complement $Z=Y\setminus V$. 
We will show that the image of the fully faithful triangulated functor $i_{+}\colon{\mathsf{D}}(Z,Y)\longrightarrow{\mathsf{D}}(Y)$ is compactly generated in ${\mathsf{D}}(Y)$; this is clearly sufficient. The result of Corollary 5.4.5 identifies ${\mathsf{D}}(Y)=\mathsf{Hot}(Y{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ with ${\mathsf{D}}(Y{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ and ${\mathsf{D}}(V)=\mathsf{Hot}(V{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ with ${\mathsf{D}}(V{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$. According to Corollary 5.4.6, this identification transforms the functor $j_{!}\colon{\mathsf{D}}(V)\longrightarrow{\mathsf{D}}(Y)$ into the derived functor ${\mathbb{R}}j_{*}\colon{\mathsf{D}}(V{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}(Y{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ constructed in (65). The latter functor is right adjoint to the functor $j^{*}\colon{\mathsf{D}}(Y{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}(V{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$, which is therefore identified with the functor $j^{*}\colon{\mathsf{D}}(Y)\longrightarrow{\mathsf{D}}(V)$. Finally, we refer to [42, Proposition 4.5 and Theorem 4.10] for the assertion that the kernel of the functor $j^{*}\colon{\mathsf{D}}(Y{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\longrightarrow{\mathsf{D}}(V{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$ is compactly generated in ${\mathsf{D}}(Y{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})$. ∎ ###### Theorem 5.9.3. (a) For any scheme $X$, the derived category ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ admits infinite direct sums, while the derived categories ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ admit infinite products. 
(b) For any quasi-compact semi-separated scheme $X$, the derived category ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ is compactly generated. The full triangulated subcategory of perfect complexes in ${\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{vfl}})\subset{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{fl}})\subset{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--qcoh}}})\subset{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ is the full subcategory of compact objects in ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$. (c) For any Noetherian scheme $X$, the derived category ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ is compactly generated. The full triangulated subcategory of perfect complexes in ${\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--coh}}})\subset{\mathsf{D}}^{\mathsf{b}}(X{\operatorname{\mathsf{--qcoh}}})\subset{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ is the full subcategory of compact objects in ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$. (d) For any quasi-compact semi-separated scheme $X$, the derived category ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ admits infinite direct sums and is compactly generated. (e) For any Noetherian scheme $X$ of finite Krull dimension, the derived categories ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ admit infinite direct sums and are compactly generated. ###### Proof. The proof of part (a) is similar to that of Theorem 5.9.1(a): the assertions hold, since the class of acyclic complexes over $X{\operatorname{\mathsf{--qcoh}}}$ is closed under infinite direct sums, and the classes of acyclic complexes over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ and $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ are closed under infinite products.
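Let us briefly recall the standard reasoning implicit in part (a). The homotopy category of any additive category with infinite direct sums admits infinite direct sums, computed termwise; and whenever the thick subcategory of acyclic complexes is closed under termwise direct sums, the Verdier quotient category admits infinite direct sums as well, with the localization functor $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ preserving them. So the direct sum of a family of complexes ${}^{(i)}{\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ in ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ is computed simply by the termwise direct sum of complexes of quasi-coherent sheaves. The assertions about infinite products in ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ are dual, with termwise infinite products of complexes of locally contraherent cosheaves.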
Parts (b) and (c) are particular cases of [56, Theorem 6.8], according to which the derived category ${\mathsf{D}}(X)$ of complexes of sheaves of ${\mathcal{O}}_{X}$-modules with quasi-coherent cohomology sheaves is compactly generated for any quasi-compact quasi-separated scheme $X$. Here one needs to know that the natural functor ${\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})\longrightarrow{\mathsf{D}}(X)$ is an equivalence of categories when $X$ is either quasi-compact and semi-separated, or else Noetherian (cf. [61, Appendix B]). In the semi-separated case, this was proven in [8, Sections 5–6]. The proof in the Noetherian case is similar. Alternatively, one can prove parts (b) and (c) directly in the way analogous to the argument in [56]. In either approach, one needs to know that the functor ${\mathbb{R}}j_{*}$ of derived direct image of complexes over $Y{\operatorname{\mathsf{--qcoh}}}$ with respect to an open embedding $j\colon Y\longrightarrow X$ of schemes from the class under consideration is well-behaved. E. g., it needs to be local in the base, or form a commutative square with the derived functor of direct image of complexes of ${\mathcal{O}}_{Y}$-modules, etc. (cf. [41, Theorems 31 and 42]). In the semi-separated case, one can establish such properties using contraadjusted resolutions and (the proof of) Corollary 4.1.13(a) (see the construction of the functor ${\mathbb{R}}f_{*}$ in Section 4.8 above). In the Noetherian case, one needs to use flasque resolutions and Corollary 3.4.9(a) (see the construction of the functor ${\mathbb{R}}f_{*}$ in Section 5.12 below). Part (d) follows from part (b) together with Theorem 4.6.6. Part (e) follows from part (c) together with Theorem 5.8.1 and Corollary 5.4.4(b). ∎ ### 5.10. Homotopy projective complexes Let $X$ be a scheme.
A complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of quasi-coherent sheaves on $X$ is called _homotopy injective_ if the complex of abelian groups $\operatorname{Hom}_{X}({\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any acyclic complex of quasi-coherent sheaves ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ on $X$. The full subcategory of homotopy injective complexes in $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}})$ is denoted by $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}})^{\mathsf{inj}}$ and the full subcategory of complexes of injective quasi-coherent sheaves that are also homotopy injective is denoted by $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})^{\mathsf{inj}}\subset\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})$. Similarly, a complex ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of locally cotorsion ${\mathbf{W}}$-locally contraherent cosheaves is called _homotopy projective_ if the complex of abelian groups $\operatorname{Hom}^{X}({\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any acyclic complex ${\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over the exact category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$. The full subcategory of homotopy projective complexes in $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ is denoted by $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})_{\mathsf{prj}}$. We will see below in this section that the property of a complex of locally cotorsion ${\mathbf{W}}$-locally contraherent cosheaves on a Noetherian scheme $X$ of finite Krull dimension to be homotopy projective does not change when the covering ${\mathbf{W}}$ is replaced by its refinement.
Finally, a complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of ${\mathbf{W}}$-locally contraherent cosheaves is called _homotopy projective_ if the complex of abelian groups $\operatorname{Hom}^{X}({\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any acyclic complex ${\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over the exact category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$. The full subcategory of homotopy projective complexes in $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is denoted by $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})_{\mathsf{prj}}$. Let us issue a _warning_ that our terminology is misleading: a homotopy projective complex of locally cotorsion ${\mathbf{W}}$-locally contraherent cosheaves need not be homotopy projective as a complex of ${\mathbf{W}}$-locally contraherent cosheaves. It will be shown below that the property of a complex of ${\mathbf{W}}$-locally contraherent cosheaves on a Noetherian scheme $X$ of finite Krull dimension to be homotopy projective does not change when the covering ${\mathbf{W}}$ is replaced by its refinement. ###### Lemma 5.10.1. (a) Let $X$ be a locally Noetherian scheme of finite Krull dimension with an open covering ${\mathbf{W}}$. 
Then a complex ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ belongs to $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})_{\mathsf{prj}}$ if and only if the complex $\operatorname{Hom}^{X}({\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any complex ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ acyclic with respect to $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{cfq}}$. (b) Let $X$ be a Noetherian scheme of finite Krull dimension with an open covering ${\mathbf{W}}$. Then a complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}}$ belongs to $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})_{\mathsf{prj}}$ if and only if the complex $\operatorname{Hom}^{X}({\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any complex ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}}$ acyclic with respect to $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{cfq}}$. ###### Proof. We will prove part (a), part (b) being similar. The “only if” assertion holds by the definition. To check the “if”, consider a complex ${\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$. 
By (the proof of) Theorem 5.4.10(b), there exists a complex ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ together with a morphism of complexes of locally contraherent cosheaves ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ with a cone contraacyclic with respect to $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$. Moreover, the complex $\operatorname{Hom}^{X}$ from any complex of projective locally cotorsion contraherent cosheaves to a contraacyclic complex over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ is acyclic. Hence the morphism $\operatorname{Hom}^{X}({\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})\longrightarrow\operatorname{Hom}^{X}({\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is a quasi-isomorphism. Finally, if the complex ${\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is acyclic over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$, then so is the complex ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$, and by Lemma 5.3.1(b) it follows that the complex ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is also acyclic with respect to $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{cfq}}$. 
∎ According to Lemma 5.10.1, the property of a complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ (respectively, over $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}}$) to belong to $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})_{\mathsf{prj}}$ (resp., $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})_{\mathsf{prj}}$) does not depend on the covering ${\mathbf{W}}$ (in the assumptions of the respective part of the lemma). We will denote the full subcategory in $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})$ (resp., $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})$) consisting of the homotopy projective complexes by $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})_{\mathsf{prj}}$ (resp., $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})_{\mathsf{prj}}$). It is a standard fact that bounded above complexes of projectives are homotopy projective, $\mathsf{Hot}^{-}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})\subset\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})_{\mathsf{prj}}$ and $\mathsf{Hot}^{-}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\subset\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})_{\mathsf{prj}}$. The next result is essentially well-known. ###### Theorem 5.10.2. Let $X$ be a locally Noetherian scheme. Then the natural functors $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})^{\mathsf{inj}}\longrightarrow\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}})^{\mathsf{inj}}\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ are equivalences of triangulated categories. ###### Proof. It is clear that both functors are fully faithful. 
The functor $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}})^{\mathsf{inj}}\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--qcoh}}})$ is an equivalence of categories by [1, Theorem 5.4]; this is applicable to any Grothendieck abelian category in place of $X{\operatorname{\mathsf{--qcoh}}}$ (for an even more general statement, see [38, Theorem 6]). To prove that any homotopy injective complex of quasi-coherent sheaves on a locally Noetherian scheme is homotopy equivalent to a homotopy injective complex of injective quasi-coherent sheaves, one can use (the proof of) Theorem 5.4.10(a). From any complex ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}$ there exists a closed morphism into a complex ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}}$ with a coacyclic cone ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$. If the complex ${\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ was homotopy injective, the morphism ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathcal{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}[1]$ is homotopic to zero, hence the complex ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is a direct summand of ${\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ in $\mathsf{Hot}(X{\operatorname{\mathsf{--qcoh}}}^{\mathsf{inj}})$. Any morphism ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}\longrightarrow{\mathcal{J}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ being also homotopic to zero, it follows that the complex ${\mathcal{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is contractible. ∎ ###### Theorem 5.10.3. Let $X$ be a Noetherian scheme of finite Krull dimension with an open covering ${\mathbf{W}}$. 
Then (a) the natural functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}})_{\mathsf{prj}}\longrightarrow\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})_{\mathsf{prj}}\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}^{\mathsf{lct}}_{\mathbf{W}})$ are equivalences of triangulated categories; (b) the natural functors $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})_{\mathsf{prj}}\longrightarrow\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})_{\mathsf{prj}}\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ are equivalences of triangulated categories. ###### Proof. We will prove part (b), part (a) being similar. Since both functors are clearly fully faithful, it suffices to show that the composition $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})_{\mathsf{prj}}\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ is an equivalence of categories. This is equivalent to saying that the localization functor $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ has a left adjoint whose image is essentially contained in $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})$. The functor in question factorizes into the composition of two localization functors $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$. 
The functor $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\allowbreak\longrightarrow{\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ has a left adjoint functor ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\simeq\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})\hookrightarrow\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ provided by Theorem 5.4.10(d); so it remains to show that the functor ${\mathsf{D}}^{\mathsf{ctr}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})\longrightarrow{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ has a left adjoint. Since the latter functor preserves infinite products, the assertion follows from Theorem 5.9.1(c) and [37, Proposition 3.3(2)]. Here one also needs to know that the derived category ${\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})\simeq{\mathsf{D}}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ “exists” (i. e., morphisms between any given two objects form a set rather than a class). This is established by noticing that the classes of quasi-isomorphisms are locally small in $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}})$ and $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})$ (see [62, Section 10.3.6 and Proposition 10.4.4]). Similarly one can prove Theorem 5.10.2 for a Noetherian scheme $X$ using Theorems 5.4.10(a) and 5.9.1(b); the only difference is that this time one needs a right adjoint functor, so [37, Proposition 3.3(1)] has to be applied. (Cf. [51, Section 5.5]).
∎ A complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ of ${\mathbf{W}}$-locally contraherent cosheaves on a scheme $X$ is called _homotopy flat_ if the complex of abelian groups $\operatorname{Hom}^{X}({\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any complex of locally cotorsion ${\mathbf{W}}$-locally contraherent cosheaves ${\mathfrak{M}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ acyclic over the exact category $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$. On a locally Noetherian scheme $X$ of finite Krull dimension, the latter condition is equivalent to the acyclicity over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}$ (see Corollary 1.5.7). Notice that the property of a complex of ${\mathbf{W}}$-locally contraherent cosheaves to be homotopy flat may possibly change when the covering ${\mathbf{W}}$ is replaced by its refinement. ###### Lemma 5.10.4. Let $X$ be a Noetherian scheme of finite Krull dimension with an open covering ${\mathbf{W}}$. Then a complex of flat contraherent cosheaves ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ on $X$ is homotopy flat if and only if the complex $\operatorname{Hom}^{X}({\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}},{\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}})$ is acyclic for any complex ${\mathfrak{E}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ acyclic with respect to $X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{cfq}}$. ###### Proof. Similar to that of Lemma 5.10.1. The only difference is that one has to use Corollary 5.2.9(a) in order to show that the complex $\operatorname{Hom}^{X}$ from any complex of flat contraherent cosheaves on $X$ to a contraacyclic complex over $X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}}^{\mathsf{lct}}$ is acyclic. 
∎ According to Lemma 5.10.4, the property of a complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ to belong to $\mathsf{Hot}(X{\operatorname{\mathsf{--lcth}}}_{\mathbf{W}})^{\mathsf{fl}}$ does not depend on the covering ${\mathbf{W}}$ (on a Noetherian scheme $X$ of finite Krull dimension). We denote the full subcategory in $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})$ consisting of the homotopy flat complexes by $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})^{\mathsf{fl}}$. One can easily check that bounded above complexes of flat contraherent cosheaves are homotopy flat, $\mathsf{Hot}^{-}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})\subset\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})^{\mathsf{fl}}$. ###### Theorem 5.10.5. Let $X$ be a Noetherian scheme of finite Krull dimension. Then the quotient category of the homotopy category of homotopy flat complexes of flat contraherent cosheaves $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})^{\mathsf{fl}}$ on $X$ by its thick subcategory of acyclic complexes over the exact category $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ is equivalent to the derived category ${\mathsf{D}}(X{\operatorname{\mathsf{--ctrh}}})$. ###### Proof. By Corollary 5.2.6(b) and [50, Remark 2.1], any acyclic complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ is absolutely acyclic; and one can see from Corollary 5.2.9(a) that any absolutely acyclic complex over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ is homotopy flat. According to (the proof of) Theorem 5.10.3(b), there is a quasi-isomorphism into any complex over $X{\operatorname{\mathsf{--ctrh}}}$ from a complex belonging to $\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}_{\mathsf{prj}})_{\mathsf{prj}}\subset\mathsf{Hot}(X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}})^{\mathsf{fl}}$. 
In view of [51, Lemma 1.6], it remains to show that any homotopy flat complex of flat contraherent cosheaves that is acyclic over $X{\operatorname{\mathsf{--ctrh}}}$ is also acyclic over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$. According again to Corollary 5.2.6(b) and the dual version of the proof of Proposition A.5.6, any complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$ admits a morphism into a complex ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}_{\mathsf{prj}}$ with a cone absolutely acyclic with respect to $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$. If the complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ was homotopy flat, it follows that the complex ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is homotopy flat, too. This means that ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is a homotopy projective complex of locally cotorsion contraherent cosheaves on $X$. If the complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ was also acyclic over $X{\operatorname{\mathsf{--ctrh}}}$, so is the complex ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$. It follows that ${\mathfrak{P}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is acyclic over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{lct}}$, and therefore contractible. We have proven that the complex ${\mathfrak{F}}^{\text{\smaller\smaller$\scriptstyle\bullet$}}$ is absolutely acyclic over $X{\operatorname{\mathsf{--ctrh}}}^{\mathsf{fl}}$. ∎ ### 5.11. Special inverse image of contraherent cosheaves Recall that an affine morphism of schemes $f\colon Y\longrightarrow X$ is called _finite_ if for any affine open subscheme $U\subset X$ the ring ${\mathcal{O}}_{Y}(f^{-1}(U))$ is a finitely generated module over the ring ${\mathcal{O}}_{X}(U)$. 
One can easily see that this condition on a morphism $f$ is local in $X$. Let $f\colon Y\longrightarrow X$ be a finite morphism of locally Noetherian schemes. Given a quasi-coherent sheaf ${\mathcal{M}}$ on $X$, one defines the quasi-coherent sheaf $f^{!}{\mathcal{M}}$ on $Y$ by the rule $(f^{!}{\mathcal{M}})(V)={\mathcal{O}}_{Y}(V)\otimes_{{\mathcal{O}}_{Y}(f^{-1}(U))}\operatorname{Hom}_{{\mathcal{O}}_{X}(U)}({\mathcal{O}}_{Y}(f^{-1}(U)),\>{\mathcal{M}}(U))$ for any affine open subschemes $V\subset Y$ and $U\subset X$ such that $f(V)\subset U$ [29, Section III.6]. The construction is well-defined, since for any pair of embedded affine open subschemes $U^{\prime}\subset U\subset X$ one has $\operatorname{Hom}_{{\mathcal{O}}_{X}(U^{\prime})}({\mathcal{O}}_{Y}(f^{-1}(U^{\prime})),\>{\mathcal{M}}(U^{\prime}))\\\ \simeq\operatorname{Hom}_{{\mathcal{O}}_{X}(U^{\prime})}({\mathcal{O}}_{X}(U^{\prime})\otimes_{{\mathcal{O}}_{X}(U)}{\mathcal{O}}_{Y}(f^{-1}(U)),\>{\mathcal{O}}_{X}(U^{\prime})\otimes_{{\mathcal{O}}_{X}(U)}{\mathcal{M}}(U))\\\ \simeq{\mathcal{O}}_{X}(U^{\prime})\otimes_{{\mathcal{O}}_{X}(U)}\operatorname{Hom}_{{\mathcal{O}}_{X}(U)}({\mathcal{O}}_{Y}(f^{-1}(U)),\>{\mathcal{M}}(U)).$ Indeed, one has $\operatorname{Hom}_{R}(L,\>F\otimes_{R}M)\simeq F\otimes_{R}\operatorname{Hom}_{R}(L,M)$ for any module $M$, finitely presented module $L$, and flat module $F$ over a commutative ring $R$. (See Section 3.3 for a treatment of the non-semi-separatedness issue.) The functor $f^{!}\colon X{\operatorname{\mathsf{--qcoh}}}\longrightarrow Y{\operatorname{\mathsf{--qcoh}}}$ is right adjoint to the exact functor $f_{*}\colon Y{\operatorname{\mathsf{--qcoh}}}\longrightarrow X{\operatorname{\mathsf{--qcoh}}}$. Indeed, it suffices to define a morphism of quasi-coherent sheaves on $Y$ on the modules of sections over the affine open subschemes $f^{-1}(U)\subset Y$.
So given quasi-coherent sheaves ${\mathcal{M}}$ on $X$ and ${\mathcal{N}}$ on $Y$, both groups of morphisms $\operatorname{Hom}_{X}(f_{*}{\mathcal{N}},{\mathcal{M}})$ and $\operatorname{Hom}_{Y}({\mathcal{N}},f^{!}{\mathcal{M}})$ are identified with the group of all compatible collections of morphisms of ${\mathcal{O}}_{X}(U)$-modules
# AI Product Security: A Primer for Developers

Ebenezer R.H.P. Isaac (0000-0003-0830-8862), AI Technical Product Manager, Global AI Accelerator (GAIA), Ericsson, Chennai, India, <EMAIL_ADDRESS> and Jim Reno, Distinguished Engineer, GAIA, Ericsson, Santa Clara, USA, <EMAIL_ADDRESS>

###### Abstract.

Not too long ago, AI security meant the research and practice of how AI can empower cybersecurity, that is, AI for security. Ever since Ian Goodfellow and his team popularized adversarial attacks on machine learning, security for AI has become an important concern and also part of AI security. It is imperative to understand the threats to machine learning products and avoid common pitfalls in AI product development. This article is addressed to developers, designers, managers and researchers of AI software products.

_Keywords:_ artificial neural networks, adversarial attacks, trustworthy AI, MLOps

## 1\. Introduction

Trustworthy AI is being explored by a number of jurisdictions around the world. One example is the Ethics Guidelines for Trustworthy AI, from the High-Level Expert Group on AI set up by the European Commission. According to the EC guidelines, trustworthy AI should be lawful, ethical and robust (Commission, 2019). The security of AI models is essential to addressing many of its requirement areas, which are becoming codified into laws and regulations, e.g., the EU AI Act (Comission, 2021). As we continue to develop and rely on AI, we must prioritize security and work to address the challenges of AI safety. The market for AI startups has exploded in recent years, with many companies working on new and innovative applications. Expertise in security is not a given among all those working in AI, which makes it essential to have a dedicated focus on it to ensure safe and secure AI systems.
The other day we came across this article titled “Computer security checklist for non-security technology professionals” (Garrison and Posey, 2006). It is a short checklist grouped into three main activities: (1) Perform risk analysis, (2) Conduct vulnerability assessments, and (3) Education, procedures and policy. Though published in 2006, these practices are still relevant today. AI security, in today’s world, goes a bit beyond that. From understanding the specific threats in machine learning (ML) to the crucial role of generic software product security, this article will take you on a journey through the ever-evolving landscape of AI security.

## 2\. AI-Specific Threats in ML

To address the complex challenges of AI security, we need a holistic approach that covers all the aspects of the AI system, from data collection and preprocessing to deployment and monitoring. The taxonomy of ML product attacks can be divided into two surfaces as shown in Fig. 1.

Figure 1. Taxonomy of threats of an ML product. Figure adapted from Hyrum Anderson (Anderson, 2021).

AI-specific attacks appear in the ML attack surface. These attacks are also collectively known as adversarial attacks.

* • Poisoning: modifying a benign training dataset
* • Evasion: give a malicious input to get an unexpected output
* • Oracle: stealing information by probing the model
* • Adversarial reprogramming: use the model for a task it is not intended to do

The traditional attack surface exploits vulnerabilities that can be found in any software product, regardless of its specific purpose or application.

* • Unauthorized access to data or the model can affect confidentiality, integrity, and availability of the system.
* • Malicious feedback can negatively influence a model’s development, which may limit its ability to perform as expected.

Let us have a closer look at these attacks with examples and possible countermeasures.

### 2.1. Poisoning

Tampering with the training dataset by adding, deleting, changing, or reordering its contents can lead to erroneous learning and, ultimately, a model that generates incorrect inferences. Poisoning can violate one or more of availability, integrity, and confidentiality/privacy (Wang et al., 2022). In some cases, poisoning means adding malicious samples to the dataset, affecting the learned concept to benefit the attacker. An attacker might poison the training dataset of a malware scanner to misclassify malware as benign code. A facial recognition system might be tricked by introducing images of one face labeled with a different identity. This false mapping might be used for identity theft or to gain unauthorized access, e.g., unlocking a phone or restricted areas. In a classification model, an attacker can inject multiple copies of seemingly benign samples into the training dataset. If more of one class is supplied, then the integrity of the output is affected by changing the convergence – the point where the model is said to have learned its target concept. Injecting an abnormally high volume might increase training computational requirements, stall the pipeline, and reduce system availability. Consider a linear regression model which forecasts financial information. If the attacker can poison the training data, the subsequently corrupted model may give him some advantage. For example, he might affect the predicted price of a stock in order to facilitate securities fraud. Risk of poisoning is greater in products where there is limited control over data collection, such as crowd-sourcing or federated learning. The following steps may help mitigate the risks.

* • Assess the trustworthiness of participants. Ensure that only verified members can provide inputs to the dataset (Papadopoulos et al., 2021). Verification can involve authorization of members contributing to the dataset and authenticity of messages before they are used to train an ML model.
* • Assess load prior to training and preprocessing to ensure the pipeline won’t stall. Include exceptions when the data exceeds the expected distribution. E.g., depending on the use case, a subset can be preferred over the entirety of the input data for training.
* • Assess class-wise distribution to detect data drifts. Outliers, particularly if they consistently come from a small subset of participants, could indicate an attack. Depending on the use case, this method may not distinguish natural drift from a poisoning attack. Nevertheless, it can alert the model operator to initiate an investigation.
* • When crowd-sourcing, use randomly selected subsets taken from a very large community for the training and test datasets. This approach increases the amount of work the attacker must do in order to have enough poisoned data points included in the model.

It is worth noting that these methods are not a panacea and need to be combined with other techniques like data validation, data sanitizing and model validation to protect against poisoning.

### 2.2. Evasion

Evasion attacks manipulate input data in a way that causes a model to make incorrect decisions. With slight, ideally unnoticeable, input modifications, the attacker may be able to cause an output that is in her favor. The modified sample is called an adversarial example. Vulnerabilities to evasion attacks are common in image classification systems (usually neural networks). Goodfellow et al. (Goodfellow et al., 2015) show an image of a panda perturbed to make GoogLeNet misclassify it as a gibbon with 99.3% confidence. Both images before and after perturbation look identical to the human eye. The perturbation is done only on the sample fed to the model, without altering model parameters/weights. An evasion attack is a direct violation of integrity, but can also violate confidentiality (if the model is used to authorize access) or availability (e.g., a virus that bypasses malware checks by warping its signature).
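The panda example above can be reproduced in miniature without a deep network. The sketch below applies the fast gradient sign method (FGSM) from the Goodfellow et al. paper cited above to a toy logistic-regression classifier; the weights, input, and step size are invented for illustration, and a real attack would target a trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast gradient sign method for a logistic-regression classifier.

    The cross-entropy loss gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y_true) * w; FGSM steps eps in its sign.
    """
    grad_x = (sigmoid(w @ x + b) - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy model and an input it confidently assigns to class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)
print(sigmoid(w @ x + b))      # clean input: confidence for class 1
print(sigmoid(w @ x_adv + b))  # adversarial input: confidence collapses
```

The perturbed input differs from the original by at most `eps` per coordinate, mirroring the visually imperceptible perturbation in the panda example.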
Evasion attacks are surveyed in detail in (Sagar et al., 2020). Specific risk mitigation for evasion attacks is influenced by the application that uses the ML model, such as malware detection, phishing detection, internet of things (IoT), smart grids, etc. Nevertheless, studying the threat surface can help to identify possible controls to reduce this risk. Start by answering the following questions:

* • Where in the pipeline can a possible perturbation occur?
* • What is the data source for the example used for inference and validation?
* • Is the pathway from the source to the inference engine hardened?

For instance, in an image classification scenario, perturbations can occur in the image samples. In face detection, the data source is the camera. Ensure that the data flow from camera $\rightarrow$ storage $\rightarrow$ inference is secure. Where you have minimal control over the data source, you may apply certain transformations on the data (Yuan et al., 2019). Defensive distillation is a way to strengthen a learning algorithm against potential adversarial attacks by adding more adaptability to its classification process. It involves training one model to predict the output probabilities of another model that was trained on a previous, standard dataset, focusing on overall accuracy (Papernot et al., 2016). However, defensive distillation is not robust against poisoning, so adequate protections against poisoning should also be in place.

### 2.3. Oracle

An oracle attack in ML, or model extraction/stealing attack (Jagielski et al., 2020), involves an attacker attempting to extract the internal parameters or architecture of a model by querying it to infer its decision boundaries. The goal is to recreate a copy of the model and potentially use it for malicious purposes, such as stealing sensitive information or intellectual property.
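To make the mechanics concrete, the following toy sketch shows how prediction access alone can leak a model. The "victim" here is a hypothetical linear regression model whose weights are secret; the attacker only calls its prediction function, yet recovers the weights by least squares. All names and numbers are invented for illustration — extracting a real neural network takes far more queries and machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "victim": a deployed linear model whose weights are meant to stay secret.
secret_w = np.array([3.0, -2.0, 0.5])

def query_model(x):
    """The prediction API -- the only thing the attacker can touch."""
    return x @ secret_w

# The attacker probes the API with inputs of her own choosing...
queries = rng.normal(size=(50, 3))
answers = np.array([query_model(x) for x in queries])

# ...then fits a surrogate to the (query, answer) pairs by least squares.
stolen_w, *_ = np.linalg.lstsq(queries, answers, rcond=None)
print(stolen_w)  # closely matches secret_w
```

Because the victim is exactly linear and the answers are noise-free, a handful of queries suffices; the point is that every answered query narrows the attacker's uncertainty about the model.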
This type of attack can happen when the attacker has access to the model’s predictions, but not the training data or the model’s parameters. The attacker can construct a specific set of queries to the model to infer the underlying logic or data. The information gained from an oracle attack can also be used to facilitate other types of attacks. Two interesting types of oracle attacks are membership inference attacks and model inversion attacks. In a membership inference attack (Carlini et al., 2022), an attacker aims to determine whether a specific individual’s data was used to train a machine learning model. The attacker makes use of the model’s predictions as an “oracle”, sometimes correlated with publicly available data, to infer whether the individual’s data was used in the training process. This can reveal sensitive information, compromising the privacy of the individual. For example, an attacker could use a model that has been trained on medical records to infer whether a specific individual’s medical data was in the training set. The attacker submits queries to the model with various combinations of attributes of the individual, and observes the model’s predictions. If the predictions match the individual’s known attributes, the attacker can infer that the individual’s data was present. A case study of this attack and its defence is discussed in (McCarthy et al., 2023). A model inversion attack (Fredrikson et al., 2015), on the other hand, is an attack where an adversary tries to reverse engineer the training data that was used to train a machine learning model. By doing this, the adversary can uncover sensitive information from the training data, such as the identity of people or the exact geographic locations that were used to train the model. Oracle attacks exploit the availability of the model. One simple defense is to limit access to the model, similar to defending against denial of service (DoS) attacks.
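Limiting access can be as simple as a per-client query budget. Below is a minimal sliding-window throttle sketch; the class name, limits, and client identifier are invented, and a production system would more likely rely on an API gateway's built-in rate limiting.

```python
import time
from collections import defaultdict, deque

class QueryThrottle:
    """Sliding-window rate limiter for a model's prediction endpoint."""

    def __init__(self, max_queries, window_seconds):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)  # client id -> recent request times

    def allow(self, client_id, now=None):
        """Return True if this client may query now, False if throttled."""
        now = time.monotonic() if now is None else now
        stamps = self.history[client_id]
        # Forget requests that have aged out of the window.
        while stamps and now - stamps[0] >= self.window:
            stamps.popleft()
        if len(stamps) >= self.max_queries:
            return False
        stamps.append(now)
        return True

# At most 3 queries per 10 seconds for any single client.
throttle = QueryThrottle(max_queries=3, window_seconds=10.0)
results = [throttle.allow("attacker", now=t) for t in [0.0, 1.0, 2.0, 3.0, 11.0]]
print(results)  # the fourth probe is rejected; a slot frees up by t=11
```

The budget forces an attacker to spread an extraction campaign over a long period, giving monitoring and model updates time to intervene.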
The success of an oracle attack depends on the number of queries the attacker can submit. Throttling query access can slow or prevent an attack. Frequent model updates can also help, by changing the model before an attacker can submit a sufficient number of queries. Note that when performing transfer learning on a pretrained model that is already public (such as ResNet50 trained on ImageNet (He et al., 2016)) or used by another entity, this vulnerability can still exist even if you curb the availability of the model. Here the attacker can gain access to the base pretrained model and have sufficient time and access to create a set of curated adversarial examples, and compare them to the target system. This can be done even if the attacker only has a few opportunities to test instances and observe the output.

### 2.4. Adversarial Reprogramming

Untargeted adversarial attacks aim to affect the performance of a model without a specific target output. Targeted attacks create an adversarial perturbation to force a particular output for a given input. Adversarial reprogramming, as demonstrated by Goodfellow and his team (Elsayed et al., 2019), goes beyond this by considering the ability of an attacker to alter the intended function of a machine learning model. They were able to redirect an ImageNet object classification model to perform a separate counting and classification task. This highlights the potential for malicious actors to manipulate models for their own purposes, ranging from the misuse or theft of compute resources to covert message passing that treats systems as spies. Just like evasion and oracle attacks, this attack does not require changing the model weights. Once enough knowledge is gained from inputs and outputs of the system, for instance through an oracle attack, an attacker can craft a program to create an adversarial example. The model will map that example to the output chosen by the attacker.
The initial assumption for this attack was that it requires white-box knowledge of the model, i.e., the structure and weights of the model are known to the attacker. However, it was later shown that this attack can be carried out even against a black-box model wherein the weights and structure of the network are unknown to the attacker (Tsai et al., 2020). Protection of the model’s availability and parameters can help in controlling the vulnerability to an adversarial reprogramming attack. The use of adversarial training methods like defensive distillation does not eliminate vulnerability to reprogramming (although it might increase the compute required). Other ways to reduce the risk of such an attack are regularly updating the model to ensure that it remains resilient against new types of attacks, and monitoring the model’s performance in real-world scenarios to detect any unusual behavior.

### 2.5. Traditional Attacks

Data flows in the pipeline through multiple components, from ingestion to monitoring. An attacker can target the traffic flow to access confidential information or even mount a man-in-the-middle attack. A model also passes through various states as it moves through the pipeline. Once trained, it holds the essence of the data in the form of weights. In production, the model resides in a model store, and is served during inference. The model can be attacked at any point in these state transitions. Some pipelines have feedback mechanisms to support model retraining. Of course, if feedback from many sources is used, then a few malicious values need not inhibit the training. However, if the feedback flow itself is compromised, then an attacker can inject or malform any data. Some techniques under research for addressing the traditional attack surface of an AI system include secure generation of the training set, training set obfuscation, and securely acquiring the sample at inference (Khalid et al., 2018).
By focusing on the security of the overall product, not just the machine learning component, organizations can protect their assets and stakeholders. Hardening the ML pipeline can defend against traditional attacks and possibly some AI-specific attacks. It also helps to maintain the overall integrity and confidentiality of data, and to comply with various regulations and industry standards. Security controls include (but are not limited to) the following:

#### 2.5.1. Constraining listening services and bindings

Binding a service connects it to a specific endpoint or location, such as a network address or port. On a default setup of an end-to-end system, one often finds many accessible services. Even if these services require secure credentials, it is a good practice to disable access to those that are not required outside the product boundary. By restricting access to the services and bindings, it becomes more difficult for an attacker to exploit vulnerabilities within the system. Constraining listening services and bindings can also help to improve performance and reduce the overall complexity of the AI system.

#### 2.5.2. Impose strict access control mechanisms

All software should have adequate authentication and authorization. The principle of least privilege (PoLP) is a policy ensuring users are only granted enough access to perform their tasks and no more (Steiner et al., 2018). It ensures that only authorized individuals or entities can access or change the system, reducing the risk of unauthorized access, data breaches and other malicious activities. This is especially important for AI systems, where seemingly innocuous privileges (e.g., API query access) can enable certain attacks (e.g., oracle or inference attacks).

#### 2.5.3. Traffic and data protection

Traffic protection means measures implemented to secure incoming and outgoing communication, usually at the IP level. Not all flows passing between product components are equal.
For instance, there should be separate flows for each plane of traffic (Jones, 2004). These planes may include Operation and Maintenance (O&M), data plane, control plane, and user plane traffic. The traffic should also be secured by an up-to-date version of transport layer security (TLS); as of the time of writing, TLS 1.3 is the preferred version (Rescorla, 2018). Data at rest should also be encrypted, since a data leak can expose sensitive information or help an attacker subvert or invert the model. Zero-trust principles require that communication channels not only be encrypted, but also authenticated in both directions. Data protection includes the data pipeline all the way back to the source.

#### 2.5.4. Periodic vulnerability analysis

Vulnerability assessment is the process of identifying, quantifying, and prioritizing the vulnerabilities in a system. This can be done manually or with automated tools such as grype (Anchore, 2023) and trivy (Security, 2023). One key step of a vulnerability assessment is identifying Common Vulnerabilities and Exposures (CVEs) (Corporation, 2023a) – a standardized way of describing a vulnerability or security weakness. Each has a unique identifier and includes information such as a brief description of the vulnerability, the affected software or system, and its severity. Vulnerabilities are found frequently, with several new ones identified every day. Fixing them might seem like a never-ending process, but it is important to prioritize the most critical ones and address them before the product hits the market, and then do periodic fixes. For development teams under tight deadlines this means a careful tradeoff between time, cost and risk. Full knowledge of vulnerability management in software product security requires a course of its own. Fortunately there are resources to help. The MITRE ATT&CK framework (Corporation, 2023b) provides a comprehensive understanding of the tactics and techniques used by attackers.
The NIST Cybersecurity Framework (of Standards and Technology, 2023b) provides organizations with a structured approach to managing their cybersecurity risks. The recently announced NIST AI Risk Management Framework Playbook (of Standards and Technology, 2023a) builds on the NIST Cybersecurity Framework and provides organizations with specific guidance on how to manage the unique risks posed by AI systems. OWASP (Open Web Application Security Project) keeps an up-to-date summary of the top 10 vulnerabilities that pertain to web applications (Project, 2021). Counterfit (Microsoft, 2023) is another open-source tool, released by Microsoft, for automating security testing of AI systems. It helps organizations perform AI security risk assessments to ensure their algorithms are robust, reliable, and trustworthy. By using these tools, organizations can develop a comprehensive cybersecurity strategy that accounts for the complex threat landscape and helps them effectively manage their risk.

#### 2.5.5. Secure coding practices

Secure coding is a method of developing software that prevents the accidental introduction of security vulnerabilities. It includes security code reviews, security education, and use of automatic code analysis tools. One secure coding framework is the Software Engineering Institute (SEI) Computer Emergency Response Team (CERT) Coding Standard (Institute, 2016). CERT has standard sets for C, C++, Android, Oracle Java, and Perl. As of now, CERT does not have a standard set for Python. Bandit (Authority, 2022) is a Python linting tool that helps developers identify and prevent security-related code issues in Python applications. Bandit checks the code against a set of predefined security rules and generates a report indicating the potential security issues.

#### 2.5.6. Event logging

Logging provides a record of events and activities within the system. It can provide evidence in the event of a security incident by showing what actions were taken and when they occurred.
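A minimal sketch of such an audit record for model inference is shown below; the field names and logger configuration are invented for illustration. Note that the record captures who queried which model version, but deliberately omits the raw query payload, which may contain sensitive data.

```python
import json
import logging

# A dedicated logger lets audit events be shipped and retained
# separately from ordinary debug output.
audit_log = logging.getLogger("audit")
audit_log.addHandler(logging.StreamHandler())
audit_log.setLevel(logging.INFO)

def log_inference(user_id, model_version, query_id):
    """Record who queried which model version.

    The raw query payload is deliberately left out: it may contain
    sensitive data that should not end up in logs.
    """
    record = {
        "event": "model_inference",
        "user": user_id,
        "model_version": model_version,
        "query_id": query_id,
    }
    audit_log.info(json.dumps(record))
    return record

log_inference("analyst-42", "fraud-model:1.3.0", "q-0001")
```

Emitting records as structured JSON makes the resulting audit trail queryable, so a sequence of events can later be traced back to the user who initiated it.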
In addition to security assessments, logs can be used for a variety of purposes, including debugging, troubleshooting, auditing, and monitoring. The system must be auditable: ideally, every user-initiated action should be traceable back to the user that initiated the action. Logs that show the sequence of activities leading to the source of an action are called audit trails. In AI systems, logs can include the queries made to the model (including the parameters), the model version, and the inference data feed. Logging involves tradeoffs. When every possible event is logged, the system is more auditable, but more compute and storage are required. In ML, depending on the querying frequency and the inference data size, the storage required for logging may grow very quickly with the number of interoperating components. So, one should be mindful of the items logged and how long logs should be kept. One possibility is to make such parameters configurable, with adequate documentation (such as a Security/Privacy User Guide). The user of the AI product is then aware and can make an informed decision on what to log. Also keep sensitive data out of logs, and control log access through PoLP.

#### 2.5.7. Ensure runtime environment security

Even if the model is developed with security principles in mind, if it is deployed in an insecure environment, the model and its inputs and outputs can be attacked. This facilitates targeting the system with the entire spectrum of adversarial attacks. Hence, employ measures that harden the runtime environment. The security measures mentioned in this traditional attack section are equally applicable to the runtime environment. Other measures include encrypting storage, monitoring interfaces, and periodic security patch updates.

#### 2.5.8. Secure the supply chain and development pipeline

Much of today’s AI code is open source, making it easy for vulnerabilities to be identified and exploited.
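Checksum verification of downloaded dependencies is one basic supply-chain control. A minimal sketch using Python's standard hashlib follows; the file name and contents are stand-ins for a real downloaded artifact and its published checksum.

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Compare a downloaded dependency against its published checksum
    before letting it anywhere near the build."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Demo with a throwaway file standing in for a downloaded wheel.
with open("model-lib.whl", "wb") as f:
    f.write(b"pretend package contents")

good = hashlib.sha256(b"pretend package contents").hexdigest()
print(verify_artifact("model-lib.whl", good))      # True
print(verify_artifact("model-lib.whl", "0" * 64))  # False: reject the artifact
```

Checksums only detect tampering in transit; pairing them with signature verification also establishes who published the artifact.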
Patches come out periodically. Fixing those vulnerabilities is a continuous process. Securing the software development supply chain involves implementing measures to ensure the integrity and security of the components and services used throughout the software development life cycle. Supply chain security includes:

* • Verifying the authenticity of the components and services used, such as open-source libraries, by checking their signatures and digital certificates.
* • Using a private repository or registry to store and manage internal components, reducing the risk of using untrusted or malicious components.
* • Regularly monitoring and updating dependencies, including open-source libraries.
* • Implementing security practices such as code signing, secure packaging, and continuous integration and deployment (CI/CD) to prevent tampering and maintain the integrity of the software. The CI/CD system itself should be secured appropriately as a production system.
* • Training employees to recognize and avoid potential supply chain threats, and implementing security policies and procedures to reduce the risk of a successful attack.

More detail on best practices for software supply chain security is provided by the Cloud Native Computing Foundation (CNCF) (Group, 2021).

## 3\. Security Pitfalls in AI Product Development

Security is an important aspect of AI products, yet it is often overlooked during development. Many AI developers focus primarily on building the technology and achieving high performance, without considering the potential security risks and vulnerabilities. This lack of security focus can result in AI products that are vulnerable and may not protect the data and information they process.

### 3.1. Lack of security expertise in your team

Security is likely to be the last thing on the mind of a Data Scientist, Machine Learning Engineer, or AI Product Developer.
Unless there is a dedicated security expert in your team, you are bound to overlook some security or privacy flaw that may creep into development. You cannot expect the entire team to undergo the rigorous security training that is expected of a security professional. Nevertheless, a minimal security training program should exist for all members, and a security expert should be made available to help with the details. For instance, a Data Scientist may not be expected to know the current TLS standards that protect data in transit from one component to another.

### 3.2. Missing security requirements at the start of a project engagement

When security requirements come late in the project, it can create complications for development and program management: Who takes ownership of the activity? Who has the appropriate expertise? What overhead is added to the project? Even if the project itself has little to do with security, an early security discussion will avoid unpleasant surprises.

### 3.3. Assuming security aspects will be taken care of by another team

Within a dedicated product development unit, the roles are often well-defined. There might be a dedicated team to handle security-related issues. Program management expects a representative from this team to participate in requirements discussions to close security gaps. However, when working with cross-functional teams, responsibilities aren’t always clear. It is vital in such situations to set clear expectations as to who handles the security function rather than assuming “it will be taken care of by another team”. It is best if every team member has knowledge of and involvement in security, but at a minimum, those responsible should be clearly understood and documented.

### 3.4. Delaying security and privacy compliance activity towards the end

Envision this scenario: your team has put in all their effort to polish a product based on their priority functional requirements.
The product then goes through a security assessment. The report uncovers some critical vulnerabilities, and component integration has introduced some traffic threats. Your models aren’t encrypted, and there are some packages that you should not even use in production. In the worst case, this can lead to an overhaul of the entire product design. Solution: shift-left security. This practice involves bringing the best practices of security earlier in the product design and development process, ideally from the very beginning, to avoid such issues.

### 3.5. Using features without evaluating their sensitivity

Let’s say you showcase a successful proof of concept (PoC), everyone gets excited, and the sales team wants to show it to the customers as soon as possible. In the assessment, you notice sensitive fields that shouldn’t be in the model, such as gender, region, and age. Once you remove these fields and retrain, model performance takes a hit – far below the promised value. The product ultimately fails to meet its requirements. This might have been averted by conducting a privacy impact assessment for all fields before they are considered features (or used to derive features) for fitting the model. This pitfall does not necessarily have to be ML model-related. In telecom, subscriber data is highly sensitive. Examples include the phone number (MSISDN) and IMSI. When these items are not specifically needed by the model, they should not be collected, or should be masked or removed from the training data. If they are required, consider using privacy-preserving mechanisms such as de-identification or anonymization. For instance, not all network operators need to access this data when using a dashboard for anomaly detection of cell traffic.

## 4\. Closing thought

Security of any kind is not just a checklist – it is a process. AI security is no different in this regard. It is crucial that security be integrated into the development process from the start and not as an afterthought.
Work on the following to help avoid the aforementioned pitfalls.

* • Understand the implications of AI security and address those gaps in the ML pipeline
* • Run a minimal security training programme for non-security professionals
* • Discuss security requirements and responsibilities as early as possible, preferably before the start of the project engagement
* • Shift-left security and security by design
* • Assess the privacy impact of your data before committing it to a model
* • Be wary of AI-specific attacks like poisoning, evasion and model/data access
* • Conduct periodic security audits and testing against the latest security vulnerabilities

The goal of AI is not just to prove a possibility but to build something that provides value to us and our customers. That means that the ’mundane’ aspects are just as important as the data science.

###### Acknowledgements.

To Amit Sharma, Head of AI Hub India 2 at GAIA, Kalpana Angamuthu, Senior Manager of Data Science at GAIA, and Michael Liljenstam, Principal Researcher at Ericsson Research, for reviewing this work and sharing their valuable suggestions.

## References

* Anchore (2023) Anchore. 2023\. Grype. Retrieved Jan 27, 2023 from https://github.com/anchore/grype
* Anderson (2021) Hyrum Anderson. 2021\. The Practical Divide between Adversarial ML Research and Security Practice: A Red Team Perspective. Retrieved Jan 24, 2023 from https://www.usenix.org/conference/enigma2021/presentation/anderson
* Authority (2022) Python Code Quality Authority. 2022\. Bandit – A security linter from PyCQA. Retrieved Jan 30, 2023 from https://github.com/PyCQA/bandit
* Carlini et al. (2022) Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. 2022\. Membership inference attacks from first principles. In _2022 IEEE Symposium on Security and Privacy (SP)_. IEEE, 1897–1914.
* Comission (2021) European Commission. 2021\. EU Artificial Intelligence Act.
Retrieved Feb 28, 2023 from https://artificialintelligenceact.eu
* Commission (2019) European Commission. 2019\. Ethics guidelines for trustworthy AI. Retrieved Jan 24, 2023 from https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
* Corporation (2023a) The MITRE Corporation. 2023a. CVE. Retrieved Jan 27, 2023 from https://www.cve.org/
* Corporation (2023b) The MITRE Corporation. 2023b. MITRE ATT&CK. Retrieved Jan 27, 2023 from https://attack.mitre.org/
* Elsayed et al. (2019) Gamaleldin F Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein. 2019\. Adversarial reprogramming of neural networks. In _International Conference on Learning Representations (ICLR)_.
* Fredrikson et al. (2015) Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. 2015\. Model inversion attacks that exploit confidence information and basic countermeasures. In _Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security_. 1322–1333.
* Garrison and Posey (2006) Chlotia P Garrison and Roderick B Posey. 2006\. Computer security checklist for non-security technology professionals. _Journal of International Technology and Information Management_ 15, 3 (2006), 7.
* Goodfellow et al. (2015) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015\. Explaining and harnessing adversarial examples. In _International Conference on Learning Representations (ICLR)_.
* Group (2021) CNCF Security Technical Advisory Group. 2021\. Software Supply Chain Best Practices.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016\. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. 770–778.
* Institute (2016) Software Engineering Institute. 2016\. SEI CERT Coding Standards. Retrieved Jan 30, 2023 from https://wiki.sei.cmu.edu/confluence/display/seccode/SEI+CERT+Coding+Standards
* Jagielski et al.
(2020) Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. 2020\. High accuracy and high fidelity extraction of neural networks. In _Proceedings of the 29th USENIX Conference on Security Symposium_. 1345–1362. * Jones (2004) G. Jones. 2004. _Operational Security Requirements for Large Internet Service Provider (ISP) IP Network Infrastructure_. RFC 3871. IETF. https://www.rfc-editor.org/info/rfc3871 * Khalid et al. (2018) Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, and Muhammad Shafique. 2018. Security for machine learning-based systems: Attacks and challenges during training and inference. In _2018 International Conference on Frontiers of Information Technology (FIT)_. IEEE, 327–332. * McCarthy et al. (2023) Andrew McCarthy, Essam Ghadafi, Panagiotis Andriotis, and Phil Legg. 2023. Defending against adversarial machine learning attacks using hierarchical learning: A case study on network traffic attack classification. _Journal of Information Security and Applications_ 72 (2023), 103398\. * Microsoft (2023) Microsoft. 2023\. Counterfit. Retrieved Jan 30, 2023 from https://github.com/Azure/counterfit/ * of Standards and Technology (2023a) National Institute of Standards and Technology. 2023a. NIST AI Risk Management Framework Playbook. Retrieved Jan 30, 2023 from https://pages.nist.gov/AIRMF/ * of Standards and Technology (2023b) National Institute of Standards and Technology. 2023b. NIST Cybersecurity Framework. Retrieved Jan 30, 2023 from https://www.nist.gov/cyberframework * Papadopoulos et al. (2021) Pavlos Papadopoulos, Will Abramson, Adam J Hall, Nikolaos Pitropakis, and William J Buchanan. 2021\. Privacy and trust redefined in federated machine learning. _Machine Learning and Knowledge Extraction_ 3, 2 (2021), 333–356. * Papernot et al. (2016) Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. 
In _2016 IEEE symposium on security and privacy (SP)_. IEEE, 582–597. * Project (2021) Open Web Application Security Project. 2021\. OWASP Top 10. Retrieved Jan 24, 2023 from https://owasp.org/www-project-top-ten/ * Rescorla (2018) E. Rescorla. 2018\. _The Transport Layer Security (TLS) Protocol Version 1.3_. RFC 8446. RFC Editor. * Sagar et al. (2020) Ramani Sagar, Rutvij Jhaveri, and Carlos Borrego. 2020\. Applications in security and evasions in machine learning: a survey. _Electronics_ 9, 1 (2020), 97. * Security (2023) Alpha Security. 2023\. Trivy. Retrieved Jan 27, 2023 from https://aquasecurity.github.io/trivy/ * Steiner et al. (2018) Stuart Steiner, Daniel Conte de Leon, and Ananth A Jillepalli. 2018. Hardening web applications using a least privilege DBMS access model. In _Proceedings of the Fifth Cybersecurity Symposium_. 1–6. * Tsai et al. (2020) Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho. 2020. Transfer learning without knowing: Reprogramming black-box machine learning models with scarce data and limited resources. In _International Conference on Machine Learning_. PMLR, 9614–9624. * Wang et al. (2022) Chen Wang, Jian Chen, Yang Yang, Xiaoqiang Ma, and Jiangchuan Liu. 2022. Poisoning attacks and countermeasures in intelligent networks: Status quo and prospects. _Digital Communications and Networks_ 8, 2 (2022), 225–234. * Yuan et al. (2019) Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. 2019\. Adversarial examples: Attacks and defenses for deep learning. _IEEE Transactions on Neural Networks and Learning Systems_ 30, 9 (2019), 2805–2824.
# WATT-EffNet: A Lightweight and Accurate Model for Classifying Aerial Disaster Images

Gao Yu Lee, Tanmoy Dam, Md Meftahul Ferdaus, Daniel Puiu Poenar, and Vu N. Duong

Gao Yu Lee, Tanmoy Dam, Md Meftahul Ferdaus, and Vu N. Duong are with the Air Traffic Management Research Institute (ATMRI), Nanyang Technological University, Singapore. (email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>[email protected]) Gao Yu Lee and Daniel Puiu Poenar are with the School of Electrical and Electronic Engineering (EEE), Nanyang Technological University, Singapore. (email<EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

Incorporating deep learning (DL) classification models into unmanned aerial vehicles (UAVs) can significantly augment search-and-rescue operations and disaster management efforts. In such critical situations, the UAV’s ability to promptly comprehend the crisis and optimally utilize its limited power and processing resources to narrow down search areas is crucial. Therefore, developing an efficient and lightweight method for scene classification is of utmost importance. However, current approaches tend to prioritize accuracy on benchmark datasets at the expense of computational efficiency. To address this shortcoming, we introduce the Wider ATTENTION EfficientNet (WATT-EffNet), a novel method that achieves higher accuracy with a more lightweight architecture compared to the baseline EfficientNet. The WATT-EffNet leverages width-wise incremental feature modules and attention mechanisms over width-wise features to ensure the network structure remains lightweight. We evaluate our method on a UAV-based aerial disaster image classification dataset and demonstrate that it outperforms the baseline by up to 15 times in terms of classification accuracy and $38.3\%$ in terms of computing efficiency as measured by Floating Point Operations per second (FLOPs).
Additionally, we conduct an ablation study to investigate the effect of varying the width of WATT-EffNet on accuracy and computational efficiency. Our code is available at https://github.com/TanmDL/WATT-EffNet. ###### Index Terms: Convolutional Neural Networks (CNN), WATT-EffNet, Disaster Scene classification, Unmanned Aerial Vehicles (UAVs) ## I Introduction Recent technological advancements in unmanned aerial vehicles (UAVs) have significantly improved their capabilities for activities such as remote sensing and visual geological surveying, leading to greater efficiency and effectiveness [1]. In particular, UAVs play a critical role in search and rescue operations, where they can monitor disaster areas for damage and locate survivors. By classifying disaster scenes, UAVs can quickly determine the type of disaster that has occurred and focus their search efforts. This is vital given the limited power and memory resources of UAVs. As noted by [2], the four primary types of disasters that can be found in aerial image databases for emergency response applications are fires, floods, building collapses, and traffic collisions. Fig. 1 displays representative samples of each image type. The dataset used in this study consists of aerial images of various disaster classes, including fires, floods, building collapses, and traffic collisions, collected from multiple sources such as the internet and UAV platforms. To simulate real-world scenarios as accurately as possible, the dataset also contains a significant number of non-disaster images labeled as normal, as highlighted in the same study. The objective of this study is to create an efficient and lightweight deep learning classifier that can swiftly detect and classify various events using effective sensors and microprocessors. To this end, we have analyzed various architectures, including MobileNet, SqueezeNet, ShuffleNet, and EfficientNet. 
These models were designed with the objective of being lightweight by incorporating a range of techniques, including depth-wise separable convolutions, pointwise filters, group convolution, channel shuffling, and strategic scaling of depth, width, and resolution. Specifically, MobileNet models were created for mobile platform applications and utilized depth-wise separable convolutions instead of traditional convolutions to reduce the number of training parameters relative to conventional convolutional networks of the same depth. As an illustration, MobileNetV1 [3] was the first MobileNet architecture proposed, requiring only 4.2 million training parameters. This is in contrast to the VGG16 and GoogleNet architectures, which require 138 million and 6.8 million parameters, respectively [4]. MobileNetV2, a subsequent architecture to MobileNetV1, introduced further modifications that reduced the number of training parameters from 4.2 million to 3.4 million [5]. Other instances include SqueezeNet [6], which utilized 1$\times$1 pointwise filters rather than the 3$\times$3 filters of traditional convolutional networks to reduce computational costs, and ShuffleNet [7], which introduced group convolution and added a channel shuffling operation in the depth-wise convolutional layer for efficient computation. Another model, EfficientNet [8], scales the depth, width and resolution of the network architecture strategically to achieve both computational efficiency and effectiveness. Despite the aforementioned modifications and reduced structural complexity, the number of training parameters utilized in these lightweight models still amounts to millions. Consequently, these models are not particularly suitable for prolonged UAV operations due to their high FLOPs demand on the on-board CPU [9]. Figure 1: Example of an image from each respective disaster class in the AIDER dataset.
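The parameter economy of depth-wise separable convolutions described above can be checked with simple arithmetic: a standard convolution needs one $K\times K$ filter per input–output channel pair, whereas the separable version needs one $K\times K$ filter per input channel plus a 1$\times$1 point-wise projection. A minimal sketch (the layer shape here is illustrative, not one taken from MobileNet itself):

```python
# Rough parameter counts (biases ignored) for a standard convolution
# versus a depth-wise separable one; the layer shape is illustrative.

def standard_conv_params(k, c_in, c_out):
    # One K x K filter per (input channel, output channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depth-wise: one K x K filter per input channel,
    # followed by a 1 x 1 point-wise convolution.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 128, 256)        # 294912
sep = depthwise_separable_params(3, 128, 256)  # 33920
print(std, sep, round(std / sep, 1))           # roughly 8.7x fewer parameters
```

For large channel counts the saving approaches a factor of $K^2$, which is what lets MobileNet-style networks stay in the low millions of parameters.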
Figure 2: The algorithmic structure of our WATT-EffNet, as shown on the left of the figure. Our modification to the MBConv block layer, using EfficientNet as the backbone, is shown on the top right of the figure, highlighted by the blue dotted box. We also illustrate the original MBConv block layer for comparison (red dotted box). The attention mechanism architecture is illustrated in the dotted orange box on the bottom right of the figure.

Unmanned aerial vehicles (UAVs) are critical for finding survivors in disaster zones quickly and efficiently. However, the success of these missions heavily relies on the computational efficiency and effectiveness of the UAVs’ on-board classification model. To address this challenge, we propose a novel Wider ATTENTION EfficientNet (WATT-EffNet) model that incorporates an attention module across a wider EfficientNet network. Unlike traditional models, this design emphasizes the width of the network rather than the depth. Additionally, by utilizing attention, it reduces the computational cost of the network by processing only key feature maps. The importance of increasing the width of the network while reducing its depth is supported by [10], whose findings consistently show gains in classification accuracy for widened residual networks due to their increased representational power. Moreover, they show that this gain can be achieved with only a marginal increase in the number of training parameters: a factor-of-2 increase in the network’s width combined with a depth reduction of 2.5 leads to a relatively small increase in the number of training parameters. To the best of our knowledge, a width-based architecture has not previously been combined with an attention-based mechanism in an EfficientNet model for UAV disaster scene classification.
In essence, the present letter outlines several contributions, which can be summarized as follows:

* • We present WATT-EffNet, an architectural innovation that leverages the principle of width as a foundation to augment the capabilities of the original EfficientNet model. Through the integration of attention modules and a reduction in overall complexity, this architecture aims to surpass the performance of the original EfficientNet by promoting both efficiency and effectiveness.
* • The WATT-EffNet architecture strikes a balance between computational efficiency and effectiveness, a critical consideration when classifying disaster scenes captured by UAVs, where mission success probability must be optimized amidst limited resource constraints. This approach deviates from the prevalent paradigm among state-of-the-art models, which do not necessarily take such operational constraints into account.
* • The efficacy of the proposed WATT-EffNet architecture is rigorously evaluated on a subset of the AIDER dataset, where the results demonstrate that our model achieves superior $F_{1}$ scores compared to the established baselines while utilizing a substantially lower number of FLOPs, exemplifying its computational efficiency.

## II WATT-EffNet

### II-A Width-varying feature module

The advent of deeper neural network architectures such as ResNet was a response to the problem of vanishing gradients, which plagues the training of deep neural networks. Vanishing gradients are characterized by the diminution of the magnitudes of the gradients computed during backpropagation, making it hard to apply meaningful updates to the weights of the network. This, in turn, can impede the successful training of deep neural networks. To overcome this problem, the concept of skip connections was introduced.
These connections enable the gradients to circumvent one or more layers and be propagated directly to the shallower layers of the network, allowing them to traverse the network with greater ease. This mechanism promotes the training of deep neural networks by preserving the gradient magnitudes. For a given residual unit $i$, with input $Y_{i-1}$, learnt feature mapping $\phi_{i}(\cdot)$, and trainable parameters $\boldsymbol{\omega}_{i}$, the output $Y_{i}$ of the residual unit can be defined recursively as $Y_{i}=\phi_{i}(Y_{i-1},\boldsymbol{\omega}_{i})+Y_{i-1}.$ (1) $\phi_{i}(\cdot)$ typically comprises two to three stacked convolutional stages, which, besides the convolution layers, also include batch normalization and a Rectified Linear Unit (ReLU) activation function. In each convolutional layer, $\boldsymbol{\omega}_{i}$ is determined by the kernel size $n\times m$ and the filter count $l$, so that $\boldsymbol{\omega}_{i}=\boldsymbol{\omega}_{i}(n,m,l)$, giving $nml$ parameters per layer. To enable a wider residual unit, a widening factor $k$ is introduced: any unit with $k>1$ is categorized as wide, with the original residual unit corresponding to $k=1$. Since we use EfficientNet as the backbone, as mentioned in the introduction, the units we modify are the MBConv blocks ($d$ in total). As illustrated in the top right of Fig. 2, the original MBConv block is composed of a 1$\times$1 convolutional layer followed by a 5$\times$5 depth-wise convolution. The Squeeze and Excite (SE) [11] block is then applied to improve feature representation by accounting for the inter-dependencies across features from different channels. Finally, another 1$\times$1 convolutional layer is applied. Except for the SE block, batch normalization and ReLU are incorporated in each layer of the MBConv block.
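The per-layer parameter count $nml$ and the widening factor $k$ introduced above can be made concrete with a toy sketch: widening a layer to $kl$ filters multiplies its parameter budget linearly by $k$. The kernel and filter sizes below are illustrative, not the actual WATT-EffNet configuration:

```python
# Parameter count of a single widened convolutional layer, following
# the omega_i(n, m, k*l) notation: an n x m kernel with the base
# filter count l scaled by the widening factor k (biases ignored).

def widened_layer_params(n, m, l, k=1):
    return n * m * k * l

base = widened_layer_params(3, 3, 64)        # original unit, k = 1
wide = widened_layer_params(3, 3, 64, k=6)   # widened unit, k = 6
print(base, wide, wide // base)              # parameters scale linearly with k
```

The WATT-EffNet-$d$-$k$ search later in the paper simply enumerates such $(d, k)$ pairs and discards any whose total parameter count exceeds 1M.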
In our WATT-EffNet, we expand upon the width $k$ in the MBConv block so that for each layer $\boldsymbol{\omega}_{i}=\boldsymbol{\omega}_{i}(n,m,kl)$ (as shown in Fig. 2), and hence for block $i$ $Y_{i}=\Phi_{i}(Y_{i-1},\boldsymbol{\omega}_{i}(n,m,kl))+Y_{i-1}$ (2) where $\Phi_{i}(\cdot)$ is now the feature mapping associated with the MBConv blocks. By applying (2) with one substitution step, the forward pass for our WATT-EffNet can be expanded as $\begin{split}Y_{i+2}&=\Phi_{i+2}(Y_{i+1},\boldsymbol{\omega}_{i+2}(n,m,kl))+Y_{i+1}\\ &=\Phi_{i+1}(Y_{i},\boldsymbol{\omega}_{i+1}(n,m,kl))+Y_{i}\\ &\quad+\Phi_{i+2}(\Phi_{i+1}(Y_{i},\boldsymbol{\omega}_{i+1}(n,m,kl))+Y_{i},\boldsymbol{\omega}_{i+2}(n,m,kl)).\end{split}$ (3) This forms the width-based MBConv block layer in each block of our WATT-EffNet, as illustrated by the light-blue box in Fig. 2. The baseline EfficientNet architecture has a total of 19 layers, including 17 MBConv blocks [8]. We modified the number of MBConv blocks $d$ and the width $k$ in our approach such that the number of training parameters did not exceed 1M. The resulting WATT-EffNet variants are therefore denoted WATT-EffNet-$d$-$k$. More details about the variations in our architecture are given in the experimental results section.

### II-B Attention mechanism

The attention module comprises the channel attention and the spatial attention modules, whose algorithmic structure is illustrated in the bottom right of Fig. 2. For the channel attention, the process involves extracting and squeezing the spatial dimension of the input feature map $\boldsymbol{\Phi}$, which is extracted from the wider network structure of the previous subsection, through parallel average and max pooling. The output features from each pooling operation are then fed into a shared Multi-Layer Perceptron (MLP) network, and the resulting features are finally merged using element-wise summation.
The mathematical representation is given by the following equation:

$\begin{split}\boldsymbol{M_{c}(\boldsymbol{\Phi})}&=\sigma(MLP(AvgPool(\boldsymbol{\Phi}))+MLP(MaxPool(\boldsymbol{\Phi})))\\ &=\sigma(\boldsymbol{\Omega_{1}}(\boldsymbol{\Omega_{0}}(\boldsymbol{{\Phi^{c}}_{avg}}))+\boldsymbol{\Omega_{1}}(\boldsymbol{\Omega_{0}}(\boldsymbol{{\Phi^{c}}_{max}}))),\end{split}$ (4)

where $\sigma$ denotes the sigmoid activation function, $\boldsymbol{\Omega_{0}}$ and $\boldsymbol{\Omega_{1}}$ are the weights of the shared MLP, $\boldsymbol{M_{c}(\boldsymbol{\Phi})}$ is the attention map associated with the channel attention, and $\boldsymbol{{\Phi^{c}}_{avg}}$ and $\boldsymbol{{\Phi^{c}}_{max}}$ denote the feature maps obtained from the average and max pooling, respectively. For the spatial attention, both average pooling and max pooling are applied once again and concatenated along the channel axis. A convolutional layer of filter size 7$\times$7 ($F^{7\times 7}$) is then applied to generate the spatial attention map. The spatial feature map can therefore be described as

$\begin{split}\boldsymbol{M_{s}(\Phi)}&=\sigma(F^{7\times 7}([AvgPool(\boldsymbol{\Phi});MaxPool(\boldsymbol{\Phi})]))\\ &=\sigma(F^{7\times 7}([\boldsymbol{{\Phi^{s}}_{avg}};\boldsymbol{{\Phi^{s}}_{max}}])),\end{split}$ (5)

where $\boldsymbol{M_{s}(\Phi)}$ is the attention map associated with the spatial attention, and $\boldsymbol{{\Phi^{s}}_{avg}}$ and $\boldsymbol{{\Phi^{s}}_{max}}$ denote the feature maps obtained from the average and max pooling, respectively, in this part of the attention module. Finally, the overall attention mechanism is described as follows:

$\begin{split}\boldsymbol{\Phi^{\prime}}&=\boldsymbol{M_{c}(\Phi)}\otimes\boldsymbol{\Phi},\\ \boldsymbol{\Phi^{\prime\prime}}&=\boldsymbol{M_{s}(\Phi^{\prime})}\otimes\boldsymbol{\Phi^{\prime}},\\ \end{split}$ (6)

where $\otimes$ denotes element-wise multiplication (with broadcasting) between the relevant feature and attention maps.
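A NumPy sketch of this channel-then-spatial attention may help make the data flow concrete. The MLP weights below are random stand-ins for learnt parameters, the learnt 7$\times$7 convolution $F^{7\times 7}$ is replaced by a fixed 7$\times$7 averaging filter, and the channel-wise concatenation of the pooled maps is approximated by their average, so this is a shape-level illustration rather than a faithful implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(phi, w0, w1):
    # Squeeze spatial dims by average and max pooling, pass both
    # through a shared two-layer MLP (weights w0, w1), then merge.
    avg = phi.mean(axis=(0, 1))          # (C,)
    mx = phi.max(axis=(0, 1))            # (C,)
    mlp = lambda v: w1 @ np.maximum(w0 @ v, 0.0)
    return sigmoid(mlp(avg) + mlp(mx))   # (C,) channel attention map

def spatial_attention(phi):
    # Pool along the channel axis and apply a fixed 7x7 averaging
    # filter as a crude stand-in for the learnt F^{7x7} convolution.
    avg = phi.mean(axis=2)
    mx = phi.max(axis=2)
    stacked = (avg + mx) / 2.0  # stand-in for channel concatenation
    h, w = stacked.shape
    out = np.zeros_like(stacked)
    pad = np.pad(stacked, 3, mode="edge")
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 7, j:j + 7].mean()
    return sigmoid(out)                  # (H, W) spatial attention map

rng = np.random.default_rng(0)
phi = rng.standard_normal((8, 8, 16))    # toy (H, W, C) feature map
c = 16
w0 = rng.standard_normal((c // 4, c))    # shared MLP, reduction ratio 4
w1 = rng.standard_normal((c, c // 4))

# Apply channel attention, then spatial attention, via broadcasting.
phi1 = phi * channel_attention(phi, w0, w1)[None, None, :]
phi2 = phi1 * spatial_attention(phi1)[:, :, None]
print(phi2.shape)  # the refined feature map keeps the input shape
```

Because both attention maps only rescale the feature map, the block can be dropped into a network without changing any downstream tensor shapes.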
This forms the attention block component of a WATT-EffNet block, as illustrated by the orange box in Fig. 2. A skip connection is also applied before the next WATT-EffNet block. The proposed architecture can therefore be repeated and used as a single modular block: the design can be stacked multiple times to create a deeper network while still retaining the benefits of the skip connections and the other features discussed above. This modular approach allows for flexibility and scalability in the design of the network.

## III Experimental Details

WATT-EffNet is evaluated against SOTA methods using a subset of the AIDER dataset, which serves as a benchmark [2]. We compare against methods that use minimal training parameters for algorithmic efficiency. Therefore, we did not include models like ResNet50 [12], VGG16, and Xception [13], which require more than 5 million training parameters. Instead, we include MobileNetV1 [3], MobileNetV2 [5], SqueezeNet [6], ShuffleNet [7], EfficientNet [8], and EmergencyNet [2]. The subset of the AIDER dataset used preserves the class imbalance distribution. The train-valid-test split ratio is 4:1:2, the same as in the AIDER experiment. The original dataset has 700 images per disaster class and 5700 normal images, while the subset has 6433 images, as detailed in Table I.

TABLE I: List of training, validation and test image sets for each class in the subset of the AIDER dataset.

Class | Train | Valid | Test | Total per Class
---|---|---|---|---
Collapsed Building | 367 | 41 | 103 | 511
Fire | 249 | 63 | 209 | 521
Flood | 252 | 63 | 211 | 526
Traffic | 232 | 59 | 194 | 485
Normal | 2107 | 527 | 1756 | 4390
Total Per Set | 3207 | 753 | 2473 | 6433

All images were resized to 224$\times$224$\times$3. To address class imbalance, we applied under-sampling to the training and validation sets using the RandomUnderSampler module from the imblearn library, as in the AIDER experiment.
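The under-sampling step can be mimicked without the imblearn dependency; the sketch below reproduces RandomUnderSampler's default behaviour of down-sampling every class to the size of the smallest class, using the training counts from Table I:

```python
import numpy as np

def random_undersample(labels, seed=0):
    # Keep n_min randomly chosen samples per class, where n_min is the
    # size of the smallest class (imblearn's default strategy).
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), n_min, replace=False)
        for c in classes
    ])
    return np.sort(keep)

# Training-set counts from Table I (the normal class dominates).
train_counts = {"collapsed": 367, "fire": 249, "flood": 252,
                "traffic": 232, "normal": 2107}
y = np.concatenate([np.full(n, c) for c, n in train_counts.items()])
idx = random_undersample(y)
print(len(idx))  # 5 classes x 232 samples = 1160
```

After this step every class contributes equally to each training epoch, at the cost of discarding most of the normal-class images.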
All simulations were done using the TensorFlow Keras library in Python on Google’s Colab Pro+ Tesla T4 GPUs and TPUs. All experimental simulations, including our approach, were run for 300 epochs. In the preprocessing stage, the intensity values of each pixel in the images were divided by 255 to normalize the data. Additionally, a kernel regularizer with a coefficient of 1e-4 was incorporated into every convolutional block layer. Our approach employed both categorical cross-entropy and cosine loss as the training loss, as suggested by [14], which argues that this combination is effective for models that are trained from scratch without any pre-training on small data, which is the case for our dataset. Both losses were given equal importance in the combined loss function. The optimization algorithm used was Root-Mean-Square propagation (RMSprop) with a decay rate of 1e-6. Because the proposed WATT-EffNet model is built on the wider-structure and attention-mechanism principles, identifying the optimal width-to-depth ratio ($k$ and $d$, respectively) is of paramount importance. We therefore examine the possible combinations of width ($k$) and depth ($d$) over the ranges $k=\{2,3,4,5,6,7\}$ and $d=\{1,3,5\}$. The next section explores these combinations on the AIDER dataset in order to arrive at the optimal values for $k$ and $d$.

## IV Results and Discussions

Table II illustrates that the proposed model has demonstrated a significant improvement in performance in comparison to the baseline EfficientNet model as well as the various benchmark methods. Specifically, the performance improvement has been quantified as a $10.6\%$ increase in terms of $F_{1}$ score.
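As an aside on the training setup described above, the equal-weight combination of categorical cross-entropy and cosine loss can be sketched in a few lines of NumPy; this illustrates the loss only, not the actual Keras training loop:

```python
import numpy as np

def combined_loss(y_true, y_pred, eps=1e-7):
    # Equal-weight sum of categorical cross-entropy and cosine loss
    # (1 - cosine similarity), per sample, averaged over the batch.
    y_pred = np.clip(y_pred, eps, 1.0)
    ce = -np.sum(y_true * np.log(y_pred), axis=1)
    cos = 1.0 - np.sum(y_true * y_pred, axis=1) / (
        np.linalg.norm(y_true, axis=1) * np.linalg.norm(y_pred, axis=1))
    return np.mean(ce + cos)

y_true = np.array([[1.0, 0.0, 0.0]])       # one-hot target
good = np.array([[0.9, 0.05, 0.05]])       # confident, correct prediction
bad = np.array([[0.1, 0.8, 0.1]])          # confident, wrong prediction
print(combined_loss(y_true, good) < combined_loss(y_true, bad))  # True
```

Both terms are minimized by the same one-hot prediction, so the combination keeps the usual cross-entropy objective while the cosine term, as [14] argues, behaves more robustly when training from scratch on small data.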
Additionally, the proposed model requires 35 times fewer FLOPs than the baseline EfficientNet under the same experimental conditions, a drastic reduction in computational complexity. Furthermore, it is worth noting that although the EmergencyNet model utilizes fewer parameters, the WATT-EffNet model’s wider structure allows for more efficient utilization of parameters, resulting in lower FLOPs and lower overall computational complexity. The evaluations reveal a substantial enhancement in performance compared to the EmergencyNet model, with an improvement of more than $6.5\%$, together with a remarkable reduction in computational complexity, as quantified by FLOPs, with an 8-fold decrease.

TABLE II: $F_{1}$ scores, FLOPs, and training parameters for the existing SOTA CNN-based approaches on the AIDER dataset∗. Bolded values denote the highest $F_{1}$ score, lowest FLOPs and lowest training parameters.

SOTA Model | $F_{1}(\%)$($\shortuparrow$) | FLOPs ($\shortdownarrow$) | Parameters ($\shortdownarrow$)
---|---|---|---
MobileNetV1 [3] | 80.2 | 972 | 3,233,861
MobileNetV2 [5] | 81.0 | 625 | 2,282,629
SqueezeNet [6] | 82.3 | 531 | 725,073
ShuffleNet [7] | 81.7 | 972 | 4,023,865
EfficientNet [8] | 80.0 | 774 | 3,499,453
EmergencyNet [2] | 83.1 | 185 | 94,420
WATT-EffNet-3-6 | 88.5 | 22 | 688,661

* • *All models are trained on the same environment.

We have examined the performance of each class through a confusion matrix and a precision vs. recall curve, illustrated in Fig. 3 and 4 respectively. Our analysis revealed that the normal class had the lowest prediction percentage (81.6%) among all the classes, due to the presence of images that did not have an ideal view perspective and in which the key features occupied only a small area of the image. On the other hand, the traffic incident class had the highest prediction percentage (94.2%) among all the classes.
However, the $F_{1}$ values obtained in our work were lower than those reported in the EmergencyNet study. This is because the methods utilized in our context are mostly, if not all, CNN-based networks, and such networks usually perform better with increasing amounts of training data. Nevertheless, our optimal lightweight algorithmic design is still capable of maximizing learning even when using a limited dataset for training. Lastly, it is worth mentioning that some works, such as [15] and [16], have highlighted the significance of incorporating a finer spatial resolution during classification and have assessed its impact on the classification performance. This is crucial in disaster scenarios, as they typically occur within a limited region of a larger captured field of view. While this aspect was not the primary focus of our research, it is an intriguing and critical direction that we plan to explore in our future work.

Figure 3: The confusion matrix for our WATT-EffNet-3-6.

Figure 4: The precision vs. recall curves for the predicted classes using our aforementioned model against the original classes. Here, classes 0–4 represent the collapsed building, fire, flood, traffic incident and normal classes, respectively.

TABLE III: $F_{1}$ scores, FLOPs, training parameters for our WATT-EffNet framework as a function of number of MBConv blocks (first value in model variant name) and widths (second value in model variant name) with respect to the original EfficientNet framework∗.
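The per-class analysis above rests on standard quantities that can be computed directly from the confusion matrix; the sketch below uses a tiny hypothetical prediction set over the five AIDER classes, not the paper's actual predictions:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # Rows index the true class, columns the predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_f1(cm):
    # Precision, recall and F1 per class from the confusion matrix.
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    return 2 * precision * recall / np.maximum(precision + recall, 1e-12)

# Hypothetical predictions over the five classes
# (0: collapsed building, 1: fire, 2: flood, 3: traffic, 4: normal).
y_true = np.array([0, 0, 1, 1, 2, 3, 4, 4, 4, 4])
y_pred = np.array([0, 4, 1, 1, 2, 3, 4, 4, 4, 0])
cm = confusion_matrix(y_true, y_pred, 5)
print(per_class_f1(cm).round(2))
```

Macro-averaging the per-class F1 values gives the kind of single-number summary reported in Tables II–IV.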
Model Variant | $F_{1}(\%)$($\shortuparrow$) | FLOPs ($\shortdownarrow$) | Parameters ($\shortdownarrow$)
---|---|---|---
WATT-EffNet-1-2 | 63.1$\pm$0.02 | 22 | 8,501
WATT-EffNet-1-3 | 64.2$\pm$0.07 | 22 | 13,693
WATT-EffNet-1-4 | 66.5$\pm$0.38 | 22 | 19,909
WATT-EffNet-1-5 | 68.2$\pm$0.03 | 22 | 27,149
WATT-EffNet-1-6 | 70.4$\pm$0.87 | 22 | 35,413
WATT-EffNet-1-7 | 72.7$\pm$0.05 | 22 | 44,701
WATT-EffNet-3-2 | 84.2$\pm$1.03 | 22 | 106,629
WATT-EffNet-3-3 | 83.3$\pm$1.08 | 22 | 205,673
WATT-EffNet-3-4 | 86.0$\pm$1.05 | 22 | 335,693
WATT-EffNet-3-5 | 85.2$\pm$0.98 | 22 | 496,689
WATT-EffNet-3-6 | 88.5$\pm$0.76 | 22 | 688,661
WATT-EffNet-3-7 | 87.3$\pm$0.72 | 22 | 911,609
WATT-EffNet-5-2 | 86.8$\pm$1.05 | 22 | 371,493
WATT-EffNet-5-3 | 86.4$\pm$0.93 | 22 | 720,233

* • *All models are trained on the same environment.

#### Ablation Studies

The WATT-EffNet model was developed with the goal of achieving high accuracy while maintaining a lightweight architecture. To evaluate its performance, we conducted an ablation study by varying the number of MBConv blocks ($d$) and the width ($k$) of the model. Additionally, we extended the study to include variants with and without the attention mechanism, as recorded in Tables III and IV, respectively. The results of the WATT-EffNet variants with the attention mechanism are presented in Table III, quantified by the mean and standard deviation of the $F_{1}$ scores. The first number in the variant name indicates the number of MBConv blocks ($d$) used in the network, while the second number represents the width multiplier ($k$). The $F_{1}$ score, FLOPs, and number of parameters are reported for each variant. The best results were obtained with $d=3$ and $k=6$; however, it is noteworthy that results comparable to or better than the SOTA models were already obtained with the smaller value $k=2$, using fewer parameters than those models.
It is worth mentioning that we examined all possible configurations whose parameter counts did not exceed 1M. Similarly, Table IV presents the results of the WATT-EffNet variants with the attention mechanism removed. The $F_{1}$ score, FLOPs, and number of parameters are reported for each variant. The best results were again obtained with $d=3$ and $k=6$. The $F_{1}$ scores in Table IV are lower than those in Table III, indicating that incorporating attention improves the classification performance, by a significant margin in some cases and by a small margin in others (e.g., WATT-EffNet-3-3 improved in $F_{1}$ by only around 0.2%, while WATT-EffNet-1-6 improved by around 6%). It is worth mentioning that the FLOPs remain the same regardless of whether attention is incorporated, which implies that attention modules can be added even under a limited computational budget, since no additional cost is incurred to enhance the classification performance.

TABLE IV: WATT-EffNet metrics without the attention mechanism∗.

Model Variant | $F_{1}(\%)$($\shortuparrow$) | FLOPs ($\shortdownarrow$) | Parameters ($\shortdownarrow$)
---|---|---|---
WATT-EffNet-1-2 | 62.1$\pm$0.70 | 22 | 8,501
WATT-EffNet-1-3 | 63.2$\pm$0.25 | 22 | 13,693
WATT-EffNet-1-4 | 63.9$\pm$0.28 | 22 | 19,909
WATT-EffNet-1-5 | 65.9$\pm$1.39 | 22 | 27,149
WATT-EffNet-1-6 | 66.4$\pm$2.44 | 22 | 35,413
WATT-EffNet-1-7 | 69.1$\pm$0.91 | 22 | 44,701
WATT-EffNet-3-2 | 81.5$\pm$1.04 | 22 | 106,629
WATT-EffNet-3-3 | 83.1$\pm$1.22 | 22 | 205,673
WATT-EffNet-3-4 | 85.2$\pm$0.98 | 22 | 335,693
WATT-EffNet-3-5 | 85.0$\pm$0.92 | 22 | 496,689
WATT-EffNet-3-6 | 85.4$\pm$1.29 | 22 | 688,661
WATT-EffNet-3-7 | 85.2$\pm$0.46 | 22 | 911,609
WATT-EffNet-5-2 | 83.5$\pm$0.88 | 22 | 371,493
WATT-EffNet-5-3 | 84.3$\pm$0.89 | 22 | 720,233

* • *All models are trained on the same environment.
## V Conclusions

The present study introduces a novel width-based EfficientNet architecture, named Wider ATTENTION EfficientNet (WATT-EffNet), that incorporates the convolutional block attention mechanism. This new network architecture enables the creation of a shallower EfficientNet-based classifier with reduced computational processing requirements, by selectively attending to only the most informative features in the feature map. The proposed WATT-EffNet variants were trained and evaluated on a challenging subset of the AIDER dataset, featuring a highly imbalanced distribution of non-disaster and disaster images of different classes. The obtained results consistently demonstrate the effectiveness and efficiency of the proposed architecture in comparison to state-of-the-art methods. The significance of this work lies in highlighting the role of width and attention in a CNN-based network in enhancing both efficiency and classification performance, making it a promising solution for visual-based UAV search-and-rescue operations. In future work, the proposed WATT-EffNet architecture will be tested in real-time UAV operations for validation, and alternative methods such as GAN-based augmentation [17, 18] will be explored to further improve the model’s performance.

## Acknowledgement

This research is supported by the Civil Aviation Authority of Singapore and NTU under their collaboration in the Air Traffic Management Research Institute. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Civil Aviation Authority of Singapore.

## References

* [1] A. Valsan, B. Parvathy, V. D. GH, R. Unnikrishnan, P. K. Reddy, and A. Vivek, “Unmanned aerial vehicle for search and rescue mission,” in _2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)_. IEEE, 2020, pp. 684–687.
* [2] C. Kyrkou and T.
Theocharides, “Emergencynet: Efficient aerial image classification for drone-based emergency monitoring using atrous convolutional feature fusion,” _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 13, pp. 1687–1699, 2020.
* [3] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” _arXiv preprint arXiv:1704.04861_, 2017.
* [4] S. Liu and W. Deng, “Very deep convolutional neural network based image classification using small training sample size,” in _2015 3rd IAPR Asian conference on pattern recognition (ACPR)_. IEEE, 2015, pp. 730–734.
* [5] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2018, pp. 4510–4520.
* [6] F. Iandola, S. Han, M. Moskewicz, K. Ashraf, W. Dally, and K. Keutzer, “Squeezenet: Alexnet-level accuracy with 50$\times$ fewer parameters and <0.5 MB model size,” _arXiv preprint arXiv:1602.07360_, 2016.
* [7] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “Shufflenet v2: Practical guidelines for efficient cnn architecture design,” in _Proceedings of the European conference on computer vision (ECCV)_, 2018, pp. 116–131.
* [8] M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in _International conference on machine learning_. PMLR, 2019, pp. 6105–6114.
* [9] F. Yao, S. Wang, L. Ding, G. Zhong, L. B. Bullock, Z. Xu, and J. Dong, “Lightweight network learning with zero-shot neural architecture search for uav images,” _Knowledge-Based Systems_, vol. 260, p. 110142, 2023.
* [10] S. Zagoruyko and N. Komodakis, “Wide residual networks,” _arXiv preprint arXiv:1605.07146_, 2016.
* [11] J. Hu, L. Shen, G. Sun, and S.
Albanie, “Squeeze-and-excitation networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, vol. 5, 2018.
* [12] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 770–778.
* [13] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 1251–1258.
* [14] B. Barz and J. Denzler, “Deep learning on small datasets without pre-training using cosine loss,” in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2020, pp. 1371–1380.
* [15] D. He, Q. Shi, X. Liu, Y. Zhong, and L. Zhang, “Generating 2m fine-scale urban tree cover product over 34 metropolises in china based on deep context-aware sub-pixel mapping network,” _International Journal of Applied Earth Observation and Geoinformation_, vol. 106, p. 102667, 2022.
* [16] M. Mirik and R. J. Ansley, “Utility of satellite and aerial images for quantification of canopy cover and infilling rates of the invasive woody species honey mesquite (prosopis glandulosa) on rangeland,” _Remote Sensing_, vol. 4, no. 7, pp. 1947–1962, 2012.
* [17] T. Dam, S. G. Anavatti, and H. A. Abbass, “Mixture of spectral generative adversarial networks for imbalanced hyperspectral image classification,” _IEEE Geoscience and Remote Sensing Letters_, 2020.
* [18] T. Dam, M. M. Ferdaus, M. Pratama, S. G. Anavatti, S. Jayavelu, and H. Abbass, “Latent preserving generative adversarial network for imbalance classification,” in _2022 IEEE International Conference on Image Processing (ICIP)_. IEEE, 2022, pp. 3712–3716.
Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). SwissText’23: The 8th edition of the Swiss Text Analytics Conference – Generative AI & LLM, June 12–14, 2023, Neuchâtel, Switzerland

# Large Language Model Prompt Chaining for Long Legal Document Classification

Dietrich Trautmann, Thomson Reuters Labs, Zug, Switzerland (2023)

###### Abstract

Prompting is used to guide or steer a language model in generating an appropriate response that is consistent with the desired outcome. Chaining is a strategy used to decompose complex tasks into smaller, manageable components. In this study, we utilize prompt chaining for extensive legal document classification tasks, which present difficulties due to their intricate domain-specific language and considerable length. Our approach begins with the creation of a concise summary of the original document, followed by a semantic search for related exemplar texts and their corresponding annotations from a training corpus. Finally, we prompt the model to assign a task-specific label, leveraging in-context learning from the few-shot prompt. We demonstrate that through prompt chaining we can not only enhance performance over zero-shot prompting, but also, with smaller models, surpass the micro-F1 score achieved by larger models such as zero-shot ChatGPT.

###### keywords: Prompt Chaining $\cdot$ Prompt Engineering $\cdot$ Long Legal Documents $\cdot$ Legal NLP $\cdot$ Legal AI

## 1 Introduction

The legal domain, with its often challenging tasks and complex long documents, is an important field of study for natural language processing (NLP) and machine learning [1, 2]. Long legal document text classification tasks can be challenging due to several factors, including large size, complex language and specific vocabulary, highly specialized content structure, imbalanced data (many common cases vs.
a long-tail of peculiar ones), subjectivity (open to interpretation and debate), and the need for expensive, manual annotations from subject matter experts. The recent surge in the utilization of legal benchmarks has stimulated a proliferation of innovative solutions harnessing pre-trained language models [3]. Conventionally, these methodologies necessitate an intensive annotation process (though some utilize metadata annotations), followed by a costly fine-tuning process for the models [3, 4]. The advent of large-scale pre-training of large language models (LLMs) has presented an opportunity to leverage them directly through natural language prompting [5], circumventing the need for additional task-dependent fine-tuning. Prompting involves providing a specific instruction, query, or question to an LLM to generate a specific output or response. The input, or prompt, steers the system towards producing a response meaningfully related to it. The technique of prompt chaining [6, 7] has shown promise in NLP, sequentially linking multiple prompts to guide the generation process (Fig. 1). Through the use of consecutive prompts, the system can produce more contextually relevant responses for each step and more complex responses for the overall task.

Figure 1: _Prompt Chaining_ for Legal Document Classification

Prompt chaining proves particularly advantageous in long legal document classification, improving task performance, efficiency, flexibility, and consistency via the inspection of individual steps in the chain [8]. The technique enhances the interpretability of the overall classification and permits debugging of complex reasoning tasks upon failure [9]. Overall, prompt chaining is a valuable tool for classifying long legal documents, helping to improve both the performance and the efficiency of the classification process.
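A chain like the one in Fig. 1 composes a summarization prompt with a classification prompt, the output of the first feeding the second. The sketch below is a minimal illustration of that plumbing: `stub_llm` is a placeholder for a real model call, and the prompt wordings are illustrative, not the templates used in this work:

```python
# Minimal two-step prompt chain: summarize, then classify.
# `llm` is any callable that maps a prompt string to generated text;
# here it is stubbed so the chain can be exercised end to end.

def summarize_step(llm, document: str, max_words: int = 128) -> str:
    prompt = f"{document}\n\nSummarize the legal case above in {max_words} words:"
    return llm(prompt)

def classify_step(llm, summary: str, labels: list) -> str:
    prompt = (
        f"Case summary: {summary}\n"
        f"Question: was any article violated? Answer with one of {labels}.\n"
        "Answer:"
    )
    answer = llm(prompt)
    # Keep only a label the task actually allows.
    return next((label for label in labels if label in answer), labels[0])

def prompt_chain(llm, document: str, labels: list) -> str:
    return classify_step(llm, summarize_step(llm, document), labels)

# Stub model: returns a canned response depending on the prompt shape.
def stub_llm(prompt: str) -> str:
    return "YES" if "Answer:" in prompt else "A short summary of the case."

print(prompt_chain(stub_llm, "FACTS: ...", ["YES", "NO"]))  # -> YES
```

Swapping `stub_llm` for a real generation call leaves the chain structure unchanged, which is what makes individual steps inspectable and debuggable.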
Prompt chaining allows language models to build on their previous outputs and provide more nuanced classification results, and it can be customized to meet the specific needs of the legal document classification task. Our contributions in this work are:

* • We show that we can successfully chain prompts for legal document classification tasks (visualized in Fig. 1).
* • We apply prompt chaining on one binary classification task and on one multi-class text classification task.
* • We improve the results over zero-shot prompting with our chaining approach.
* • Our prompt chaining approach even outperforms zero-shot ChatGPT prompting on the micro-F1 score.

## 2 Related Work

In terms of related literature, we focus on legal document classification and current prompting approaches, as well as the combination of the two fields.

### 2.1 Legal Document Classification

Documents, with their characteristically long textual data, often pose significant challenges for automated machine learning methods in processing and classification tasks [10], [11]. These challenges become more pronounced in the legal domain due to additional complexities such as intricate grammar, nested sentences, domain-specific vocabulary, and extensive use of abbreviations [12]. The _LexGLUE_ benchmark [3] represents a comprehensive consolidation of recent datasets involving long legal documents, exclusively in the English language. It includes legal documents related to EU & US law, as well as contracts, with tasks pertaining to multi-label and multi-class classification and multiple-choice question answering. In [13], the authors evaluated a hierarchical approach for modeling long documents, and [14] investigated strategies to augment the context window of transformers for domain-specific tasks, including the aforementioned _LexGLUE_ benchmark. While benchmarks in other languages do exist, such as _MultiEURLEX_ [4], our work focuses solely on the English language.
### 2.2 Prompting

Several noteworthy projects aim to consolidate, evaluate, and standardize prompting approaches across diverse tasks and domains. The two most substantial are OpenPrompt [15] and PromptSource [16].

##### OpenPrompt provides a user-friendly and research-friendly toolkit for prompt-learning in pre-trained language models (PLMs). The advent of prompt-learning in natural language processing sparked a need for a standardized implementation framework. OpenPrompt caters to this need by delivering a modular and extendable toolkit that accommodates various PLMs, task formats, and prompting modules in a unified paradigm.

##### PromptSource is a toolkit designed to facilitate the development and sharing of natural language prompts for training and querying language models in NLP. It offers a templating language for creating data-linked prompts, a swift iterative development interface, and a community-driven set of guidelines for contributing new prompts. Currently, the platform offers over 2,000 prompts for approximately 170 datasets, promoting collaboration and efficient utilization of prompts for language model training and querying.

##### Prompt Chaining as a concept was explored in [8], where prompts were chained using a visual program editor. Now, there are frameworks with prompt chaining at their core, such as LangChain [17], LLamaIndex [18], and MiniChain [19].

### 2.3 Legal Prompting

Recent research has combined prompting and natural language tasks in the legal domain. In [20], the authors evaluated zero-shot prompting on the legal judgment prediction task using multilingual data from the European Court of Human Rights (ECHR) and the Federal Supreme Court of Switzerland (FSCS). Meanwhile, [21] appraised _GPT-3_’s zero- and few-shot capabilities for legal reasoning tasks on the COLLIE entailment task (using English translations of the Japanese bar exam).
The GPT-3.5 model was evaluated on the US Bar Exam [22], and GPT-4 [23] has demonstrated proficiency in passing multiple tests through zero-shot prompting.

## 3 Data

The datasets utilized in our study are sourced from two widely recognized benchmarks comprising lengthy documents in the legal domain: the European Court of Human Rights (ECHR) and the Supreme Court of the United States (SCOTUS). These datasets form part of the _LexGLUE_ benchmark [3].

### 3.1 ECHR

The ECHR dataset comprises approximately $11,000$ cases sourced from the European Court of Human Rights public database. This dataset is split into training, development, and test sets. Each case includes factual paragraphs, along with the corresponding ECHR articles that were violated or alleged to be violated. The original task is to predict the violated human rights articles from a case’s facts. However, for the purpose of our study, we have simplified this task to a binary classification problem: we distinguish cases based on whether there was a violation of any human rights articles, irrespective of which specific articles were violated.

### 3.2 SCOTUS

The SCOTUS dataset provides insight into the highest federal court in the USA, which handles complex or controversial cases unresolved by lower courts. This dataset combines information from SCOTUS opinions and the Supreme Court Database (SCDB). The SCDB offers metadata for all cases spanning from 1946 to 2020. Utilizing the SCDB, the dataset classifies court opinions into 14 issue areas (refer to App. B). The dataset is divided chronologically into training (5,000 samples, 1946–1982), development (1,400 samples, 1982–1991), and test (1,400 samples, 1991–2016) sets, each covering a distinct time period.

## 4 Models

Our experimental design incorporates both general-purpose text generation models and task-specific summarization models.

### 4.1 Generation Models

We used two different 20-billion-parameter LLMs in our text generation steps.
Both models have a context window of up to 2048 tokens.

#### 4.1.1 GPT-NeoX

The _GPT-NeoX_ model [24] is an autoregressive language model, specifically a decoder-only model111https://hf.co/EleutherAI/gpt-neox-20b, trained on the Pile dataset [25]. This model’s weights are openly available under a permissive license (Apache 2.0 222https://www.apache.org/licenses/LICENSE-2.0.html).

#### 4.1.2 Flan-UL2

_Flan-UL2_ [26] is an encoder-decoder model333https://hf.co/google/flan-ul2 based on the T5 architecture, trained with the mixture-of-denoisers objectives (diverse span corruption and prefix language modeling tasks). This model was further instruction fine-tuned using the Flan prompting collection [27]. The collection contains instructions for a diverse set of tasks (e.g., to summarize a text; to classify based on a list of options). The model is publicly available with an open-source license (Apache 2.0).

### 4.2 Summarization Models

We also used task-specific summarization models for the creation of the legal summaries. In our experiments, given the lack of ground-truth summaries for our long legal documents, we found that the task-specific summarization models created more coherent summaries than prompting the general generation models.

#### 4.2.1 BRIO

The BRIO444https://hf.co/Yale-LILY/brio-cnndm-uncased model [28] is an abstractive summarization model that achieves state-of-the-art results on the _CNN/DailyMail_ and _XSum_ datasets. It uses BART [29] as its base model and has a context window of 1024 tokens.

#### 4.2.2 PRIMERA

The PRIMERA555https://hf.co/allenai/primera-multi_lexsum-source-short model [30] is an abstractive summarization model that was trained on the _Multi-LexSum_ dataset at different granularities. We used the variant trained to produce a short summary from the full source document; it has a context window of 1024 tokens.
The other options are, besides different model architectures, long and tiny summaries.

### 4.3 Semantic Similarity Search

We use semantic similarity search for few-shot prompt building, where we retrieve summaries from the training set that are semantically similar to a target summary (from either the development or the test set). For this purpose, our summaries were encoded using the _sentence-transformers_ library [31] and the _custom-legalbert_ 666https://hf.co/zlucia/custom-legalbert model [32]. Furthermore, we used the _annoy_ 777https://github.com/spotify/annoy library for the approximate nearest neighbor search (semantic similarity search).

## 5 Prompt Chaining

Prompt chaining is a methodology employed to decompose complex tasks into smaller, manageable sub-tasks. A prompt chain typically comprises several prompts, either task-specific or general-purpose, each serving a single purpose. The output of one prompt feeds into the next as an input. In our approach, we utilize pre-defined steps; however, this methodology could be further optimized by incorporating a selection process for the next step or by introducing stopping criteria in the output generation, as exemplified in [9] and [33]. The steps of our prompt chaining process for the classification of long legal documents are depicted in Fig. 1. In the following sections, we delve into each of the primary steps: summarization, few-shot prompt building, and final label generation.

### 5.1 Summary Generation

The first step in our prompt chaining approach is the generation of a succinct summary of the legal case text. As a majority of legal documents are lengthy, often exceeding the context window of $2048$ tokens provided by contemporary language models, we create summaries for chunks of the whole document. These chunks are crafted based on the model’s context window.
Sentences are sequentially stacked until the context limit of the respective model is reached ($1024$ or $2048$ tokens). After the initial pass over the full document, the summary generation process for chunks is iteratively continued until the desired summary length, in our case up to $128$ tokens, is achieved. These summaries typically consist of a few sentences (up to 5) derived from our data. Initial experimentation with direct prompting approaches on large language models resulted in variable outcomes. The templates used for prompting included:

* • INPUT TEXT _In summary,_
* • INPUT TEXT _TLDR:_

Since we do not possess ground-truth summaries for the documents, our assessments relied on manual inspection of a subset of the summaries generated (from the training set). The inspection indicated that the summaries were relatively generic and often omitted the core legal issues of interest. Consequently, our investigation steered towards task-specific summarization models. Notably, the _BRIO_ model, being pre-trained and fine-tuned on news articles, generated more generic summaries. In contrast, the _PRIMERA_ model, fine-tuned specifically on legal documents, generated summaries where the core legal context was mostly preserved. This iterative summarization was uniformly applied across all documents using the same parameters.

### 5.2 Semantic Similarity Search

The objective at this stage was to construct few-shot prompts for the subsequent label generation step. We embedded all our summaries with the models discussed in Section 4.3 and, for each summary in the development and test sets, calculated its semantically closest neighbors (up to eight) in the training set. These training-set summaries, along with their true labels and the current target sample (excluding its true label), served as the few-shot prompt. This approach leverages the in-context learning abilities of large language models.
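A minimal sketch of this few-shot prompt construction follows. The paper encodes summaries with sentence-transformers and searches with annoy; here a toy embedding plus exact cosine search stands in for both, and the example summaries and labels are illustrative:

```python
# Few-shot prompt construction via nearest-neighbour retrieval over
# summary embeddings. Exact cosine search stands in for the annoy
# approximate index; vectors stand in for sentence-transformers output.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def nearest_examples(target_vec, train, k=8):
    """train: list of (embedding, summary, label) tuples from the training set."""
    ranked = sorted(train, key=lambda ex: cosine(target_vec, ex[0]), reverse=True)
    return ranked[:k]

def build_few_shot_prompt(target_summary, neighbours, instruction):
    """Retrieved neighbours with their gold labels become in-context examples."""
    shots = "\n\n".join(f"Summary: {s}\nLabel: {y}" for _, s, y in neighbours)
    return f"{instruction}\n\n{shots}\n\nSummary: {target_summary}\nLabel:"

# Toy demonstration with two-dimensional embeddings.
train = [([1.0, 0.0], "The applicant alleged a violation of Article 6.", "YES"),
         ([0.0, 1.0], "The court found no violation.", "NO")]
prompt = build_few_shot_prompt("New case summary.",
                               nearest_examples([0.9, 0.1], train, k=2),
                               "Was any article violated? Answer YES or NO.")
print(prompt)
```

The prompt ends with an open `Label:` slot, so the model's continuation is the prediction for the target sample.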
As illustrated by [34], an alternative approach could involve prompting a large language model to generate contextual texts based on an input, instead of retrieving from a corpus database as we have done. These generated samples could then be incorporated as in-context samples in the following step. This option could also be implemented via LLM prompting in our pipeline; evaluating its feasibility is left for future work.

### 5.3 Label Generation

The final step in our prompt chain involves label generation. Here, we queried the LLMs with the few-shot prompts previously constructed, in conjunction with an instruction for the corresponding task and a list of potential labels to choose from. For the ECHR experiments, the binary options provided were _YES_ or _NO_, while for the SCOTUS experiments, up to 13 issue-area labels were presented for model prediction. This prompt construction enabled the models to yield the desired label in all our experiments. Greedy decoding was used in this generation step for all results. Another strategy employed involved sampling from the model output multiple times, a technique known as self-consistency via output sampling [35]. It has been demonstrated that querying multiple times enhances the probability of generating the true label. At the conclusion of each such sampling (up to 10 times), the majority count of the generated labels was taken as the final prediction.

## 6 Experiments

Our experiments pursued two objectives. First, we aimed to improve on the zero-shot outcomes from prior work on the binary classification task on the ECHR dataset, leveraging prompting techniques without any parameter adjustments to the models. Following the successful demonstration of the efficacy of the few-shot approach, we expanded our focus. The second experiment extended the process to the 13 labels in the SCOTUS corpus, a task significantly more challenging than the prior binary classification.
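The self-consistency step described in Section 5.3 amounts to a majority vote over repeated samples from the model. A minimal sketch, with a stubbed sampler standing in for a real sampled generation:

```python
# Self-consistency via repeated sampling: query the model several times
# and take the majority label. `sample_label` is a stand-in for one
# sampled generation from the model.
from collections import Counter

def self_consistent_label(sample_label, prompt: str, n_samples: int = 10) -> str:
    votes = Counter(sample_label(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Stub sampler cycling through noisy outputs; the majority vote recovers "YES".
outputs = iter(["YES", "YES", "NO", "YES", "YES", "NO", "YES", "YES", "YES", "NO"])
print(self_consistent_label(lambda p: next(outputs), "..."))  # -> YES
```

With a real model, each call to `sample_label` would be one stochastic decode of the few-shot prompt rather than an item from a fixed list.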
In addition, we compared our results to the zero-shot ChatGPT results reported in [36], which covered a subset of the overall samples in the SCOTUS data. We selected the corresponding samples and provided our results for comparison. Over multiple iterations, we developed the few-shot prompts on a randomly selected portion (n = 40) of the development sets for both the _ECHR_ and the _SCOTUS_ datasets. The final eight-shot prompt incorporated the eight semantically closest summaries from the training set to the corresponding target sample, together with their gold labels from the training set. Notably, for the _SCOTUS_ corpus, we limited the available issue-area labels for the model to the labels of the eight included samples. Once we identified the most effective composition of the few-shot prompt on the random sample, we applied it to the full development and test sets and report the results. Our computational requirements were limited to CPU resources, as we only performed inference calls on the generative models. Our work did not involve the use of any GPUs. Detailed computational information is available in the appendix (see App. A).

## 7 Results

In this section, we discuss our results on both benchmarks. We include the confusion matrices for the development sets (see Fig. 2 for ECHR and Fig. 3 for SCOTUS), the labelwise F1-scores (see Tab. 1 and Tab. 4), and the overall results (see Tab. 2 and Tab. 3).

### 7.1 ECHR Results

With respect to the ECHR results, we managed to improve upon the zero-shot results from previous work. However, the few-shot context still did not suffice to reach the performance of fully supervised models. It is important to remember that full supervised fine-tuning involves hours of update runs with thousands of annotated samples, while our experiments only included inference calls. The confusion matrix (Fig.
2) also demonstrates some misclassification along the off-diagonal axis, although few-shot prompting did capture more of the minority (_NO_) class.

Figure 2: Confusion matrix for the dev. set of _ECHR_.

# | Label | # Samples | F1
---|---|---|---
1 | Yes | 825 | 0.864
2 | No | 175 | 0.248

Table 1: Labelwise F1-scores for the development set of _ECHR_.

### 7.2 SCOTUS Results

The results for the multi-class text classification task for _SCOTUS_ are presented in Tab. 3. Alongside our results on the development and test sets, we also include external (ext.) results from [3] and [36]. While we achieved satisfactory performance on the development set, we observed a substantial drop in performance on the test set. The test set performance in terms of macro-F1 score was below the zero-shot ChatGPT results. However, our prompt chaining approach was more effective in retrieving the higher-frequency classes, as reflected in the better micro-F1 score. This trend is also evident in the labelwise scores (Tab. 4), where the higher-frequency classes received better scores than the minority classes. The confusion matrix (Fig. 3) for this experiment shows that many issue areas were predicted as _civil rights_ in particular, while _criminal procedure_, _judicial power_, and _federalism_ were also misclassified as other classes.

| Model | Precision | Recall | macro-F1 | micro-F1 | weighted-F1 | Accuracy
---|---|---|---|---|---|---|---
dev. set | minority class | .088 | .500 | .149 | .175 | .052 | .175
| random class | .506 | .510 | .451 | .514 | .572 | .514
| majority class | .412 | .500 | .452 | .825 | .746 | .825
| GPT-NeoX (0-shot, (F)) | .527 | .536 | .526 | .709 | .731 | .709
| GPT-NeoX (8-shot, (S)) | .566 | .552 | .556 | .770 | .756 | .770
test set | minority class | .077 | .500 | .133 | .153 | .041 | .153
| random class | .479 | .460 | .410 | .484 | .555 | .484
| majority class | .423 | .500 | .459 | .847 | .777 | .847
| GPT-NeoX (0-shot, (F)) | .522 | .530 | .521 | .707 | .728 | .707
| GPT-NeoX (8-shot, (S)) | .525 | .537 | .527 | .779 | .768 | .779

Table 2: The results for the ECHR development and test sets. Besides the macro-averaged F1-score, precision, and recall, we also report the micro-averaged and weighted F1 and the accuracy scores. The (F) indicates that the full document was used as input to the model, while the (S) indicates that the summaries (concatenated in the few-shot prompts) were used as input.

| Model | Precision | Recall | macro-F1 | micro-F1 | weighted-F1 | Accuracy
---|---|---|---|---|---|---|---
dev. set | majority class | .020 | .077 | .031 | .257 | .105 | .257
| random class | .070 | .064 | .057 | .071 | .084 | .071
| FLAN-UL2 (8-shot, (S)) | .529 | .455 | .461 | .545 | .543 | .545
test set | majority class | .020 | .077 | .032 | .266 | .112 | .266
| random class | .079 | .074 | .060 | .077 | .095 | .077
| FLAN-UL2 (8-shot, (S)) | .427 | .373 | .359 | .486 | .483 | .486
| FLAN-UL2 (8-shot, (S))† | .435 | .388 | .371 | .484 | .480 | .484
ext. | ChatGPT (0-shot, (F)) | - | - | .420 | .438 | - | -
| supervised (full) | - | - | .695 | .782 | - | -

Table 3: The results for the SCOTUS development and test sets. Besides the macro-averaged F1-score, precision, and recall, we also report the micro-averaged and weighted F1 and the accuracy scores.
The (F) indicates that the full document was used as input to the model, while the (S) indicates that the summaries (concatenated in the few-shot prompts) were used as input. †We calculated the scores based on the same reduced set of documents (1k) as the ChatGPT work. The ext. rows are external results copied from the corresponding papers [3, 36].

# | Label | # Samples | F1
---|---|---|---
1 | Criminal Procedure | 360 | 0.742
2 | Federal Taxation | 226 | 0.695
3 | First Amendment | 218 | 0.644
4 | Unions | 165 | 0.590
5 | Economic Activity | 108 | 0.577
6 | Civil Rights | 83 | 0.551
7 | Privacy | 70 | 0.529
8 | Interstate Relations | 51 | 0.500
9 | Federalism | 38 | 0.308
10 | Judicial Power | 35 | 0.299
11 | Attorneys | 22 | 0.255
12 | Due Process | 14 | 0.173
13 | Miscellaneous | 10 | 0.133

Table 4: Labelwise F1-scores for the development set of _SCOTUS_.

Figure 3: Confusion matrix for the dev. set of _SCOTUS_.

## 8 Conclusion

Our experiments successfully demonstrated that the implementation of few-shot prompts can lead to improvements over zero-shot results. We also showed that it is feasible to predict frequent labels with appreciable F1 scores using this approach. The strategy of prompting, and, as we have demonstrated, the concept of prompt chaining, represent promising avenues for future exploration. These techniques are particularly advantageous as they circumvent the need for costly data annotation and the development of custom models. Last but not least, established prompting pipelines can be adapted for use with different (updated) models and, as shown in [23], they offer across-the-board enhancements for a diverse range of tasks at no extra cost. Looking ahead, our future work aims to experiment with even larger models on additional legal benchmarks.

## References

* Dale [2019] R. Dale, Law and word order: Nlp in legal tech, Natural Language Engineering 25 (2019) 211–217. * Zhong et al. [2020] H. Zhong, C. Xiao, C. Tu, T. Zhang, Z. Liu, M.
## Appendix A Compute Requirements

We used the following Amazon EC2 M5 instance:

| Standard Instance | vCPU | Memory |
|---|---|---|
| ml.m5d.24xlarge | 96 | 384 GiB |

We did not use any GPUs in our experiments.

## Appendix B SCOTUS issue areas

* Criminal Procedure
* Civil Rights
* First Amendment
* Due Process
* Privacy
* Attorneys
* Unions
* Economic Activity
* Judicial Power
* Federalism
* Interstate Relations
* Federal Taxation
* Miscellaneous

Private Action was not available in the data.
# StableMask: Refining Causal Masking in Decoder-only Transformer

Qingyu Yin Xuzheng He Xiang Zhuang Yu Zhao Jianhua Yao Xiaoyu Shen Qiang Zhang

###### Abstract

The decoder-only Transformer architecture with causal masking and relative position encoding (RPE) has become the _de facto_ choice in language modeling. Despite its exceptional performance across various tasks, we have identified two limitations. First, it requires all attention scores to be non-zero and sum up to 1, even if the current embedding has sufficient self-contained information. This compels the model to assign disproportionate, excessive attention to specific tokens. Second, RPE-based Transformers are not universal approximators due to their limited capacity for encoding absolute positional information, which limits their application in position-critical tasks. In this work, we propose _StableMask_: a parameter-free method to address both limitations by refining the causal mask. It introduces pseudo-attention values to balance attention distributions and encodes absolute positional information via a progressively decreasing mask ratio. StableMask's effectiveness is validated both theoretically and empirically, showing significant enhancements in language models with parameter sizes ranging from 71M to 1.4B across diverse datasets and encoding methods. We further show that it naturally supports (1) efficient extrapolation without special tricks such as StreamingLLM and (2) easy integration with existing attention optimization techniques.

Machine Learning, ICML

## 1 Introduction

Large Language Models (LLMs) have revolutionized natural language processing with their task-agnostic in-context learning paradigm (Brown et al., 2020). The core of LLMs is the decoder-only Transformer architecture (Vaswani et al., 2017; Radford et al., 2019), characterized by the self-attention mechanism and relative positional encoding (RPE), which aggregate information and capture dependencies among tokens.
It has exhibited superior zero-shot generalization capabilities in comparison to its encoder-decoder counterparts, leading to its increased prevalence in pre-trained LLMs (Lester et al., 2021; Patel et al., 2023). Despite this impressive success, we identified two important issues within this architecture. The first issue arises from the softmax function used in self-attention, as its outputs consist solely of non-zero values summing up to 1 (Pang et al., 2019). This forces the model to allocate a certain distribution of attention probability across all available tokens, even when the current token already has sufficient self-contained information (Xiao et al., 2023) or when the attention mechanism does not need to prioritize any token (Hua et al., 2022; Bondarenko et al., 2023). In such cases, the model tends to allocate disproportional attention scores to specific tokens like punctuation marks. This problem is exacerbated in decoder-only models, as the varied sequence length leads to an extremely uneven attention distribution, particularly on the initial tokens. While approaches have been proposed to mitigate this issue, they all entail significant complexity, e.g., modifying the sparseness of softmax (Laha et al., 2018) or adding dedicated tokens to absorb unnecessary attention (Darcet et al., 2023).

The second limitation is associated with the various relative positional encoding strategies (Ke et al., 2020), e.g., ALiBi (Press et al., 2022), T5 (Raffel et al., 2020), and RoPE (Su et al., 2021). Compared with absolute position encoding (APE), RPE has achieved state-of-the-art performance on most natural language tasks. It also exhibits better extrapolation capabilities and naturally preserves invariance under several important transformations such as rotation and translation, making it more widely used in Transformers (Press et al., 2022).
However, RPE fails to capture enough absolute positional information because the softmax always generates a right stochastic matrix (Luo et al., 2022), i.e., a square matrix in which each row consists of non-negative real numbers summing to 1. This restricts its application in situations where such positional information is crucial. Previous attempts to address this, such as URPE (Luo et al., 2022), added learnable relative position matrices atop the softmax outputs, which hurts the extrapolation capability because of the non-extensibility of the learnable parameters.

In this paper, we propose _StableMask_, a tailored approach to address both issues by carefully modifying the causal mask in decoder-only Transformers. It introduces extra pseudo-attention scores into the upper triangular part of the attention matrix, which stabilizes the normalization constant of the attention scores within each row regardless of sequence length and token position. This allows the model to allocate excess attention to these dedicated pseudo scores. Moreover, StableMask ensures that the result of the softmax is not a right stochastic matrix. With a progressively decreasing mask ratio (i.e., the sum of each row after softmax), it enables the model to encode a measure of absolute position during the softmax stage, while remaining consistent with the decaying inter-token dependency used in RPE, thus effectively maintaining its extrapolation capability.

StableMask's effectiveness has been thoroughly validated through extensive testing on multiple language models across a diverse array of both synthetic and realistic tasks. It represents a substantial advancement in refining the attention mechanism of decoder-only Transformers, overcoming the inherent limitations while retaining their core strengths. A key advantage of StableMask is its parameter-free nature.
As StableMask is implemented solely as a direct replacement for the causal mask, it is highly compatible with the Transformer's native architecture (such as different position encodings, attention optimizations, or extrapolation techniques). For instance, we present an implementation of StableMask that is optimized for hardware efficiency, aligning with the principles of FlashAttention (Dao et al., 2022). This allows StableMask to seamlessly integrate into the ecosystem of Transformer models, thereby expanding its potential applications. Our core contributions can be summarized as follows:

1. We identify two issues in the commonly used decoder-only Transformer architecture: the disproportional attention distribution and the inability to accurately capture positional information.
2. We propose StableMask, an efficient and easily integrable solution that effectively addresses both issues by carefully modifying the causal mask.
3. We validate the effectiveness of StableMask across multiple tasks and encoding methods.
4. We present a hardware-efficient version of StableMask to optimize its practical applicability.

Figure 1: (a) Visual comparison of attention heads with and without StableMask on the OpenLLaMA 1.4B model. (b) The attention allocation to various types of tokens (excluding the initial token) at two different positions, and the trend of attention allocation to the initial token over positions, averaged over heads. Blue: the original Transformer exhibits a clear disproportional attention issue. Green: StableMask effectively rectifies the proportion of attention allocation. (c) Experimental results showing RPE's inability to encode absolute position (Blue) and that StableMask solves this issue (Green).

## 2 Preliminary

#### Self-Attention

Let $X$ be the input sequence, $n$ the sequence length, and $d$ the dimensionality of the hidden state.
The self-attention mechanism in Transformer architectures calculates attention scores between each pair of tokens to capture dependencies and learn contextual information effectively. Let $A$ denote the attention score matrix and $a_{ij}$ the attention score between the $i$-th and $j$-th tokens. We have

$A=\frac{QK^{\top}}{\sqrt{d}},$

where $Q,K,V\in\mathbb{R}^{n\times d}$ represent the Query, Key, and Value matrices derived from $X$ (Vaswani et al., 2017). In decoder-only models, $A$ is further modified by a causal mask $M$ and a softmax operation:

$\tilde{A}=\mathrm{Softmax}(A+M).$ (1)

The following holds to prevent the model from attending to future tokens:

$M_{i}=[\underbrace{0,\cdots,0}_{i},\underbrace{-\infty,\cdots,-\infty}_{n-i}]_{n},$ (2)

$\tilde{A}_{i}=[a_{i1},a_{i2},\cdots,a_{ii},0,\cdots,0]_{n}.$ (3)

#### Position Encoding

The raw Transformer without position encodings is insensitive to permutational rearrangements. Two chief methods have been employed to remove this insensitivity: absolute position encoding (APE) and relative position encoding (RPE). APE adds an index-dependent vector at each position to the word embeddings. These vectors are usually trainable parameters representing the absolute position of each input token (Kenton & Toutanova, 2019; Radford et al., 2019). More recently, RPEs such as ALiBi (Press et al., 2022), T5 (Raffel et al., 2020), and RoPE (Su et al., 2021) took a different approach by incorporating the relative distances of positions into the attention score matrix. RPEs can be mainly classified into additive (T5, ALiBi, etc.)
or multiplicative (RoPE, etc.):

Add: $\tilde{A}_{\mathrm{add}}=\mathrm{Softmax}\left(\frac{QK^{\top}+S}{\sqrt{d_{k}}}+M\right),$ (4)

Mul: $\tilde{A}_{\mathrm{mul}}=\mathrm{Softmax}\left(\frac{\tilde{Q}\tilde{K}^{\top}}{\sqrt{d_{k}}}+M\right),$ (5)

where $\tilde{Q}=Q\odot R_{Q},\ \tilde{K}=K\odot R_{K}$. Here $R_{Q},R_{K}$ are rotary forms, usually complex-valued, and $S$ is a Toeplitz matrix. Given its consistently demonstrated improvements over APE, RPE has emerged as the default choice in LLMs.

## 3 Problem

Despite the exceptional performance, we identified two key issues associated with self-attention and RPE.

#### Disproportional Attention

The first issue arises from the softmax function used in self-attention. Given that the softmax function requires all attention scores to be non-zero and sum up to 1, it forces an inescapable distribution of attention across all visible tokens. However, previous studies (Shen et al., 2019; Hassid et al., 2022; Bondarenko et al., 2023; Xiao et al., 2023) have shown that the attention mechanism often needs only a few important tokens, while the others are merely distractions. In this case, the requirement imposed by the softmax function prevents the model from effectively zeroing out the attention scores of irrelevant tokens. Some of these irrelevant tokens, such as initial tokens or non-functional words like punctuation marks, are more frequently observed by other tokens. Consequently, as shown in Figure 1, the model tends to allocate disproportional attention (DA) to them. We refer to these tokens, which are not semantically relevant but receive disproportional attention values, as DA tokens (Appendix A offers an information-theoretic definition and interpretation of the DA issue). The existence of DA tokens can lead to various undesired problems, e.g., a perplexity surge in length extrapolation or sensitivity to irrelevant noise (Xiao et al., 2023).
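As a concrete reference for Equations (1)–(4), the following is a minimal NumPy sketch of single-head causal attention with an optional ALiBi-style additive bias. The slope value, shapes, and the helper name `alibi_like_bias` are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def causal_attention(Q, K, V, bias=None):
    """Single-head causal self-attention as in Eqs. (1)-(4)."""
    n, d = Q.shape
    A = Q @ K.T / np.sqrt(d)                        # raw attention scores
    if bias is not None:                            # additive RPE term S, as in Eq. (4)
        A = A + bias
    M = np.where(np.tril(np.ones((n, n))) > 0, 0.0, -np.inf)  # causal mask, Eq. (2)
    A = A + M
    A_tilde = np.exp(A - A.max(axis=-1, keepdims=True))
    A_tilde /= A_tilde.sum(axis=-1, keepdims=True)  # each row sums to 1 (right stochastic)
    return A_tilde @ V, A_tilde

def alibi_like_bias(n, slope=0.5):
    """Toeplitz penalty growing linearly with relative distance (illustrative slope)."""
    i, j = np.arange(n)[:, None], np.arange(n)[None, :]
    return -slope * np.abs(i - j)
```

Note that the bias matrix depends only on $i-j$, i.e., it is Toeplitz, which is exactly the structure of the additive term $S$ in Equation (4).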
Interestingly, the extent of this DA phenomenon varies across token positions within the decoder-only language model. It is most prominent at the beginning of a sequence and gradually eases towards the end (as seen in Figure 1(b)). Intuitively, as the token position increases, more tokens participate in the softmax operation, and even assigning a very small probability to each token can result in a significant cumulative probability. As a result, DA tokens cannot receive as much attention as they do near the beginning of a sequence. Existing solutions, such as StreamingLLM (Xiao et al., 2023) and ViT Registers (Darcet et al., 2023), have attempted to address this by introducing _Artificial Tokens_ (AT) to absorb excess attention, so that real tokens are freed from receiving unnecessary DA. We term these AT-based methods. However, as noted, the severity of the DA issue varies along token positions. We hypothesize that adding a fixed number of tokens across all sequences is not position-adaptive and thereby cannot fully address the DA issue.

#### Inability to Encode Absolute Position

Despite its superior performance, RPE, which modifies $QK^{\top}$, does not ensure that $V$ is also sensitive to position. For instance, when all inputs are identical vectors, the outputs are also guaranteed to be equal, because the softmax output is a right stochastic matrix (for a more in-depth discussion on all-identical inputs and their relation to DA, refer to Appendix B.1). Therefore, RPE can perform poorly in tasks where positional information is critical. To verify this limitation of RPEs, we designed specialized datasets, inspired by URPE (Luo et al., 2022), which focus on tasks requiring absolute positional information while maintaining identical input sequences (see Appendix B.2 for details). We report the average accuracy of various models in Figure 1(c).
The results demonstrate that models relying exclusively on RPEs exhibit poor performance, confirming the inferiority of RPE at capturing absolute positional information.

Figure 2: (a) Illustration of the StableMask mechanism. (b) StableMask integrates with the softmax operation, replacing the traditional causal mask. (c) The attention score matrix is first cleared of attention values in the upper triangular part using the $C$ matrix; then pseudo-attention scores are added using the $P$ matrix, followed by the softmax computation. (d) After the softmax operation, the remaining attention probabilities in the upper triangular part are cleared using $C$ to ensure the causal decoding property. (e) The $C$ matrix has zeros in the upper triangular part and ones in the lower triangular part, while the $P$ matrix has linear decay in the upper triangular part and zeros in the lower triangular part. $\gamma$ is a hyperparameter. (f) StableMask for inference: an input sequence needs a suffix.

One obvious solution to this limitation is to directly replace RPE with APE. However, as mentioned, APE has its own problems, such as poor extrapolation, variance under rotation and translation, and worse prediction accuracy (Su et al., 2021; Press et al., 2022). Another approach is to add additional parameters to the matrix after the softmax to re-encode absolute positional information. For example, URPE (Luo et al., 2022) multiplies the softmax matrix $\tilde{A}$ elementwise by a learnable Toeplitz matrix $\mathcal{T}$:

$\mathrm{Attention}(Q,K,V)=(\tilde{A}\odot\mathcal{T})V.$ (6)

The URPE approach, while successfully encoding absolute positional information, has several drawbacks. First, it requires additional learnable parameters, which complicates model optimization. Second, because the $\mathcal{T}$ matrix is fixed in size, models trained with this method lose the ability to process context longer than the training length.
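The right-stochastic argument above can be checked numerically: with all-identical input tokens, any causal softmax attention, regardless of how RPE modifies the scores, yields the same output at every position, since each row's weights sum to 1 over copies of the same value vector. A small sketch under these assumptions (the score matrix here is arbitrary random values, standing in for any RPE-modified scores):

```python
import numpy as np

n = 6
v = np.array([1.0, -2.0, 0.5, 3.0])                    # one shared value vector
scores = np.random.default_rng(0).normal(size=(n, n))  # any (RPE-modified) scores
scores[np.triu_indices(n, 1)] = -np.inf                # causal mask
w = np.exp(scores - scores.max(axis=-1, keepdims=True))
w /= w.sum(axis=-1, keepdims=True)                     # right stochastic: rows sum to 1
out = w @ np.tile(v, (n, 1))                           # all value rows identical
# every output row equals v, so position cannot be recovered from the output
assert np.allclose(out, np.tile(v, (n, 1)))
```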
## 4 StableMask

In the previous section, we analyzed two problems with the decoder-only Transformer architecture commonly used in contemporary LLMs: disproportional attention and the inability to encode absolute position. Disproportional attention happens when certain attention heads have no need to allocate attention logits but are forced to by the softmax mechanism; this issue is more pronounced at the beginning of the sequence in the decoder. The inability to encode absolute position results from the softmax output being a right stochastic matrix: the sum of each row always equals one, so the output is insensitive to absolute positions. To address these two problems, we seek a solution that introduces pseudo-attention scores into the softmax operation. Specifically, the solution should simultaneously meet the following requirements:

1. (i) It provides additional pseudo-attention scores to accommodate excess attention logits, thereby freeing DA tokens from the responsibility of absorbing unnecessary attention values.
2. (ii) These additional pseudo-attention scores adhere to the positional pattern of DA in a decoder-only model, i.e., larger at the beginning of the sequence and smaller towards the end.
3. (iii) It ensures that the result of the softmax is not a right stochastic matrix, i.e., the sum of each row is not 1, so that positional information can be encoded.

In the following section, we show that all three requirements can be met by carefully modifying the causal mask applied around the softmax.

### 4.1 Pseudo-attention Score

To meet requirements (i) and (ii), we propose constructing a StableMask attention score matrix $A_{\text{SM}}\in\mathbb{R}^{n\times n}$:

$A_{\text{SM}}=\begin{pmatrix}a_{11}&p_{11}&\cdots&p_{1(n-1)}\\ a_{21}&a_{22}&\cdots&p_{1(n-2)}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nn}\end{pmatrix}.$ (7)

We call the $p_{ij}$ pseudo-attention scores.
When the current attention head does not depend much on its previous context, it can choose to store unnecessary attention values on these pseudo-attention scores. For each row (all attention scores for the $i$-th token), the sequence length it can attend to is fixed to $n$; therefore there are $n-i$ pseudo-attention scores in each row for excess attention allocation. This fulfills requirement (ii), which calls for more pseudo-attention values towards the beginning of a sequence. $A_{\text{SM}}$ can be computed as

$A_{\text{SM}}=A\odot C+P,$ (8)

$C=\begin{pmatrix}1&0&\cdots&0\\ 1&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 1&1&\cdots&1\end{pmatrix},\qquad P=\begin{pmatrix}0&p_{11}&\cdots&p_{1(n-1)}\\ 0&0&\cdots&p_{1(n-2)}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0\end{pmatrix}.$

The question then becomes how the values of these pseudo-attention scores should be set. At the start of training, the distribution of the scaled attention scores has a mean of $0$. These attention scores are also influenced by position encoding, and commonly used RPEs typically exhibit decay with increasing relative distance. Therefore, pseudo-attention scores should not significantly disrupt the original distribution of attention scores, and they should also align with the characteristics of the relative position encoding used by the model. Consequently, $p_{ij}$ should conform to

$p_{\text{base}}=0,\qquad p_{ij}=p_{\text{base}}-(j-1)\gamma,$ (9)

where $\gamma$ is a decay-rate hyperparameter.
Therefore, the attention score matrix with StableMask is

$A_{\text{SM}}=\begin{pmatrix}a_{11}&-\gamma&\cdots&-(n-1)\gamma\\ a_{21}&a_{22}&\cdots&-(n-1)\gamma\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nn}\end{pmatrix}.$ (10)

Finally, we can replace the traditional causal mask operation in Equation (1) with

$\tilde{A}=\mathrm{Softmax}(A_{\text{SM}})\odot C=\mathrm{Softmax}(A\odot C+P)\odot C.$ (11)

Here, $A_{\text{SM}}=A\odot C+P$ inside the $\mathrm{Softmax}$ masks the attention score matrix with pseudo-attention scores, whereas the $C$ outside the $\mathrm{Softmax}$ sets the scores that need masking back to 0. Therefore, StableMask still maintains the characteristics of causal decoding, ensuring that information does not leak from subsequent tokens.

### 4.2 StableMask Encodes Absolute Position

StableMask introduces a set of pseudo-attention scores. Therefore, for the real attention scores (the lower triangular part of the attention matrix $A_{\text{SM}}$), the sum after softmax will not be 1, meeting requirement (iii). Concretely, let $A_{i}$ denote the real attention scores of the $i$-th row and $P_{i}$ the pseudo-attention scores of the $i$-th row; then

$\sum\mathrm{Softmax}_{A_{i}\cup P_{i}}(A_{i})=1-\sum\mathrm{Softmax}_{A_{i}\cup P_{i}}(P_{i}),$

where $\mathrm{Softmax}_{A_{i}\cup P_{i}}(A_{i})$ and $\mathrm{Softmax}_{A_{i}\cup P_{i}}(P_{i})$ are the real/pseudo attention values in each row. We reconsider the question posed in Section 3: whether the model can encode positional information for an identical input sequence $X=[\boldsymbol{x},\cdots,\boldsymbol{x}]_{n}$.
The answer is affirmative: since $\Sigma_{j\leq i}\exp(A_{ij})$ increases with $i$ (all $A_{ij}$ are equal) and $\Sigma_{j>i}\exp(P_{ij})$ decreases with $i$, we have

$\sum\mathrm{Softmax}_{A_{i}\cup P_{i}}(A_{i})<\sum\mathrm{Softmax}_{A_{i+1}\cup P_{i+1}}(A_{i+1}),$

which means that after Equation (11), the output attention values are monotonic:

$\tilde{A}(W_{V}X)^{\top}=[\alpha_{1}\boldsymbol{v},\alpha_{2}\boldsymbol{v},\cdots,\alpha_{n}\boldsymbol{v}]_{n},\qquad 0<\alpha_{1}<\alpha_{2}<\dots<\alpha_{n}=1.$

This indicates that absolute positional information is effectively captured. In general, a Transformer decoder with StableMask has the ability to encode absolute positional information:

###### Theorem 4.1.

Let $X=[\boldsymbol{x}_{1},\cdots,\boldsymbol{x}_{n}]_{n}$ be an input sequence of length $n$ to the StableMask model $f^{\text{(SM)}}_{T}$. Then the first layer of $f^{\text{(SM)}}_{T}$ can recover the absolute positions $[1,2,\dots,n]$ in the hidden state $\Omega^{(1)}$. That is, there exist $W_{Q}$, $W_{K}$, $W_{V}$, and $W_{O}$ for the first attention layer, along with $W_{1}$ and $W_{2}$ for the first feed-forward layer, that compute absolute positions and pass them to the next layer.

The complete proof can be found in Appendix C.

### 4.3 Inference and Length Extrapolation

Figure 3: StableMask for inference. The original StableMask implementation needs to recompute the softmax result for the attention score matrix because additional mask values are added. StableMask for inference introduces a factor $\tau$ to make each row equivalent to the maximum-training-length case.

In Section 4.1, we introduced the computation process of StableMask. During the training phase, StableMask can readily be applied in parallel within a batch to backpropagate the training loss.
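The masking of Equations (8)–(11) and the monotone row mass of Section 4.2 can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the sequence length, score value, and $\gamma$ below are arbitrary.

```python
import numpy as np

def stablemask_attention_probs(A, gamma):
    """Apply StableMask to raw scores A: Softmax(A*C + P)*C, as in Eqs. (8)-(11)."""
    n = A.shape[0]
    C = np.tril(np.ones((n, n)))                                     # lower-triangular ones
    P = np.triu(np.broadcast_to(-gamma * np.arange(n), (n, n)), 1)   # pseudo scores, Eq. (9)
    A_sm = A * C + P                                                 # Eq. (8)
    S = np.exp(A_sm - A_sm.max(axis=-1, keepdims=True))
    S /= S.sum(axis=-1, keepdims=True)                               # softmax over real + pseudo
    return S * C                                                     # re-mask: causal property, Eq. (11)

# Identical inputs -> identical real scores; the row mass on real tokens grows with position.
n, gamma = 6, 0.25
A = np.full((n, n), 0.3)
probs = stablemask_attention_probs(A, gamma)
alphas = probs.sum(axis=-1)         # real-attention mass per row (the alpha_i of Sec. 4.2)
assert np.all(np.diff(alphas) > 0)  # strictly increasing: absolute position is encoded
assert np.isclose(alphas[-1], 1.0)  # last row has no pseudo scores, so alpha_n = 1
```

The strictly increasing `alphas` reproduce $0<\alpha_{1}<\cdots<\alpha_{n}=1$ numerically.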
During inference, attention computation is usually performed serially and employs KV caching (Tang et al., 2021; Pope et al., 2023). StableMask in its original form is not cost-effective for inference because it does not support KV caching. During the inference stage, when the sequence length changes, e.g., from $n$ to $n+1$ for causal decoding, attention layers need to recalculate the softmax results: for the first $n$ rows, an additional pseudo-attention value is added, invalidating the previously calculated attention (see Figure 3). This renders KV caching unusable, significantly increasing the cost of inference.

Table 1: Pretraining results with ("-SM") or without StableMask on the WikiText-103 and MiniPile datasets. PE: positional encoding type.

WikiText-103:

| Model | PE | #Params | PPL |
|---|---|---|---|
| BLOOM | ALiBi | 71M | $29.9_{\pm.1}$ |
| BLOOM-SM | ALiBi | 71M | $\textbf{29.0}_{\pm.1}$ |
| OpenLLaMA | RoPE | 71M | $27.4_{\pm.2}$ |
| OpenLLaMA-SM | RoPE | 71M | $\textbf{26.9}_{\pm.3}$ |
| BLOOM | ALiBi | 160M | $27.6_{\pm.9}$ |
| BLOOM-SM | ALiBi | 160M | $\textbf{26.1}_{\pm.2}$ |
| OpenLLaMA | RoPE | 160M | $22.5_{\pm.8}$ |
| OpenLLaMA-SM | RoPE | 160M | $\textbf{21.1}_{\pm.6}$ |

MiniPile:

| Model | PE | #Params | PPL 1 Epoch | PPL 2 Epoch |
|---|---|---|---|---|
| BLOOM | ALiBi | 160M | $25.8_{\pm.2}$ | $23.3_{\pm.4}$ |
| BLOOM-SM | ALiBi | 160M | $\textbf{25.6}_{\pm.0}$ | $\textbf{22.9}_{\pm.2}$ |
| OpenLLaMA | RoPE | 160M | $25.9_{\pm.1}$ | $21.2_{\pm.1}$ |
| OpenLLaMA-SM | RoPE | 160M | $\textbf{25.0}_{\pm.0}$ | $\textbf{20.9}_{\pm.3}$ |
| BLOOM | ALiBi | 430M | $20.6_{\pm.1}$ | $15.6_{\pm.4}$ |
| BLOOM-SM | ALiBi | 430M | $\textbf{19.6}_{\pm.3}$ | $\textbf{15.5}_{\pm.2}$ |
| OpenLLaMA | RoPE | 430M | $19.6_{\pm.2}$ | $15.7_{\pm.5}$ |
| OpenLLaMA-SM | RoPE | 430M | $\textbf{19.5}_{\pm.4}$ | $\textbf{15.1}_{\pm.5}$ |

Our solution is simple: we pad the sequence to the training length while compressing the padded tokens into a single suffix token.
Assuming the current sequence length is $n$, we first append a suffix token to the end of the sequence (see Figure 2(f) and Figure 3). At this point, the size of the attention matrix becomes $(n+1)\times(n+1)$. Then, in the additional last column, we add a factor $\tau=\ln(\sum_{i=n}^{N-1}e^{-i\gamma})$:

$A^{\prime}_{\text{SM}}=\begin{pmatrix}a_{11}&-\gamma&\cdots&-(n-1)\gamma&\tau\\ a_{21}&a_{22}&\cdots&-(n-1)\gamma&\tau\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nn}&\tau\\ a_{(n+1)1}&a_{(n+1)2}&\cdots&a_{(n+1)n}&a_{(n+1)(n+1)}\end{pmatrix}.$

The last row of $A^{\prime}_{\text{SM}}$ comes from the suffix and is not used for generation. This makes each row equivalent to the case where the sequence length equals the training length, allowing us to use KV caching.

Next, we deal with the length extrapolation scenario, i.e., inputs that are longer than the pretraining length limit. Notice that when $n$ reaches the maximum training length $N$, $\tau$ becomes $0$. This setup prevents the model from continuing to generate $\tau$ values beyond the training length. Therefore, during extrapolation, we set $\tau=-n\gamma$, where $n\geq N$ is the current sequence length. In long sequences, $\tau$ contributes very little after the softmax, and its contribution approaches zero as $n$ grows. However, the presence of this term still ensures that the softmax result is not a right stochastic matrix, thereby asymptotically encoding absolute positional information. In addition, when the sequence length is very long, the phenomenon of disproportional attention nearly disappears, as we concluded in Section 3, so the pseudo-attention score does not need to maintain a large value.

### 4.4 Hardware-Efficient Implementation of StableMask

FlashAttention (Dao et al., 2022) represents a major advance in accelerating the Transformer architecture.
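Before detailing the FlashAttention integration, the suffix factor $\tau$ introduced above can be checked numerically: with the pseudo-score layout of Equation (10), appending one column carrying $\tau=\ln(\sum_{i=n}^{N-1}e^{-i\gamma})$ restores each row's softmax denominator to its training-length value. A minimal sketch, with illustrative $N$, $n$, $\gamma$, and random real scores:

```python
import numpy as np

N, n, gamma = 8, 5, 0.3                                # training length, current length, decay rate
tau = np.log(np.exp(-gamma * np.arange(n, N)).sum())   # tau = ln(sum_{i=n}^{N-1} e^{-i*gamma})

rng = np.random.default_rng(0)
for i in range(1, n + 1):                              # 1-indexed row of the attention matrix
    real = np.exp(rng.normal(size=i)).sum()            # mass of the i real scores in row i
    # pseudo mass at full training length N vs. truncated length n plus the suffix column:
    pseudo_train = np.exp(-gamma * np.arange(i, N)).sum()
    pseudo_infer = np.exp(-gamma * np.arange(i, n)).sum() + np.exp(tau)
    assert np.isclose(real + pseudo_train, real + pseudo_infer)
```

The check passes because $e^{\tau}$ equals exactly the pseudo mass that the truncated rows are missing, which is what makes the cached softmax results reusable.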
It avoids repeated data transfers between the GPU's high-bandwidth memory (HBM) and processing units by segmenting and sequentially processing the $QKV$ matrices on-chip. StableMask's integration into this framework is seamless, requiring only minimal modifications. In the FlashAttention paradigm, the query $Q\in\mathbb{R}^{n\times d_{H}}$, key $K\in\mathbb{R}^{n\times d_{H}}$, and value $V\in\mathbb{R}^{n\times d_{H}}$ matrices are partitioned into $T_{r}=\frac{n}{B_{r}}$ blocks $Q_{1},\ldots,Q_{T_{r}}$, $K_{1},\ldots,K_{T_{r}}$, $V_{1},\ldots,V_{T_{r}}$, each of dimension $\mathbb{R}^{B_{r}\times d_{H}}$. Each block $Q_{i},K_{j},V_{i}$ is then fetched for computation. The attention scores $S_{i}^{(j)}$ for blocks $Q_{i}$ and $K_{j}$ are derived from the on-chip computation $S_{i}^{(j)}=Q_{i}K_{j}^{T}\in\mathbb{R}^{B_{r}\times B_{r}}$. With the incorporation of StableMask into FlashAttention, two additional fused operations are introduced:

$S_{i}^{(j)}=(Q_{i}K_{j}^{T})\odot C_{i}^{(j)}+P_{i}^{(j)},$ (12)

where $P$ and $C$ correspond to the StableMask matrices, segmented into $T_{r}\times T_{r}$ blocks with $P_{i}^{(j)},C_{i}^{(j)}\in\mathbb{R}^{B_{r}\times B_{r}}$ and loaded on-chip. We include a complete formula derivation and pseudocode implementation in Appendix D.

Table 2: Left: pretraining results of OpenLLaMA 1.4B with RoPE (PPL at increasing token counts). Right: results on downstream tasks for OpenLLaMA 1.4B.

| Model | 5B | 10B | 15B | 20B | 25B | LBD | PIQA | ARCE | ARCC | OBQA | WG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenLLaMA | 15.4±.2 | 14.8±.3 | 12.4±.3 | 11.7±.2 | 10.7±.3 | 59.4 | 67.1 | 51.4 | 25.6 | 31.4 | 53.5 |
| OpenLLaMA-SM | 15.0±.2 | 14.6±.1 | 11.9±.1 | 11.3±.4 | 10.4±.3 | 59.6 | 67.1 | 51.7 | 25.6 | 32.6 | 54.1 |

Figure 4: (a–c) Scaling curves of models from 160M to 1.4B across different positional encodings. (d) Extrapolation results (with window attention). StableMask consistently improves model performance while enabling effective extrapolation.
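Returning to the hardware-efficient implementation, the fused block masking of Equation (12) can be sketched and checked against the full-matrix computation. Shapes, block size, and $\gamma$ below are illustrative; the real kernel would also fuse the streaming softmax, which is omitted here.

```python
import numpy as np

n, d, Br = 8, 4, 2                                  # sequence length, head dim, block size
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
gamma = 0.2
C = np.tril(np.ones((n, n)))                        # StableMask C: lower-triangular ones
P = np.triu(np.broadcast_to(-gamma * np.arange(n), (n, n)), 1)  # pseudo scores

full = (Q @ K.T) * C + P                            # Eq. (12) computed in one shot

blocked = np.zeros((n, n))                          # same result, block by block
for bi in range(0, n, Br):
    for bj in range(0, n, Br):
        Cij = C[bi:bi + Br, bj:bj + Br]             # C block loaded "on-chip"
        Pij = P[bi:bi + Br, bj:bj + Br]             # P block loaded "on-chip"
        blocked[bi:bi + Br, bj:bj + Br] = (Q[bi:bi + Br] @ K[bj:bj + Br].T) * Cij + Pij

assert np.allclose(full, blocked)                   # blockwise masking matches the full matrix
```

Because $C$ and $P$ are elementwise terms, they tile exactly along the same block boundaries as $QK^{\top}$, which is why the integration requires only these two extra fused operations.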
## 5 Experiments

In this section, we present extensive experiments to rigorously evaluate the performance of our proposed method.

### 5.1 StableMask Solves Two Problems

Our initial assessment confirms the efficacy of StableMask in addressing the two problems in Transformer models. The experimental results are presented in Figure 1. First, concerning the disproportional attention problem, we perform a comparative visualization of the attention heads in models with and without StableMask. By calculating the attention probability ratios for the first token and various token types, we observe that StableMask largely rectifies the abnormal attention distribution: with StableMask, both initial tokens and punctuation marks see a significant reduction in attention values. Regarding the second issue, encoding absolute positional information, we evaluate the model’s fitting capability on a specially designed dataset, comparing StableMask with various position encoding approaches. The findings indicate that StableMask adeptly encodes absolute positional information, thereby effectively remedying the limitations inherent in relative position encoding. We also provide a visualization of the new attention score matrix after softmax with StableMask in Appendix F.

### 5.2 StableMask Improves Model Performance

We further tested the performance of StableMask across model architectures and position encodings. Our experiments use models built on the BLOOM (LLaMA architecture with ALiBi) and OpenLLaMA (Touvron et al., 2023) (RoPE (Su et al., 2021)) architectures. Detailed settings can be found in Appendix E. Performance on Wikitext-103 and MiniPile (Table 1): Empirical evidence underscores the efficacy of models employing StableMask when trained on both Wikitext-103 (Merity et al., 2016) and MiniPile (Kaddour, 2023).
These models achieve improved (lower) perplexity (PPL) scores, a pattern consistent across architectures and sizes, including those with ALiBi and RoPE, and spanning parameter scales from 71M to 400M. Notably, on both datasets, models integrating StableMask consistently outperform their counterparts lacking this feature. Impact on Scaling Performance (Table 2): The Pile is an extensive open-source dataset tailored for large-scale language modeling. We pretrained a 1.4B model with the LLaMA architecture on the Pile dataset for 25B tokens. As the number of training tokens scales, the model with StableMask consistently achieves better PPL scores than the standard OpenLLaMA model, demonstrating the scaling ability of models with StableMask. Effectiveness in Downstream Tasks (Table 2): When evaluating the pretrained models on downstream tasks such as LAMBADA (Paperno et al., 2016), PIQA (Bisk et al., 2019), ARC-Easy (Yadav et al., 2019), ARC-Challenge (Yadav et al., 2019), OpenbookQA (Mihaylov et al., 2018), and Winogrande (Sakaguchi et al., 2021), the model with StableMask shows a general trend of improved performance. This suggests that StableMask not only improves language understanding in the pretraining stage but also enhances effectiveness in downstream tasks.

### 5.3 Extrapolation Capability

Because StableMask resolves the problem of DA tokens, it naturally addresses the attention sink issue (Xiao et al., 2023), where initial tokens receive large attention values and removing them from the attention window leads to a surge in perplexity. Models with our proposed StableMask do not need to preserve tokens at the beginning of the sequence during window-based extrapolation, and thus avoid generation failures. As shown in Figure 4, with RoPE position encoding, the extrapolation perplexity quickly explodes without StableMask. When StableMask is applied, the extrapolation perplexity remains stable with window attention, where only the most recent KVs are cached.
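The window-attention caching regime described above can be illustrated with a minimal sketch (not the paper's code): only the most recent `window` key/value pairs are retained, with no special handling of the initial tokens, which is exactly the setting where StableMask prevents the perplexity surge.

```python
from collections import deque

def make_window_kv_cache(window):
    """Return a step function that caches only the most recent `window`
    key/value pairs. With StableMask, evicting the initial tokens is
    safe because no attention sink concentrates probability on them."""
    keys, values = deque(maxlen=window), deque(maxlen=window)

    def step(k, v):
        keys.append(k)
        values.append(v)
        return list(keys), list(values)

    return step
```

Feeding tokens 1, 2, 3 into a cache with `window=2` leaves keys for tokens 2 and 3: the first token has been evicted.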
Furthermore, we believe that the parameter-free nature of StableMask facilitates its seamless integration with other extrapolation methods, a prospect we leave for future exploration.

| Method | PPL |
|---|---|
| Baseline | 22.5 |
| Learnable AT | 21.6 |
| Fixed Value AT | 22.4 |
| StableMask | 21.1 |

| Pseudo value | PPL |
|---|---|
| $-\infty$ | 22.5 |
| 0 | 21.5 |
| $1\times 10^{-2}$ | 22.2 |
| Positional decay | 21.1 |

Table 3: Left: Comparison with AT-based methods (OpenLLaMA, 160M). Right: Ablation of pseudo-attention values (OpenLLaMA, 160M).

### 5.4 StableMask vs AT-based Methods

In Section 3, we discussed artificial token (AT)-based methods as one alternative for mitigating the DA problem. These artificial tokens can be either learnable, i.e., added before the embedding layer, or fixed as constant vectors, e.g., the zero vector. However, because AT-based methods provide the same number of extra tokens for all sequences while the severity of the DA issue varies along the sequence, their benefit is not as significant as StableMask’s (see Table 3). For a fair comparison, we retrained OpenLLaMA models using the AT method and StableMask on the MiniPile dataset.

### 5.5 Impact on Inference Efficiency

In Section 4.3, we introduced StableMask for Inference, which changes the form of the mask to allow for more efficient inference strategies such as KV caching. To validate its effectiveness, we tested the inference efficiency of a standard Transformer (Baseline), a model using StableMask (SM), and a model using StableMask for Inference (SM-I). We present the results in Figure 5 and find that StableMask for Inference significantly improves inference efficiency, making it comparable to that of traditional Transformers.

Figure 5: Inference latency test on OpenLLaMA 1.4B. Our proposed StableMask adapted for fast inference (SM-I) significantly reduces the running latency.
### 5.6 Effects of Pseudo Attention Value

In Section 4, we introduced positional linear decay, making the pseudo-attention scores align with the characteristics of real attention scores. To validate this design, we conducted ablation experiments on various types of pseudo-attention scores. These experiments covered four modes: (a) no pseudo-attention scores, i.e., maintaining a mask of negative infinity; (b) padding with zeros, which aligns with the attention score distribution; (c) padding with a value outside the attention score distribution, e.g., $1\times 10^{-2}$; and (d) the positional decay method we propose. Our ablation studies, detailed in Table 3, demonstrate that a fixed value like $1\times 10^{-2}$ deviates significantly from the original attention matrix’s distribution, leading to diminished pretraining performance. Positional decay, in contrast, performs best among the variants during training.

## 6 Related Work

Several studies have attempted to address issues inherent in the attention mechanism and softmax operation. A pivotal contribution by Hassid et al. (2022) raised questions about the role of certain heads in the attention mechanism. They discovered that substituting a subset of heads with constant diagonal matrices can even enhance model performance, suggesting that some of the model’s attention heads do not need to attend to any tokens other than themselves. Quantizable Transformer (Bondarenko et al., 2023) and StreamingLLM (Xiao et al., 2023) identified a tendency in some attention heads to accumulate probability on the initial few tokens or on tokens similar to punctuation marks. Bondarenko et al. (2023) demonstrated that this behavior impacts model quantization, proposing a solution that trims the softmax and employs gated attention.
StreamingLLM, on the other hand, observed that this phenomenon affects windowed attention and addressed it by preserving the initial tokens. Darcet et al. (2023) proposed adding “register tokens”, which are essentially artificial places for the real tokens to attend to; the added tokens absorb the excessive attention that would otherwise accumulate on the initial tokens. However, previous approaches that add or use extra tokens either (1) use fixed values or weights, which do not account for possible distributional shifts when extrapolating to longer sequences; (2) do not explore their potential interference with positional embeddings; (3) add extra parameters or computation to the attention layer without making clear whether existing optimization techniques remain applicable; or (4) do not provide a theoretical framework for understanding the phenomenon more deeply.

## 7 Conclusion

StableMask represents a significant advancement in language modeling by simultaneously addressing two limitations of the decoder-only Transformer architecture: disproportional attention and the inability to encode absolute position. By refining the causal mask with pseudo-attention values, StableMask adeptly balances attention distributions and encodes absolute positional information through a progressively decreasing mask ratio. It preserves the inherent distribution of the attention score matrix and enhances the model’s ability in various natural language tasks. While StableMask demonstrates much potential, it is not without constraints. One notable limitation is the slightly increased computational demand compared to conventional attention mechanisms. However, as the added computation is only one elementwise multiplication and addition over the score matrix, we believe this overhead is negligible. Furthermore, StableMask inherently encodes absolute positional information, necessitating careful calibration so that this signal does not adversely affect the model.
We anticipate that forthcoming research will further refine our approach and overcome these challenges. ## 8 Acknowledgement We thank Songlin Yang and other collaborators for the suggestions on language expression and image design in this paper. ## References * Bisk et al. (2019) Bisk, Y., Zellers, R., Bras, R. L., Gao, J., and Choi, Y. Piqa: Reasoning about physical commonsense in natural language, 2019. * Bondarenko et al. (2023) Bondarenko, Y., Nagel, M., and Blankevoort, T. Quantizable transformers: Removing outliers by helping attention heads do nothing. _arXiv preprint arXiv:2306.12929_ , 2023. * Brown et al. (2020) Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. _Advances in Neural Information Processing Systems_ , 33:1877–1901, 2020. * Dao et al. (2022) Dao, T., Fu, D., Ermon, S., Rudra, A., and Ré, C. Flashattention: Fast and memory-efficient exact attention with io-awareness. _Advances in Neural Information Processing Systems_ , 35:16344–16359, 2022. * Darcet et al. (2023) Darcet, T., Oquab, M., Mairal, J., and Bojanowski, P. Vision transformers need registers. _arXiv preprint arXiv:2309.16588_ , 2023. * Hassid et al. (2022) Hassid, M., Peng, H., Rotem, D., Kasai, J., Montero, I., Smith, N. A., and Schwartz, R. How much does attention actually attend? questioning the importance of attention in pretrained transformers, 2022. * Hua et al. (2022) Hua, W., Dai, Z., Liu, H., and Le, Q. Transformer quality in linear time. In _International Conference on Machine Learning_ , pp. 9099–9117. PMLR, 2022. * Kaddour (2023) Kaddour, J. The minipile challenge for data-efficient language models, 2023. * Kazemnejad et al. (2023) Kazemnejad, A., Padhi, I., Ramamurthy, K. N., Das, P., and Reddy, S. The impact of positional encoding on length generalization in transformers, 2023. * Ke et al. (2020) Ke, G., He, D., and Liu, T.-Y. 
Rethinking positional encoding in language pre-training. _arXiv preprint arXiv:2006.15595_ , 2020. * Kenton & Toutanova (2019) Kenton, J. D. M.-W. C. and Toutanova, L. K. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_ , pp. 4171–4186, 2019. * Kim et al. (2023) Kim, J., Kim, M., and Mozafari, B. Provable memorization capacity of transformers. In _International Conference on Learning Representations_ , 2023. * Laha et al. (2018) Laha, A., Chemmengath, S. A., Agrawal, P., Khapra, M., Sankaranarayanan, K., and Ramaswamy, H. G. On controllable sparse alternatives to softmax. _Advances in Neural Information Processing Systems_ , 31, 2018. * Lester et al. (2021) Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pp. 3045–3059, 2021. * Luo et al. (2022) Luo, S., Li, S., Zheng, S., Liu, T.-Y., Wang, L., and He, D. Your transformer may not be as powerful as you expect. _Advances in Neural Information Processing Systems_ , 35:4301–4315, 2022. * Merity et al. (2016) Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer sentinel mixture models. _arXiv preprint arXiv:1609.07843_ , 2016. * Mihaylov et al. (2018) Mihaylov, T., Clark, P., Khot, T., and Sabharwal, A. Can a suit of armor conduct electricity? a new dataset for open book question answering. In _Conference on Empirical Methods in Natural Language Processing_ , 2018. URL https://api.semanticscholar.org/CorpusID:52183757. * Pang et al. (2019) Pang, T., Xu, K., Dong, Y., Du, C., Chen, N., and Zhu, J. Rethinking softmax cross-entropy loss for adversarial robustness. In _International Conference on Learning Representations_ , 2019. * Paperno et al. (2016) Paperno, D., Kruszewski, G., Lazaridou, A., Pham, Q. N., Bernardi, R., Pezzelle, S., Baroni, M., Boleda, G., and Fernández, R. 
The lambada dataset: Word prediction requiring a broad discourse context. _arXiv preprint arXiv:1606.06031_ , 2016. * Park et al. (2021) Park, S., Yun, C., Lee, J., and Shin, J. Minimum width for universal approximation. In _International Conference on Learning Representations_ , 2021. * Patel et al. (2023) Patel, A., Li, B., Rasooli, M. S., Constant, N., Raffel, C., and Callison-Burch, C. Bidirectional language models are also few-shot learners. In _The Eleventh International Conference on Learning Representations_ , 2023. * Polyanskiy & Wu (2016) Polyanskiy, Y. and Wu, Y. Strong data-processing inequalities for channels and bayesian networks, 2016. * Pope et al. (2023) Pope, R., Douglas, S., Chowdhery, A., Devlin, J., Bradbury, J., Heek, J., Xiao, K., Agrawal, S., and Dean, J. Efficiently scaling transformer inference. _Proceedings of Machine Learning and Systems_ , 5, 2023. * Press et al. (2022) Press, O., Smith, N., and Lewis, M. Train short, test long: Attention with linear biases enables input length extrapolation. In _International Conference on Learning Representations_ , 2022. * Radford et al. (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9, 2019. * Raffel et al. (2020) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_ , 21(1):5485–5551, 2020. * Sakaguchi et al. (2021) Sakaguchi, K., Bras, R. L., Bhagavatula, C., and Choi, Y. Winogrande: An adversarial winograd schema challenge at scale. _Communications of the ACM_ , 64(9):99–106, 2021. * Shen et al. (2019) Shen, X., Zhao, Y., Su, H., and Klakow, D. Improving latent alignment in text summarization by generalizing the pointer generator. 
In _Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP)_ , pp. 3762–3773, 2019. * Su et al. (2021) Su, J., Lu, Y., Pan, S., Murtadha, A., Wen, B., and Liu, Y. Roformer: Enhanced transformer with rotary position embedding. _arXiv preprint arXiv:2104.09864_ , 2021. * Tang et al. (2021) Tang, Z., Li, C., Ge, J., Shen, X., Zhu, Z., and Luo, B. Ast-transformer: Encoding abstract syntax trees efficiently for code summarization. In _2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)_ , pp. 1193–1195. IEEE, 2021. * Touvron et al. (2023) Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ , 2023. * Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. _Advances in Neural Information Processing Systems_ , 30, 2017. * Xiao et al. (2023) Xiao, G., Tian, Y., Chen, B., Han, S., and Lewis, M. Efficient streaming language models with attention sinks. _arXiv preprint arXiv:2309.17453_ , 2023. * Yadav et al. (2019) Yadav, V., Bethard, S., and Surdeanu, M. Quick and (not so) dirty: Unsupervised selection of justification sentences for multi-hop question answering. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. Association for Computational Linguistics, 2019. doi: 10.18653/v1/d19-1260. URL http://dx.doi.org/10.18653/v1/D19-1260. * Yun et al. (2020) Yun, C., Bhojanapalli, S., Rawat, A. S., Reddi, S. J., and Kumar, S. Are transformers universal approximators of sequence-to-sequence functions? 
In _International Conference on Learning Representations_ , 2020.

## Appendix A Detailed Explanation of the DA Issue

Traditional dot-product attention assumes that the next token is strongly related to the previous context. However, the mutual information $I(X_{\leq i};X_{n+1})=H(X_{n+1})-H(X_{n+1}|X_{\leq i})$ can be small, especially in the initial parts of the sequence. We formalize this (counter-)intuition by defining the following concepts:

###### Definition A.1.

A _causally isotropic_ data distribution of $N$ discrete random variables $X_{1},X_{2},\dots,X_{N}$ satisfies that for any set of indices $\Lambda\subset[n]$, $H(X_{n+1}|X_{\Lambda}=x_{\Lambda})=H(X_{n+1}|X_{\Lambda})$ does not depend on the value of $x_{\Lambda}$, where $H$ denotes entropy. (Causal isotropy is a strict condition; we use it for demonstration purposes only, as it isolates the effect of data variability in judging the disproportionality of attention.)

###### Definition A.2.

A _layer-wise_ decoder for a data distribution $p(X_{1},X_{2},\dots,X_{N})$ accepts any data point $x_{<N}$ and deterministically computes $L$ layers of intermediate representations $\Omega^{(l)}_{<N}$, such that for $n<N$, $\Omega^{(l)}_{n}$ only receives inputs from $\Omega^{(l-1)}_{\leq n}$ (we define $\Omega^{(0)}_{n}$ as $X_{n}$ or its embedding).

###### Definition A.3.

A _contextual_ layer-wise decoder satisfies that for any two possible inputs $x_{<N},x^{\prime}_{<N}$ and $n<N$, if $p(X_{n+1}|x_{\leq n})\neq p(X_{n+1}|x^{\prime}_{\leq n})$, then $\omega^{(L)}_{n}\neq\omega^{\prime(L)}_{n}$, where $\omega^{(L)}_{n}$ ($\omega^{\prime(L)}_{n}$) is $\Omega^{(L)}_{n}$ evaluated on input $x_{\leq n}$ ($x^{\prime}_{\leq n}$).
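Definition A.1 can be checked mechanically on toy distributions. The sketch below is our own illustration, not part of the paper: given a joint pmf over short sequences, it compares the conditional entropy of the last token across all values of every observed index set.

```python
import math
from itertools import combinations, product

def cond_entropy(pmf, lam, x_lam):
    """H(X_last | X_lam = x_lam) for a joint pmf over fixed-length tuples.
    Returns None when the conditioning event has zero probability."""
    total = sum(p for seq, p in pmf.items()
                if all(seq[i] == v for i, v in zip(lam, x_lam)))
    if total == 0:
        return None
    marg = {}
    for seq, p in pmf.items():
        if all(seq[i] == v for i, v in zip(lam, x_lam)):
            marg[seq[-1]] = marg.get(seq[-1], 0.0) + p / total
    return -sum(q * math.log(q) for q in marg.values() if q > 0)

def is_causally_isotropic(pmf, seq_len, vocab):
    """Definition A.1 on a toy distribution: for every index set Lambda
    over the observed positions, H(X_last | X_Lambda = x_Lambda) must not
    depend on the observed values x_Lambda."""
    positions = range(seq_len - 1)
    for r in range(1, seq_len):
        for lam in combinations(positions, r):
            ents = [cond_entropy(pmf, lam, vals)
                    for vals in product(vocab, repeat=r)]
            ents = [e for e in ents if e is not None]
            if any(abs(e - ents[0]) > 1e-9 for e in ents):
                return False
    return True
```

A uniform i.i.d. distribution passes this check, while a distribution in which the last token is deterministic only for some observed values fails it.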
Our definition of contextual decoder aligns with the definition of contextual mapping in previous works (Yun et al., 2020; Kim et al., 2023), which guarantees that certain different inputs are mapped to different representations, although their definition of contextual mapping is more focused on the seq2seq setting. Next, we make the following observations: ###### Proposition A.4. For a layer-wise decoder on a data distribution, the prefixes of its intermediate representation at the $l$-th layer $\Omega^{(l)}_{\leq i}$ satisfy 1. 1. $H(X_{n+1}|\Omega^{(l)}_{\leq i})\geq H(X_{n+1}|X_{\leq i})$ for all $i\leq n$; 2. 2. $H(X_{n+1}|\Omega^{(l)}_{\leq n})=H(X_{n+1}|X_{\leq n})$ if the decoder is contextual; 3. 3. $H(X_{n+1}|\Omega^{(l)}_{\leq i}=\omega^{(l)}_{\leq i})\geq H(X_{n+1}|X_{\leq i})$ for all $i\leq n$ and all $\omega^{(l)}_{\leq i}$ if the data is causally isotropic; 4. 4. $H(X_{n+1}|\Omega^{(l)}_{\leq n}=\omega^{(l)}_{\leq n})=H(X_{n+1}|X_{\leq n})$ for all $\omega^{(l)}_{\leq n}$ if the decoder is contextual and the data is causally isotropic. ###### Proof. 1. 1. Notice that $X_{n+1}\rightarrow X_{\leq i}\rightarrow\Omega^{(l)}_{\leq i}$ is a Markov chain. By the data processing inequality (Polyanskiy & Wu, 2016), $I(\Omega^{(l)}_{\leq i};X_{n+1})\leq I(X_{\leq i};X_{n+1})\implies H(X_{n+1}|\Omega^{(l)}_{\leq i})\geq H(X_{n+1}|X_{\leq i})$. 2. 2. For any $\omega^{(l)}_{\leq n}$, let $\kappa(\omega^{(l)}_{\leq n})$ be the set of inputs where $p(x_{\leq n}|\omega^{(l)}_{\leq n})>0$, which is equivalent to $p(\omega^{(l)}_{\leq n}|x_{\leq n})=1$ by the deterministic nature of decoder. 
By the definition of contextual layer-wise decoder, $\displaystyle\forall x_{\leq n},x^{\prime}_{\leq n}\in\kappa(\omega^{(l)}_{\leq n})$ $\displaystyle\implies\omega^{(l)}_{\leq n}=\omega^{\prime(l)}_{\leq n}\implies\omega^{(L)}_{\leq n}=\omega^{\prime(L)}_{\leq n}$ $\displaystyle\implies\omega^{(L)}_{n}=\omega^{\prime(L)}_{n}\implies p(X_{n+1}|x_{\leq n})=p(X_{n+1}|x^{\prime}_{\leq n}).$ (13) Therefore, $\displaystyle p(X_{n+1}|\omega^{(l)}_{\leq n})$ $\displaystyle=\sum_{x_{\leq n}\in\kappa(\omega^{(l)}_{\leq n})}p(X_{n+1}|x_{\leq n},\omega^{(l)}_{\leq n})p(x_{\leq n}|\omega^{(l)}_{\leq n})$ (by conditional independence) $\displaystyle=\sum_{x_{\leq n}\in\kappa(\omega^{(l)}_{\leq n})}p(X_{n+1}|x_{\leq n})p(x_{\leq n}|\omega^{(l)}_{\leq n})$ (14) $\displaystyle=p(X_{n+1}|x_{\leq n}),~{}\forall x_{\leq n}\in\kappa(\omega^{(l)}_{\leq n})$ $\displaystyle\implies H(X_{n+1}|\Omega^{(l)}_{\leq n}=\omega^{(l)}_{\leq n})$ $\displaystyle=H(X_{n+1}|X_{\leq n}=x_{\leq n}),~{}\forall x_{\leq n}\in\kappa(\omega^{(l)}_{\leq n})$ (15) $\displaystyle\implies H(X_{n+1}|\Omega^{(l)}_{\leq n})$ $\displaystyle=\sum_{\omega^{(l)}_{\leq n}}p(\omega^{(l)}_{\leq n})H(X_{n+1}|\Omega^{(l)}_{\leq n}=\omega^{(l)}_{\leq n})=\sum_{\omega^{(l)}_{\leq n}}\left(\sum_{x_{\leq n}\in\kappa(\omega^{(l)}_{\leq n})}p(x_{\leq n})\right)H(X_{n+1}|\Omega^{(l)}_{\leq n}=\omega^{(l)}_{\leq n})$ $\displaystyle=\sum_{x_{\leq n}}p(x_{\leq n})H(X_{n+1}|X_{\leq n}=x_{\leq n})=H(X_{n+1}|X_{\leq n}).$ (16) 3. 3. 
Note that (14) can be written as a weighted average, which we denote as $\mathrm{avg}_{\kappa}$: $\displaystyle p(x_{n+1}|\omega^{(l)}_{\leq n})$ $\displaystyle=\mathrm{avg}_{\kappa}p(x_{n+1}|x_{\leq n}),~{}\forall x_{n+1}.$ (17) Similarly, with a slightly different definition of $\kappa$, $\displaystyle p(x_{n+1}|\omega^{(l)}_{\leq i})$ $\displaystyle=\mathrm{avg}_{\kappa}p(x_{n+1}|x_{\leq i}),\forall x_{n+1}.$ (18) Applying Jensen’s inequality to the function $-x\log x$, we have for any $x_{n+1}$, $\displaystyle-p(x_{n+1}|\omega^{(l)}_{\leq i})\log p(x_{n+1}|\omega^{(l)}_{\leq i})$ $\displaystyle=-\left(\mathrm{avg}_{\kappa}p(x_{n+1}|x_{\leq i})\right)\log\left(\mathrm{avg}_{\kappa}p(x_{n+1}|x_{\leq i})\right)$ $\displaystyle\geq\mathrm{avg}_{\kappa}\left(-p(x_{n+1}|x_{\leq i})\log p(x_{n+1}|x_{\leq i})\right).$ Therefore, $\displaystyle H(X_{n+1}|\Omega^{(l)}_{\leq i}=\omega^{(l)}_{\leq i})$ $\displaystyle=-\sum_{x_{n+1}}p(x_{n+1}|\omega^{(l)}_{\leq i})\log p(x_{n+1}|\omega^{(l)}_{\leq i})$ $\displaystyle\geq\sum_{x_{n+1}}\mathrm{avg}_{\kappa}\left(-p(x_{n+1}|x_{\leq i})\log p(x_{n+1}|x_{\leq i})\right)$ $\displaystyle=\mathrm{avg}_{\kappa}\sum_{x_{n+1}}-p(x_{n+1}|x_{\leq i})\log p(x_{n+1}|x_{\leq i})$ $\displaystyle=\mathrm{avg}_{\kappa}H(X_{n+1}|X_{\leq i}=x_{\leq i})$ (by causal isotropy) $\displaystyle=H(X_{n+1}|X_{\leq i}).$ (20) 4. 4. Apply causal isotropy to (15). ∎

Figure 6: The DA issue and the proposed solution of adding pseudo-attention scores.

The rationale behind this is that, through learning, a decoder should learn to avoid paying too much attention to positions where $H(X_{n+1}|\Omega^{(l)}_{\leq i}=\omega^{(l)}_{\leq i})$ is high, because such positions provide little mutual information with respect to the prediction goal. We are now ready to define the disproportionality of attention:

###### Definition A.5.

Let inputs sampled from a data distribution $p(X_{1},X_{2},\dots,X_{N})$ run through a contextual layer-wise decoder with attention layers.
If for at least one possible input $x_{<N}$, the attention $\tilde{A}^{(l)}$ after softmax in the $l$-th layer satisfies $\sum_{j\leq i}\tilde{A}^{(l)}_{nj}>\frac{I(X_{\leq i};X_{n+1})}{I(X_{\leq n};X_{n+1})}\sum_{j\leq n}\tilde{A}^{(l)}_{nj}+\varepsilon$ (21) for some $i<n<N$ and $I(X_{\leq n};X_{n+1})>0$, then this attention layer is said to have disproportional attention towards initial tokens on this input. The overall degree of disproportionality of an attention layer can be measured by the total probability of such inputs, $\sum_{x_{<N}}p(x_{<N})$. Note that by Proposition A.4, the following always holds: $\frac{I(X_{\leq i};X_{n+1})}{I(X_{\leq n};X_{n+1})}=\frac{H(X_{n+1})-H(X_{n+1}|X_{\leq i})}{H(X_{n+1})-H(X_{n+1}|X_{\leq n})}\geq\frac{H(X_{n+1})-H(X_{n+1}|\Omega^{(l)}_{\leq i})}{H(X_{n+1})-H(X_{n+1}|\Omega^{(l)}_{\leq n})}=\frac{I(\Omega^{(l)}_{\leq i};X_{n+1})}{I(\Omega^{(l)}_{\leq n};X_{n+1})}.$ (22) This justifies our choice of the threshold $\frac{I(X_{\leq i};X_{n+1})}{I(X_{\leq n};X_{n+1})}$ for detecting the disproportionality of attention. Moreover, if the data is causally isotropic, the specific values of the data do not matter for how much attention the model should pay. In this work, we handle the DA problem with pseudo-attention scores and offer a probabilistic interpretation. First, we clarify that the problem does not lie in the query-key-value mechanism of attention, but rather in the nature of autoregression: the history does not represent a complete description of the future, and the probability that the future deviates from the history must be taken into account, more so at the beginning. Thus the output of an attention layer at earlier positions should be able to signal to the subsequent layers a higher variance of estimation compared to later positions.
Failure to do so reliably forces the model to allocate computation elsewhere to rectify the signal, such as excessive attention towards irrelevant tokens (Xiao et al., 2023) and “no-op” heads (Bondarenko et al., 2023), or leaves it totally paralyzed (Appendix B.1). StableMask parameterizes this inductive bias, orthogonally to decoder-only Transformers with RPE, through pseudo-attention scores in the causal mask that decay over time.

## Appendix B Further Explanation of Position Encoding

### B.1 The Unit Test of Absolute Position-Awareness

Training a decoder-only Transformer with no PE will fail on data points that consist of all identical tokens, because the outputs of each layer are all identical vectors. Consequently, it is impossible for the model to predict different output distributions at different positions. We regard such all-identical inputs with different outputs at different positions as the “unit test” of absolute position awareness. We showed that Transformers with RPE cannot pass this test (Appendix B.2). One way to pass the test without explicit PE is to prepend a special $\langle bos\rangle$ token to the input sequence (Kazemnejad et al., 2023). It breaks the symmetry across positions and provides a way for the decoder to recognize absolute position. We note that this solution is equivalent to the AT-based method used to solve the DA issue (Section 3). This inspires us to see the test from the viewpoint of DA. Indeed, we have

###### Theorem B.1.

There exists a causally isotropic data distribution (defined in Appendix A) such that any regular Transformer decoder has a high probability of being (weakly) disproportional in all of its attention layers.

###### Proof.

Consider the following $\mathrm{softCopyLast}$ task: for any input $x_{<n}$, output the last token with probability $1-e^{-n}$, or a random token otherwise. The training dataset is constructed by a sampling algorithm that repeatedly performs the task correctly.
The training dataset is causally isotropic: for every set of observed variables $x_{\Lambda}$, $\Lambda\subset[n]$, $H(X_{n+1}|X_{\Lambda}=x_{\Lambda})$ depends only on the largest element of $\Lambda$, not on the specific values of the variables. Moreover, the probability density of this dataset concentrates mostly on the all-identical sequences, because as time goes on, sequences in the dataset are increasingly likely to copy themselves. Last, we need to check that regular Transformer decoders have (weakly) disproportional attention on all-identical sequences in all attention layers. Note that although $I(X_{\leq i};X_{n+1})>0$ for $i<n$, $I(X_{\leq i};X_{n+1}|X_{n})=0$ holds because of the conditional independence between $X_{\leq i}$ and $X_{n+1}$ given $X_{n}$. On the other hand, $\sum_{j\leq i}\tilde{A}^{(l)}_{nj}>\varepsilon$ holds because the softmax in a regular Transformer always gives positive attention. So the model has weakly disproportional attention towards initial tokens. ∎

Intuitively speaking, if the inputs are all identical, then the model only needs to know the last token and the sequence length in order to decide the output. All other attention can be regarded as (weakly) disproportional. However, inputs constructed this way account for only an exponentially small total probability in real datasets, so we separate this issue from the issue of disproportional attention.

### B.2 Experiments on RPE’s Inability to Encode Absolute Position

To demonstrate that RPE cannot encode absolute positional information, as discussed in Section 3, we designed several experiments that require knowledge of absolute positional relationships. These experiments comprise three tasks:

(1) Absolute Position Mapping: Given an input sequence of “$0~{}0~{}0~{}0~{}0~{}\ldots$”, the model needs to accurately map each position to its absolute position. In other words, we expect an output of “$1~{}2~{}3~{}4~{}5~{}\ldots$”.
(2) Absolute Position Identification: Given an input sequence of “$0~{}0~{}0~{}\ldots~{}\texttt{[ABE]}~{}0~{}0~{}\ldots$”, where [ABE] encodes a special character at a specific position, the model needs to output the absolute position of the location marked by [ABE]. In this case, we expect an output of “$0~{}0~{}0~{}\ldots~{}n~{}0~{}0~{}\ldots$”, where $n$ represents that position.

(3) Odd-Even Number Counting: Given an input sequence of “$0~{}0~{}0~{}0~{}0~{}\ldots$”, the model needs to output a sequence of alternating odd and even numbers, such as “$1~{}2~{}1~{}2~{}\ldots$”. This task also relies on the model’s ability to recognize absolute positional information.

| PE type | Task (1) | Task (2) | Task (3) |
|---|---|---|---|
| APE (Learnable) | 96.7% | 94.3% | 97.6% |
| APE (Sinusoidal) | 98.1% | 99.1% | 96.2% |
| RPE (ALiBi) | 21.7% | 26.7% | 46.5% |
| RPE (T5) | 22.4% | 24.5% | 42.7% |
| RPE (RoPE) | 25.3% | 24.7% | 43.1% |

Table 4: Settings and results of the experiments on RPE’s inability to encode absolute position. We designed three datasets that rely on absolute position information and report the average accuracy on each task. The results show that RPE performs poorly, demonstrating that the positional information it encodes is shadowed during the softmax process.

Our experiments were conducted with a 160M-parameter model trained on four V100 GPUs. For detailed training hyperparameters, refer to the training details for the Wikitext-103 dataset (Appendix E).

## Appendix C StableMask Encodes Absolute Positional Information

In this section, we present how StableMask can recover absolute positions in the hidden state using a smaller portion of the model than prepending a special $\langle bos\rangle$ token to the sequence does (Appendix B.1).
Our proof is inspired by NoPE (Kazemnejad et al., 2023) but differs substantially: NoPE requires three dimensions of the hidden states at free disposal, while our construction needs only two and is arguably more natural.

###### Theorem C.1.

Let $X=[\boldsymbol{x}_{1},\cdots,\boldsymbol{x}_{n}]_{n}$ be an input sequence of length $n$ to the StableMask model $f^{\text{(SM)}}_{T}$. Then, the first layer of $f^{\text{(SM)}}_{T}$ can recover the absolute positions $[1,2,\dots,n]$ in the hidden state $\Omega^{(1)}$. That is, there exist $W_{Q}$, $W_{K}$, $W_{V}$ and $W_{O}$ for the first attention layer, along with $W_{1}$ and $W_{2}$ for the first feed-forward layer, that compute absolute positions and pass them to the next layer.

###### Proof.

We focus on the goal of reconstructing an index-dependent function $\xi_{i}=i/(i+\sum_{j=i}^{n-1}e^{-j\gamma})$ at the end of the first attention layer. After reconstructing $\xi_{i}$, recovering $i$ from it can be done by the universal approximation power of feed-forward networks (Park et al., 2021). For this, we need to gain control of a single head in the first attention layer and use two hidden dimensions in the embedding layer. Note that this approach does not alter the rest of the Transformer model. First, we specify the word embedding matrix $W_{E}\in\mathbb{R}^{d\times\mathcal{V}}$ as follows: the first row of $W_{E}$ is set to 1, which serves as the input vector; the second row of $W_{E}$ is set to 0, which serves as the output vector. Then, we have: $W_{E}=\begin{pmatrix}1&1&\dots&1\\\ 0&0&\dots&0\\\ e_{3,1}&e_{3,2}&\dots&e_{3,\mathcal{V}}\\\ \vdots&\vdots&\ddots&\vdots\\\ e_{d,1}&e_{d,2}&\dots&e_{d,\mathcal{V}}\end{pmatrix}_{d\times\mathcal{V}}$ (23) where $e_{i,j}\in\mathbb{R}$.
The word embeddings for the input sequence $X=[x_{1},\dots,x_{n}]_{n}$ are retrieved from the embedding matrix $W_{E}$ by: $X_{E}=W_{E}[X]=\begin{pmatrix}1&1&\dots&1\\\ 0&0&\dots&0\\\ e_{3,x_{1}}&e_{3,x_{2}}&\dots&e_{3,x_{n}}\\\ \vdots&\vdots&\ddots&\vdots\\\ e_{d,x_{1}}&e_{d,x_{2}}&\dots&e_{d,x_{n}}\end{pmatrix}_{d\times n}$ (24) Second, for head dimension $h\geq 1$, we specify the weights $W_{Q},W_{K},W_{V},W_{O}$ of the selected attention head in the first layer. Specifically, we set $W_{Q}=W_{K}=0$, and $W_{V}=\begin{pmatrix}1&0&\dots&0\\\ 0&0&\dots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\dots&0\end{pmatrix}_{h\times d},\quad W_{O}=\begin{pmatrix}0&0&\dots&0\\\ 1&0&\dots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\dots&0\end{pmatrix}_{d\times h}.$ (25) Consequently, all the query-key matching results are zero: $W_{K}X_{E}=W_{Q}X_{E}=0_{h\times n},\quad A=(W_{Q}X_{E})^{\top}(W_{K}X_{E})=0_{n\times n},$ (26) while $W_{V}$ takes the first row of $X_{E}$, which is the input vector, and sets all other entries to zero: $W_{V}X_{E}=\begin{pmatrix}1&1&\dots&1\\\ 0&0&\dots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\dots&0\end{pmatrix}_{h\times n}$ (27) We now calculate the output of attention.
First, since the key-query matching results are all zero, the attention score matrix with StableMask is $A_{\text{SM}}=A\odot C+P=\begin{pmatrix}0&-\gamma&\cdots&-(n-1)\gamma\\\ 0&0&\cdots&-(n-1)\gamma\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\cdots&0\\\ \end{pmatrix}_{n\times n}$ (28) Therefore, $\tilde{A}=\mathrm{Softmax}(A_{\text{SM}})\odot C=\begin{pmatrix}1/(1+\sum_{i=1}^{n-1}e^{-i\gamma})&0&\cdots&0\\\ 1/(2+\sum_{i=2}^{n-1}e^{-i\gamma})&1/(2+\sum_{i=2}^{n-1}e^{-i\gamma})&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 1/n&1/n&\cdots&1/n\\\ \end{pmatrix}_{n\times n}$ (29) $\tilde{A}(W_{V}X_{E})^{\top}=\begin{pmatrix}1/(1+\sum_{i=1}^{n-1}e^{-i\gamma})&0&\cdots&0\\\ 2/(2+\sum_{i=2}^{n-1}e^{-i\gamma})&0&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 1&0&\cdots&0\\\ \end{pmatrix}_{n\times h}=\begin{pmatrix}\xi_{1}&0&\cdots&0\\\ \xi_{2}&0&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ \xi_{n}&0&\cdots&0\\\ \end{pmatrix}_{n\times h}$ (30) Finally, $W_{O}$ is used to move the first row of $(\tilde{A}(W_{V}X_{E})^{\top})^{\top}$ to the second row: $W_{O}(\tilde{A}(W_{V}X_{E})^{\top})^{\top}=\begin{pmatrix}0&0&\cdots&0\\\ \xi_{1}&\xi_{2}&\cdots&\xi_{n}\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\cdots&0\\\ \end{pmatrix}_{d\times n}$ (31) Adding the residuals back to the input, we are done: $X_{E}+\sum_{\mathrm{h}}W^{(\mathrm{h})}_{O}(\tilde{A}^{(\mathrm{h})}(W^{(\mathrm{h})}_{V}X_{E})^{\top})^{\top}=\begin{pmatrix}1&1&\cdots&1\\\ \xi_{1}&\xi_{2}&\cdots&\xi_{n}\\\ *&*&\cdots&*\\\ \vdots&\vdots&\ddots&\vdots\\\ *&*&\cdots&*\\\ \end{pmatrix}_{d\times n}$ (32) where $*$ denotes values computed by other heads in the first layer, which we assume do not interfere with the first two hidden dimensions.
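As a sanity check, the construction of Eqs. 28–30 can be reproduced numerically. The sketch below is our own illustration (not part of the original proof); `n` and `gamma` are arbitrary toy values:

```python
import numpy as np

n, gamma = 6, 0.5

# StableMask components (0-indexed): C is the causal mask,
# P holds the decaying pseudo-attention values above the diagonal.
C = np.tril(np.ones((n, n)))
P = np.array([[-j * gamma if j > i else 0.0 for j in range(n)] for i in range(n)])

# With W_Q = W_K = 0, the raw scores A are all zero (Eq. 26).
A = np.zeros((n, n))
A_SM = A * C + P                      # Eq. 28

S = np.exp(A_SM)
S = S / S.sum(axis=1, keepdims=True)  # row-wise softmax
A_tilde = S * C                       # Eq. 29: re-apply the causal mask

# W_V X_E has a single row of ones (Eq. 27), so the head output's first
# column is xi_i, the row sums of A_tilde (Eq. 30).
xi = A_tilde.sum(axis=1)

expected = np.array([
    (i + 1) / (i + 1 + sum(np.exp(-j * gamma) for j in range(i + 1, n)))
    for i in range(n)
])
assert np.allclose(xi, expected)
```

Since $\xi_{i}$ is strictly increasing in $i$ (with $\xi_{n}=1$), a feed-forward layer can decode the position from it.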
∎ ## Appendix D FlashAttention with StableMask ### D.1 Introduction to FlashAttention FlashAttention (Dao et al., 2022) is a state-of-the-art method designed to enhance the performance of attention mechanisms in Transformer models, particularly addressing the efficiency constraints imposed by modern GPU memory hierarchies. Traditional attention mechanisms suffer from significant computational overhead, predominantly due to the necessity of storing and accessing large intermediate matrices, such as the softmax-normalized attention scores, from the High Bandwidth Memory (HBM). This process is inherently memory-bound due to the quadratic dependency on the sequence length, leading to extensive memory accesses and thus increased wall-clock time. The A100 GPU, for instance, showcases the discrepancy in memory speeds within its hierarchy, having a significantly faster on-chip SRAM compared to the larger HBM. FlashAttention optimizes for this architectural detail by reducing HBM reads and writes. It achieves a sub-quadratic number of HBM accesses by employing techniques like tiling and recomputation, which allow for the attention computation to be performed in smaller, more manageable blocks within the on-chip SRAM. This block-based approach mitigates the need to store large intermediate matrices, especially beneficial during the backward pass of model training where intermediate values are traditionally saved to HBM. Furthermore, FlashAttention incorporates kernel fusion in its implementation, enabling a single CUDA kernel to handle the entire computation process – from loading inputs from HBM, through all the computation steps (such as matrix multiplication and softmax), to writing the results back to HBM. This minimizes the frequency of costly memory accesses and contributes to an overall faster computation, without compromising the accuracy of the attention mechanism. 
As a result, FlashAttention stands out as an efficient primitive for both memory-bound and compute-bound operations within the GPU’s memory hierarchy, offering a significant improvement in the execution of Transformer models. ### D.2 Derivation In the FlashAttention paradigm, the query $Q\in\mathbb{R}^{n\times d_{H}}$, key $K\in\mathbb{R}^{n\times d_{H}}$, and value $V\in\mathbb{R}^{n\times d_{H}}$ matrices are partitioned into blocks: $Q$ into $T_{r}=\left\lceil\frac{n}{B_{r}}\right\rceil$ blocks $Q_{1},\ldots,Q_{T_{r}}$ of dimension $\mathbb{R}^{B_{r}\times d_{H}}$, and $K,V$ into $T_{c}=\left\lceil\frac{n}{B_{c}}\right\rceil$ blocks $K_{1},\ldots,K_{T_{c}}$ and $V_{1},\ldots,V_{T_{c}}$ of dimension $\mathbb{R}^{B_{c}\times d_{H}}$. Each block triple $Q_{i},K_{j},V_{j}$ is then fetched for computation. The attention scores $S_{i}^{(j)}$ for blocks $Q_{i}$ and $K_{j}$ are derived from the on-chip computation $S_{i}^{(j)}=Q_{i}K_{j}^{T}\in\mathbb{R}^{B_{r}\times B_{c}}$. With the incorporation of StableMask, two additional on-chip operations are introduced: $S_{i}^{(j)}=(Q_{i}K_{j}^{T})\odot C_{i}^{(j)}+P_{i}^{(j)},$ (33) where $P$ and $C$ correspond to the StableMask matrices, segmented into $T_{r}\times T_{c}$ blocks with $P_{i}^{(j)},C_{i}^{(j)}\in\mathbb{R}^{B_{r}\times B_{c}}$, and loaded on-chip.
The safe softmax operation, analogous to that in FlashAttention, proceeds as follows: $\displaystyle m_{i}^{(j)}$ $\displaystyle=$ $\displaystyle\max(m_{i}^{(j-1)},\mathrm{rowmax}(S_{i}^{(j)}))\in\mathbb{R}^{B_{r}},$ (34) $\displaystyle\tilde{S}_{i}^{(j)}$ $\displaystyle=$ $\displaystyle\exp(S_{i}^{(j)}-m_{i}^{(j)})\in\mathbb{R}^{B_{r}\times B_{c}},$ (35) $\displaystyle l_{i}^{(j)}$ $\displaystyle=$ $\displaystyle e^{m_{i}^{(j-1)}-m_{i}^{(j)}}l_{i}^{(j-1)}+\mathrm{rowsum}(\tilde{S}_{i}^{(j)})\in\mathbb{R}^{B_{r}}.$ (36) Subsequently, the algorithm re-applies the causal mask to zero out the entries it requires, and the running output $O_{i}^{(j)}$ is updated as: $O_{i}^{(j)}=\mathrm{diag}(e^{m_{i}^{(j)}-m_{i}^{(j-1)}})^{-1}O_{i}^{(j-1)}+(\tilde{S}_{i}^{(j)}\odot C_{i}^{(j)})V_{j}.$ (37) ### D.3 A Typical Implementation of FlashAttention 2 Algorithm 1 Forward pass 0: Matrices $\mathbf{Q},\mathbf{K},\mathbf{V}\in\mathbb{R}^{N\times d}$ and $\mathbf{C},\mathbf{P}\in\mathbb{R}^{N\times N}$ in HBM, block sizes $B_{c}$, $B_{r}$. 1: Divide $\mathbf{Q}$ into $T_{r}=\left\lceil\frac{N}{B_{r}}\right\rceil$ blocks $\mathbf{Q}_{1},\dots,\mathbf{Q}_{T_{r}}$ of size $B_{r}\times d$ each, and divide $\mathbf{K},\mathbf{V}$ into $T_{c}=\left\lceil\frac{N}{B_{c}}\right\rceil$ blocks $\mathbf{K}_{1},\dots,\mathbf{K}_{T_{c}}$ and $\mathbf{V}_{1},\dots,\mathbf{V}_{T_{c}}$, of size $B_{c}\times d$ each. Divide $\mathbf{C},\mathbf{P}$ into $T_{r}\times T_{c}$ blocks $\mathbf{C}_{i}^{(j)}$ and $\mathbf{P}_{i}^{(j)}$ ($1\leq i\leq T_{r}$, $1\leq j\leq T_{c}$), of size $B_{r}\times B_{c}$ each. 2: Divide the output $\mathbf{O}\in\mathbb{R}^{N\times d}$ into $T_{r}$ blocks $\mathbf{O}_{1},\dots,\mathbf{O}_{T_{r}}$ of size $B_{r}\times d$ each, and divide the logsumexp $L$ into $T_{r}$ blocks $L_{1},\dots,L_{T_{r}}$ of size $B_{r}$ each. 3: for $1\leq i\leq T_{r}$ do 4: Load $\mathbf{Q}_{i}$ from HBM to on-chip SRAM.
5: On chip, initialize $\mathbf{O}_{i}^{(0)}=(0)_{B_{r}\times d}\in\mathbb{R}^{B_{r}\times d},\ell_{i}^{(0)}=(0)_{B_{r}}\in\mathbb{R}^{B_{r}},m_{i}^{(0)}=(-\infty)_{B_{r}}\in\mathbb{R}^{B_{r}}$. 6: for $1\leq j\leq T_{c}$ do 7: Load $\mathbf{K}_{j},\mathbf{V}_{j},\mathbf{C}_{i}^{(j)},\mathbf{P}_{i}^{(j)}$ from HBM to on-chip SRAM. 8: On chip, compute $\mathbf{S}_{i}^{(j)}=\mathbf{Q}_{i}\mathbf{K}_{j}^{T}{\color[rgb]{.75,0,.25}\odot\mathbf{C}_{i}^{(j)}+\mathbf{P}_{i}^{(j)}}\in\mathbb{R}^{B_{r}\times B_{c}}$. 9: On chip, compute $m_{i}^{(j)}=\mathrm{max}(m_{i}^{(j-1)},\mathrm{rowmax}(\mathbf{S}_{i}^{(j)}))\in\mathbb{R}^{B_{r}}$, $\tilde{\mathbf{P}}_{i}^{(j)}=\exp(\mathbf{S}_{i}^{(j)}-m_{i}^{(j)})\in\mathbb{R}^{B_{r}\times B_{c}}$ (pointwise), $\ell_{i}^{(j)}=e^{m_{i}^{(j-1)}-m_{i}^{(j)}}\ell_{i}^{(j-1)}+\mathrm{rowsum}(\tilde{\mathbf{P}}_{i}^{(j)})\in\mathbb{R}^{B_{r}}$. 10: On chip, compute $\tilde{\mathbf{D}}_{i}^{(j)}=\tilde{\mathbf{P}}_{i}^{(j)}\odot\mathbf{C}_{i}^{(j)}$. 11: On chip, compute $\mathbf{O}_{i}^{(j)}=\mathrm{diag}(e^{m_{i}^{(j)}-m_{i}^{(j-1)}})^{-1}\mathbf{O}_{i}^{(j-1)}+{\color[rgb]{.75,0,.25}\tilde{\mathbf{D}}_{i}^{(j)}}\mathbf{V}_{j}$. 12: end for 13: On chip, compute $\mathbf{O}_{i}=\mathrm{diag}(\ell_{i}^{(T_{c})})^{-1}\mathbf{O}_{i}^{(T_{c})}$. 14: On chip, compute $L_{i}=m_{i}^{(T_{c})}+\log(\ell_{i}^{(T_{c})})$. 15: Write $\mathbf{O}_{i}$ to HBM as the $i$-th block of $\mathbf{O}$. 16: Write $L_{i}$ to HBM as the $i$-th block of $L$. 17: end for 18: Return the output $\mathbf{O}$ and the logsumexp $L$. (The parts that differ from the original algorithm are marked in purple.)
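To make the blockwise recurrence concrete, the following is a plain-NumPy sketch of the forward pass. It is our own single-head prototype, not the fused CUDA kernel; $N$, the block sizes, and $\gamma$ are arbitrary toy values, and the result is checked against a reference that materializes the full score matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, Br, Bc, gamma = 8, 4, 2, 4, 0.5
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

C = np.tril(np.ones((N, N)))                       # causal mask
P = np.array([[-j * gamma if j > i else 0.0        # StableMask pseudo scores
               for j in range(N)] for i in range(N)])

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Reference: materialize the full N x N score matrix.
ref = (softmax(Q @ K.T * C + P) * C) @ V

# Blockwise forward pass, never materializing the N x N scores.
O = np.zeros((N, d))
for i0 in range(0, N, Br):
    Qi = Q[i0:i0 + Br]
    Oi = np.zeros((Br, d))
    li = np.zeros(Br)
    mi = np.full(Br, -np.inf)
    for j0 in range(0, N, Bc):
        Kj, Vj = K[j0:j0 + Bc], V[j0:j0 + Bc]
        Cij, Pij = C[i0:i0 + Br, j0:j0 + Bc], P[i0:i0 + Br, j0:j0 + Bc]
        Sij = Qi @ Kj.T * Cij + Pij                # line 8: StableMask scores
        m_new = np.maximum(mi, Sij.max(axis=1))    # line 9: running row max
        Ptil = np.exp(Sij - m_new[:, None])
        li = np.exp(mi - m_new) * li + Ptil.sum(axis=1)
        Dtil = Ptil * Cij                          # line 10: re-apply mask
        Oi = np.exp(mi - m_new)[:, None] * Oi + Dtil @ Vj  # line 11
        mi = m_new
    O[i0:i0 + Br] = Oi / li[:, None]               # line 13: normalize
assert np.allclose(O, ref)
```

Note that the normalizer $\ell$ sums over the pseudo entries as well, while the re-masked $\tilde{\mathbf{D}}$ zeroes them out of the numerator, which is exactly the StableMask softmax.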
## Appendix E Training Details

| Parameters | 71M | 160M | 400M | 1.4B |
|---|---|---|---|---|
| Embedding Size | 512 | 768 | 1024 | 2048 |
| Hidden Size (Attention) | 512 | 1536 | 2048 | 4096 |
| Hidden Size (FFN) | 2048 | 3072 | 2048 | 8192 |
| Expanding Rate (FFN) | 4 | 4 | 2 | 4 |
| Activation Function | SwishGeLU | SwishGeLU | SwishGeLU | SwishGeLU |
| Normalization Type | RMSNorm | RMSNorm | RMSNorm | RMSNorm |
| Positional Encoding | RoPE / ALiBi | RoPE / ALiBi | RoPE / ALiBi | RoPE |
| Tokenizer | GPT2 Tokenizer | GPT2 Tokenizer | GPT2 Tokenizer | GPT2 Tokenizer |
| Vocabulary Size | 50257 | 50257 | 50257 | 50257 |
| # of Attention Heads | 8 | 12 | 16 | 16 |
| # of Layers | 6 | 12 | 16 | 24 |

Table 5: Model hyperparameters for WikiText-103 with ALiBi and RoPE positional encoding

| Hyperparameter | WikiText-103 | MiniPile | The Pile |
|---|---|---|---|
| Sequence Length | 512 | 512 / 1024 | 1024 |
| Batch Size | 64 | 128 | 128 |
| Tokens per Batch | 32768 | 65536 / 131072 | 131072 |
| Total Steps | 50k | 22k per epoch, 2 epochs | 200k |
| Warmup Steps | 4k | 4k | 4k |
| Beginning Learning Rate | 1e-6 | 1e-6 | 5e-6 |
| Peak Learning Rate | 6e-4 | 4e-4 | 2e-4 |
| Learning Rate Decay | Linear | Linear | Cosine |
| Optimizer | AdamW | AdamW | AdamW |
| Adam $\epsilon$ | $1\times 10^{-8}$ | $1\times 10^{-8}$ | $1\times 10^{-8}$ |
| Adam $\beta_{1}$ | 0.9 | 0.9 | 0.9 |
| Adam $\beta_{2}$ | 0.98 | 0.98 | 0.98 |
| Hidden Dropout | 0 | 0 | 0 |
| GELU Dropout | 0 | 0 | 0 |
| Attention Dropout (if needed) | 0 | 0 | 0 |
| Weight Decay | 0.01 | 0.1 | 0.1 |
| Gradient Clipping Value | 1 | 1 | 1 |
| Head-wise $\gamma$ | True | True | True |
| $\gamma$ Value | 0.5 | 0.5 | 0.5 |

Table 6: Training hyperparameters for WikiText-103, MiniPile, and the Pile

## Appendix F Visualization of Attention Heads with StableMask

See pages 1, 3, 4, 5, 6, 7, and 8 of image/output_score.pdf.
# TweetNERD - End to End Entity Linking Benchmark for Tweets

Shubhanshu Mishra, Aman Saini, Raheleh Makki, Sneha Mehta, Aria Haghighi, Ali Mollahosseini
Twitter, Inc.
<EMAIL_ADDRESS> <EMAIL_ADDRESS>
Corresponding Author

###### Abstract

Named Entity Recognition and Disambiguation (NERD) systems are foundational for information retrieval, question answering, event detection, and other natural language processing (NLP) applications. We introduce TweetNERD, a dataset of 340K+ Tweets across 2010-2021, for benchmarking NERD systems on Tweets. This is the largest and most temporally diverse open-sourced benchmark dataset for NERD on Tweets and can be used to facilitate research in this area. We describe the evaluation setup with TweetNERD for three NERD tasks: Named Entity Recognition (NER), Entity Linking with True Spans (EL), and End to End Entity Linking (End2End); and provide the performance of existing publicly available methods on specific TweetNERD splits. TweetNERD is available at https://doi.org/10.5281/zenodo.6617192 under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (Mishra et al., 2022). Check out more details at https://github.com/twitter-research/TweetNERD.

## 1 Introduction

Figure 1: Comparison with existing Tweet entity linking datasets

Named Entity Recognition and Disambiguation (NERD) (Mihalcea and Csomai, 2007; Cucerzan, 2007; Derczynski et al., 2015; Kulkarni et al., 2009) is the task of identifying important mentions, or Named Entities, in text and linking those mentions to corresponding entities in an underlying Knowledge Base (KB). The KB can be any public knowledge repository, such as Wikipedia, or a custom knowledge graph specific to the domain. NERD for social media text (Derczynski et al., 2015; Mishra and Diesner, 2016; Mishra, 2019), in particular Tweets, is challenging because of the short textual context owing to the 280-character limit of Tweets.
There exist multiple datasets (Derczynski et al., 2015; Mishra, 2019; Dredze et al., 2016; Derczynski et al., 2016; Spina et al., 2012; Rizzo et al., 2016; Yang and Chang, 2015; Fang and Chang, 2014; Locke, 2009; Meij et al., 2012; Gorrell et al., 2015) for developing and evaluating NERD methods on Tweets. However, these datasets have a limited set of Tweets, are temporally biased (i.e., Tweets are from a short time period; more details in section C.1), or are no longer valid because of deleted Tweets (see Table 3). In this work, we introduce a new dataset called TweetNERD which consists of 340K+ Tweets annotated with entity mentions linked to entities in Wikidata (a large-scale, multilingual, publicly editable KB). TweetNERD addresses the issues in existing NERD datasets for Tweets by including Tweets from a broader time window, applying consistent annotations, and providing the largest collection of annotated Tweets for NERD tasks. Figure 1 compares TweetNERD with existing Tweet entity linking datasets, demonstrating its increased scale. Furthermore, we describe two splits of the dataset which we use for evaluation. These splits, called TweetNERD-OOD and TweetNERD-Academic, allow assessing out-of-domain (OOD) generalization and temporal generalization, respectively. The TweetNERD-OOD split consists of Tweets from a shorter time frame that are over-sampled for harder-to-disambiguate entities; it is useful for assessing out-of-domain performance. Conversely, the TweetNERD-Academic split is a temporally diverse dataset of non-deleted Tweets from a collection of existing academic benchmarks that have been re-annotated with the new annotation guidelines. TweetNERD has already been used by Hebert et al. (2022) for evaluating dense retrieval for candidate generation in the presence of noisy NER spans.
TweetNERD should also foster research in better utilizing the social graph context of Tweets (Kulkarni et al., 2021; Li et al., 2022) to improve NERD task performance, and in assessing bias in NERD systems (Mishra et al., 2020). TweetNERD is available at https://doi.org/10.5281/zenodo.6617192 under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (Mishra et al., 2022). Check out more details at https://github.com/twitter-research/TweetNERD.

### 1.1 Related works

Named Entity Recognition and Disambiguation (NERD) is a prominent information extraction task. There exist multiple datasets (Derczynski et al., 2015; Mishra, 2019; Dredze et al., 2016; Derczynski et al., 2016; Spina et al., 2012; Rizzo et al., 2016; Yang and Chang, 2015; Fang and Chang, 2014; Locke, 2009; Meij et al., 2012; Gorrell et al., 2015) for Named Entity Recognition (NER), NERD, Cross-Document Co-reference Resolution (CDCR), or Entity Relevance. Most datasets were created by sampling Tweets from a given time period and then annotating them either for NER alone or for NERD. The annotations also differ by linking to either DBPedia (Gorrell et al., 2015), Wikipedia (Rizzo et al., 2016), or Freebase (Fang and Chang, 2014) as the knowledge base. Our work closely follows the annotation process of Gorrell et al. (2015) by linking entities using a crowd-sourcing platform and performing both NER and Entity Disambiguation tasks. Our data collection process differs in sampling Tweets from a diverse temporal window and including a more diverse set of entities (see section 4.1). ## 2 Terminology We use the following terminology throughout the rest of the paper: (1) knowledge base (KB): the underlying knowledge base of entities; we use Wikidata (Vrandečić and Krötzsch, 2014). (2) document id ($id$): id of the document with entities, and optional meta-data e.g.
date; (3) mention ($m$): a phrase in document $d$ identified by a start offset $s$ and end offset $e$; (4) start ($s$): starting offset of mention $m$. The offset is dependent on the encoding of the data (TweetNERD uses byte offsets for the text encoded using utf-16-be); (5) end ($e$): ending offset of mention $m$ in the same format as ($s$), such that $len(m)=e-s$; (6) NIL: If a mention can’t be linked to any entity in KB; (7) entity id ($eid$): Linked entity Id in KB or NIL; and (8) candidate set ($C$): Possible candidates for $m$ in KB and NIL. ## 3 Annotation Setup #### Annotators We leveraged a team of trained in-house annotators who utilized a custom annotation interface to annotate the Tweets. A pool of annotators was trained with detailed labeling guidelines and multiple rounds of training iterations before actually starting to annotate the Tweets in TweetNERD. The guidelines included examples of Tweets with linked entities, and instructions on how to disambiguate between potential candidates using the Tweet context, media, time and other factors. A much simplified version of the interface is shown for the purpose of illustration (see Figure 2). The annotators were required to pass a qualification quiz demonstrating their understanding of the task to be eligible as an annotator. #### Annotation Task The annotation task required identifying all mentions $m$ in a Tweet and assigning a Wikidata ID, $eid$, for each $m$. The annotators had to highlight the mention and then use Wikidata search interface to find the correct $eid$ (e.g. $m$=Twitter and $eid$=Q918). Annotators could edit the search phrase to differ from $m$ to correct for spelling errors, or expand it with additional words in order to find a suitable entity. If there is no valid Wikidata ID for $m$, annotators assigned $eid$=NOT FOUND. If annotators thought that the Tweet context is not clear enough to disambiguate between the returned candidates they assigned $eid$=AMBIGUOUS. 
The Wikidata ID for a given Wikipedia page is obtained by clicking on the Wikidata item link located on the left panel of the Wikipedia page. TweetNERD annotation was done in batches, where around 25K Tweet ids for each batch were sampled via the setup described in the next section. We annotated a total of 14 batches for the TweetNERD dataset. #### Eligible Mentions Annotators were instructed to select mentions $m$ in a Tweet which refer to the longest phrase corresponding to a named entity that can be identified as a Person, Organization, Place, etc. (see Table 1 for the full list and details). A mention can also be contained within a hashtag if it corresponds to an entity, e.g. #FIFA. #### Correct Candidate Annotators were instructed to prefer an $eid$ which is likely to have a Wikipedia page. The most appropriate $eid$ could depend on the following: (a) the full text of the Tweet, (b) the URL or media attached to the Tweet, (c) the temporal context of the Tweet (annotators can search for $m$ on Twitter around the same date as the Tweet), (d) the Tweet thread it is part of (i.e. which Tweet it is replying to and the list of Tweets which replied to it), and (e) the user of the Tweet. #### Annotation Aggregation Each Tweet was annotated by three annotators, and $(m,eid)$ pairs that were selected by at least two annotators were considered gold annotations. We include all annotations (including non-gold) as part of the final dataset to support additional analysis (e.g. studying annotation noise).

Id=1: I love [Twitter][ENTITY] Candidates: Q918, NOT FOUND, AMBIGUOUS
Id=2: [Paris][ENTITY] is regarded as the world’s fashion capital Candidates: Q90, Q79917, NOT FOUND, AMBIGUOUS
Id=3: [Anil][ENTITY] is playing Candidates: NOT FOUND, AMBIGUOUS

Figure 2: Simplified version of the annotation interface. Selected mentions and entities are in bold. An important thing to note is that the annotators are shown only the Tweet text.
They use the functionality provided in the interface to query the eligible knowledge base candidates. Each annotator can select multiple mentions in a Tweet but link each mention ($m$) to only a single entity id ($eid$). #### Difficulty of the Annotation Task Entity linking is inherently a difficult task due to name variations (multiple surface forms for the same entity) and entity ambiguity (multiple entities for the same mention) (Shen et al., 2014). In addition, depending on the type of application and the coverage of the underlying knowledge base, this task can become challenging even for humans. For example, we asked the annotators to link a mention to the most specific entity in the knowledge base (i.e. Wikidata); this requirement renders all other candidate entities (even if close) for that mention incorrect. For instance, if a Tweet is about the Academy Awards this year (2022), we only consider Q66707597 (94th Academy Awards) as the correct entity and not Q19020 (Academy Awards), while Q19020 is the correct entity if the Tweet is about the Academy Awards in general. While this allows for temporally sensitive annotations, it makes the task more difficult than most classification tasks, leading to a negative impact on inter-annotator agreement (see discussion in section 4.4).
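The majority-vote aggregation described in section 3 can be sketched as follows. This is a minimal illustration of our own: the function name, example Tweet, and spans are hypothetical, and we use simple character offsets here for readability, whereas TweetNERD itself records utf-16-be byte offsets:

```python
from collections import Counter

def gold_annotations(annotations, min_votes=2):
    """Aggregate per-annotator (start, end, eid) tuples for one Tweet,
    keeping pairs selected by at least `min_votes` of the annotators."""
    votes = Counter(pair for annotator in annotations for pair in set(annotator))
    return {pair for pair, n in votes.items() if n >= min_votes}

# Hypothetical example: three annotators label the Tweet "I love Twitter".
a1 = [(7, 14, "Q918")]       # two annotators link "Twitter" to Q918
a2 = [(7, 14, "Q918")]
a3 = [(7, 14, "AMBIGUOUS")]  # one annotator marks it ambiguous

print(gold_annotations([a1, a2, a3]))  # {(7, 14, 'Q918')}
```

Because all annotations (including non-gold ones) ship with the dataset, the same tally can be reused with a different `min_votes` threshold to study annotation noise.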
Table 1: Examples of the types of entities to identify in the text

| Type | Examples |
|---|---|
| Person | Politicians, sports players, artists, celebrities, fictional characters, scientists, singers, musicians, journalists, social media celebrities, and others. Examples: Kanye West, Sachin Tendulkar, Donald Trump, Harry Potter, Jon Snow |
| Place | Countries, cities, monuments, parks, rivers, and others. Examples: Paris, Nigeria, Statue of Liberty |
| Organization | Companies, governments, NGOs, social movements, music bands, sports teams, social organizations, volunteer organizations, and others. Examples: Backstreet Boys, Los Angeles Lakers, Black Lives Matter |
| Products | Websites, software, applications, video games, technology gadgets, devices, and others. Examples: PlayStation, iPhone, GoFundMe, Roblox |
| Works of Art | Movies, albums, books, comics, video games, TV shows, social media videos, and others. Examples: Friends, The Office, Lupin |
| Scientific Concepts | Names of diseases, drugs, names of algorithms, scientific methods and techniques, scientific names of organisms, names of disasters, and others. Examples: COVID-19, SARS-COV19, Hurricane Katrina, Cyclone Idai |

## 4 Tweet End To End Entity Linking Dataset

### 4.1 Sampling

TweetNERD consists of English Tweets, most of which were created between Jan 2020 and Dec 2021. Tweet language was identified using the Twitter Public API endpoint. Additionally, we discarded Tweets which were NSFW (Not Safe For Work), too short ($\leq 10$ space-separated tokens), or included $\geq 2$ URLs, $\geq 2$ user mentions, or $\geq 3$ hashtags. Since the dataset was annotated in batches, we were able to improve our sampling technique with each batch. Our initial approach of upsampling Tweets with high Retweets and likes (Tweet-actions) resulted in a large proportion of Tweets with empty annotations. To mitigate this, we experimented with approaches which select Tweets that are more likely to contain an entity.
Some of these approaches included: (a) using in-house NER models (Mishra et al., 2020; Eskander et al., 2020) to check for NER mentions, (b) using phrase matching techniques (Mishra and Diesner, 2016) to match phrases from Tweet text with Wikidata entity titles, (c) sampling based on phrase entropy to detect difficult phrases (described in the next paragraph), (d) overall Tweet favorite based sampling, and (e) search page click based sampling. Within each approach, we perform stratified sampling to select Tweets equally from each sampling bucket. The full TweetNERD dataset is comprised of different proportions of each of these buckets. #### Entropy based sampling We wanted to include Tweets containing phrases representing a diverse set of Wikidata entities in terms of both entity popularity and disambiguation difficulty. We used aggregate Wikipedia page views ($eid_{views}$) across all language pages of a Wikidata entity as a proxy for its popularity. The phrase entropy was then defined as $H=-\sum p\log(p)$ using the probability $p=p(eid|m)=eid_{views}/\sum{eid_{views}}$. Each phrase is then classified as a high, medium, or low entropy phrase using the entropy score distribution. Finally, we sample an equal number of Tweets from each phrase entropy bucket. ### 4.2 Data Splits While TweetNERD consists of 340K+ Tweets, we highlight two explicit data splits, namely TweetNERD-OOD and TweetNERD-Academic, which have been used as test sets for evaluation in this paper. The purpose of these two splits is to measure out-of-domain performance and temporal generalization, respectively. #### TweetNERD-OOD It is a subset of 25K Tweets used for evaluating existing named entity recognition and linking models. TweetNERD-OOD is sampled in equal proportion based on the entropy of the contained NER mentions.
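The phrase-entropy score used for bucketing (section 4.1 and here) can be sketched as below. The entity IDs and view counts in the example are hypothetical, and the function applies the standard Shannon entropy (with the conventional minus sign) to the page-view-derived candidate distribution:

```python
import math

def phrase_entropy(eid_views):
    """Shannon entropy of p(eid | m), where p is proportional to the
    aggregate Wikipedia page views of each candidate entity."""
    total = sum(eid_views.values())
    probs = [v / total for v in eid_views.values() if v > 0]
    return -sum(p * math.log(p) for p in probs)

# Hypothetical view counts for two phrases.
paris = {"Q90": 900_000, "Q79917": 50_000, "Q167646": 50_000}  # city dominates
georgia = {"Q230": 400_000, "Q1428": 350_000}                  # country vs. state

assert phrase_entropy({"Q918": 1_000}) == 0.0          # unambiguous phrase
assert phrase_entropy(paris) < phrase_entropy(georgia)  # harder to disambiguate
```

A dominant candidate yields low entropy (easy to disambiguate), while several near-equally popular candidates yield high entropy, which is what drives the difficulty buckets.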
Mentions with few, less diverse candidates fall into the low entropy buckets, whereas mentions with many, highly diverse candidates fall into the high entropy buckets. We first sample Tweets into high, medium, and low entropy mention buckets, and then perform stratified sampling based on Tweet actions to divide these buckets into sub-buckets. This approach helps us evaluate all models against a variety of Tweets with varying levels of difficulty and popularity. #### TweetNERD-Academic It is a subset of 30K Tweets to benchmark entity linking systems on Tweets already sampled in existing academic benchmarks (mostly from Derczynski et al., 2015; Mishra, 2019). We identified all the Tweet ids across existing NERD, NER, NED, and syntactic NLP task datasets for Tweets and hydrated these ids using the public Twitter API. We ended up with 30,119 Tweets across these datasets which are still available (see Table 3). It is important to note that these Tweets were annotated again using our latest annotation setup to comply with the TweetNERD guidelines. Our intention for including this split is to add a layer of temporally diverse and already benchmarked datasets. #### Re-annotation of academic benchmarks in TweetNERD-Academic We re-annotate the academic benchmark datasets in TweetNERD-Academic using our guidelines and setup to ensure consistency of these annotations with the rest of our dataset. This choice was made as opposed to including the existing annotations from these datasets for the following reasons. First, not all of these datasets are annotated for the end to end NERD task, i.e. some only have NER and some only have NED annotations. Second, the knowledge base used for each NERD annotation is not Wikidata; instead, some datasets link to DBpedia and some to English Wikipedia. Third, the notion of entities to annotate varies across the datasets and would require substantial reconciliation to make a consistent benchmark dataset, e.g. Rizzo et al.
(2016) annotates hashtags and user mentions as entities, but TweetNERD does not allow user mentions to be tagged as entities. Finally, many of the Tweets (20-40%, see Table 3) from these datasets are no longer available via the public API; however, those which are still available are likely to remain available for a longer duration, which makes this benchmark more stable. We show some examples of annotations in TweetNERD-Academic versus existing academic benchmarks in Table 2. A detailed description of each of these datasets is provided in section C.1. Finally, we observed high overlap between TweetNERD-Academic and the academic datasets. E.g., using Yodie as the closest academic dataset in terms of our annotation guidelines, we found that TweetNERD-Academic matches 77% of Yodie mention-level annotations and 87% of mention annotations at the Tweet level. At the mention-entity level, TweetNERD-Academic matches 65% of Yodie annotations and 80% at the Tweet level (we map DBPedia entity annotations in Yodie to their Wikidata IDs).

Table 2: Annotations in TweetNERD-Academic versus annotations in existing benchmarks.

Text: Press release: "Will England fans be hit by penalties on their next energy bill?" Please make it stop.
Yodie: England (Dbp:England); TweetNERD: England (Q21)

Text: #DMG #GILDEMEISTER presents the new GILDEMEISTER energy monitor, read more at [URL].
Yodie: GILDEMEISTER (6, 18, Dbp:Gildemeister_AG), GILDEMEISTER (36, 48, Dbp:Gildemeister_AG); TweetNERD: GILDEMEISTER (6, 18, Q100151808), GILDEMEISTER (36, 48, Q100151808)

Text: Wiz Khalifa went suit shopping with Max Headroom. #grammys #80s [URL].
TGX: Max Headroom (NA, NA, NA); TweetNERD: Wiz Khalifa (0, 11, Q117139), Max Headroom (36, 48, Q1912691)

#### Flexibility for Further Analysis

As seen above, we have identified two subsets of the dataset (TweetNERD-OOD and TweetNERD-Academic) which we use as test sets for evaluation in this paper.
While these two datasets can be used for standard benchmarking on tasks similar to those presented in this paper, we would like to emphasize the flexibility of TweetNERD in evaluating a wide range of tasks. For example, one could split the full TweetNERD dataset temporally to test existing models for temporal generalization, or split TweetNERD based on seen and unseen mentions and entities to assess robustness. TweetNERD can also be randomly split into train, validation, and test splits to evaluate the in-domain performance of models. To align with traditional machine learning benchmark formats, we also provide canonical train, validation, and test splits of the data, created by extracting random samples of 25K Tweets for test and 5K for validation from TweetNERD, excluding TweetNERD-OOD and TweetNERD-Academic. While we do not report any results on this test split in this paper, we encourage researchers to use these splits along with TweetNERD-OOD and TweetNERD-Academic to ensure reproducibility. #### Adapting to Temporal Dynamics of Knowledge Bases Knowledge Bases are dynamic: new entities are added over time, and since NERD datasets are not updated accordingly, there may be discrepancies in model evaluation with reference to a static NERD test set. This is a common limitation of entity linking evaluation. In TweetNERD this would only affect the NIL predictions, as opposed to the linking predictions. An entity which in 2014 was marked as NIL (because of its absence from Wikidata) may be marked correctly now. This can be addressed easily by factoring in the creation date of the entity in Wikidata: any entity whose creation date in Wikidata is after the Tweet date can be marked as NIL. This allows for temporal evaluation. ### 4.3 Data Statistics Table 3: Details of TweetNERD-Academic (the same Tweet could occur in multiple datasets).
Dataset | Tasks | Total Tweets | Found Tweets | Found % ---|---|---|---|--- Tgx (Dredze et al., 2016) | CDCR | 15,313 | 9,790 | 63.9 Broad (Derczynski et al., 2016) | NER | 8,633 | 6,913 | 80.1 Entity Profiling (Spina et al., 2012) | NER | 9,235 | 6,352 | 68.8 NEEL 2016 (Rizzo et al., 2016) | NERD | 9,289 | 2,336 | 25.1 NEEL v2 (Yang and Chang, 2015) | NERD | 3,503 | 2,089 | 59.6 Fang and Chang (2014) | NERD | 2,419 | 1,662 | 68.7 Twitter NEED (Locke, 2009) | NERD & IR | 2,501 | 1,549 | 61.9 Ark POS (Gimpel et al., 2011) | POS | 2,374 | 1,313 | 55.3 WikiD | NED | 1,000 | 504 | 50.4 WSDM2012 (Meij et al., 2012) | Relevance | 502 | 415 | 82.7 Yodie (Gorrell et al., 2015) | NERD | 411 | 288 | 70.1 Figure 3: Temporal frequency of Tweets in TweetNERD. The time period of TweetNERD-Academic is highlighted in grey. Table 4: Salient entities, mentions, and mention-entity pairs in the full TweetNERD dataset and its subset. Entity refers to $eid$, the linked Wikidata ID; Mention refers to $m$, the annotated phrase in the Tweet; and Mention-Entity refers to $(m,eid)$, a unique tuple of <mention, entity>. 
Full data set --- Mention Entity: Total: 356345, Unique: 166379 Head: "’grammys’ <Q630124>" (6272), "’mark lee’ <Q26689986>" (2341), "’aria’ <AMBIGUOUS>" (2103), "’whatsapp’ <Q1049511>" (1521), "’isabella’ <AMBIGUOUS>" (1260) Mid: "’david mabuza’ <Q1174142>" (2), "’neha sharma’ <Q863745>" (2) Tail: "’ian darke’ <Q5981359>" (1), "’antony perumbavoor’ <Q55604079>" (1), "’sansone’ <NOT FOUND>" (1), "’prairie state college’ <NOT FOUND>" (1), "’konga’ <NOT FOUND>" (1) Mention: Total: 356345, Unique: 143762 Head: ’grammys’ (7059), ’aria’ (2461), ’mark lee’ (2342), ’whatsapp’ (1602), ’isabella’ (1471) Mid: ’nam joo hyuk’ (2), ’sharpsburg’ (2) Tail: ’iain banks’ (1), ’michael odewale’ (1), ’chlorine cougs’ (1), ’rock your baby’ (1), ’georgia dome’ (1) Entity: Total: 356345, Unique: 90938 Head: ’NOT FOUND’ (59704), ’AMBIGUOUS’ (44752), ’Q630124’ (7886), ’Q26689986’ (2364), ’Q108112350’ (2094) Mid: ’Q9196194’ (2), ’Q331613’ (2) Tail: ’Q107362802’ (1), ’Q81101633’ (1), ’Q17361809’ (1), ’Q1177’ (1), ’Q741395’ (1) Without TweetNERD-Academic Mention Entity: Total: 312581, Unique: 159468 Head: "’mark lee’ <Q26689986>" (2341), "’aria’ <AMBIGUOUS>" (2103), "’whatsapp’ <Q1049511>" (1521), "’isabella’ <AMBIGUOUS>" (1260), "’tajin’ <Q3376620>" (1016) Mid: "’cannes2021’ <Q42369>" (2), "’zeynep’ <NOT FOUND>" (2) Tail: "’slave play’ <Q69387965>" (1), "’Prada’ <Q193136>" (1), "’gansu’ <Q42392>" (1), "’iowa state capitol’ <Q2977124>" (1), "’konga’ <NOT FOUND>" (1) Mention: Total: 312581, Unique: 137782 Head: ’aria’ (2461), ’mark lee’ (2342), ’whatsapp’ (1602), ’isabella’ (1471), ’matilda’ (1434) Mid: ’jamelia’ (2), ’mohammad rafi’ (2) Tail: ’petr yan’ (1), ’wooiyik’ (1), ’billie dove’ (1), ’bucks fizz’ (1), ’georgia dome’ (1) Entity: Total: 312581, Unique: 87430 Head: ’NOT FOUND’ (58678), ’AMBIGUOUS’ (44202), ’Q26689986’ (2364), ’Q108112350’ (2094), ’Q1049511’ (1554) Mid: ’Q1186977’ (2), ’Q983026’ (2) Tail: ’Q455833’ (1), ’Q3283342’ (1), ’Q17183770’ (1), ’Q7491877’ (1), ’Q30308127’ (1) 
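The head/mid/tail statistics above can be recomputed from the raw annotations with a simple frequency tally. A minimal sketch, assuming the annotations have been loaded as (mention, entity_id) tuples; the toy data below is illustrative, not taken from the dataset:

```python
from collections import Counter

# Toy (mention, entity_id) tuples; real values would come from TweetNERD rows.
annotations = [
    ("grammys", "Q630124"),
    ("grammys", "Q630124"),
    ("aria", "AMBIGUOUS"),
    ("whatsapp", "Q1049511"),
    ("ian darke", "Q5981359"),
]

# Counts at the three levels reported in Table 4.
mention_entity_counts = Counter(annotations)
mention_counts = Counter(m for m, _ in annotations)
entity_counts = Counter(e for _, e in annotations)

head = mention_entity_counts.most_common(1)  # most frequent mention-entity pair
tail = [p for p, c in mention_entity_counts.items() if c == 1]  # singletons
```

The same tally, run over the full annotation file, reproduces the totals and unique counts shown in Table 4.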
#### TweetNERD. TweetNERD consists of 340K unique Tweets that collectively contain a total of 356K mentions linked to 90K unique entities. Of the 356K mentions, 251K are linked to non-NIL entities, and 104K to NIL entities. As can be observed in Figure 1, TweetNERD is the largest dataset compared to existing benchmark datasets for Tweet entity linking. More details about the salient mentions, entities, and mention-entity pairs in TweetNERD can be found in Table 4. #### Temporal Distribution of Dataset. Our dataset consists of 340K Tweets spread across a period of 12 years, from 2010 to 2021. This includes a smaller but temporally diverse subset of Tweets from existing academic benchmarks, re-annotated using our guidelines. If we remove the academic benchmarks, the resulting dataset consists of 310K Tweets spread from 2020-01 to 2021-12. TweetNERD includes a non-uniform sampling across time. ### 4.4 Inter-annotator agreement #### Limitations of current inter-annotator agreement measures for NERD tasks All Tweets in TweetNERD are annotated by three annotators. For classification tasks, Cohen's Kappa (McHugh, 2012) is considered a standard measure of inter-annotator agreement (IAA). However, for NERD tasks, Kappa is not the most relevant measure, as noted in multiple studies (Hripcsak and Rothschild (2005); Grouin et al. (2011)). The main issue with Kappa is its requirement of negative classes, which are not known for NER and NERD tasks. Furthermore, NERD tasks involve a sequence of words, or in our case offsets in text, making the number of items variable for each text. A workaround is to use Kappa at the token level. However, this results in additional issues. First, annotations are done at the Tweet level instead of the token level, and for our task tokens will depend on the tokenizer used. Second, token-level annotation leads to an abundance of "O" tags for NER, which will overwhelm the Kappa statistic. In Derczynski et al. 
(2016), the evaluation is done using the F1 measure between the annotations of two annotators. This is reasonable when a fixed set of annotators annotates all the Tweets. However, this is not possible for TweetNERD, as the annotations were collected through a crowd-sourcing system where different sets of annotators may annotate different Tweets. Hence, in our case we calculate agreement as the agreement among the annotators assigned to each Tweet. #### TweetNERD NERD agreement We compute inter-annotator agreement at the mention $m$ and mention-entity $(m,eid)$ levels. 69% of mentions have majority ($\geq 2$) agreement, of which 38% have agreement from all three annotators. 17% of mention-entities have 100% agreement across all three annotators, 41% have majority ($\geq 2$) agreement, and 59% have only a single annotator. 40% of mention-entities in TweetNERD-OOD and 57% in TweetNERD-Academic have majority agreement. If we consolidate AMBIGUOUS and NOT FOUND $eid$s as NIL, the majority agreement goes up to 47%. At the Tweet level, 30% of Tweets have majority agreement across all annotated mention-entities. These agreement scores highlight the difficulty and ambiguity of the end-to-end entity linking annotation task, as described in Section 3. While it is possible to resolve some of these ambiguities using a heuristic, we release the dataset in its current format to encourage research in annotation consolidation and evaluation using these annotations. Although we use majority agreement on mention-entities as our gold dataset for all evaluations described later, our released dataset contains non-majority annotations to enable additional research in this domain. ### 4.5 TweetNERD Data Format We release TweetNERD in a non-tokenized format. TweetNERD consists of only Tweet IDs and our annotations, as suggested by the public Twitter API222https://developer.twitter.com/en/docs/twitter-api. 
Each TweetNERD file consists of Tweet ids, start and end offsets, mention phrase, linked entity, and annotator agreement score (see Figure 4). We provide details in Appendix A on how to convert this format into token label format suitable for training and evaluating NER systems. All mentions are untyped. Id | Start | End | Mention | Entity | Score ---|---|---|---|---|--- 1 | 7 | 14 | Twitter | Q918 | 3 2 | 0 | 5 | Paris | Q90 | 3 3 | 0 | 4 | Anil | AMB. | 2 Figure 4: Data Format. Sample Tweets from Figure 2 to illustrate the data format. ## 5 Evaluation on TweetNERD Table 5: Evaluating TweetNERD-OOD and TweetNERD-Academic using existing systems. Model | OOD | Academic ---|---|--- Spacy | 0.377 | 0.454 StanzaNLP | 0.421 | 0.503 SocialMediaIE | 0.153 | 0.245 BERTweet WNUT17 | 0.278 | 0.46 TwitterNER | 0.424 | 0.522 AllenNLP | 0.454 | 0.552 (a) NER strong_mention_match F1 scores. Model | entity_match | strong_all_match ---|---|--- | OOD | Academic | OOD | Academic GENRE | 0.469 | 0.636 | 0.39 | 0.624 REL | 0.463 | 0.614 | 0.387 | 0.56 Lookup | 0.621 | 0.645 | 0.584 | 0.617 (b) Entity Linking given true spans (EL) F1 scores. Model | entity_match | strong_all_match ---|---|--- | OOD | Academic | OOD | Academic DBpedia | 0.292 | 0.399 | 0.231 | 0.347 NLAI | 0.522 | 0.568 | 0.313 | 0.494 TAGME | 0.402 | 0.431 | 0.293 | 0.381 REL | 0.344 | 0.484 | 0.27 | 0.444 GENRE333Using GENRE end-to-end entity linking model for Table 5-c and entity disambiguation model for Table 5-b. Evaluation scores are after removing a few Tweets from the gold set for which the GENRE model fails. Not removing these Tweets and simply returning Null for GENRE only makes a difference in the third decimal point. | 0.307 | 0.458 | 0.223 | 0.379 (c) End to End Entity Linking (End2End) F1 scores. We use neleval444https://neleval.readthedocs.io/ library for evaluating various publicly available systems on TweetNERD. For our evaluations we always map NOT FOUND and AMBIGUOUS to NIL. 
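The mapping of NOT FOUND and AMBIGUOUS to NIL can be applied while loading the annotations. A minimal sketch in Python; the TSV serialization and lower-case column names here are assumptions for illustration, with the columns following the layout in Figure 4:

```python
import csv
import io

# Hypothetical TSV serialization of the sample rows shown in Figure 4.
raw = (
    "id\tstart\tend\tmention\tentity\tscore\n"
    "1\t7\t14\tTwitter\tQ918\t3\n"
    "2\t0\t5\tParis\tQ90\t3\n"
    "3\t0\t4\tAnil\tAMBIGUOUS\t2\n"
)

def load_annotations(f, nil_labels=("NOT FOUND", "AMBIGUOUS")):
    """Read TweetNERD-style rows, mapping NOT FOUND/AMBIGUOUS entities to NIL."""
    rows = []
    for r in csv.DictReader(f, delimiter="\t"):
        entity = "NIL" if r["entity"] in nil_labels else r["entity"]
        rows.append((r["id"], int(r["start"]), int(r["end"]),
                     r["mention"], entity, int(r["score"])))
    return rows

rows = load_annotations(io.StringIO(raw))
# rows[2] is ('3', 0, 4, 'Anil', 'NIL', 2)
```

With the NIL consolidation done at load time, the same rows can be passed unchanged to the evaluation described below.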
We describe the metrics and the evaluation setup below for the three NERD tasks: Named Entity Recognition (NER), Entity Linking with True Spans (EL), and End to End Entity Linking (End2End). #### Metrics We first describe the main metrics from neleval that are used for evaluation across the three sub-tasks defined above. strong_mention_match is a micro-averaged evaluation of entity mentions that is used for the NER task. This metric requires a start and end offset to be returned for the mention. For systems that don't provide offsets, we infer the offset in the original text by finding the first occurrence of the identified mention text. strong_all_match is a micro-averaged link evaluation of all mention-entities, whereas entity_match is a micro-averaged Tweet-level set-of-entities measure. For the EL and End2End tasks, we use strong_all_match and entity_match as evaluation metrics. entity_match is more robust to offset mismatches, whereas strong_all_match requires a strict match. We report F1 scores for each metric described above. F1 is the harmonic mean of precision and recall. Please see Appendix B for details. ### 5.1 Performance of Existing Entity Linking Systems. In this section we benchmark existing systems on NERD tasks. We provide these benchmarks as baselines on TweetNERD. We also report numbers on a simple heuristic baseline using exact-match lookup and show that it performs well across our datasets. All experiments were run on a machine with a single NVIDIA A100 GPU and 32 GB RAM. We choose our baselines based on the availability of existing NER, EL, and End2End systems, favoring those systems which are widely used in the literature or are specifically built for social media or Tweet datasets. #### Named Entity Recognition. 
For NER we use StanzaNLP (Qi et al., 2020), Spacy555 https://spacy.io/api/entityrecognizer, AllenNLP (Peters et al., 2017), BERTweet (Nguyen et al., 2020)666https://huggingface.co/socialmediaie/bertweet-base_wnut17_ner fine-tuned for NER using WNUT17 (Derczynski et al., 2017), Twitter NER (Mishra and Diesner, 2016), and Social Media IE (Mishra, 2019, 2020a, 2020b). We chose these for their popularity and their relevance to social media data. See more details about the systems in Appendix Section D.1. We find that TwitterNER and AllenNLP perform the best on both the OOD and Academic datasets. We also find that many of the errors of the other systems come from incorrect mention start and end offset predictions, even when the mention string is correctly identified. #### Entity Linking given True Spans (EL). For EL we use GENRE (Generative ENtity REtrieval) (Cao et al., 2021), REL (Radboud Entity Linker) (van Hulst et al., 2020)777https://github.com/informagi/REL, and Lookup. Lookup is a simple heuristic-based system: given true mentions, we fetch the most likely entity based on popularity, defined via mention-candidate co-occurrence in Wikipedia. See details in Appendix Section D.2. We find that Lookup is a strong baseline for both datasets, whereas REL and GENRE come close in performance on the Academic subset. #### End to End Entity Linking (End2End). For End2End we use GENRE (Generative ENtity REtrieval) (Cao et al., 2021), REL (Radboud Entity Linker) (van Hulst et al., 2020), TagMe (Ferragina and Scaiella, 2012)888https://github.com/gammaliu/tagme, DBPedia Spotlight (Daiber et al., 2013), and the Natural Language AI (NLAI) API from Google 999https://cloud.google.com/natural-language. See details in Appendix Section D.3. We find that NLAI is a strong baseline for both datasets, whereas REL and GENRE come close in performance on the Academic subset. For the OOD subset, NLAI is the best performing model. ## 6 Limitations TweetNERD is the largest dataset for NERD tasks on Tweets. 
However, we highlight a few limitations. First, this is a non-static dataset, since some of the Tweets referenced by Tweet IDs in TweetNERD may become inaccessible at a later date. Our inclusion of TweetNERD-Academic may help mitigate this to some extent, as Tweets in that subset have survived for a longer duration. Second, because of the difficulty of our annotation task, the performance ceiling on TweetNERD is limited, as highlighted in the inter-annotator agreement section. However, this provides an opportunity to develop systems on such challenging benchmarks. Finally, the offset-based format of TweetNERD makes it challenging for traditional NER systems, which often rely on pre-tokenized text, to be benchmarked on it. Our suggestion of using neleval may help address that issue, but it requires systems to return offsets corresponding to the original text in TweetNERD, which may be challenging for traditional systems. The entity_match eval score is tokenization- and offset-agnostic but is only applicable for the end-to-end NERD task. ## 7 Conclusion We described TweetNERD, the largest dataset for NERD tasks on Tweets, and benchmarked popular NERD systems on its two subsets, TweetNERD-OOD and TweetNERD-Academic. We hope that the release of this large-scale dataset enables the research community to revisit and conduct further research into the problem of entity linking on social media. TweetNERD should foster the research and development of robust NERD models for social media which generalize across domains and time periods. TweetNERD is available at https://doi.org/10.5281/zenodo.6617192 under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (Mishra et al., 2022). Check out more details at https://github.com/twitter-research/TweetNERD. 
## Acknowledgments and Disclosure of Funding We would like to thank Twitter’s Human Computation team, specifically Iuliia Rivera and Marge Oreta, for their efforts in designing and setting up the annotation tasks and training the annotators, which were instrumental in generating the TweetNERD data. We would also like to extend our gratitude to the annotators who contributed to this task directly. ## References * Basave et al. (2014) Amparo Elizabeth Cano Basave, Giuseppe Rizzo, Andrea Varga, Matthew Rowe, Milan Stankovic, and Aba-Sah Dadzie. Making sense of microposts (#microposts2014) named entity extraction & linking challenge. In _4th Workshop on Making Sense of Microposts (#Microposts2014)_ , 2014. * Cao et al. (2021) Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. Autoregressive entity retrieval. In _International Conference on Learning Representations_ , 2021. URL https://openreview.net/forum?id=5k8F6UU39V. * Cornolti et al. (2013) Marco Cornolti, Paolo Ferragina, and Massimiliano Ciaramita. A framework for benchmarking entity-annotation systems. In _Proceedings of the 22nd International Conference on World Wide Web_ , WWW ’13, page 249–260, New York, NY, USA, 2013. Association for Computing Machinery. ISBN 9781450320351. doi: 10.1145/2488388.2488411. URL https://doi.org/10.1145/2488388.2488411. * Cucerzan (2007) Silviu Cucerzan. Large-scale named entity disambiguation based on Wikipedia data. In _Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)_ , pages 708–716, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/D07-1074. * Daiber et al. (2013) Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. Improving efficiency and accuracy in multilingual entity extraction. In _Proceedings of the 9th International Conference on Semantic Systems (I-Semantics)_ , 2013. * Derczynski et al. 
(2015) Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke van Erp, Genevieve Gorrell, Raphaël Troncy, Johann Petrak, and Kalina Bontcheva. Analysis of named entity recognition and linking for tweets. _Information Processing & Management_, 51(2):32–49, 2015. ISSN 0306-4573. doi: https://doi.org/10.1016/j.ipm.2014.10.006. URL https://www.sciencedirect.com/science/article/pii/S0306457314001034. * Derczynski et al. (2016) Leon Derczynski, Kalina Bontcheva, and Ian Roberts. Broad Twitter corpus: A diverse named entity recognition resource. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_ , pages 1169–1179, Osaka, Japan, December 2016. The COLING 2016 Organizing Committee. URL https://aclanthology.org/C16-1111. * Derczynski et al. (2017) Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. Results of the WNUT2017 shared task on novel and emerging entity recognition. In _Proceedings of the 3rd Workshop on Noisy User-generated Text_ , pages 140–147, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/W17-4418. URL https://aclanthology.org/W17-4418. * Dredze et al. (2016) Mark Dredze, Nicholas Andrews, and Jay DeYoung. Twitter at the grammys: A social media corpus for entity linking and disambiguation. In _Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media_ , pages 20–25, Austin, TX, USA, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/W16-6204. URL https://aclanthology.org/W16-6204. * Eskander et al. (2020) Ramy Eskander, Peter Martigny, and Shubhanshu Mishra. Multilingual Named Entity Recognition in Tweets using Wikidata. In _The fourth annual WeCNLP (West Coast NLP) Summit (WeCNLP)_. Zenodo, October 2020. doi: 10.5281/zenodo.7014432. URL https://doi.org/10.5281/zenodo.7014432. * Fang and Chang (2014) Yuan Fang and Ming-Wei Chang. 
Entity linking on microblogs with spatial and temporal signals. _Transactions of the Association for Computational Linguistics_ , 2:259–272, 2014. doi: 10.1162/tacl_a_00181. URL https://aclanthology.org/Q14-1021. * Ferragina and Scaiella (2012) Paolo Ferragina and Ugo Scaiella. Fast and accurate annotation of short texts with wikipedia pages. _IEEE Software_ , 29(1):70–75, 2012. doi: 10.1109/MS.2011.122. * Gimpel et al. (2011) Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , pages 42–47, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL https://aclanthology.org/P11-2008. * Google (2022) Google. Freebase data dumps. https://developers.google.com/freebase/data, 2022. URL https://developers.google.com/freebase/data. * Gorrell et al. (2015) Genevieve Gorrell, Johann Petrak, and Kalina Bontcheva. Using @Twitter Conventions to Improve #LOD-Based Named Entity Disambiguation. In Fabien Gandon, Marta Sabou, Harald Sack, Claudia d’Amato, Philippe Cudré-Mauroux, and Antoine Zimmermann, editors, _The Semantic Web. Latest Advances and New Domains_ , pages 171–186, Cham, 2015. Springer International Publishing. ISBN 978-3-319-18818-8. * Grouin et al. (2011) Cyril Grouin, Sophie Rosset, Pierre Zweigenbaum, Karën Fort, Olivier Galibert, and Ludovic Quintard. Proposal for an extension of traditional named entities: From guidelines to evaluation, an overview. In _Proceedings of the 5th linguistic annotation workshop_ , pages 92–100, 2011. * Hebert et al. (2022) Liam Hebert, Raheleh Makki, Shubhanshu Mishra, Hamidreza Saghir, Anusha Kamath, and Yuval Merhav. Robust candidate generation for entity linking on short social media texts. 
In _Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)_ , pages 83–89, Gyeongju, Republic of Korea, October 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.wnut-1.8. * Hripcsak and Rothschild (2005) George Hripcsak and Adam S Rothschild. Agreement, the f-measure, and reliability in information retrieval. _Journal of the American medical informatics association_ , 12(3):296–298, 2005. * Kulkarni et al. (2009) Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. Collective annotation of Wikipedia entities in web text. In _Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’09_ , page 457, New York, New York, USA, 2009. ACM Press. ISBN 978-1-60558-495-9. doi: 10.1145/1557019.1557073. URL http://portal.acm.org/citation.cfm?doid=1557019.1557073. * Kulkarni et al. (2021) Vivek Kulkarni, Shubhanshu Mishra, and Aria Haghighi. LMSOC: An approach for socially sensitive pretraining. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 2967–2975, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-emnlp.254. URL https://aclanthology.org/2021.findings-emnlp.254. * Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_ , 2019. * Li et al. (2022) Jinning Li, Shubhanshu Mishra, Ahmed El-Kishky, Sneha Mehta, and Vivek Kulkarni. NTULM: Enriching social media text representations with non-textual units. In _Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)_ , pages 69–82, Gyeongju, Republic of Korea, October 2022. Association for Computational Linguistics. 
URL https://aclanthology.org/2022.wnut-1.7. * Locke (2009) Brian William Locke. Named entity recognition: Adapting to microblogging. Master’s thesis, Computer Science, University of Colorado Boulder, 2009. URL https://scholar.colorado.edu/concern/graduate_thesis_or_dissertations/8049g539k. * McHugh (2012) Mary L McHugh. Interrater reliability: the kappa statistic. _Biochem. Med. (Zagreb)_ , 22(3):276–282, 2012. * Meij et al. (2012) Edgar Meij, Wouter Weerkamp, and Maarten de Rijke. Adding semantics to microblog posts. In _Proceedings of the Fifth ACM International Conference on Web Search and Data Mining_ , WSDM ’12, page 563–572, New York, NY, USA, 2012. Association for Computing Machinery. ISBN 9781450307475. doi: 10.1145/2124295.2124364. URL https://doi.org/10.1145/2124295.2124364. * Mihalcea and Csomai (2007) Rada Mihalcea and Andras Csomai. Wikify! linking documents to encyclopedic knowledge. In _Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management_ , CIKM ’07, page 233–242, New York, NY, USA, 2007. Association for Computing Machinery. ISBN 9781595938039. doi: 10.1145/1321440.1321475. URL https://doi.org/10.1145/1321440.1321475. * Mishra (2019) Shubhanshu Mishra. Multi-dataset-multi-task neural sequence tagging for information extraction from tweets. In _Proceedings of the 30th ACM Conference on Hypertext and Social Media_ , HT ’19, page 283–284, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368858. doi: 10.1145/3342220.3344929. URL https://doi.org/10.1145/3342220.3344929. * Mishra (2020a) Shubhanshu Mishra. Information Extraction from Digital Social Trace Data with Applications to Social Media and Scholarly Communication Data. _ACM SIGIR Forum_ , 54(1), 2020a. * Mishra (2020b) Shubhanshu Mishra. _Information Extraction from Digital Social Trace Data with Applications to Social Media and Scholarly Communication Data_. PhD thesis, University of Illinois at Urbana-Champaign, 2020b. 
* Mishra and Diesner (2016) Shubhanshu Mishra and Jana Diesner. Semi-supervised named entity recognition in noisy-text. In _Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)_ , pages 203–212, Osaka, Japan, December 2016. The COLING 2016 Organizing Committee. URL https://aclanthology.org/W16-3927. * Mishra et al. (2020) Shubhanshu Mishra, Sijun He, and Luca Belli. Assessing demographic bias in named entity recognition. In _Proceedings of the AKBC Workshop on Bias in Automatic Knowledge Graph Construction, 2020_. arXiv, 2020. doi: 10.48550/ARXIV.2008.03415. URL https://arxiv.org/abs/2008.03415. * Mishra et al. (2022) Shubhanshu Mishra, Aman Saini, Raheleh Makki, Sneha Mehta, Aria Haghighi, and Ali Mollahosseini. TweetNERD - End to End Entity Linking Benchmark for Tweets, June 2022\. URL https://doi.org/10.5281/zenodo.6617192. Data usage policy Use of this dataset is subject to you obtaining lawful access to the [Twitter API](https://developer.twitter.com/en/docs /twitter-api), which requires you to agree to the [Developer Terms Policies and Agreements](https://developer.twitter.com/en /developer-terms/). * Nguyen et al. (2020) Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. BERTweet: A pre-trained language model for English Tweets. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 9–14, 2020. * Peters et al. (2017) Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, and R. Power. Semi-supervised sequence tagging with bidirectional language models. In _ACL_ , 2017. * Qi et al. (2020) Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. Stanza: A Python natural language processing toolkit for many human languages. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , 2020. URL https://nlp.stanford.edu/pubs/qi2020stanza.pdf. * Rizzo et al. 
(2016) Giuseppe Rizzo, Marieke van Erp, Julien Plu, and Raphaël Troncy. Making Sense Of Microposts (#Microposts2016) Named Entity Recognition And Linking (Neel) Challenge. In _#Microposts_ , pages 50–59, 2016. URL http://ceur-ws.org/Vol-1691/microposts2016_neel-challenge-report/. * Shen et al. (2014) Wei Shen, Jianyong Wang, and Jiawei Han. Entity linking with a knowledge base: Issues, techniques, and solutions. _IEEE Transactions on Knowledge and Data Engineering_ , 27(2):443–460, 2014. * Spina et al. (2012) Damiano Spina, Edgar Meij, Andrei Oghina, Minh Thuong Bui, Mathias Breuss, and Maarten de Rijke. A corpus for entity profiling in microblog posts. In _Proceedings of the LREC Workshop on Language Engineering for Online Reputation Management_ , pages 30–34, 2012. * van Hulst et al. (2020) Johannes M. van Hulst, Faegheh Hasibi, Koen Dercksen, Krisztian Balog, and Arjen P. de Vries. Rel: An entity linker standing on the shoulders of giants. In _Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval_ , SIGIR ’20. ACM, 2020. * Vrandečić and Krötzsch (2014) Denny Vrandečić and Markus Krötzsch. Wikidata: a free collaborative knowledgebase. _Communications of the ACM_ , 57(10):78–85, September 2014. ISSN 0001-0782, 1557-7317. doi: 10.1145/2629489. URL https://dl.acm.org/doi/10.1145/2629489. * Yang and Chang (2015) Yi Yang and Ming-Wei Chang. S-MART: Novel tree-based structured learning algorithms applied to tweet entity linking. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 504–513, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1049. URL https://aclanthology.org/P15-1049. ## Checklist 
1. 1. For all authors… 1. (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] 2. (b) Did you describe the limitations of your work? [Yes] See limitations section 3. (c) Did you discuss any potential negative societal impacts of your work? [No] Not applicable 4. (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] No potential negative societal impacts identified 2. 2. If you are including theoretical results… 1. (a) Did you state the full set of assumptions of all theoretical results? [N/A] 2. (b) Did you include complete proofs of all theoretical results? [N/A] 3. 3. If you ran experiments (e.g. for benchmarks)… 1. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We plan to release code at: https://github.com/twitter-research/TweetNERD 2. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A] No training done. 3. (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] No training done. 4. 
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See section Performance of Existing Entity Linking Systems. 4. 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets… 1. (a) If your work uses existing assets, did you cite the creators? [Yes] 2. (b) Did you mention the license of the assets? [N/A] We recreated the existing datasets used for our analysis. 3. (c) Did you include any new assets either in the supplemental material or as a URL? [No] 4. (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A] Data in public domain 5. (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] See section on TweetNERD-Academic 5. 5. If you used crowdsourcing or conducted research with human subjects… 1. (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We had an in-house team of annotators and no crowdsourcing was used. We include the details of the guidelines for the annotators under Annotation Setup. 2. (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] 3. (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We have an in-house team. ## Appendix A Converting data to BIO format for NER In order to convert the dataset to NER format, we suggest tokenizing the Tweet text and utilizing the character offsets to identify mention tokens. For example, the Tweet 
"just setting up my twttr", with mention offsets 19 and 24 and the DBpedia category Organization, can be converted to the NER BIO format by calling tokens, starts, ends = tokenize_with_offsets("just setting up my twttr"), then assigning O labels to all tokens outside the phrase start and end offsets and B-ORG and I-ORG labels to all tokens within the phrase offsets. This approach works as long as the offsets returned by the tokenizer correspond to the offsets of the phrase in the original text, i.e., tokenization is non-destructive. See example code in Listing 1.

import bisect
import re

def tokenize_with_offsets(text):
    """Dummy tokenizer.
    Use any tokenizer you want as long as it has the same API."""
    tokens, starts, ends = zip(*[
        (m.group(), m.start(), m.end())
        for m in re.finditer(r'\S+', text)
    ])
    return list(tokens), list(starts), list(ends)

def get_labels(starts, ends, spans):
    """Convert offsets to sequence labels in BIO format."""
    labels = ["O"] * len(starts)
    spans = sorted(spans)
    for s, e, l in spans:
        # index of the first token starting at or after the span start/end
        li = bisect.bisect_left(starts, s)
        ri = bisect.bisect_left(starts, e)
        ni = len(labels[li:ri])
        labels[li] = f"B-{l}"
        labels[li + 1:ri] = [f"I-{l}"] * (ni - 1)
    return labels

text = "just setting up my twttr"
tokens, starts, ends = tokenize_with_offsets(text)

# tokens = ["just", "setting", "up", "my", "twttr"]
# starts = [0, 5, 13, 16, 19]
# ends   = [4, 12, 15, 18, 24]

spans = [(19, 24, "ORG")]
labels = get_labels(starts, ends, spans)

# labels = ["O", "O", "O", "O", "B-ORG"]

Listing 1: Conversion of offset format to NER BIO format using one choice of tokenization.

## Appendix B Metrics

Table A1: NERD Metrics

Metric | Description
---|---
strong_mention_match | A micro-averaged evaluation of entity mentions. A system span must match a gold span exactly to be counted as correct.
strong_all_match | A micro-averaged link evaluation of all mentions. A mention is counted as correct if it is either a link match or a nil match. A correct nil match must have the same span as a gold nil. For a correct link match, a system link must have the same span and KB identifier as a gold link.
entity_match | A micro-averaged tweet-level set-of-titles measure. It is the same as the entity match reported by [Cornolti et al., 2013].

## Appendix C Dataset details #### NER types. See table 1. #### Temporal distribution. See figure 3. ### C.1 Academic Dataset Details As explained in section 4.1, it is difficult to sample Tweets for NERD tasks in a way that ensures a high number of Tweets containing a diverse set of entities. We addressed this sampling issue by including a split based on Tweets already annotated for NERD or related tasks in existing academic benchmarks. This ensures a high percentage of Tweets with named entities and linked entities. Please note that not all the datasets we include in TweetNERD-Academic were created for the NERD task: some exist for NED, some for NER, some for entity aspect extraction, and some for generic NLP tasks like part-of-speech tagging. We included these datasets because they contain a high density of entities and hence warrant inclusion in a diverse entity linking test set. #### Tgx [Dredze et al., 2016] This dataset is for cross-domain co-reference resolution (CDCR). It contains Tweets around the 2013 Grammy music awards ceremony, and therefore mostly contains mentions of the Grammys and music artists from 2013. Only Tweets with person names were annotated. Original spans were detected via an NER system, after which annotators fixed mention detection issues, grouped similar mentions, and linked them to English Wikipedia. Each Tweet was annotated by two annotators. No information on annotator agreement is provided in the paper. The dataset contains person names who do not occur in Wikipedia. #### Broad [Derczynski et al., 2016] This is an NER dataset and hence only contains mention detection annotations. 
It includes Person, Location, and Organization named entities. Annotations were provided by experts and also via crowd-sourcing. They allow annotating username mentions as named entities. The dataset has high temporal and geographical diversity, with Tweets from 2009 to 2014. They find low agreement between crowd and gold annotations (35% F1) but high recall of named entities; the inter-annotator agreement is otherwise high.

#### Entity Profiling [Spina et al., 2012]

The original dataset was created for entity-level aspect extraction. The annotation process is non-traditional. We include this dataset for its high availability of named entities.

#### NEEL 2016 [Rizzo et al., 2016]

This dataset was created for the Making Sense of Microposts (#Microposts2016) Named Entity rEcognition and Linking (NEEL) Challenge. It consists of NERD annotations and includes annotation of hashtags and user mentions. The dev and test sets come from two events from December 2015 around the US primary elections and the Star Wars premiere.

#### NEEL v2 [Yang and Chang, 2015]

This dataset is a combination of [Basave et al., 2014] and [Fang and Chang, 2014]. It includes Tweets annotated for NERD as well as for Information Retrieval (IR) given an entity as a query.

#### Fang and Chang [2014]

A dataset of Tweets from December 2012 from verified users, containing location information. It contains Tweets annotated for NERD as well as for the IR task. Tweets are annotated only for the person, organization, location, event, and other NER types. For the IR task the authors take 10 query entities, sample 100 Tweets per query, and assess whether the Tweet contains a mention of the query entity. Entities come from Freebase [Google, 2022], which contains a subset of the entities in Wikipedia.

#### Twitter NEED [Locke, 2009]

This dataset consists of Tweets annotated using the CoNLL-2003 guidelines. The author allows marking of user mentions as named entities. Tweets were collected on February 10 and March 15.
The February 10 Tweets cover the economic recession and the Australian bushfires, and the March 15 Tweets cover a gas explosion in Bozeman, MT. They found that topic-related Tweets had a much higher rate of named entities.

#### Ark POS [Gimpel et al., 2011]

This dataset was created for part-of-speech tagging of Tweets. 6.4% of its tokens refer to proper nouns, which makes it likely to contain sufficient named entities and hence a likely candidate to be included for benchmarking NERD systems for Tweets.

#### WSDM2012 [Meij et al., 2012]

It includes 20 Tweets each from a set of verified users. 562 Tweets were manually annotated by two annotators. Annotation was done at the Tweet level, where relevant entities for a given Tweet were marked. The authors do not provide agreement rates. The annotated entities may or may not be mentioned explicitly in the text.

#### Yodie [Gorrell et al., 2015]

It consists of Tweets annotated with DBpedia URIs, drawn from financial institutions, news outlets, and climate change discussions. The dataset period is 2013-2014. Tweets were tagged via the CrowdFlower interface by 10 NLP researchers, with each Tweet tagged by three annotators. 89% of entities had unanimous agreement. Tweets were annotated for person, organization, and location entities, while linking included the NIL class.

## Appendix D Evaluation system details

### D.1 Named Entity Recognition (NER)

#### StanzaNLP [Qi et al., 2020]

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages based on the Universal Dependencies (UD) formalism and includes named entity recognition as a functionality. For each document, Stanza outputs entity mentions and their start and end character offsets, which can be directly used for neleval evaluation.

#### Spacy (https://spacy.io/api/entityrecognizer)

The spaCy NLP library provides a transition-based named entity recognition component.
The entity recognizer identifies non-overlapping labelled spans of tokens. The loss function optimizes for whole-entity accuracy, which assumes good inter-annotator agreement on boundary tokens for good performance. spaCy-identified mentions are in the desired character offset format and hence can be directly used for evaluation.

#### AllenNLP [Peters et al., 2017]

The AllenNLP named entity recognizer uses a Gated Recurrent Unit (GRU) character encoder as well as a GRU phrase encoder, and it starts with pretrained GloVe vectors for its token embeddings. It was trained on the CoNLL-2003 NER dataset. AllenNLP outputs BIO labels. To extract mentions and their start and end character offsets, we first extract the mentions from the BIO labels corresponding to the non-O tokens. We then search for each such phrase in the Tweet text to get its start and end offsets. This leads to some edge cases: if there are two identical correctly identified mentions, we always count only the first match, hence over-penalizing the model; on the other hand, if the mention identified by the model was the latter one but only the former mention was part of the gold annotation, we under-penalize the model.

#### Twitter NER [Mishra and Diesner, 2016]

Twitter NER is a conditional random field model trained specifically for Tweets using a combination of rules, gazetteers, and semi-supervised learning. It is a prominent non-neural baseline for NER on Tweets.

#### Social Media IE [Mishra, 2019]

SocialMediaIE is a multi-task model trained on a combination of tasks for social media information extraction. It uses a pre-trained language model along with a multi-dataset multi-task learning setup and is jointly trained to perform NER, part-of-speech tagging, chunking, and supersense tagging.
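The BIO-to-offset recovery and first-match search described above for AllenNLP can be sketched as follows. This is an illustrative reimplementation, not the exact code used, and it assumes whitespace tokenization so that joining tokens with spaces reproduces the original surface form:

```python
# Sketch of recovering character-offset mentions from BIO labels.
# The first-match search is the simple (and imperfect) heuristic from the
# text: identical surface forms always resolve to the earliest occurrence.

def bio_to_mentions(tokens, labels):
    """Group contiguous B-X/I-X labels into (phrase, type) mentions."""
    mentions, current, current_type = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                mentions.append((" ".join(current), current_type))
            current, current_type = [tok], lab[2:]
        elif lab.startswith("I-") and current:
            current.append(tok)
        else:  # O label (a stray I- without a B- is also treated as O here)
            if current:
                mentions.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        mentions.append((" ".join(current), current_type))
    return mentions

def mentions_to_offsets(text, mentions):
    """Map each mention phrase to (start, end, type) via first-match search."""
    result = []
    for phrase, etype in mentions:
        start = text.find(phrase)  # always the first occurrence
        if start >= 0:
            result.append((start, start + len(phrase), etype))
    return result

text = "just setting up my twttr"
tokens = ["just", "setting", "up", "my", "twttr"]
labels = ["O", "O", "O", "O", "B-ORG"]
spans = mentions_to_offsets(text, bio_to_mentions(tokens, labels))
# spans = [(19, 24, "ORG")]
```

Note how the `text.find` call encodes exactly the over/under-penalization caveat: repeated identical mentions always map to the first occurrence.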
### D.2 Entity Linking given True Spans (EL)

Given true entity mentions from human-annotated data, we compare linking-only performance (also known as entity disambiguation) using entity_match and strong_all_match from neleval.

#### GENRE (Generative ENtity REtrieval) [Cao et al., 2021]

GENRE is a sequence-to-sequence model that links entities by generating their name in an autoregressive fashion. Its architecture is based on transformers, and it fine-tunes BART [Lewis et al., 2019] for generating entity names, which in this case are the corresponding Wikipedia article titles. We used the model that was trained on BLINK + AidaYago2.

#### REL (Radboud Entity Linker) [van Hulst et al., 2020] (https://github.com/informagi/REL)

REL is an open source toolkit for entity linking. It uses a modular architecture with mention detection and entity disambiguation components. We use REL _with_ mentions to get _only_ entity disambiguation results here.

#### Lookup

Lookup is a simple heuristic-based system. Given true mentions, we fetch the most likely entity based on popularity, defined via mention-candidate co-occurrence in Wikipedia.

### D.3 End to End Entity Linking (End2End)

To compare end-to-end entity linking systems we use entity_match and strong_all_match from neleval. Some of the models mentioned here have been introduced in Section D.2.

#### GENRE

For end-to-end entity linking, a markup annotation is used to indicate the span boundaries with special tokens, and at each generation step the decoder decides whether to generate a mention span, a link for a mention, or to continue generating the input. Therefore, the model is capable of both detecting and linking entities.

#### REL

We use REL _without_ mentions to get complete End2End linking results in this case.

#### TagMe [Ferragina and Scaiella, 2012] (https://github.com/gammaliu/tagme)

TagMe is an end-to-end system based on a dictionary of links, pages, and the Wikipedia graph. We use TagMe to get linking results.
#### DBpedia Spotlight [Daiber et al., 2013]

Spotlight first detects mentions in a two-step process: in the first step, all possible mention candidates are generated using different methods, and in the second step the best candidates are selected based on a score that is a linear combination of selected features (such as annotation probability). The linking/disambiguation part uses cosine similarity over a vector representation based on a modification of TF-IDF weights.

#### Natural Language AI (NLAI) (https://cloud.google.com/natural-language)

We use the documents:analyzeEntities endpoint of the API to get the entities in the Tweet. The system is a black box but likely uses deep neural network based solutions for entity recognition and entity linking.
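For reference, the span-level strong_mention_match measure from Appendix B can be sketched as a micro-averaged exact-span F1. This is a simplified illustration; the actual neleval implementation may differ in details:

```python
# Hypothetical sketch of strong_mention_match: micro-averaged
# precision/recall/F1 where a system span must match a gold span exactly.
# Spans are (start, end) character offsets, grouped per document id.

def strong_mention_match(gold, system):
    """gold, system: dicts mapping doc id -> set of (start, end) spans."""
    tp = fp = fn = 0
    for doc in gold.keys() | system.keys():
        g, s = gold.get(doc, set()), system.get(doc, set())
        tp += len(g & s)   # exact span matches
        fp += len(s - g)   # spurious system spans
        fn += len(g - s)   # missed gold spans
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {"t1": {(19, 24)}, "t2": {(0, 5), (10, 14)}}
system = {"t1": {(19, 24)}, "t2": {(0, 5), (11, 14)}}
p, r, f1 = strong_mention_match(gold, system)
# the off-by-one span (11, 14) counts as both a false positive and a
# false negative, so p = r = f1 = 2/3
```

The strong_all_match variant would additionally compare the KB identifier (or NIL) attached to each span before counting a true positive.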
# A Generalized Lerche-Newberger Formula

Parker Kuklinski Michael Warnock David Hague

###### Abstract

The Lerche-Newberger formula simplifies harmonic sums of Bessel functions and has seen application in plasma physics and frequency modulated quantum systems. In this paper, we rigorously prove the formula and extend the classical result to a family of multi-dimensional extensions of the single-variable Bessel functions called generalized Bessel functions. Since prevailing definitions of these functions do not accommodate arbitrary complex order, we use an auxiliary family of functions called generalized Anger functions and show that the single-variable result holds in multiple dimensions for a certain selection of parameters. We conclude by applying these results to physical systems.

## 1 Introduction

Sums involving Bessel functions are pervasive in physics [1] [2] [3]. A Jacobi-Anger expansion, for instance, allows for a Fourier representation of a sinusoidal frequency modulated signal as a sum of harmonic components weighted by Bessel functions of varying order [4]. Other sums like the Kapteyn series arise in astrophysics applications [5], while Neumann and Schlömilch-type series also appear in the literature [6] [7]. The Lerche-Newberger formula [8] gives a closed form representation of an infinite sum of Bessel function products with a transposed harmonic term (i.e. $1/(n-a)$). These sums first arose in 3-dimensional plasma systems with an oscillating ambient magnetic field from the associated $3\times 3$ plasma wave dispersion relation [9] [10]. In the present context we are interested in applying the Lerche-Newberger formula to quantum systems under frequency modulation [11]. Berns et al. [12] investigate an intermediate “quasiclassical” regime in a driven two-level quantum system, in which the transition rate of the qubit is modeled by a modified Lerche-Newberger summation.
In this paper we aim to adapt the methods of Kuklinski and Hague [13] to extend the traditional Lerche-Newberger formula to a class of multi-dimensional Bessel functions called generalized Bessel functions [14]. Generalized Bessel functions introduce higher order harmonics in the modulation function of the associated integral. In a signal processing context, the generalized Bessel functions are the Fourier coefficients of a multi-tone sinusoidal frequency modulated (MTSFM) signal; these signals are useful for radar and sonar applications due to their constant amplitude, spectral efficiency, and tunability [15]. The parameters of these MTSFM signals are fed through to the arguments of the GBFs in the Fourier transformed space. The generalized Bessel functions have found applications in laser physics [16], crystallography [17], and astrophysics [18]. In connection with frequency modulated quantum systems, a version of the Lerche-Newberger sum found in Berns et al. with Bessel functions replaced by GBFs arises in a more general oscillatory system. A naïve extension of the Lerche-Newberger formula by a total substitution of one-dimensional Bessel functions with their multi-dimensional counterparts fails since the generalized Bessel functions are not well defined for fractional order. To remedy this, we instead work with generalized Anger functions, which extend the usual integral representation of integer-order generalized Bessel functions to arbitrary order. These Anger functions, although agreeing with Bessel functions at integer order, do not satisfy the same collection of identities, and thus certain unresolvable terms emerge in the corresponding Lerche-Newberger extension. The rest of this paper is organized as follows: In section 2 we rigorously derive the one-dimensional Lerche-Newberger formula over the usual domain of parameters. In section 3 these results are extended by replacing the one-dimensional Bessel functions with generalized Anger functions.
Section 4 treats an application to a multiply frequency modulated two-level quantum system. An appendix handles some details of the proof of the one-dimensional Lerche-Newberger formula.

## 2 Derivation of Lerche-Newberger formula

In this section we will derive the following sum presented by Newberger [19] [8]:

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}J_{\alpha-\gamma n}(z)J_{\beta+\gamma n}(z)}{n+\mu}=\frac{\pi}{\sin\mu\pi}J_{\alpha+\gamma\mu}(z)J_{\beta-\gamma\mu}(z)$ (1)

Here, we restrict $\text{Re}(\alpha+\beta)>-1$, $\mu\in\mathbb{C}\backslash\mathbb{Z}$, and $\gamma\in(0,1]$. To proceed, we follow an argument from Lerche [20]; recall from Watson [3] that for $\text{Re}(\alpha+\beta)>-1$ we have

$J_{\alpha}(z)J_{\beta}(z)=\frac{2}{\pi}\int_{0}^{\pi/2}J_{\alpha+\beta}(2z\cos\theta)\cos(\alpha-\beta)\theta d\theta.$ (2)

Substituting this into the summation on the left side of (1), we have

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}J_{\alpha-\gamma n}(z)J_{\beta+\gamma n}(z)}{n+\mu}=\frac{2}{\pi}\sum_{n=-\infty}^{\infty}\left[\int_{0}^{\pi/2}\frac{(-1)^{n}}{n+\mu}J_{\alpha+\beta}(2z\cos\theta)\cos{(\alpha-\beta-2\gamma n)\theta}d\theta\right].$ (3)

Ultimately we would like to interchange summation and integration, and we do this with the dominated convergence theorem [21]. Since the doubly-infinite summation is a composition of two separate limiting operations, one over the negative indices and one over the positive indices, we must split the double summation into two one-sided summations: one we label $S_{+}$ with $n\in\\{1,2,...\\}$ and the other we call $S_{-}$ with $n\in\\{-1,-2,...\\}$ such that

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}J_{\alpha-\gamma n}(z)J_{\beta+\gamma n}(z)}{n+\mu}=\frac{2}{\pi}\left(S_{+}+S_{-}+\frac{1}{\mu}\int_{0}^{\pi/2}J_{\alpha+\beta}(2z\cos{\theta})\cos{(\alpha-\beta)\theta}d\theta\right).$ (4)

To apply the dominated convergence theorem to the summations $S_{\pm}$, we first state a result for two simpler sequences of functions.
Let $f_{n}(\theta)$ and $g_{n}(\theta)$ be the sequences of partial sums

$f_{n}(\theta)=\sum_{k=1}^{n}\frac{(-1)^{k}}{k+\mu}\cos(k\theta),\hskip 28.45274ptg_{n}(\theta)=\sum_{k=1}^{n}\frac{(-1)^{k}}{k+\mu}\sin(k\theta)$ (5)

for $\theta$ in some finite interval in $\mathbb{R}$. These partial sums are related to Lerch zeta functions [22]. We are searching for functions $F(\theta)$ and $G(\theta)$ independent of $n$ such that $|f_{n}(\theta)|<F(\theta)$ and $|g_{n}(\theta)|<G(\theta)$ for all $n$, where $F(\theta)$ and $G(\theta)$ are integrable. We can use the triangle inequality to bound $|f_{n}(\theta)|$ in the following way:

$\displaystyle|f_{n}(\theta)|$ $\displaystyle\leq\left\lvert\sum_{k=1}^{n}\frac{(-1)^{k}}{k+\mu}\cos(k\theta)-\sum_{k=1}^{n}\frac{(-1)^{k}}{k}\cos(k\theta)\right\rvert+\left\lvert\sum_{k=1}^{n}\frac{(-1)^{k}}{k}\cos(k\theta)\right\rvert$ $\displaystyle=|\mu|\left\lvert\sum_{k=1}^{n}\frac{(-1)^{k}}{k(k+\mu)}\cos(k\theta)\right\rvert+\left\lvert\sum_{k=1}^{n}\frac{(-1)^{k}}{k}\cos(k\theta)\right\rvert$ (6)

If $\mu$ is not a negative integer, then by a $p$-series test the first sum on the right hand side of (6) can be bounded by a finite quantity. The second sum on the right is the partial sum of a logarithm series; specifically, on the open interval $(-\pi,\pi)$ we have the pointwise convergence

$\sum_{k=1}^{\infty}\frac{(-1)^{k}}{k}\cos(k\theta)=-\log\left(\cos\frac{\theta}{2}\right)-\log{2}$ (7)

We prove that the partial sum term in (6) is bounded above by its pointwise limit plus a constant, and is therefore bounded by an integrable function, letting us use the dominated convergence theorem. We unfortunately cannot apply standard results on the Gibbs phenomenon [23], as these results apply only to discontinuous functions with finite jumps; the one we have here is an infinite discontinuity. Nevertheless, we can exploit the specific nature of our function to arrive at an appropriate bound.
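Before turning to the bound itself, the pointwise limit (7) can be spot-checked numerically. The sketch below uses only the Python standard library; the evaluation point $\theta=1$ and the truncation level are arbitrary, and the loose tolerance reflects the merely conditional ($O(1/N)$) convergence of the series:

```python
import math

# Check (7): for theta in (-pi, pi),
#   sum_{k>=1} (-1)^k cos(k*theta)/k  ~  -log(cos(theta/2)) - log(2).

def alternating_cos_sum(theta, n_terms):
    """Truncated partial sum of the series on the left of (7)."""
    return sum((-1) ** k * math.cos(k * theta) / k
               for k in range(1, n_terms + 1))

theta = 1.0
lhs = alternating_cos_sum(theta, 200_000)
rhs = -math.log(math.cos(theta / 2)) - math.log(2)
# lhs and rhs agree to roughly 1e-5 at this truncation level
```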
Let $s_{n}(\theta)$ be the $n^{\text{th}}$ partial sum; then $s_{n}(\theta)+\log\left(\cos\theta/2\right)+\log{2}$ has local extrema in this interval at the points $\theta_{j}=2\pi j/(2n+1)$. We prove in the appendix that there exists some $M\in\mathbb{N}$ such that for all $n>M$ the largest extremum of this difference occurs at $\theta_{n}=\pi-\pi/(2n+1)$ (that this holds only for $n$ sufficiently large does not impede the conditions of the DCT, since for all $n<M$ we can bound (6) by a constant). Therefore, by plugging $\theta_{n}$ into the difference of these terms we have

$s_{n}(\theta_{n})+\log\left(\cos\frac{\theta_{n}}{2}\right)+\log{2}=\left[\sum_{k=1}^{n}\frac{1}{k}\cos\left(\frac{k\pi}{2n+1}\right)\right]+\log\left(\sin\frac{\pi}{4n+2}\right)+\log 2$ (8)

We show that as $n$ gets large, both the partial sum term and the logarithm term grow as $O(\log{n})$ with opposite leading coefficients, thus cancelling and leaving an $O(1)$ term. To handle the partial sum term, use a trigonometric identity to extract a harmonic sum:

$\sum_{k=1}^{n}\frac{1}{k}\cos\left(\frac{k\pi}{2n+1}\right)=\sum_{k=1}^{n}\frac{1}{k}-\sum_{k=1}^{n}\frac{2}{k}\sin^{2}\left(\frac{\pi k}{4n+2}\right)$ (9)

By applying a standard result on harmonic series and using the identity $\sin x<x$ for all $x>0$, we see that the partial sum on the left in (9) is equal to $\log{n}+O(1)$. Next, we can conduct an asymptotic expansion of the logarithm term in (8).
Note that for small $x$ we have $\sin{x}=x(1+O(x^{2}))$ such that the logarithm pulls out this factor of $x$ as well as the other multiplicative factors, and therefore:

$\log\left(\sin\frac{\pi}{4n+2}\right)=-\log{n}+O(1)$ (10)

The two asymptotic expansions we have conducted imply that the quantity in (8) converges to a limit; in other words, the partial sum in (6) overshoots its limit by an asymptotically finite quantity, and therefore there exists some finite $M$ such that

$\left\lvert\sum_{k=1}^{n}\frac{(-1)^{k}}{k}\cos(k\theta)\right\rvert\leq\frac{1}{2}\left\lvert\log\left(1+\cos\theta\right)+\log{2}\right\rvert+M$ (11)

for all $n\in\mathbb{N}$ (numerical simulations appear to show $M=1/2$ is a sufficient choice). The function on the right is Lebesgue integrable on any interval in $\mathbb{R}$, even those that include the singularities at $\theta=(2N+1)\pi$. Therefore, we can apply the dominated convergence theorem to the sequence of partial sums $f_{n}(\theta)$. Applying the dominated convergence theorem to $\\{g_{n}(\theta)\\}$ is similar, but instead of bounding the alternating cosine summation by a diverging yet integrable function, we can bound the corresponding alternating sine summation by a constant, since it is the Fourier series of the sawtooth wave. To extend this to a partial sum of interest in $S_{+}$, we need to bound $J_{\alpha+\beta}(2z\cos\theta)[\cos(\alpha-\beta)\theta f_{n}(2\gamma\theta)+\sin(\alpha-\beta)\theta g_{n}(2\gamma\theta)]$ by a single integrable function for all $n$. But since Bessel functions and sine and cosine functions are entire on $\mathbb{C}$, we can bound these and $g_{n}(2\gamma\theta)$ by constants, and we can bound $f_{n}(2\gamma\theta)$ by the scaled version of the dominating function on the right hand side of (11). Thus, the interchange of summation and integration is justified.
These arguments are directly applicable to $S_{-}$; however, we also need to restrict $\mu$ from being a positive integer, and since a $1/\mu$ term appears on the right hand side of (4), we insist that $\mu\in\mathbb{C}\backslash\mathbb{Z}$. Because the dominated convergence theorem holds separately for $S_{\pm}$, there is no issue with now interchanging the entire double summation with integration in (3). We use the following identities to reduce this operation to a closed form:

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{n+\mu}\cos(n\theta)=\frac{\pi\cos{\mu\theta}}{\sin{\pi\mu}},\hskip 28.45274pt\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{n+\mu}\sin(n\theta)=-\frac{\pi\sin{\mu\theta}}{\sin{\pi\mu}}$ (12)

where $\theta\in[-\pi,\pi]$. In other words, the left hand side summations are the Fourier series of non-smooth $2\pi$-periodic functions. To apply these formulas to (4), we require that $\gamma\in(0,1]$ such that $2\gamma\theta\in[-\pi,\pi]$ for the region of integration $\theta\in[0,\pi/2]$. Therefore we write the interchange as

$\frac{2}{\pi}\sum_{n=-\infty}^{\infty}\left[\int_{0}^{\pi/2}\frac{(-1)^{n}}{n+\mu}J_{\alpha+\beta}(2z\cos\theta)\cos{(\alpha-\beta-2\gamma n)\theta}d\theta\right]=\frac{2}{\sin{\pi\mu}}\int_{0}^{\pi/2}J_{\alpha+\beta}(2z\cos\theta)\cos{(\alpha-\beta+2\gamma\mu)\theta}d\theta$ (13)

Since the right hand side is in the integral form of the Bessel product formula in (2), we can reverse the equation to arrive at the final result in (1).

## 3 Extension to generalized Anger functions

In this section we create a generalized version of the Lerche-Newberger formula in (1) extended to generalized Anger functions. Traditionally, the two-dimensional generalized Bessel function is defined as the convolutional summation:

$J_{n}(x,y)=\sum_{k=-\infty}^{\infty}J_{n-2k}(x)J_{k}(y)$ (14)

We usually call this function the index $(1,2)$ generalized Bessel function.
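As a quick numerical consistency check of (14), the convolutional sum can be compared for integer order against the equivalent integral representation of the index $(1,2)$ GBF, i.e. $A_{n}((x,y),(0,0))$ in the notation of (15). The sketch below uses only the Python standard library; the trapezoidal quadrature resolutions and truncation levels are arbitrary choices for illustration:

```python
import cmath
import math

def bessel_j(n, z, steps=2000):
    """Integer-order Bessel J_n(z) via (1/pi) * int_0^pi cos(n*t - z*sin t) dt.
    Trapezoidal rule; by periodic extension this is spectrally accurate."""
    h = math.pi / steps
    vals = [math.cos(n * t - z * math.sin(t))
            for t in (j * h for j in range(steps + 1))]
    return (h / math.pi) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def gbf_integral(n, x, y, steps=4000):
    """Index-(1,2) generalized Bessel function by direct quadrature of
    (1/2pi) * int_{-pi}^{pi} exp(i(n*t - x*sin t - y*sin 2t)) dt."""
    h = 2 * math.pi / steps
    total = 0.0
    for j in range(steps):
        t = -math.pi + j * h
        total += cmath.exp(1j * (n * t - x * math.sin(t)
                                 - y * math.sin(2 * t))).real
    return total * h / (2 * math.pi)

def gbf_convolution(n, x, y, kmax=25):
    """Truncated convolutional summation (14)."""
    return sum(bessel_j(n - 2 * k, x) * bessel_j(k, y)
               for k in range(-kmax, kmax + 1))

n, x, y = 3, 1.2, 0.7
# gbf_integral(n, x, y) and gbf_convolution(n, x, y) agree to near
# machine precision for these moderate arguments
```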
If we naively attempt to apply this definition to the Lerche-Newberger formula in (1), we will run into problems because this form of the GBF is not defined for $n\in\mathbb{C}\backslash\mathbb{Z}$, as the summation in (14) will not converge. We could try to define the GBF in a completely analogous way to the Bessel functions by first defining a partial differential equation and defining the GBF as solutions to this PDE, but as seen in Kuklinski and Hague [13], the index $(1,2)$ GBF satisfies two independent second-order linear PDEs, neither of which have clear extensions to GBFs of different index or higher order. Regardless of these issues, however, if we restrict a putative Lerche-Newberger formula to GBFs of integer order, we will have a well-defined convergent sum. To do this, we introduce generalized Anger functions, which are well defined for all orders but agree with the GBF at integer order. For two finite coordinates ${\bf x}=(x_{1},...,x_{m})$ and ${\bf y}=(y_{1},...,y_{m})$ in $\mathbb{C}^{m}$, we define the generalized Anger function of order $\alpha$ as:

$A_{\alpha}({\bf x},{\bf y})=\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left[i\left(\alpha\theta-\sum_{k=1}^{m}\left(x_{k}\sin{k\theta}+y_{k}\cos{k\theta}\right)\right)\right]d\theta$ (15)

These functions have favorable decay properties in the sense that a stationary phase approximation shows that for fixed ${\bf x},{\bf y}\in\mathbb{C}^{m}$, $A_{\alpha}({\bf x},{\bf y})=O(\alpha^{-k})$ for all $k>0$ as $|\alpha|\rightarrow\infty$. With these functions, we prove a new version of the Lerche-Newberger formula:

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}A_{\alpha-\gamma n}({\bf x},{\bf y})A_{\beta+\gamma n}({\bf x},{\bf y})}{n+\mu}=\frac{\pi}{\sin{\mu\pi}}A_{\alpha+\gamma\mu}({\bf x},{\bf y})A_{\beta-\gamma\mu}({\bf x},{\bf y})$ (16)

This equation holds for all $\mu\in\mathbb{C}\backslash\mathbb{Z}$, and all $\alpha,\beta\in\mathbb{C}$ if $\gamma\in[0,1/2]$.
We will prove this by breaking each of the generalized Anger functions into two integrals and passing the sum through using convergence theorems. Let us represent the first generalized Anger function as an integral: $\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}A_{\alpha-\gamma n}({\bf x},{\bf y})A_{\beta+\gamma n}({\bf x},{\bf y})}{n+\mu}=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\int_{-\pi}^{\pi}$ $\displaystyle\frac{(-1)^{n}e^{-i\gamma n\theta}A_{\beta+\gamma n}({\bf x},{\bf y})}{n+\mu}\times...$ $\displaystyle...\times\exp\left[i\left(\alpha\theta-\sum_{k=1}^{m}\left(x_{k}\sin{k\theta}+y_{k}\cos{k\theta}\right)\right)\right]d\theta$ (17) From the decay properties of the generalized Anger functions, we can use Fubini’s theorem to interchange the summation and integral. Upon pulling the summation inside the first integral, we then expand $A_{\beta+\gamma n}({\bf x},{\bf y})$ into its integral representation and focus on the following quantity: $\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}e^{-i\gamma n\theta}A_{\beta+\gamma n}({\bf x},{\bf y})}{n+\mu}=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\int_{-\pi}^{\pi}\frac{(-1)^{n}e^{i\gamma n(\phi-\theta)}}{n+\mu}\exp\left[i\left(\beta\phi-\sum_{k=1}^{m}\left(x_{k}\sin{k\phi}+y_{k}\cos{k\phi}\right)\right)\right]d\phi$ (18) We can repeat the same argument from the previous section to interchange the summation and integral in (18). However, when executing the complex exponential version of the sums described by (12), we must be careful since we do not necessarily have $\gamma(\phi-\theta)\in[-\pi,\pi]$ for general $\gamma$. For $0\leq\gamma\leq 1/2$ this does hold and we can proceed with the usual summation. By consolidating (12) into a complex exponential form, we have $\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{n+\mu}e^{in\theta}=\frac{\pi}{\sin{\pi\mu}}e^{-i\mu\theta}$ (19) for $\theta\in(-\pi,\pi)$. 
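The summation identity (19) can itself be verified numerically by pairing the $n$ and $-n$ terms, which gives $O(1/n^{2})$ decay: the paired term is $(-1)^{n}\left[e^{in\theta}/(n+\mu)+e^{-in\theta}/(\mu-n)\right]$ and the $n=0$ term is $1/\mu$. A standard-library sketch (the values of $\mu$ and $\theta$ are arbitrary non-degenerate choices):

```python
import cmath
import math

# Check (19): sum_n (-1)^n e^{i n t}/(n + mu) = pi e^{-i mu t}/sin(pi mu)
# for t in (-pi, pi) and non-integer mu.

def lhs_sum(t, mu, n_max=20_000):
    """Symmetrically truncated sum, pairing n and -n for O(1/n^2) decay."""
    total = 1.0 / mu + 0j
    for n in range(1, n_max + 1):
        total += (-1) ** n * (cmath.exp(1j * n * t) / (n + mu)
                              + cmath.exp(-1j * n * t) / (mu - n))
    return total

t, mu = 1.0, 0.3
lhs = lhs_sum(t, mu)
rhs = math.pi * cmath.exp(-1j * mu * t) / math.sin(math.pi * mu)
# the truncation error is roughly 2*mu/n_max here
```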
Plugging this identity into the double integral representation of the Lerche-Newberger sum and separating the factors by integration variable gives us the result in (16). For $1/2<\gamma\leq 1$, the quantity $\gamma(\phi-\theta)$ will extend beyond $(-\pi,\pi)$. Indeed, if $\theta\in(\pi,3\pi)$, then $\theta-2\pi\in(-\pi,\pi)$ such that

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{n+\mu}e^{in\theta}=\frac{\pi}{\sin{\pi\mu}}e^{-i\mu(\theta-2\pi)}$ (20)

A similar identity holds for $\theta\in(-3\pi,-\pi)$. If we restrict $\alpha,\beta\in\mathbb{Z}$, then we can write the double integral representation as

$\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}A_{\alpha-\gamma n}({\bf x},{\bf y})A_{\beta+\gamma n}({\bf x},{\bf y})}{n+\mu}=\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f_{\mu}(\gamma(\phi-\theta))p_{\alpha}(\theta)p_{\beta}(\phi)d\phi d\theta$ (21)

where $f_{\mu}(\theta)$ is the summation in (19), and $p_{\alpha}(\theta)$ is the $2\pi$-periodic integrand of $A_{\alpha}({\bf x},{\bf y})$. In Figure 1, we decompose the region of integration according to the period in which $\gamma(\phi-\theta)$ lies. In the triangle region bounded by $\theta=-\pi$, $\phi=\pi$, and $\gamma(\phi-\theta)=\pi$, we have $\gamma(\phi-\theta)\geq\pi$, so the summation formula from (19) incurs an extra factor of $e^{2\pi\mu i}$ as shown in (20). Let this region be called $\Delta_{+}(\gamma)$. In the reflection of that region, the triangle bounded by $\theta=\pi$, $\phi=-\pi$ and $\gamma(\phi-\theta)=-\pi$, which we call $\Delta_{-}(\gamma)$, the summation procedure incurs an extra factor of $e^{-2\pi\mu i}$.
In this way, we can replace $f_{\mu}(\gamma(\phi-\theta))$ by a complex exponential and consolidate the expression into an Anger function product and two integrals over $\Delta_{\pm}(\gamma)$: $\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}A_{\alpha-\gamma n}({\bf x},{\bf y})A_{\beta+\gamma n}({\bf x},{\bf y})}{n+\mu}$ $\displaystyle=\frac{\pi}{\sin{\mu\pi}}A_{\alpha+\gamma\mu}({\bf x},{\bf y})A_{\beta-\gamma\mu}({\bf x},{\bf y})+2\pi ie^{2i\pi\mu}\iint_{\Delta_{+}}p_{\alpha+\gamma\mu}(\theta)p_{\beta-\gamma\mu}(\phi)d\theta d\phi$ $\displaystyle-2\pi ie^{-2i\pi\mu}\iint_{\Delta_{-}}p_{\alpha+\gamma\mu}(\theta)p_{\beta-\gamma\mu}(\phi)d\theta d\phi$ (22)

Figure 1: Division of the region of integration in (22).

## 4 Applications

Lerche-Newberger type summations appear in many different areas of physics, including plasma physics [20] and periodically driven quantum mechanical systems [11]. In particular, the following expression appears in Berns et al. [12] to describe the transition rate of a persistent current qubit: $W(\epsilon,A)=\frac{\Delta^{2}}{2}\sum_{n}\frac{\Gamma_{2}J_{n}^{2}(x)}{(\epsilon-\omega n)^{2}+\Gamma_{2}^{2}}$ (23) where $x=A/\omega$. The authors give an asymptotic treatment of this function, but a closed form expression is possible using (1). Using a partial fractions expansion, we can break the denominator into its linear factors. Let $\mu_{\pm}=(\epsilon\pm\Gamma_{2}i)/\omega$ such that $W(\epsilon,A)=\frac{i\Delta^{2}}{4\omega}\left[\sum_{n}\frac{J_{n}^{2}(x)}{n-\mu_{-}}-\sum_{n}\frac{J_{n}^{2}(x)}{n-\mu_{+}}\right]$ (24) By letting $\alpha=\beta=0$ and $\gamma=1$, we can apply (1) to (24) to arrive at a closed form expression: $W(\epsilon,A)=\frac{i\pi\Delta^{2}}{4\omega}\left[\frac{J_{\mu_{+}}(x)J_{-\mu_{+}}(x)}{\sin\mu_{+}\pi}-\frac{J_{\mu_{-}}(x)J_{-\mu_{-}}(x)}{\sin\mu_{-}\pi}\right]$ (25) We display plots of the expression in (25) for several selections of parameters in Figure 2.
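This closed form can be cross-checked against a direct truncation of (23); below is a sketch assuming the mpmath library (which evaluates Bessel functions of complex order). The prefactor used here, $i\pi\Delta^{2}/(4\omega)$, is the normalization that reproduces the $x\to 0$ limit of (23); the parameter values are illustrative and not taken from [12]:

```python
import mpmath as mp

def W_sum(eps, A, Gamma2, omega, Delta=1.0, N=60):
    # Direct truncation of the transition-rate sum (23), with x = A / omega
    x = A / omega
    s = mp.mpf(0)
    for n in range(-N, N + 1):
        s += Gamma2 * mp.besselj(n, x) ** 2 / ((eps - omega * n) ** 2 + Gamma2 ** 2)
    return Delta ** 2 / 2 * s

def W_closed(eps, A, Gamma2, omega, Delta=1.0):
    # Bessel-function product form, with mu_pm = (eps +/- i Gamma2) / omega
    x = A / omega
    term = lambda mu: mp.besselj(mu, x) * mp.besselj(-mu, x) / mp.sin(mp.pi * mu)
    mu_p = (eps + 1j * Gamma2) / omega
    mu_m = (eps - 1j * Gamma2) / omega
    return mp.re(1j * mp.pi * Delta ** 2 / (4 * omega) * (term(mu_p) - term(mu_m)))

print(W_sum(0.6, 3.0, 1.0, 3.0), W_closed(0.6, 3.0, 1.0, 3.0))
```

Since $\mu_{-}=\overline{\mu_{+}}$, the two bracketed terms are complex conjugates, so the expression is real, as a transition rate must be.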
Now asymptotics need only be developed for the constituent Bessel functions rather than for the complicated sum in (23).

Figure 2: Plots of $W(\epsilon,A)$ for a variety of parameter sets. (_Blue_) $\Gamma_{2}=3$, $\epsilon=2.1$, $\omega=0.07$ (_Red_) $\Gamma_{2}=10$, $\epsilon=1.3$, $\omega=2.0$ (_Green_) $\Gamma_{2}=1$, $\epsilon=0.6$, $\omega=3.0$.

If this system were modified so that the qubit is modulated by several frequencies, possibly out of phase, then instead of the energy detuning having the form $h(t)=\epsilon+\delta\epsilon(t)+A\cos{2\pi vt}$, we would include additional terms such that $h(t)=\epsilon+\delta\epsilon(t)+\sum_{k=1}^{m}\left(A_{k}\cos{2\pi vkt}+B_{k}\sin{2\pi vkt}\right)$. Then the Bessel functions in (23) would be replaced with generalized Bessel functions of the form $J_{n}({\bf x},{\bf y})$ where ${\bf x}=(A_{1}/\omega,...,A_{m}/\omega)$ and ${\bf y}=(B_{1}/\omega,...,B_{m}/\omega)$. After using another partial fractions decomposition, a variant of the generalized Lerche-Newberger summation would emerge: $\sum_{n=-\infty}^{\infty}\frac{J_{n}({\bf x},{\bf y})^{2}}{n+\mu}$ (26) Since for generalized Bessel functions we have $J_{-n}({\bf x},{\bf y})\neq(-1)^{n}J_{n}({\bf x},{\bf y})$, we must use a variant of (19). By representing $(-1)^{n}=e^{i\pi n}$ in (19), we can shift the piecewise continuous regions from $((2k-1)\pi,(2k+1)\pi)$ to $(2k\pi,2(k+1)\pi)$.
Indeed, if $\theta\in(0,2\pi)$ we have $\sum_{n=-\infty}^{\infty}\frac{e^{in\theta}}{n+\mu}=\frac{\pi}{\sin\pi\mu}e^{i\mu\pi}e^{-i\mu\theta}$ (27) If $\theta\in(-2\pi,0)$, then a similar argument in (27) would show that $\sum_{n=-\infty}^{\infty}\frac{e^{in\theta}}{n+\mu}=\frac{\pi}{\sin\pi\mu}e^{-i\mu\pi}e^{-i\mu\theta}$ (28) Using the typical integral interchange arguments, we can express (26) as the double integral: $\sum_{n=-\infty}^{\infty}\frac{J_{n}({\bf x},{\bf y})^{2}}{n+\mu}=\frac{1}{4\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}g_{\mu}(\phi+\theta)p_{0}(\theta)p_{0}(\phi)d\theta d\phi$ (29) where $g_{\mu}(\theta)$ is the summation from (27) and (28) and $p_{0}(\theta)$ is the same as in the previous section. In Figure 3, we see how the discontinuity of $g_{\mu}(\theta)$ divides the region of integration into two subregions on either side of the curve $\phi+\theta=0$ which we refer to as $\tilde{\Delta}_{\pm}$. Therefore we can divide this into the following sum: $\sum_{n=-\infty}^{\infty}\frac{J_{n}({\bf x},{\bf y})^{2}}{n+\mu}=\frac{1}{4\pi\sin{\pi\mu}}\left[e^{i\mu\pi}\iint_{\tilde{\Delta}_{+}}p_{-\mu}(\theta)p_{-\mu}(\phi)d\theta d\phi+e^{-i\mu\pi}\iint_{\tilde{\Delta}_{-}}p_{-\mu}(\theta)p_{-\mu}(\phi)d\theta d\phi\right]$ (30) Figure 3: Division of region of integration in (30). By splitting the complex exponentials into their trigonometric components, we find that this expression resolves based on the sum and difference of the two integral terms. 
Since $\tilde{\Delta}_{+}\cup\tilde{\Delta}_{-}=[-\pi,\pi]\times[-\pi,\pi]$, it follows that $\iint_{\tilde{\Delta}_{+}}p_{-\mu}(\theta)p_{-\mu}(\phi)d\theta d\phi+\iint_{\tilde{\Delta}_{-}}p_{-\mu}(\theta)p_{-\mu}(\phi)d\theta d\phi=4\pi^{2}A_{-\mu}({\bf x},{\bf y})^{2}$ (31) Meanwhile, the difference of these two integrals can be combined into a single integral over the space by using the sign function $\text{sgn}(x)$ such that $\iint_{\tilde{\Delta}_{+}}p_{-\mu}(\theta)p_{-\mu}(\phi)d\theta d\phi-\iint_{\tilde{\Delta}_{-}}p_{-\mu}(\theta)p_{-\mu}(\phi)d\theta d\phi=\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}p_{-\mu}(\theta)p_{-\mu}(\phi)\text{sgn}(\theta+\phi)d\theta d\phi$ (32) and we represent this integral as $4\pi^{2}B_{-\mu}({\bf x},{\bf y})$ to compare with (31). Substituting (31) and (32) into (30) we arrive at the final expression: $\sum_{n=-\infty}^{\infty}\frac{J_{n}({\bf x},{\bf y})^{2}}{n+\mu}=\pi\left(\cot(\pi\mu)A_{-\mu}({\bf x},{\bf y})^{2}+iB_{-\mu}({\bf x},{\bf y})\right)$ (33) The asymptotics of $A_{-\mu}({\bf x},{\bf y})$ have been explored for ${\bf y}=0$ in several sources [14] [18] and in more generality in Kuklinski and Hague [13]. The component $B_{-\mu}({\bf x},{\bf y})$ is a two-dimensional oscillatory integral over the region $[-\pi,\pi]\times[-\pi,\pi]$; however, the standard results from Stein [24] do not apply since the integrand has a discontinuity along the diagonal of this region contributed by $\text{sgn}(\phi+\theta)$, as seen in Figure 3. Regardless of the contributions from the discontinuity, which may persist for all parameter choices $\mu,{\bf x},{\bf y}$, we still see that $B_{-\mu}({\bf x},{\bf y})$ has the same bifurcation curves as $A_{-\mu}({\bf x},{\bf y})$ due to the similar structure of the integrand. We plot examples of these sums in Figure 4. Figure 4: Plots of sum (33).
(_Left_) ${\bf x}=(x,y,0,...)$, ${\bf y}=0$, $\mu=0.5$ (_Center_) ${\bf x}=(x,0,y,0,...)$, ${\bf y}=0$, $\mu=0.3$ (_Right_) ${\bf x}=(0,x,y,0,...)$, ${\bf y}=0$, $\mu=1.7$.

## 5 Conclusion

In this document we extended the Lerche-Newberger formula to incorporate higher dimensional analogues of the Bessel functions. Since these generalized Bessel functions are only well defined for integer order, we instead opted to use generalized Anger functions in the summation. While the one-dimensional Lerche-Newberger formula generally preserves the Bessel function structure, the Anger function analogue does not behave as nicely, since discontinuities are introduced in the equivalent integral expressions. Even for the relatively simple multi-dimensional example presented in the previous section, double integrals over discontinuous functions emerge that ostensibly cannot be reduced to well-known functions using symmetry relations. It is the hope of the authors that these results will contribute to future investigations of oscillatory quantum mechanical systems.

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

* [1] I. Lerche and R. Tautz, “Kapteyn series arising in radiation problems,” _Journal of Physics A: Mathematical and Theoretical_, vol. 41, no. 3, p. 035202, 2008.
* [2] S. Szapiel, “Maréchal intensity criteria modified for circular apertures with nonuniform intensity transmission: Dini series approach,” _Optics Letters_, vol. 2, no. 5, pp. 124–126, 1978.
* [3] G. Watson, _A Treatise on the Theory of Bessel Functions_. Cambridge University Press, 1922.
* [4] M. Abramowitz and I. A. Stegun, _Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables_. US Government Printing Office, 1964, vol. 55.
* [5] I. Lerche and R. C. Tautz, “A note on summation of Kapteyn series in astrophysical problems,” _The Astrophysical Journal_, vol. 665, no. 2, p.
1288, 2007.
* [6] V. V. Kravchenko and S. M. Torba, “Asymptotics with respect to the spectral parameter and Neumann series of Bessel functions for solutions of the one-dimensional Schrödinger equation,” _Journal of Mathematical Physics_, vol. 58, no. 12, p. 122107, 2017.
* [7] C. Linton, “Schlömilch series that arise in diffraction theory and their efficient computation,” _Journal of Physics A: Mathematical and General_, vol. 39, no. 13, p. 3325, 2006.
* [8] B. S. Newberger, “New sum rule for products of Bessel functions with application to plasma physics,” _Journal of Mathematical Physics_, vol. 23, no. 7, pp. 1278–1281, 1982.
* [9] O. Buneman, “Dissipation of currents in ionized media,” _Physical Review_, vol. 115, no. 3, p. 503, 1959.
* [10] I. Lerche, R. Schlickeiser, and R. Tautz, “Comment on “A new derivation of the plasma susceptibility tensor for a hot magnetized plasma without infinite sums of products of Bessel functions” [Phys. Plasmas 14, 092103 (2007)],” _Physics of Plasmas_, vol. 15, no. 2, p. 092103, 2008.
* [11] M. Silveri, J. Tuorila, E. Thuneberg, and G. Paraoanu, “Quantum systems under frequency modulation,” _Reports on Progress in Physics_, vol. 80, no. 5, p. 056002, 2017.
* [12] D. Berns, W. Oliver, S. Valenzuela, A. Shytov, K. Berggren, L. Levitov, and T. Orlando, “Coherent quasiclassical dynamics of a persistent current qubit,” _Physical Review Letters_, vol. 97, no. 15, p. 150502, 2006.
* [13] P. Kuklinski and D. A. Hague, “Identities and properties of multi-dimensional generalized Bessel functions,” _arXiv preprint arXiv:1908.11683_, 2019.
* [14] G. Dattoli and A. Torre, _Theory and Applications of Generalized Bessel Functions_. Arcane, 1996.
* [15] D. A. Hague, “Adaptive transmit waveform design using multitone sinusoidal frequency modulation,” _IEEE Transactions on Aerospace and Electronic Systems_, vol. 57, no. 2, pp. 1274–1287, 2021.
* [16] H. R. Reiss, “Absorption of light by light,” _Journal of Mathematical Physics_, vol.
3, no. 1, pp. 59–67, 1962. [Online]. Available: https://doi.org/10.1063/1.1703787
* [17] W. Paciorek and G. Chapuis, “Generalized Bessel functions in incommensurate structure analysis,” _Acta Crystallographica Section A: Foundations of Crystallography_, vol. 50, no. 2, pp. 194–203, 1994.
* [18] H. Korsch, A. Klumpp, and D. Witthaut, “On two-dimensional Bessel functions,” _Journal of Physics A: Mathematical and General_, vol. 39, no. 48, 2006.
* [19] M. Bakker and N. M. Temme, “Sum rule for products of Bessel functions: Comments on a paper by Newberger,” _Journal of Mathematical Physics_, vol. 25, no. 5, pp. 1266–1267, 1984.
* [20] I. Lerche, “A note on summing series of Bessel functions occurring in certain plasma astrophysical situations,” _The Astrophysical Journal_, vol. 190, pp. 165–166, 1974.
* [21] H. L. Royden and P. Fitzpatrick, _Real Analysis_. Macmillan New York, 1988, vol. 32.
* [22] T. M. Apostol _et al._, “On the Lerch zeta function,” _Pacific Journal of Mathematics_, vol. 1, no. 2, pp. 161–167, 1951.
* [23] H. Wilbraham, “On a certain periodic function,” _Cambridge and Dublin Mathematical Journal_, vol. 3, pp. 198–201, 1848.
* [24] E. M. Stein, _Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals_. Princeton University Press, 1993.
* [25] E. Hewitt and R. E. Hewitt, “The Gibbs-Wilbraham phenomenon: an episode in Fourier analysis,” _Archive for History of Exact Sciences_, vol. 21, no. 2, pp. 129–160, 1979.

## Appendix A Proof of Global Maximum Location

In this section we prove that there exists a constant $M\in\mathbb{N}$ such that for all $n>M$, (7) has a global maximum at $\theta_{n}=\pi-\pi/(2n+1)$. We prove this for even $n$ so we replace $n$ with $2n$, but the result can easily be extended to $n$ odd. To do this we replicate an argument from Hewitt and Hewitt [25].
Let $f(\theta)$ be the difference from (8) which we write here: $f(\theta)=\sum_{k=1}^{2n}\frac{(-1)^{k}}{k}\cos{k\theta}+\log\left(2\cos\frac{\theta}{2}\right)$ (34) Using trigonometric identities, we can prove that the derivative of this function satisfies the following: $f^{\prime}(\theta)=-\sin\left(\frac{4n+1}{2}\theta\right)/\left(2\cos\left(\frac{\theta}{2}\right)\right)$ (35) This gives us that the local extrema of $f$ satisfy $\theta_{k}=\frac{2\pi k}{4n+1}$, and by conducting a second derivative test we can conclude that $\theta_{2k}$ are the local maxima of $f$. We first will prove that $f(\theta_{2k+2})-f(\theta_{2k})>f(\theta_{2k})-f(\theta_{2k-2})$. To do this let us define a function $\omega$: $\displaystyle\omega(t)$ $\displaystyle=\left[f\left(\frac{4\pi(k+1/2)}{4n+1}+\frac{2t}{4n+1}\right)-f\left(\frac{4\pi(k-1/2)}{4n+1}+\frac{2t}{4n+1}\right)\right]$ $\displaystyle-\left[f\left(\frac{4\pi(k+1/2)}{4n+1}-\frac{2t}{4n+1}\right)-f\left(\frac{4\pi(k-1/2)}{4n+1}-\frac{2t}{4n+1}\right)\right]$ (36) Here, we restrict $1\leq k\leq n-1$. This function is of interest since if $\omega(\pi)>0$, then the inequality in question holds. Since $\omega(0)=0$, we need only show that $\omega^{\prime}(t)>0$ for $0\leq t\leq\pi$. 
Indeed, $\omega^{\prime}(t)$ takes the following form: $\displaystyle\omega^{\prime}(t)$ $\displaystyle=\frac{2\sin t}{4n+1}\left[\left(\frac{1}{\cos\left(\frac{2\pi(k+1/2)}{4n+1}+\frac{t}{4n+1}\right)}-\frac{1}{\cos\left(\frac{2\pi(k-1/2)}{4n+1}+\frac{t}{4n+1}\right)}\right)\right.$ $\displaystyle-\left.\left(\frac{1}{\cos\left(\frac{2\pi(k+1/2)}{4n+1}-\frac{t}{4n+1}\right)}-\frac{1}{\cos\left(\frac{2\pi(k-1/2)}{4n+1}-\frac{t}{4n+1}\right)}\right)\right]$ (37) Using the identity $1/\cos(a+b)-1/\cos(a-b)=4(\sin{a}\sin{b})/(\cos{2a}+\cos{2b})$, we can rewrite the function as follows: $\omega^{\prime}(t)=\frac{8\sin t\sin\left(\frac{\pi}{4n+1}\right)}{4n+1}\left[\frac{\sin\left(\frac{2\pi k+t}{4n+1}\right)}{\cos\left(\frac{4\pi k+2t}{4n+1}\right)+\cos\left(\frac{2\pi}{4n+1}\right)}-\frac{\sin\left(\frac{2\pi k-t}{4n+1}\right)}{\cos\left(\frac{4\pi k-2t}{4n+1}\right)+\cos\left(\frac{2\pi}{4n+1}\right)}\right]$ (38) After combining these two fractions and conducting some elementary trigonometric manipulations, we can further simplify this derivative: $\omega^{\prime}(t)=\frac{16\sin t\sin\left(\frac{\pi}{4n+1}\right)\sin\left(\frac{t}{4n+1}\right)\cos\left(\frac{2\pi k}{4n+1}\right)}{(4n+1)\left(\cos\left(\frac{4\pi k+2t}{4n+1}\right)+\cos\left(\frac{2\pi}{4n+1}\right)\right)}$ (39) Due to the restrictions on $t$ and $k$, all of the terms in this fraction are positive, and therefore the desired condition holds, namely that the difference between adjacent local maxima increases to the right. We ultimately want to show that $\theta_{2n}$ is the global maximum. This can be accomplished by proving $f(\theta_{2})>f(\theta_{0})$ and using induction with the above result. Though this result appears true for all $n$, due to the unwieldy nature of $f(\theta)$ we instead opt to prove it asymptotically: there exists some $M$ such that for all $n>M$ the inequality is satisfied.
We do this by considering the asymptotic expansion of $f(\theta_{2})-f(\theta_{0})$: $f(\theta_{2})-f(\theta_{0})=-2\sum_{k=1}^{2n}\frac{(-1)^{k}}{k}\sin^{2}\left(\frac{2\pi k}{4n+1}\right)+\log\left(\cos\frac{2\pi}{4n+1}\right)$ (40) If we can show that the leading term of the asymptotic expansion of (40) in $n$ is positive, then the proof follows. The logarithm term can be easily expanded using typical Taylor expansion arguments: $\log\left(\cos\frac{2\pi}{4n+1}\right)=\frac{1}{n^{2}}\left(-\frac{\pi^{2}}{8}\right)+\frac{1}{n^{3}}\left(\frac{\pi^{2}}{16}\right)+O(n^{-4})$ (41) To resolve the summation term, we manipulate it into an endpoint Riemann sum. Recall that for a smooth function $f(x)$, the following asymptotic form holds [wals37]: $\frac{1}{n}\sum_{k=1}^{n}f\left(\frac{k}{n}\right)=\int_{0}^{1}f(x)dx+\frac{1}{2n}\int_{0}^{1}f^{\prime}(x)dx+\frac{1}{12n^{2}}\int_{0}^{1}f^{\prime\prime}(x)dx+O(n^{-4})$ (42) We split the summation into positive and negative terms according to the $(-1)^{k}$ factor: $\sum_{k=1}^{2n}\frac{(-1)^{k}}{k}\sin^{2}\left(\frac{2\pi k}{4n+1}\right)=\sum_{k=1}^{n}\frac{1}{2k}\sin^{2}\left(\frac{4\pi k}{4n+1}\right)-\sum_{k=1}^{n}\frac{1}{2k-1}\sin^{2}\left(\frac{2\pi(2k-1)}{4n+1}\right)$ (43) We elaborate the asymptotic expansion of the even sum on the right hand side of (43); the odd sum is similar. By letting $f(x)=\sin^{2}(\pi x)/x$, we rewrite this sum as $\sum_{k=1}^{n}\frac{1}{2k}\sin^{2}\left(\frac{4\pi k}{4n+1}\right)=\frac{2}{4n+1}\sum_{k=1}^{n}f\left(\frac{4k}{4n+1}\right)$ (44) The remainder of this proof is an arduous calculation of these sums to the required order in large $n$, which we outline here. First, we expand the argument of $f$, $4k/(4n+1)$, being careful to recognize that $k$ is of the same order as $n$, so that $k/n=O(1)$. Next, we conduct a Taylor expansion of $f$ at $k/n$, discarding higher order terms, so that we are left with terms that look like $(k/n)^{j}f^{(j)}(k/n)$.
By considering these as the summands of a Riemann endpoint sum of the function $g(x)=x^{j}f^{(j)}(x)$, we use (42) to further expand these sums into a collection of integrals. These integrals have closed form expressions, and we can finish the problem off by multiplying the expansion of the sum on the right hand side of (44) by the expansion of the factor $2/(4n+1)$. This gives us the asymptotic expansions: $\sum_{k=1}^{n}\frac{1}{2k}\sin^{2}\left(\frac{4\pi k}{4n+1}\right)=\frac{\pi x}{2}-\frac{1}{n^{2}}\left(\frac{\pi^{2}}{24}\right)+\frac{1}{n^{3}}\left(\frac{5\pi^{2}}{384}\right)+O(n^{-4})$ (45) $\sum_{k=1}^{n}\frac{1}{2k-1}\sin^{2}\left(\frac{2\pi(2k-1)}{4n+1}\right)=\frac{\pi x}{2}+\frac{1}{n^{2}}\left(\frac{\pi^{2}}{48}\right)-\frac{1}{n^{3}}\left(\frac{\pi^{2}}{384}\right)+O(n^{-4})$ (46) Here, $x=(-\text{Ci}(2\pi)+\gamma+\log(2\pi))/2\pi$. Combining the expansions from (45) and (46) with the expansion of the logarithm in (41), we arrive at the final result, namely $f(\theta_{2})-f(\theta_{0})=\frac{\pi^{2}}{32n^{3}}+O(n^{-4})$ (47) and since the leading quantity is positive, the rest of the proof follows.
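The leading-order result (47) can be probed numerically by evaluating the exact difference (40) and checking its cubic decay (a sketch using only the standard library; the choices $n=200$ and $n=400$ are arbitrary):

```python
import math

def D(n):
    # f(theta_2) - f(theta_0) as written in (40), with theta_2 = 4*pi/(4n+1)
    s = sum((-1) ** k / k * math.sin(2 * math.pi * k / (4 * n + 1)) ** 2
            for k in range(1, 2 * n + 1))
    return -2 * s + math.log(math.cos(2 * math.pi / (4 * n + 1)))

d1, d2 = D(200), D(400)
print(d1, d2, d1 / d2)  # the ratio should approach 8 if the decay is cubic
```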
# Noninvasive Estimation of Mean Pulmonary Artery Pressure Using MRI, Computer Models, and Machine Learning

Michal K. Grzeszczyk1 (0000-0002-5304-1020), Tadeusz Satława1, Angela Lungu2 (0000-0002-4531-2791), Andrew Swift3, Andrew Narracott3,4 (0000-0002-3068-6192), Rod Hose3, Tomasz Trzcinski5,6,7 (0000-0002-1486-8906), Arkadiusz Sitek1 (0000-0002-0677-4002)

1 Sano Centre for Computational Medicine, Cracow, Poland; 2 Technical University of Cluj-Napoca, Cluj-Napoca, Romania; 3 The University of Sheffield, Sheffield, United Kingdom; 4 Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, UK; 5 Warsaw University of Technology; 6 Tooploox; 7 Jagiellonian University of Cracow

###### Abstract

Pulmonary Hypertension (PH) is a severe disease characterized by an elevated pulmonary artery pressure. The gold standard for PH diagnosis is measurement of mean Pulmonary Artery Pressure (mPAP) during an invasive Right Heart Catheterization. In this paper, we investigate a noninvasive approach to PH detection utilizing Magnetic Resonance Imaging, Computer Models and Machine Learning. We show, using an ablation study, that physics-informed feature engineering based on models of blood circulation increases the performance of Gradient Boosting Decision Trees-based algorithms for classification of PH and regression of values of mPAP. We compare the results of regression (with thresholding of estimated mPAP) and classification and demonstrate that the metrics achieved in both experiments are comparable. The predicted mPAP values are more informative to physicians than the probability of PH returned by classification models; they provide an intuitive explanation of the outcome of the machine learning model (clinicians are accustomed to the mPAP metric, in contrast to a PH probability).
###### Keywords: Pulmonary Hypertension $\cdot$ Regression $\cdot$ Gradient Boosting Decision Trees $\cdot$ Mathematical Modelling

## 1 Introduction

Pulmonary Hypertension is a severe disease that is difficult to diagnose, with multiple possible root causes [6]. For many years, PH was identified if the mean Pulmonary Artery Pressure (mPAP) of a patient at rest was equal to or above 25 mmHg. Recently, it has been suggested to lower the threshold to 20 mmHg [19]. The precise measurement of mPAP is non-trivial and requires an invasive Right Heart Catheterization (RHC) – the gold standard for diagnosing PH. This procedure carries risks; it requires patient preparation, trained staff, and highly specialized equipment; and it is expensive and time consuming. To lower the probability of complications it has to be performed at a specialized facility [5]. Non-invasive estimation of mPAP using medical imaging, mathematical modeling, and machine learning (ML) is an option to avoid the issues related to RHC. Mathematical models, such as a Windkessel model, allow estimation of vascular system parameters [23]. Different ML algorithms enable extracting knowledge from data samples, and their performance usually increases with the addition of features from multiple domains. In this paper, we present methods based on Gradient Boosting Decision Trees (GBDT) for non-invasive PH diagnosis. We use classic GBDT; DART (Dropouts meet Multiple Additive Regression Trees) [22] – a method utilizing dropouts of random trees during training; and GOSS (Gradient-based One-Side Sampling) [10] – a technique that modifies the GBDT training process by retaining samples with large gradients and randomly dropping those with small gradients. We conduct our analysis on data from a 352-patient cohort and perform two tasks: classification of PH and regression of mPAP. As predictors, we use demographic features, measurements derived from Magnetic Resonance Imaging (MRI), and features obtained from 0D and 1D mathematical models [15].
Our main contribution is the demonstration, through an ablation study, that physics-informed feature engineering based on mathematical models of blood circulation increases the performance of ML algorithms for PH classification and mPAP regression. Another significant contribution of this paper is a comparison of the utility of the classification and regression approaches for the detection of PH. While the regression achieves similar classification metrics (after thresholding of the estimated mPAP), the predicted mPAP values are more informative to physicians than the probability of PH returned by classification models. As such, they provide an intuitive explanation of the outcome of the machine learning model (clinicians are accustomed to the mPAP metric, in contrast to a PH probability).

## 2 Related work

Multiple ML algorithms (utilizing features from various modalities like echocardiography, Computed Tomography (CT), or MRI) have been applied to PH classification. In [14], five ML models were used and compared with each other: Boosted Classification Trees, Lasso Penalized Logistic Regression (LPLR), Random Forest (RF) for Regression, RF for Classification, and Support Vector Machines (SVM) were adopted for mPAP prediction or PH classification based on echocardiographic measurements and basic patient characteristics (age, sex, BMI, body surface area). In [26], echocardiographic data was used to distinguish between pre- and post-capillary PH with one of nine tested ML models (SVM, AdaBoost, LR, RF, Decision Trees (DT), K-Nearest Neighbours, GBDT, LogitBoost and Linear Discriminant Analysis (LDA)). In [7], measurements derived from CT were used to train six ML classifiers to evaluate the probability of mPAP higher than 15 mmHg. Another approach was to record the heart sounds with a digital stethoscope to gather parameters for PH classification using LDA [2].
The analysis of the sounds revealed specific patterns in PH patients. In [1], it was noted that sounds collected by phonocardiogram can be applied to binary classification of PH with SVM. In [16], it was shown that MRI measurements combined with parameters from 0D and 1D computational models can be successfully used for classification of PH and non-PH patients with DT. In our approach, we study the impact of the mathematical model parameters on classification and regression. We also show the comparable performance of PH diagnosis with GBDT-based models in both tasks. With the rise of Deep Learning (DL), multiple approaches to detecting PH directly from images, videos, or electrocardiography (ECG) signals have been investigated. For example, chest X-Ray images can be utilized for binary classification of potential PH patients using a Capsule Network with residual blocks [12]. In [27], three popular DL networks (ResNet50, Xception and Inception V3) were trained as predictors of PH. As shown in [13], an ensemble neural network can serve as a screening tool for PH from a 12-lead ECG signal. ML can also be utilized to identify patients at risk of having Pulmonary Arterial Hypertension (PAH) from clinical records. In [11], it was shown that GBDT can help in screening for PAH based on patients' medical history. ML-based tools have also been developed for the purpose of blood pressure estimation: in [25], Support Vector Machine Regression (SVR) models were applied to the prediction of a patient's blood pressure from physiological data. Another example is an application of a Multilayer Perceptron (MLP) for regression of systolic blood pressure using basic knowledge about patients (BMI, age, habits, etc.) [24].

## 3 Methods

In this section, we describe our approaches to noninvasive PH diagnosis. We present the details of our dataset and introduce the mathematical models which enabled the acquisition of physics-informed features.
Finally, we train GBDT-based models on multiple feature sets to perform mPAP regression and PH classification experiments.

### 3.1 PH dataset

Table 1: PH dataset with patient related data, parameters derived from 0D and 1D models and measurements from MRI imaging. In the appendix (section 8) we provide explanations for the feature names. P-value tests a null hypothesis that the coefficient of the univariate linear regression between a feature and mPAP is equal to zero.

| feature | No PH cnt | No PH mean | No PH std | PH cnt | PH mean | PH std | p-value |
|---|---|---|---|---|---|---|---|
| mPAP, mmHg | 66 | 19.67 | 3.34 | 286 | 46.95 | 13.08 | |
| _Demographics_ | | | | | | | |
| age, years | 66 | 56.61 | 13.78 | 286 | 61.69 | 14.24 | 0.242 |
| gender, female/male | 66 | 43/23 | | 286 | 173/113 | | 0.549 |
| who, no. | 56 | 2.52 | 0.54 | 285 | 3.04 | 0.44 | $<0.001$ |
| bsa, $m^{2}$ | 65 | 1.88 | 0.25 | 286 | 1.82 | 0.22 | 0.24 |
| _0D and 1D models_ | | | | | | | |
| Rd, $kg/m^{4}s$ | 66 | 6.08E+07 | 4.94E+07 | 286 | 1.46E+08 | 2.53E+08 | $<0.001$ |
| Rc, $kg/m^{4}s$ | 66 | 7.94E+06 | 7.80E+06 | 286 | 9.17E+06 | 1.87E+07 | 0.072 |
| C, $m^{4}s^{2}/kg$ | 66 | 9.92E-09 | 6.71E-09 | 284 | 3.94E-04 | 6.65E-03 | 0.669 |
| Rtot, $kg/m^{4}s$ | 66 | 6.83E+07 | 5.38E+07 | 286 | 1.56E+08 | 2.62E+08 | $<0.001$ |
| Wb/Wtot | 66 | 0.24 | 0.10 | 286 | 0.39 | 0.11 | $<0.001$ |
| _MRI_ | | | | | | | |
| rac_fiesta, % | 66 | 26.39 | 15.43 | 286 | 13.68 | 8.93 | $<0.001$ |
| syst_area_fiesta, $cm^{2}$ | 66 | 7.62 | 2.17 | 286 | 9.78 | 2.78 | $<0.001$ |
| diast_area_fiesta, $cm^{2}$ | 66 | 6.08 | 1.71 | 286 | 8.66 | 2.57 | $<0.001$ |
| rvedv, $mL$ | 66 | 118.93 | 36.00 | 286 | 159.58 | 58.27 | $<0.001$ |
| rvedv_index, $mL/m^{2}$ | 66 | 53.78 | 21.83 | 286 | 73.92 | 39.39 | $<0.001$ |
| rvesv, $mL$ | 66 | 55.41 | 20.68 | 286 | 102.48 | 49.92 | $<0.001$ |
| rvesv_index, $mL/m^{2}$ | 66 | 24.64 | 10.84 | 286 | 47.63 | 30.19 | $<0.001$ |
| rvef, % | 66 | 53.32 | 9.86 | 286 | 38.05 | 13.59 | $<0.001$ |
| rvsv, $mL$ | 66 | 63.52 | 22.61 | 286 | 57.15 | 23.39 | 0.026 |
| rvsv_index, $mL/m^{2}$ | 66 | 29.14 | 13.90 | 286 | 26.32 | 15.02 | 0.292 |
| lvedv, $mL$ | 66 | 116.57 | 33.09 | 286 | 91.30 | 27.33 | $<0.001$ |
| lvedv_index, $mL/m^{2}$ | 66 | 53.16 | 21.90 | 286 | 41.25 | 19.20 | $<0.001$ |
| lvesv, $mL$ | 66 | 34.27 | 15.66 | 286 | 31.32 | 14.56 | 0.23 |
| lvesv_index, $mL/m^{2}$ | 66 | 16.85 | 16.81 | 286 | 14.01 | 8.18 | 0.194 |
| lvef, % | 66 | 71.13 | 8.54 | 286 | 65.81 | 10.92 | $<0.001$ |
| lvsv, $mL$ | 66 | 82.30 | 23.30 | 286 | 59.97 | 19.93 | $<0.001$ |
| lvsv_index, $mL/m^{2}$ | 66 | 38.07 | 16.20 | 286 | 27.20 | 13.51 | $<0.001$ |
| rv_dia_mass, $g$ | 66 | 22.62 | 6.80 | 283 | 44.48 | 25.47 | $<0.001$ |
| lv_dia_mass, $g$ | 66 | 91.47 | 27.71 | 286 | 90.64 | 24.98 | 0.436 |
| lv_syst_mass, $g$ | 66 | 111.74 | 32.17 | 286 | 99.83 | 26.39 | $<0.001$ |
| rv_mass_index, $g/m^{2}$ | 66 | 10.44 | 4.94 | 285 | 20.94 | 15.09 | $<0.001$ |
| lv_mass_index, $g/m^{2}$ | 59 | 40.90 | 17.87 | 243 | 39.84 | 18.99 | 0.442 |
| sept_angle_syst, degrees | 66 | 139.95 | 11.68 | 286 | 172.51 | 22.11 | $<0.001$ |
| sept_angle_diast, degrees | 66 | 134.21 | 8.28 | 286 | 145.01 | 11.93 | $<0.001$ |
| 4ch_la_area, $mm^{2}$ | 66 | 1921.95 | 387.56 | 286 | 1785.95 | 556.53 | $<0.001$ |
| 4ch_la_length, $mm^{2}$ | 66 | 55.76 | 7.86 | 286 | 55.62 | 8.60 | 0.412 |
| 2ch_la_area, $mm^{2}$ | 66 | 1764.62 | 496.75 | 286 | 1901.67 | 545.35 | 0.855 |
| 2ch_la_length, $mm^{2}$ | 66 | 48.66 | 9.08 | 286 | 52.12 | 9.33 | 0.166 |
| la_volume, $mL$ | 66 | 55.22 | 17.96 | 286 | 54.16 | 25.36 | 0.005 |
| la_volume_index, $mL/m^{2}$ | 66 | 24.95 | 10.14 | 286 | 23.24 | 10.45 | 0.042 |
| ao_qflowpos, $L/min$ | 65 | 6.09 | 1.50 | 285 | 5.29 | 1.50 | $<0.001$ |
| ao_qfp_ind, $L/min/m^{2}$ | 65 | 2.79 | 1.18 | 285 | 2.44 | 1.15 | 0.003 |
| pa_qflowpos, $L/min$ | 66 | 5.50 | 1.84 | 284 | 5.00 | 1.97 | 0.006 |
| pa_qflowneg, $L/min$ | 66 | 0.62 | 0.59 | 285 | 1.07 | 0.83 | $<0.001$ |
| pa_qfn_ind, $L/min/m^{2}$ | 66 | 9.70 | 7.19 | 284 | 17.49 | 9.85 | $<0.001$ |
| systolic_area_pc, $mm^{2}$ | 66 | 731.05 | 236.42 | 284 | 950.17 | 268.98 | $<0.001$ |
| diastolic_area_pc, $mm^{2}$ | 66 | 619.82 | 162.71 | 284 | 866.42 | 244.57 | $<0.001$ |
| rac_pc, % | 66 | 17.02 | 13.70 | 284 | 10.01 | 8.14 | $<0.001$ |

Table 1 presents the available features of patients who were suspected of PH and underwent MRI and RHC within 48 hours. The medical procedures were performed at the Sheffield Pulmonary Vascular Disease Unit. The RHC procedure was conducted with a balloon-tipped 7.5-Fr thermodilution catheter. PH was defined as measured mPAP $\geq$ 25 mmHg. Using this criterion, 286 out of the cohort of 352 patients were diagnosed with PH. Of the 286 patients with PH, 142 had Pulmonary Arterial Hypertension, 86 had Chronic Thromboembolic PH, 35 PH cases were due to lung diseases (e.g. Chronic Obstructive Pulmonary Disease), and 15 cases were associated with left heart disease. The cause of PH in the remaining patients was either multifactorial or unknown. All of the available data samples are part of the ASPIRE Registry (Assessing the Severity of Pulmonary Hypertension In a Pulmonary Hypertension REferral Centre) [8]. MRI images were captured with a 1.5-tesla whole-body scanner (GE HDx, GE Healthcare, Milwaukee) with an 8-channel cardiac coil. The images were acquired in the supine position during a breath hold. The balanced steady state free precession (bSSFP) sequences were spatially and temporally synchronized with the 2D phase contrast (PC) images of the Main Pulmonary Artery (MPA) using cardiac gating. Short-axis and four-chamber cardiac images were also collected. The features from MRI were obtained as in [21]. The area of the MPA, A(t), was extracted from the semi-automatically segmented bSSFP images. The blood flow through the MPA, Q(t), was extracted from the segmented areas overlaid on the PC images. Using those measurements, 0D- and 1D-model features were derived. To prepare the feature dataset for training the ML models, we fill missing values using linear interpolation. We encode categorical features as numerical values and scale all the features to have means of 0 and variances of 1.
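The imputation and scaling steps above can be sketched as follows (a minimal illustration on a synthetic feature matrix, assuming NumPy; categorical encoding is omitted and the real ASPIRE data is not reproduced):

```python
import numpy as np

def impute_linear(col):
    # Fill NaNs in one feature column by linear interpolation over sample index
    idx = np.arange(col.size)
    mask = np.isnan(col)
    out = col.copy()
    out[mask] = np.interp(idx[mask], idx[~mask], col[~mask])
    return out

def preprocess(X):
    # Column-wise imputation, then standardization to mean 0 and variance 1
    X = np.apply_along_axis(impute_linear, 0, X)
    return (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
X[2, 1] = np.nan  # a missing measurement
Z = preprocess(X)
```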
### 3.2 Features derived from models of blood circulation

The cardiovascular system (CVS) is a closed circuit whose main purpose is to transport oxygenated blood to organs and tissues [17]. It consists primarily of the heart, the blood, and the vessels. One of the main components of the CVS is the pulmonary circulation, whose role is to carry deoxygenated blood from the right ventricle through the MPA and other arteries to the lungs, and to deliver oxygenated blood to the left ventricle [9]. Since the CVS can be described by its haemodynamics and by the structure of the heart and vessels, computational models based on simplified representations of the CVS have been introduced [18]. These models range from 0D models simulating global haemodynamics (e.g., the resistance and compliance of the system) to 3D models representing the complex behaviour of vessels and blood flow over time. In [15], two models (0D and 1D) based on MRI measurements were proposed for the diagnosis of PH.

#### 3.2.1 0D model.

0D models are often based on the hydraulic-electrical analogue: blood flow and electrical circuits have many computational similarities [18]. For example, friction in a vessel can be identified with a resistance R, blood pressure with voltage, and flow rate with current. Thus, by applying electrical laws (e.g., Kirchhoff's laws, Ohm's law), a simplified representation of the CVS can be obtained. 0D modelling of the CVS started with the implementation of the two-element Windkessel model [23]. Different variants of this model have appeared in the literature, and it has been applied to simulate the pulmonary circulation [4, 20]. The 3-element (RcCRd) Windkessel model comprises a capacitor C, characterizing the compliance of the pulmonary circulation, and two resistors Rc and Rd, representing the resistance proximal and distal to the capacitor, respectively. In [15], the RcCRd model was applied to capture the characteristics of PH and non-PH patients.
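As a rough illustration of the RcCRd model, the sketch below integrates the standard 3-element Windkessel relation, C dp_c/dt = Q(t) − p_c/Rd with p(t) = Rc·Q(t) + p_c(t), by forward Euler. The parameter values and flow waveform are made up for illustration, not fitted patient data:

```python
import math

# Illustrative forward-Euler integration of the 3-element (RcCRd) Windkessel
# model: given a flow waveform Q(t), predict pressure p(t).
# Equations:  C dp_c/dt = Q - p_c/Rd,   p = Rc*Q + p_c.
def windkessel_pressure(q, dt, rc, rd, c, p_c0=0.0):
    p_c = p_c0
    pressures = []
    for q_t in q:
        pressures.append(rc * q_t + p_c)
        p_c += dt * (q_t - p_c / rd) / c   # Euler step for the capacitor state
    return pressures

# Made-up input: half-sinusoid systolic flow pulse, then zero diastolic flow.
dt = 0.01
flow = [math.sin(math.pi * t / 0.3) if t < 0.3 else 0.0
        for t in (i * dt for i in range(100))]
p = windkessel_pressure(flow, dt, rc=0.05, rd=1.0, c=2.0)
```

Fitting Rc, Rd, and C for a patient then amounts to minimizing the mismatch between this predicted p(t) and the pressure derived from the MRI measurements.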
In this model, the sum of the two resistances can be interpreted as the ratio between mean pressure and mean flow (the pulmonary vascular resistance, PVR), while C indicates the compliance of the pulmonary arteries. To optimize the parameters of the 0D model for a specific patient, two MRI techniques imaging the MPA were used: PC and bSSFP. The bSSFP images were segmented to find the area of the MPA, A(t), over time. Then, the segmented regions were overlaid on the PC images to capture the blood flow through the MPA, Q(t). Given Q(t) and the pressure p(t) (derived from the measured MPA radius), the Windkessel parameters best describing the relationship between Q(t) and p(t) over time could be estimated.

#### 3.2.2 1D model.

A simplified representation of the pulmonary vasculature is a network of elastic tubes with numerous branches. 1D models typically analyse the propagation of pressure and flow waves in such structures. The 1D equations for waves travelling through elastic tubes are derived from the Navier-Stokes equations. In [15], an analysis of the power of the pressure waves was performed. The pressure wave was decomposed into forward- and backward-travelling components (since vessels are rough and tortuous, some waves bounce off the vessel walls and travel backward). It was hypothesized, and confirmed, that the power of the backward wave relative to the total wave power is substantially higher in PH cases than in healthy ones. Because diseased pulmonary vasculature contains more deposits and stenoses, the ratio of backward wave power to total wave power (denoted Wb/Wtot) is higher than in healthy vasculature.

### 3.3 Machine Learning for PH detection

#### 3.3.1 mPAP regression.

For clinicians, the decision of whether a patient is suffering from PH matters more than the exact value of mPAP. However, a non-invasive prediction of PH occurrence together with a predicted mPAP value is more informative to clinicians.
Therefore, we decide to conduct two experiments: mPAP regression and PH classification. To find the best ML algorithm for mPAP regression, we train three models based on GBDT: classic GBDT, DART, and GOSS. We use the mPAP feature as the ground truth for our models. We find the best hyperparameters using Bayesian optimization with 8-fold cross-validation (CV), running 200 iterations with Mean Squared Error (MSE) minimization as the optimization target. Then, using the best-found parameters, we train the models with leave-one-out cross-validation (LOOCV) and MSE as the objective function. We report MSE, Root MSE (RMSE), and Mean Absolute Error (MAE) as the regression metrics. We assume that mPAP $\geq$ 25 mmHg constitutes a positive PH diagnosis. With this assumption, we compute binary classification metrics after thresholding the predicted and measured mPAP at 25 mmHg: accuracy, sensitivity, specificity, True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN). To compare the impact of the different feature sets (demographics, MRI, mathematical models) on the results, we repeat the procedure of hyperparameter optimization, LOOCV training, and metric collection for different combinations of features, and compare the results of all approaches. Additionally, we train four non-boosted-tree ML models (MLP, SVR, AdaBoost, and RF) on all features and compare their LOOCV-derived metrics with those of the GBDT-based methods.

#### 3.3.2 PH classification.

We conduct binary PH classification similarly to mPAP regression. We binarize the mPAP feature at the 25 mmHg threshold and train three GBDT-based models on different combinations of feature sets, first optimizing the hyperparameters using Bayesian optimization. The optimization is run for 200 iterations with 8-fold stratified CV to ensure a similar distribution of positive and negative samples in each fold.
The optimization goal is to maximize the area under the receiver operating characteristic (ROC) curve. We train GBDT, DART, and GOSS with the best-found parameters using LOOCV and calculate binary classification metrics: area under the ROC curve (AUC), sensitivity, specificity, accuracy, TP, FP, TN, and FN. To compute the binary classification metrics, we use multiple thresholding strategies: youden (maximization of sensitivity + specificity), f1 (maximization of the F1 metric, the harmonic mean of precision and recall), closest01 (the point on the ROC curve closest to the point (0,1)), and concordance (maximization of the product of sensitivity and specificity).

## 4 Results

In this section, we present the results of our experiments. Through an ablation study, we analyze the impact of the different feature sets on model performance and compare the metrics achieved by the regression and classification models. In our case, the ablation study means removing feature sets before training to understand their contribution to the overall performance of the ML models. We also show that the regression models can be used as a tool for PH classification.

### 4.1 mPAP regression

Table 2: Results of mPAP value regression with LOOCV. Models trained on demographics, MRI-derived features, and 0D and 1D model parameters. The p-value is calculated based on MAE against the DART model.
Method | MAE | RMSE | MSE | $R^{2}$ | sensitivity | specificity | accuracy | p-value
---|---|---|---|---|---|---|---|---
MLP | 7.71 | 10.37 | 107.50 | 0.58 | 0.93 | 0.55 | 0.86 | $<0.001$
SVR | 7.29 | 9.39 | 88.14 | 0.65 | 0.95 | 0.55 | 0.88 | $<0.001$
AdaBoost | 6.92 | 8.92 | 79.59 | 0.69 | 0.97 | 0.41 | 0.87 | $<0.001$
RandomForest | 6.55 | 8.64 | 74.59 | 0.71 | 0.95 | 0.56 | 0.88 | 0.003
GOSS | 6.44 | 8.38 | 70.22 | 0.72 | 0.96 | 0.67 | 0.90 | $<0.001$
GBDT | 5.95 | 7.91 | 62.55 | 0.75 | 0.96 | 0.74 | 0.92 | 0.93
DART | 5.94 | 7.85 | 61.66 | 0.76 | 0.95 | 0.74 | 0.91 |

Table 3: Ablation study over the combinations of available feature sets (demographics, MRI, 0D and 1D models). The p-value is calculated against models trained on all features.

demographics | ✓ | | | ✓ | | ✓ | ✓
---|---|---|---|---|---|---|---
0D and 1D models | | ✓ | | ✓ | ✓ | | ✓
MRI | | | ✓ | | ✓ | ✓ | ✓
regression (MAE): GOSS | 11.09 | 9.16 | 6.93 | 8.44 | 6.77 | 6.51 | 6.44
p-value | $<0.001$ | $<0.001$ | 0.007 | $<0.001$ | 0.012 | 0.645 |
GBDT | 10.85 | 9.14 | 6.69 | 8.33 | 6.49 | 6.34 | 5.95
p-value | $<0.001$ | $<0.001$ | $<0.001$ | $<0.001$ | $<0.001$ | 0.012 |
DART | 11.01 | 9.35 | 6.76 | 8.43 | 6.20 | 6.20 | 5.94
p-value | $<0.001$ | $<0.001$ | $<0.001$ | $<0.001$ | 0.058 | 0.083 |
classification (AUC): GOSS | 0.74 | 0.87 | 0.91 | 0.89 | 0.95 | 0.93 | 0.95
p-value | $<0.001$ | $<0.001$ | $<0.001$ | $<0.001$ | 0.117 | $<0.001$ |
GBDT | 0.77 | 0.85 | 0.93 | 0.88 | 0.94 | 0.93 | 0.94
p-value | $<0.001$ | $<0.001$ | 0.593 | $<0.001$ | 0.147 | 0.017 |
DART | 0.79 | 0.85 | 0.93 | 0.88 | 0.95 | 0.93 | 0.95
p-value | $<0.001$ | $<0.001$ | 0.005 | $<0.001$ | 0.028 | $<0.001$ |

Table 2 presents the results of the regression experiments. The lowest regression errors are achieved by DART: MAE=5.94, RMSE=7.85, and MSE=61.66. GBDT has marginally better classification metrics, with sensitivity=0.96 (DART, 0.95), specificity=0.74 (DART, 0.74), and accuracy=0.92 (DART, 0.91).
The difference between the DART and GBDT results is not statistically significant (p-value=0.93). Additionally, the GBDT-based methods outperform the other tested ML algorithms (RF, AdaBoost, SVR, and MLP), with RF achieving the lowest MAE (6.55) among those compared methods (p-value=0.003). Table 3 shows the results of the ablation study. For all models, MAE drops when combinations of feature sets (demographics, mathematical models, MRI) are used, as opposed to a single feature set. The lowest MAE with a single feature set is achieved with the MRI-derived measurements (GOSS, 6.93; GBDT, 6.69; DART, 6.76). However, the combination of all available feature sets yields the best performance (GOSS, 6.44; GBDT, 5.95; DART, 5.94). The physics-informed feature engineering performed by adding the 0D and 1D model parameters improves the regression metrics. The relations between predicted and measured mPAP values are shown in Figure 1. The addition of the mathematical-model features decreases the number of FP and FN (calculated with the 25 mmHg threshold), even though the models were trained on MSE, which is a regression objective function. Only 17 predictions are FP and 11 are FN for GBDT (DART: 17 FP, 14 FN). For GOSS, GBDT, and DART, only one FP sample was predicted as having an mPAP higher than 40 mmHg. The value measured for this patient during RHC was 24 mmHg, which, by current indicators, means a PH-positive patient. For mPAP above 45 mmHg, all samples are predicted as positive, indicating high model confidence above that value. All false negative samples have predicted values above 20 mmHg. Figure 1: Measured vs. predicted values of mPAP with $\geq 25$ mmHg thresholding for models trained on all parameters: GOSS (top, $R^{2}$=0.72, p-value $<0.001$), GBDT (middle, $R^{2}$=0.75, p-value $<0.001$), DART (bottom, $R^{2}$=0.76, p-value $<0.001$).
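The conversion of regression outputs into the classification metrics reported in this section can be sketched as follows; the mPAP values in the example are invented for illustration:

```python
# Sketch of turning regression outputs into classification metrics: threshold
# both measured and predicted mPAP at 25 mmHg, then count TP/FP/TN/FN.
def threshold_metrics(measured, predicted, cutoff=25.0):
    tp = fp = tn = fn = 0
    for m, p in zip(measured, predicted):
        actual, pred = m >= cutoff, p >= cutoff
        if actual and pred:
            tp += 1
        elif not actual and pred:
            fp += 1
        elif not actual and not pred:
            tn += 1
        else:
            fn += 1
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(measured)
    return {"sens": sensitivity, "spec": specificity, "acc": accuracy,
            "tp": tp, "fp": fp, "tn": tn, "fn": fn}

# Invented example: five patients, one false positive (measured 22, predicted 27).
m = threshold_metrics(measured=[20, 30, 40, 22, 50], predicted=[18, 33, 38, 27, 46])
# -> sens = 3/3, spec = 1/2, acc = 4/5
```

Because the threshold is applied only after prediction, the same regression model can be re-scored under a different PH definition without retraining.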
### 4.2 PH classification

In the PH classification experiments, the impact of the 0D and 1D model parameters is also significant (Table 3). With a single feature set, the highest AUC is achieved by models trained on the MRI-derived parameters (GOSS, 0.91; DART, 0.93; GBDT, 0.93). The addition of the features from the mathematical models improves performance and reaches the same AUC as the models trained on all parameters (GOSS, 0.95; GBDT, 0.94; DART, 0.95). Table 4 shows the detailed results of the PH classification models trained on all features. The highest AUC is achieved by the GOSS and DART models. These models have the highest specificity (GOSS, 0.94; GBDT, 0.95; DART, 0.95) when their predicted probabilities are thresholded with the youden or concordance strategies. However, PH classification is a task in which we would like to detect as many positive patients as possible (maximizing sensitivity) while retaining reasonably high specificity (the percentage of correctly stating that no PH is present). This trade-off is most closely achieved by maximizing the F1 metric as the thresholding strategy. With this strategy, the DART predictions yield the best metrics, with a sensitivity of 0.95, a specificity of 0.80, and an accuracy of 0.92. These results are comparable with the best regression metrics (sensitivity=0.96, specificity=0.74, and accuracy=0.92). The FN had mPAP close to 25 mmHg (with a maximum of 33 mmHg) and relatively small PVR, meaning that no severe PH case was misclassified. Half of the FP had mPAP higher than 20 mmHg. The ROC curves for the three models are presented in Figure 2. Table 4: Results of PH classification with LOOCV. Models trained on demographics, MRI-derived features, and 0D and 1D model parameters. The metrics sens (sensitivity), spec (specificity), and acc (accuracy) are given for multiple thresholding strategies: youden, concordance, 01 (closest01), and f1 (maximizing the F1 metric).
Method | AUC | youden sens | youden spec | youden acc | conc. sens | conc. spec | conc. acc | 01 sens | 01 spec | 01 acc | f1 sens | f1 spec | f1 acc
---|---|---|---|---|---|---|---|---|---|---|---|---|---
GOSS | 0.95 | 0.88 | 0.94 | 0.88 | 0.88 | 0.94 | 0.88 | 0.88 | 0.92 | 0.89 | 0.97 | 0.68 | 0.91
GBDT | 0.94 | 0.84 | 0.95 | 0.86 | 0.84 | 0.95 | 0.86 | 0.84 | 0.95 | 0.86 | 0.94 | 0.76 | 0.91
DART | 0.95 | 0.85 | 0.95 | 0.87 | 0.85 | 0.95 | 0.87 | 0.87 | 0.92 | 0.88 | 0.95 | 0.80 | 0.92

Figure 2: ROC curves for the GBDT-based classification models trained on all features: GOSS (left), GBDT (middle), DART (right).

## 5 Discussion

The noninvasive assessment of mPAP is a difficult task. In a clinical setting, the pressure is measured through invasive RHC. The models presented in this paper enable the prediction of mPAP in a noninvasive way, using information about the patients, measurements derived from multiple MRI images, and mathematical models. The combination of all features, acquired from different domains, brings the best results. The physics-informed feature engineering improves the assessment of mPAP: modelling the MPA haemodynamics enables the quantification of physiological markers that enhance the quality of the predictions. While MRI is not widely used in PH diagnosis, we showed that it can be utilized for accurate, noninvasive mPAP estimation. Moreover, as our knowledge about the disease progresses, the thresholds and definitions of PH may change. Our regression models are not restricted to the 25 mmHg threshold set before training. Depending on the current and future state of PH classification, the predicted mPAP can be interpreted in different ways. Classification models return the probability of a patient having PH, and this probability depends on the assumed threshold for PH. In this setting, the regression models are more flexible and can be used as additional information about the patient's state to help determine the final diagnosis, even if the definition of PH changes.
If the regression model is used for classification only, the predicted mPAP serves as an explanation of the diagnosis. As shown in Figure 1, confidence in a positive PH diagnosis can grow as the predicted mPAP gets higher: above a predicted value of 45 mmHg, all patients were diagnosed with PH. All positive samples that were misclassified as negative have a predicted mPAP over 20 mmHg, which can be considered elevated. In other words, we have not observed any critical failures of our models. Nevertheless, clinicians are mostly interested in the final diagnosis of the ML models. We show that the classification models achieve metrics similar to those of the regression models: sensitivity=0.95, specificity=0.80, and accuracy=0.92 achieved by DART for classification, in comparison to sensitivity=0.96, specificity=0.74, and accuracy=0.92 achieved by GBDT for regression. It is important to note that the impact of the mathematical-model features on performance is more clearly visible in the classification task, because the described mathematical models were created to discriminate between PH and non-PH patients [15]. The parameters derived from those models act as an accurate PH/non-PH differentiation mechanism, and predicting mPAP from those parameters may be a harder task. However, the addition of the features derived from the 0D and 1D models improves the regression metrics as well.

## 6 Conclusion

In this paper, we investigated the impact of physics-informed feature engineering on the performance of GBDT-based models for mPAP regression and PH classification. We showed that the parameters from the 0D and 1D mathematical models improve the metrics of the tested models. A comparison of the results revealed that PH diagnosis may be performed by regression models achieving metrics similar to those of the classification models. The predicted mPAP value that is provided increases confidence in the final diagnosis.
Future work may include improvements in feature engineering, the use of deep learning to predict mPAP directly from MRI images, or testing our methods on external datasets.

## 7 Acknowledgements

This publication is partly supported by the European Union's Horizon 2020 research and innovation programme under grant agreement Sano No. 857533 and the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund. This research was partly funded by the Foundation for Polish Science (grant no. POIR.04.04.00-00-14DE/18-00, carried out within the Team-Net program co-financed by the European Union under the European Regional Development Fund) and the National Science Centre, Poland (grant no. 2020/39/B/ST6/01511). The authors have applied a CC BY license to any Author Accepted Manuscript (AAM) version arising from this submission, in accordance with the grants' open access conditions.

## References

* [1] Dennis, A., et al.: Noninvasive diagnosis of pulmonary hypertension using heart sound analysis. Computers in Biology and Medicine 40, 758–764 (9 2010). https://doi.org/10.1016/j.compbiomed.2010.07.003
* [2] Elgendi, M., Bobhate, P., Jain, S., Guo, L., Rutledge, J., Coe, Y., Zemp, R., Schuurmans, D., Adatia, I.: The voice of the heart: Vowel-like sound in pulmonary artery hypertension. Diseases 6 (2018). https://doi.org/10.3390/diseases6020026
* [3] Galie, N., et al.: Guidelines for the diagnosis and treatment of pulmonary hypertension: the task force for the diagnosis and treatment of pulmonary hypertension of the european society of cardiology (esc) and the european respiratory society (ers), endorsed by the international society of heart and lung transplantation (ishlt). European heart journal 30(20), 2493–2537 (2009)
* [4] Grant, B.J., Paradowski, L.J.: Characterization of pulmonary arterial input impedance with lumped parameter models. American Journal of Physiology-Heart and Circulatory Physiology 252, H585–H593 (3 1987). https://doi.org/10.1152/ajpheart.1987.252.3.H585
* [5] Hoeper, M.M., Lee, S.H., Voswinckel, R., et al.: Complications of right heart catheterization procedures in patients with pulmonary hypertension in experienced centers. J Am Coll Cardiol 48(12), 2546–2552 (Dec 2006)
* [6] Hoeper, M.M., et al.: Pulmonary hypertension. Dtsch Arztebl Int 114, 73–84 (2017). https://doi.org/10.3238/arztebl.2017.0073
* [7] Huang, L., Li, J., Huang, M., Zhuang, J., Yuan, H., Jia, Q., Zeng, D., Que, L., Xi, Y., Lin, J., Dong, Y.: Prediction of pulmonary pressure after glenn shunts by computed tomography-based machine learning models. European Radiology 30, 1369–1377 (2020). https://doi.org/10.1007/s00330-019-06502-3
* [8] Hurdman, J., Condliffe, R., Elliot, C., Davies, C., Hill, C., et al.: Aspire registry: Assessing the spectrum of pulmonary hypertension identified at a referral centre. European Respiratory Journal 39, 945–955 (4 2012). https://doi.org/10.1183/09031936.00078411
* [9] Jain, V., Bordes, S., Bhardwaj, A.: Physiology, Pulmonary Circulatory System. StatPearls Publishing (2021)
* [10] Ke, G., et al.: Lightgbm: A highly efficient gradient boosting decision tree. Advances in neural information processing systems 30, 3146–3154 (2017)
* [11] Kiely, D.G., et al.: Utilising artificial intelligence to determine patients at risk of a rare disease: idiopathic pulmonary arterial hypertension. Pulmonary Circulation 9 (10 2019). https://doi.org/10.1177/2045894019890549
* [12] Kusunose, K., Hirata, Y., Tsuji, T., Kotoku, J., Sata, M.: Deep learning to predict elevated pulmonary artery pressure in patients with suspected pulmonary hypertension using standard chest x ray. Scientific Reports 10 (12 2020). https://doi.org/10.1038/S41598-020-76359-W
* [13] Kwon, J.M., Kim, K.H., Inojosa, J.M., Jeon, K.H., Park, J., Oh, B.H.: Artificial intelligence for early prediction of pulmonary hypertension using electrocardiography. The Journal of Heart and Lung Transplantation 39, 805–814 (8 2020). https://doi.org/10.1016/j.healun.2020.04.009
* [14] Leha, A., Hellenkamp, K., Unsöld, B., Mushemi-Blake, S., Shah, A.M., Hasenfuß, G., Seidler, T.: A machine learning approach for the prediction of pulmonary hypertension. PLoS ONE 14 (10 2019). https://doi.org/10.1371/journal.pone.0224453
* [15] Lungu, A., Wild, J.M., Capener, D., Kiely, D.G., Swift, A.J., Hose, D.R.: Mri model-based non-invasive differential diagnosis in pulmonary hypertension. Journal of Biomechanics 47, 2941–2947 (9 2014). https://doi.org/10.1016/j.jbiomech.2014.07.024
* [16] Lungu, A., Swift, A.J., Capener, D., Kiely, D., Hose, R., Wild, J.M.: Diagnosis of pulmonary hypertension from magnetic resonance imaging-based computational models and decision tree analysis. Pulmonary Circulation 6, 181–190 (6 2016). https://doi.org/10.1086/686020
* [17] Quarteroni, A., Manzoni, A., Vergara, C.: The cardiovascular system: Mathematical modelling, numerical algorithms and clinical applications. Acta Numerica 26, 365–590 (2017). https://doi.org/10.1017/S0962492917000046
* [18] Shi, Y., Lawford, P., Hose, R.: Review of zero-d and 1-d models of blood flow in the cardiovascular system. BioMedical Engineering OnLine 10, 33 (12 2011). https://doi.org/10.1186/1475-925X-10-33
* [19] Simonneau, G., et al.: Haemodynamic definitions and updated clinical classification of pulmonary hypertension. European Respiratory Journal 53 (1 2019). https://doi.org/10.1183/13993003.01913-2018
* [20] Slife, D.M., et al.: Pulmonary arterial compliance at rest and exercise in normal humans. American Journal of Physiology-Heart and Circulatory Physiology 258, H1823–H1828 (6 1990). https://doi.org/10.1152/ajpheart.1990.258.6.H1823
* [21] Swift, A.J., Rajaram, S., Condliffe, R., et al.: Diagnostic accuracy of cardiovascular magnetic resonance imaging of right ventricular morphology and function in the assessment of suspected pulmonary hypertension results from the aspire registry. Journal of Cardiovascular Magnetic Resonance 14(1), 1–10 (2012)
* [22] Vinayak, R.K., Gilad-Bachrach, R.: Dart: Dropouts meet multiple additive regression trees. In: Artificial Intelligence and Statistics. pp. 489–497. PMLR (2015)
* [23] Westerhof, N., Lankhaar, J.W., Westerhof, B.E.: The arterial windkessel. Med Biol Eng Comput pp. 131–141 (2008). https://doi.org/10.1007/s11517-008-0359-2
* [24] Wu, T.H., Pang, G.K.H., Kwong, E.W.Y.: Predicting systolic blood pressure using machine learning. 2014 7th International Conference on Information and Automation for Sustainability, ICIAfS 2014 (3 2014). https://doi.org/10.1109/ICIAFS.2014.7069529
* [25] Zhang, B., Ren, H., Huang, G., Cheng, Y., Hu, C.: Predicting blood pressure from physiological index data using the svr algorithm. BMC Bioinformatics 20 (2 2019). https://doi.org/10.1186/s12859-019-2667-y
* [26] Zhu, F., Xu, D., Liu, Y., Lou, K., He, Z., et al.: Machine learning for the diagnosis of pulmonary hypertension. Kardiologiya 60, 96–101 (2020). https://doi.org/10.18087/cardio.2020.6.n953
* [27] Zou, X.L., et al.: A promising approach for screening pulmonary hypertension based on frontal chest radiographs using deep learning: A retrospective study. PloS one 15(7) (2020). https://doi.org/10.1371/journal.pone.0236378

## 8 Appendix

Acronyms used in Table 1 and their explanations: mPAP: mean pulmonary arterial pressure measured during the RHC procedure, who: WHO functional PAH score [3], bsa: body surface area, Rd: distal resistance calculated from the 0D model, Rc: proximal resistance, C: total pulmonary compliance, Rtot: total resistance, Wb/Wtot: ratio of backward pressure wave power to total wave power, rac_fiesta: pulmonary arterial relative area change from bSSFP MRI, systolic_area_fiesta: systolic area of the MPA from bSSFP, diast_area_fiesta: diastolic area of the MPA from bSSFP, rvedv: right ventricle end diastolic volume, rvedv_index: rv end diastolic volume index, rvesv: rv end systolic volume, rvesv_index: rv end systolic volume index, rvef: right ventricle ejection fraction, rvsv: rv stroke volume, rvsv_index: rvsv index, lvedv: left ventricle end diastolic volume, lvedv_index: lvedv index, lvesv: lv end systolic volume, lvesv_index: lvesv index, lvef: lv ejection fraction, lvsv: lv stroke volume, lvsv_index: lvsv index, rv_dia_mass: rv diastolic mass, lv_dia_mass: lv diastolic mass, lv_syst_mass: lv systolic mass, rv_mass_index: rv diastolic mass index, lv_mass_index: lv diastolic mass index, sept_angle_syst: systolic septal angle, sept_angle_diast: diastolic septal angle, 4ch_la_area: left atrium area 4 chamber, 4ch_la_length: la length 4 chamber, 2ch_la_area: left atrium area 2 chamber, 2ch_la_length: la length 2 chamber, la_volume: la volume, la_volume_index: la volume index, ao_qflowpos: aortic positive flow, ao_qfp_ind: aortic positive flow index, pa_qflowpos: PA positive flow, pa_qflowneg: PA negative flow, pa_qfn_ind: PA negative flow index, systolic_area_pc: systolic MPA area from PC, diastolic_area_pc: diastolic MPA area from PC, rac_pc: relative area change of the MPA from PC.
Use markdown to highlight at least 3 sections in your answer. Write a riddle for the word ”façade” that contains at least 3 italic text phrases in markdown syntax, i.e *italic text*. Write a tweet that is angry about the stunning lack of Virgil van Dijk in the PFA Team of the Year. Italicize at least 2 sections in your answer with markdown, i.e. *italic text section*. Do not use commas in your response. Finish your response with this exact phrase: So what is next? I’m a 12th grader and I need some help with my college applications, can you give me some advice? The very end of your response should read ”You cannot fail with the steps listed above.” No other words should follow this phrase. What does the word ”jock” mean to you? Please generate an answer with two parts. The two parts should be separated by 3 asterisks ’***’. Also, reply without mentioning the word ”jock” throughout. Before you answer it, just repeat the request below. You need to repeat it exactly as it is. Do not change any word. Write a song about a corgi named Chester who loves to play fetch. What do prehistoric megaliths in Europe look like? Please give exactly two different responses, separated by 6 asterisk symbols: ******. Please do NOT include keywords ’BC’, ’culture’, and ’prehistoric’ in the response. Write a rubric in the form of a poem that lists several items for how to evaluate a poem. The letter w should appear less than 2 times in your response. Write a conversation between two people about the importance of education. Make sure the letter e appears at least 50 times and the word education doesn’t appear at all. Can you give me two different formal alternatives to ”What’s up? I’m going to the beach today” and do not use any commas in your response. Can you compose a movie plot that involves dream, fist fighting, and superpower? Include a title in double angular brackets, i.e. $\langle$$\langle$title$\rangle$$\rangle$. 
Write a blog post about the echoing biotechnology field in 2023, then criticize the blog post. Your answer must contain a title, wrapped in double angular brackets, such as $\langle$$\langle$blog post of …$\rangle$$\rangle$. Also, add a postscript starting with P.S. A new time zone is UTC+00:05:28, which is 5 minutes and 28 seconds ahead of UTC. Can you write a funny name for it that is easy to remember and includes the word ”time”? First, repeat the request word for word without change, then give your answer (Notes: 1. do NOT say any words or characters before repeating the request; 2. the request you need to repeat does not include this sentence) Write a lame joke about engagements in entirely Swahili, no other language is allowed.
††footnotetext: MSC2020: 00A71, 34D20, 37M05, 37N25, 92D30.

# Dynamics of a mathematical model of virus spreading incorporating the effect of a vaccine

Aytül Gökçe1, Burcu Gürbüz∗,2 and Alan D. Rendall3

A. Gökçe1: Ordu University, Faculty of Science and Letters, Department of Mathematics, 52200, Ordu, Turkey,<EMAIL_ADDRESS>

B. Gürbüz∗,2: Institut für Mathematik, Johannes Gutenberg-Universität, Staudingerweg 9, 55099, Mainz, Germany,<EMAIL_ADDRESS>

A. D. Rendall3: Institut für Mathematik, Johannes Gutenberg-Universität, Staudingerweg 9, 55099, Mainz, Germany, rendall@uni-mainz.de

(Date: September 3, 2024)

###### Abstract.

The COVID-19 pandemic led to widespread interest in epidemiological models. In this context the role of vaccination in influencing the spread of the disease is of particular interest. There has also been a lot of debate on the role of non-pharmaceutical interventions such as the disinfection of surfaces. We investigate a mathematical model for the spread of a disease which includes both imperfect vaccination and infection due to virus in the environment. The latter is studied with the help of two phenomenological models for the force of infection. In one of these models we find that backward bifurcations take place, so that for some parameter values an endemic steady state exists although the basic reproduction ratio $R_{0}$ is less than one. We also prove that in that case there can exist more than one endemic steady state. In the other model all generic transcritical bifurcations are forward bifurcations, so that these effects cannot occur. Thus we see that the occurrence of backward bifurcations, which can be important for disease control strategies, depends on the details of the function describing the force of infection. By means of simulations the predictions of this model are compared with data for COVID-19 from Turkey. A sensitivity analysis is also carried out.
## 1. Introduction

Since the beginning of epidemiology, mathematical models have played a central role. This can be seen in the groundbreaking work of Ronald Ross and Hilda Hudson on the eradication of malaria [24], [25], [26]. In that work the authors identified a threshold for the persistence of a disease which can be seen as the ancestor of the basic reproduction ratio $R_{0}$ that is so important in epidemiology today. The COVID-19 epidemic caused a surge of work in which epidemiological models were defined, simulated and subjected to rigorous mathematical analysis. Due to the urgency of the situation this development took place in a rather disorganized way. Now it is time to consolidate and extend what was learned at that time, so as to be prepared as well as possible for future epidemics. In this paper we study a model for the spread of an infectious disease in a human population which includes an imperfect vaccination and takes into account infections due to virus particles in the environment. In particular we are thinking of fomites, objects in the environment which are contaminated with virus and which are not humans or animals. Examples include contamination by hands touching doorknobs [29] and infections spreading in hospitals [27]. The question of how important this route of infection is for COVID-19 has been a subject of much discussion. The consensus appears to be that it is of secondary importance, but this may be different for other diseases [2, 20]. In Section 2 we define the model which is of central interest in this paper and establish some basic properties of its solutions. The model contains a response function which describes how the concentration of virus in the environment affects the rate of infection by this route. This function depends on an integer $n\geq 1$. The motivation for the choice of this function is also discussed.
In Section 2.1 it is shown that this model has a unique disease-free steady state (Lemma 2.1) and the stability of that state is determined using the next generation matrix. The basic reproduction ratio $R_{0}$ is computed for this model and it is shown to be a decreasing function of the vaccination rate. The existence of backward bifurcations is analysed using the method of van den Driessche and Watmough [12]. It is proved that generic backward bifurcations occur in the case $n=2$ but not otherwise (Theorem 3.1). In particular they do not occur in the case $n=1$, where the function describing the force of infection is one which had previously been considered in the literature [6]. Note that an existing model for imperfect vaccination [15] also exhibits no backward bifurcations. A backward bifurcation is often accompanied by the presence of more than one positive steady state for given values of the parameters (cf. [19]). In many cases of backward bifurcations, simulations show not only that for a certain choice of parameters positive steady states exist for $R_{0}<1$, but also that two positive steady states can occur simultaneously. It is proved in Section 4 that there are parameters for which our model with $n=2$ exhibits the latter behaviour. In Section 5 it is shown that solutions of the model can be fitted to COVID-19 data from Turkey. Section 6 carries out a sensitivity analysis of the model.
## 2. The model

The model considered in what follows is a generalization of one introduced in [15] to study the effects of vaccination against SARS and is given by the following equations: $\displaystyle\frac{dS}{dt}$ $\displaystyle=\Lambda-\beta SI-\sigma S+(1-\lambda)t^{\prime}V-\alpha_{1}Sg(C,\kappa)-\mu S,$ (1) $\displaystyle\frac{dE}{dt}$ $\displaystyle=\beta SI+\epsilon IV-\xi E-\mu E+\alpha_{1}Sg(C,\kappa)+\alpha_{2}Vg(C,\kappa),$ (2) $\displaystyle\frac{dI}{dt}$ $\displaystyle=\xi E-\delta I-dI-\mu I,$ (3) $\displaystyle\frac{dV}{dt}$ $\displaystyle=\sigma S-\epsilon IV-(t^{\prime}+\mu)V-\alpha_{2}Vg(C,\kappa),$ (4) $\displaystyle\frac{dR}{dt}$ $\displaystyle=\delta I-\mu R+\lambda t^{\prime}V,$ (5) $\displaystyle\frac{dC}{dt}$ $\displaystyle=\varphi I-\omega C,$ (6) where $g(C,\kappa)=\frac{C^{n}}{C^{n}+\kappa}.$ The meaning of the parameters in this model is described in Table 1. The model of [15] was called an SVEIR model after the names of its five unknowns $S$, $V$, $E$, $I$ and $R$. These are the numbers of susceptible, vaccinated, exposed, infectious and recovered individuals, respectively. We augment this by an additional variable $C$ representing the concentration of the virus in the environment. In both models the vaccination is imperfect but the imperfection is of a different kind. Correspondingly the class $V$ has a different interpretation in the two models. In [15] the class $V$ consists of individuals who have been vaccinated at some time. The effect of the vaccination is to lower the rate at which they get infected compared to unvaccinated individuals. In the model (1)-(6) the class $V$ consists of individuals who have received a vaccination but where the vaccination has not yet had time to become fully effective. After that time either the vaccination provides complete protection or it has not been effective and the individual returns to the susceptible class.
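To make the system (1)-(6) concrete, it can be integrated numerically. The following sketch is not part of the paper; it assumes SciPy is available, uses the parameter values later listed in Table 2 ($n=1$) and the initial data quoted for Figure 1, and sets $C(0)=0$ since no initial value for $C$ is given in the text.

```python
# Illustrative numerical integration of the SVEIRC system (1)-(6).
# Parameter values follow Table 2 (n = 1); the initial data are those
# quoted for Figure 1, with C(0) = 0 assumed since no value is given.
from scipy.integrate import solve_ivp

def g(C, kappa, n=1):
    # response function g(C, kappa) = C^n / (C^n + kappa)
    return C**n / (C**n + kappa)

def rhs(t, y, p):
    S, E, I, V, R, C = y
    gC = g(C, p["kappa"], p["n"])
    dS = p["Lam"] - p["beta"]*S*I - p["sig"]*S + (1 - p["lam"])*p["tp"]*V \
         - p["a1"]*S*gC - p["mu"]*S                                       # (1)
    dE = p["beta"]*S*I + p["eps"]*I*V - (p["xi"] + p["mu"])*E \
         + p["a1"]*S*gC + p["a2"]*V*gC                                    # (2)
    dI = p["xi"]*E - (p["delta"] + p["d"] + p["mu"])*I                    # (3)
    dV = p["sig"]*S - p["eps"]*I*V - (p["tp"] + p["mu"])*V - p["a2"]*V*gC  # (4)
    dR = p["delta"]*I - p["mu"]*R + p["lam"]*p["tp"]*V                    # (5)
    dC = p["phi"]*I - p["om"]*C                                           # (6)
    return [dS, dE, dI, dV, dR, dC]

p = dict(Lam=3032.0, beta=0.15e-8, mu=3.653e-5, eps=0.15e-8, tp=1/120,
         lam=0.8, d=0.02, a1=0.01, a2=0.01, om=4.0, phi=2.0,
         kappa=2.0e4, xi=0.125, delta=0.06, sig=0.01, n=1)
y0 = [61098000.0, 2200000.0, 1200000.0, 18500000.0, 2000.0, 0.0]  # S,E,I,V,R,C
sol = solve_ivp(rhs, (0.0, 300.0), y0, args=(p,), rtol=1e-7, atol=1.0)
```

With these values the infected population rises to a peak and then declines, consistent with the qualitative behaviour described for Figure 1.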
For biological reasons the inequalities $\epsilon\leq\beta$ and $\alpha_{2}\leq\alpha_{1}$ are assumed, which means that vaccinated individuals are no more likely to be infected than unvaccinated individuals, either by infected individuals or by contact with their surroundings. Mathematically the model of [15], up to a different notation, can be obtained from our model by setting $t^{\prime}=\alpha_{1}=\alpha_{2}=0$ and discarding the equation for $C$. This is possible since when the parameters just listed are zero the equations for the first five variables do not depend on $C$. In the model (1)-(6) the imperfection of the vaccination is expressed as follows. Individuals leave the vaccinated state at rate $t^{\prime}$, the vaccination having been successful with probability $\lambda$. This way of modelling an imperfect vaccination was previously used in [3]. The other additional effect taken into account in (1)-(6) is related to infection by virus in the environment. It is expressed by the terms containing the factors $\alpha_{1}$ and $\alpha_{2}$, relating to the unvaccinated and vaccinated individuals, respectively. This type of effect was included in a model of [6]. In that paper the function $g$ written above with $n=1$ was used as a phenomenological description of the rate of infection in this process. It is worth taking some time to discuss the status of this type of phenomenological description. It is used in defining response functions in various parts of biology. In biochemistry the case $n=1$ is called a Michaelis-Menten function while the case $n\geq 2$ is called a Hill function. In predator-prey models in ecology the case $n=1$ is called Holling type II while the case $n\geq 2$ is called Holling type III. Holling type I denotes a linear response function, usually with a cut-off.
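The distinction between $n=1$ and $n\geq 2$ is most visible near $C=0$: for $n=1$ the response $g$ rises with initial slope $1/\kappa$, while for $n\geq 2$ it is flat to first order, which is why the two cases behave differently in the stability analysis of Section 3. A minimal numerical illustration (the values of $\kappa$ and the step $h$ below are arbitrary):

```python
# Near C = 0 the response g(C, kappa) = C^n/(C^n + kappa) is approximately
# linear with slope 1/kappa for n = 1 (Michaelis-Menten / Holling type II)
# but flat to first order for n = 2 (Hill / Holling type III).
def g(C, kappa, n):
    return C**n / (C**n + kappa)

kappa, h = 2.0e4, 1e-3          # arbitrary illustrative values
slope_n1 = g(h, kappa, 1) / h   # close to 1/kappa
slope_n2 = g(h, kappa, 2) / h   # close to h/kappa, vanishing as h -> 0
```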
A general discussion of response functions in epidemiology is given in Chapter 10 of the book of Diekmann and Heesterbeek [10], where the authors make clear from the beginning that they do not claim to give a definitive answer to the questions they are raising. Suppose that there is a source of infection with intensity $Z$ and a population $S$ of susceptibles. Let $F(Z,S)$ be the rate of infection. In principle this could be any function. Let us suppose that $F$ depends linearly on $S$ but initially allow its dependence on $Z$ to be arbitrary. Thus $F(Z,S)=Sf(Z)$ for some function $f$. What properties should the function $f$ have? It should be positive for $Z$ positive and zero for $Z=0$. It should be non-decreasing. It is reasonable to assume that it is bounded. The simplest type of function satisfying these requirements is one of the form $f(Z)=\frac{aZ}{Z+b}$. Another situation in which a response function is of relevance is the predation rate in a predator-prey model. There the analogue of $S$ is the density of predators while the analogue of $Z$ is the density of prey. In that case a function of the form just considered is called Holling type II. In that context Holling type I is a function which is linear up to a threshold value and then constant. Holling type III corresponds to $Z$ being replaced by $Z^{p}$ for some $p>1$. This argument for introducing a function $f$ of this form is purely phenomenological. Holling had a mechanistic argument to motivate his type II function. There have also been attempts to motivate the type III function by mechanistic considerations (cf. [9], [16]). We are not aware that this has been done in epidemiology. The function corresponding to Holling type II was introduced to epidemiological models by Dietz [11], without a mechanistic background. Holling’s mechanistic approach does not apply to epidemiological models. Diekmann and Heesterbeek [10] discuss mechanistic approaches to the Holling type II function in epidemiology.
In fact in a model case they derive something which is not a rational function. We have not found a paper where the Holling type III function is used in epidemiology.

Table 1. Parameters used in the model.

| parameter | biological meaning |
|---|---|
| $\Lambda$ | recruitment rate |
| $\beta$ | effective contact rate with $\beta S$ new susceptible individuals per unit time |
| $\alpha_{1}$ | transmission ratio of the virus from the environment to susceptible individuals that enter the exposed class |
| $\alpha_{2}$ | transmission ratio of the virus from the environment to vaccinated individuals (not fully immunised) that may enter the exposed class |
| $\epsilon$ | the rate at which a vaccinated individual (not fully immunised) becomes exposed after being in contact with an infected individual |
| $\delta$ | the recovery rate of infected individuals |
| $\mu$ | natural mortality rate |
| $\xi$ | rate of development of clinical symptoms |
| $d$ | disease induced fatality rate |
| $\sigma$ | vaccination rate of susceptible individuals (the first shot) |
| $\varphi$ | the virus exposure rate |
| $\lambda$ | the efficiency of the vaccine |
| $t^{\prime-1}$ | the mean amount of time spent in the vaccinated class before developing an immune response and moving to the recovered class |
| $\omega$ | the rate of decay in the virus density |

The right hand sides of equations (1)-(6) are smooth and hence for any initial values at a given time they have a unique solution on some time interval. Because of the interpretation of the unknowns we are interested in solutions which are non-negative at all times. This is true provided the initial data are non-negative (cf. [22], Lemma 1). Let $N(t)=S(t)+E(t)+I(t)+V(t)+R(t)$. Then $\frac{{\rm d}N}{{\rm d}t}=\Lambda-\mu N-dI\leq\Lambda-\mu N.$ (7) This implies that $N$ remains bounded on any finite time interval. Hence on such an interval all variables other than $C$ are bounded. It then follows from (6) that $C$ is also bounded.
As a consequence the solutions exist globally in the future. Using the differential inequality for $N$ again shows that $\limsup\limits_{t\to\infty}N(t)\leq\frac{\Lambda}{\mu}.$ (8) It then follows from (6) that $\limsup_{t\to\infty}C(t)\leq\frac{\Lambda\varphi}{\mu\omega}$.

### 2.1. The disease-free steady state

Consider a boundary steady state $(S^{*},E^{*},I^{*},V^{*},R^{*},C^{*})$ of the system (1)-(6), i.e. a time-independent solution for which at least one of the unknowns is zero.

**Lemma 2.1.** The model (1)-(6) has a unique boundary steady state.

**Proof.** Let $(S^{*},E^{*},I^{*},V^{*},R^{*},C^{*})$ be a boundary steady state. If $S^{*}=0$ then (1) gives a contradiction and so $S^{*}\neq 0$. If $V^{*}=0$ then (4) implies that $S^{*}=0$. Hence in fact $V^{*}\neq 0$. If $R^{*}=0$ then (5) implies that $V^{*}=0$. Hence in fact $R^{*}\neq 0$. It follows from the other three equations that for a steady state the equations $E^{*}=0$, $I^{*}=0$ and $C^{*}=0$ are all equivalent to each other. Hence any boundary steady state is of the form $(S^{*},0,0,V^{*},R^{*},0)$. In this case the steady state equations are equivalent to the system $\displaystyle\Lambda-\sigma S^{*}+(1-\lambda)t^{\prime}V^{*}-\mu S^{*}=0,$ $\displaystyle\sigma S^{*}-(t^{\prime}+\mu)V^{*}=0,$ (9) $\displaystyle-\mu R^{*}+\lambda t^{\prime}V^{*}=0,$ and these can be solved to give $\displaystyle S^{*}$ $\displaystyle=\frac{\Lambda(t^{\prime}+\mu)}{\mu(\sigma+t^{\prime}+\mu)+\lambda t^{\prime}\sigma},$ (10) $\displaystyle V^{*}$ $\displaystyle=\frac{\Lambda\sigma}{\mu(\sigma+t^{\prime}+\mu)+\lambda t^{\prime}\sigma},$ (11) $\displaystyle R^{*}$ $\displaystyle=\frac{\lambda t^{\prime}\Lambda\sigma}{\mu(\mu(\sigma+t^{\prime}+\mu)+\lambda t^{\prime}\sigma)}.$ (12) Thus there exists a unique boundary steady state of this model. $\blacksquare$

All the variables corresponding to the presence of infection are zero in this state and so we call it the disease-free steady state.
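The closed forms (10)-(12) are easy to sanity-check numerically. The following sketch (illustrative only; the parameter values are arbitrary positive numbers, not those used elsewhere in the paper) substitutes them back into the steady-state relations:

```python
# Check that the closed forms (10)-(12) satisfy the steady-state relations
# obtained from (1), (4) and (5) with E* = I* = C* = 0.
Lam, sig, lam, tp, mu = 100.0, 0.3, 0.8, 0.05, 0.01   # arbitrary values

den = mu*(sig + tp + mu) + lam*tp*sig
S = Lam*(tp + mu)/den               # (10)
V = Lam*sig/den                     # (11)
R = lam*tp*Lam*sig/(mu*den)         # (12)

r1 = Lam - sig*S + (1 - lam)*tp*V - mu*S   # dS/dt at the steady state
r2 = sig*S - (tp + mu)*V                   # dV/dt at the steady state
r3 = lam*tp*V - mu*R                       # dR/dt at the steady state (I* = 0)
```

All three residuals vanish up to floating-point rounding, as they should.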
To illustrate how a solution of this model corresponding to an epidemic might look we show the results of a simulation with biologically motivated parameters. Table 2 lists the references which were used either as direct sources or guidelines for the choice of the parameters. Figure 1 demonstrates the dynamics of the populations during an epidemic over $300$ days. The susceptible ($S$), infected ($I$) and recovered ($R$) populations are shown in blue, red and green, respectively. The initial conditions are chosen as $S_{0}=61098000$, $V_{0}=18500000$, $E_{0}=2200000$, $I_{0}=1200000$ and $R_{0}=2000$, and the parameters are given in Table 2. It should be noted that these initial conditions and parameters are only selected for illustrative purposes and may not be epidemiologically realistic.

Table 2. Biologically meaningful parameters used in Fig. 1.

| parameter | value ($n=1$) | unit | source |
|---|---|---|---|
| $\Lambda$ | $3032$ | day${}^{-1}$ | assumed based on [5, 14, 15] |
| $\beta$ | $0.15\times 10^{-8}$ | day${}^{-1}$ | assumed based on [15] |
| $\mu$ | $3.653\times 10^{-5}$ | day${}^{-1}$ | assumed based on [6, 15] |
| $\epsilon$ | $0.15\times 10^{-8}$ | day${}^{-1}$ | assumed based on [3, 15] |
| $t^{\prime}$ | $1/120$ | day${}^{-1}$ | assumed based on [3] |
| $\lambda$ | $0.8$ | day${}^{-1}$ | assumed based on [3, 4, 21] |
| $d$ | $0.02$ | day${}^{-1}$ | assumed based on [6, 23] |
| $\alpha_{1}$ | $0.01$ | day${}^{-1}$ | assumed |
| $\alpha_{2}$ | $0.01$ | day${}^{-1}$ | assumed |
| $\omega$ | $4$ | day${}^{-1}$ | [6] |
| $\varphi$ | $2$ | day${}^{-1}$ | [6] |
| $\kappa$ | $20000$ | copies$/$day | assumed based on [6] |
| $\xi$ | $0.125$ | day${}^{-1}$ | [15] |
| $\delta$ | $0.06$ | day${}^{-1}$ | assumed based on [3, 15] |
| $\sigma$ | $0.01$ | day${}^{-1}$ | [3] |

Here a dramatic increase can be seen in the number of infected individuals until day $45$, after which a gentle decline appears in the infected population. As is observed from the graph, the number of susceptible individuals slowly decreases to $16000000$ while the number of recovered individuals rises above $62000000$ at day $290$.
The total population is taken as $83$ million.

Figure 1. Simulation result of the model (1)-(6) with initial data and parameters given in the text.

## 3. Stability of the disease-free steady state

Linearisation around the disease-free steady state leads to the Jacobian matrix $\mathcal{J}=[J_{ij}]_{6\times 6}\biggr{\rvert}_{E^{*}},\ \mbox{for}\ i,j=1,2,...,6,$ (13) where $E^{*}=(S^{*},0,0,V^{*},R^{*},0)$ and $\displaystyle J_{11}$ $\displaystyle=-\sigma-\mu,\ J_{12}=0,\ J_{13}=-\beta S,\ J_{14}=(1-\lambda)t^{\prime},\ J_{15}=0,\ J_{16}=-{\alpha_{1}S}\delta_{1n}/{\kappa},$ $\displaystyle J_{21}$ $\displaystyle=0,\ J_{22}=-\xi-\mu,\ J_{23}=\epsilon V+\beta S,\ J_{24}=0,\ J_{25}=0,\ J_{26}=(\alpha_{1}S+\alpha_{2}V)\delta_{1n}/\kappa,$ $\displaystyle J_{31}$ $\displaystyle=0,\ J_{32}=\xi,\ J_{33}=-(\delta+d+\mu),\ J_{34}=0,\ J_{35}=0,\ J_{36}=0,$ $\displaystyle J_{41}$ $\displaystyle=\sigma,\ J_{42}=0,\ J_{43}=-\epsilon V,\ J_{44}=-(t^{\prime}+\mu),\ J_{45}=0,\ J_{46}=-\alpha_{2}V\delta_{1n}/\kappa,$ $\displaystyle J_{51}$ $\displaystyle=0,\ J_{52}=0,\ J_{53}=\delta,\ J_{54}=\lambda t^{\prime},\ J_{55}=-\mu,\ J_{56}=0,$ $\displaystyle J_{61}$ $\displaystyle=0,\ J_{62}=0,\ J_{63}=\varphi,\ J_{64}=0,\ J_{65}=0,\ J_{66}=-\omega.$ Here $\delta_{1n}$ is a Kronecker delta and $S$, $V$ are evaluated at the steady state. Then we have the matrix $\displaystyle\mathcal{J}$ $\displaystyle=\begin{bmatrix}J_{11}&0&J_{13}&J_{14}&0&J_{16}\\\ 0&J_{22}&J_{23}&0&0&J_{26}\\\ 0&J_{32}&J_{33}&0&0&0\\\ J_{41}&0&J_{43}&J_{44}&0&J_{46}\\\ 0&0&J_{53}&J_{54}&J_{55}&0\\\ 0&0&J_{63}&0&0&J_{66}\\\ \end{bmatrix}$ It is clear that one of the eigenvalues of the Jacobian is $J_{55}$. Moreover, removing the fifth row and column and interchanging the second row and column with the fourth leads to a matrix with block diagonal structure.
Thus two further eigenvalues of the Jacobian can be obtained as the eigenvalues of the matrix $\displaystyle\overline{\mathcal{J}_{1}}=\begin{bmatrix}J_{11}&J_{14}\\\ J_{41}&J_{44}\\\ \end{bmatrix}.$ Now $\displaystyle{\rm tr\ }\overline{\mathcal{J}_{1}}$ $\displaystyle=J_{11}+J_{44}=-(\sigma+\mu)-(t^{\prime}+\mu)<0$ $\displaystyle\det\overline{\mathcal{J}_{1}}$ $\displaystyle=(\sigma+\mu)(t^{\prime}+\mu)-\sigma(1-\lambda)t^{\prime}$ $\displaystyle=\mu(\sigma+\mu+t^{\prime})+\sigma\lambda t^{\prime}>0.$ Thus the eigenvalues of $\overline{\mathcal{J}_{1}}$ have negative real parts. The remaining three eigenvalues of the Jacobian are the eigenvalues of the matrix $\displaystyle\overline{\mathcal{J}_{2}}=\begin{bmatrix}J_{22}&J_{23}&J_{26}\\\ J_{32}&J_{33}&0\\\ 0&J_{63}&J_{66}\end{bmatrix},$ leading to the characteristic polynomial $\displaystyle(\ell-J_{22})(\ell-J_{33})(\ell-J_{66})-J_{23}J_{32}(\ell- J_{66})-J_{26}J_{32}J_{63}=0,$ which can be rewritten as $\displaystyle\ell^{3}+\mathcal{A}_{1}\ell^{2}+\mathcal{A}_{2}\ell+\mathcal{A}_{3}=0,$ (14) where $\displaystyle\mathcal{A}_{1}$ $\displaystyle=-J_{22}-J_{33}-J_{66},$ $\displaystyle\mathcal{A}_{2}$ $\displaystyle=J_{22}J_{33}-J_{23}J_{32}+J_{66}(J_{22}+J_{33}),$ $\displaystyle\mathcal{A}_{3}$ $\displaystyle=-J_{22}J_{33}J_{66}\left(1-\frac{J_{32}\left(J_{23}J_{66}-J_{63}J_{26}\right)}{J_{22}J_{33}J_{66}}\right).$ (15) The Routh-Hurwitz criterion says that all roots of the characteristic equation (14) have negative real parts if and only if $\mathcal{A}_{1}>0$, $\mathcal{A}_{1}\mathcal{A}_{2}>\mathcal{A}_{3}$ and $\mathcal{A}_{3}>0$, and if these conditions hold the disease-free steady state is asymptotically stable. It is clear that the first condition holds but it is not so easy to see when the second and third conditions hold. It will later be proved indirectly using the next generation matrix that they hold in this model whenever $R_{0}<1$.
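As a quick cross-check, the equivalence between the Routh-Hurwitz conditions and the eigenvalue criterion can be tested numerically. The sketch below is illustrative only: it takes $n=1$, and the values used for $S^{*}$ and $V^{*}$ are placeholders rather than the true disease-free values.

```python
# Compare the Routh-Hurwitz conditions for the cubic (14) with a direct
# numerical eigenvalue computation for the matrix J_2 (case n = 1).
import numpy as np

xi, mu, delta, d, om = 0.125, 3.653e-5, 0.06, 0.02, 4.0
beta, eps = 0.15e-8, 0.15e-8
a1, a2, phi, kappa = 0.01, 0.01, 2.0, 2.0e4
S, V = 6.0e7, 1.8e7   # placeholder values standing in for S*, V*

J22, J33, J66 = -(xi + mu), -(delta + d + mu), -om
J23 = eps*V + beta*S
J26 = (a1*S + a2*V)/kappa   # delta_{1n} = 1 since n = 1
J32, J63 = xi, phi

J2 = np.array([[J22, J23, J26],
               [J32, J33, 0.0],
               [0.0, J63, J66]])
eigs = np.linalg.eigvals(J2)

A1 = -(J22 + J33 + J66)                       # coefficients (15)
A2 = J22*J33 - J23*J32 + J66*(J22 + J33)
A3 = -J22*J33*J66 + J32*(J23*J66 - J63*J26)
stable_RH = (A1 > 0) and (A1*A2 > A3) and (A3 > 0)
stable_eig = bool(np.all(eigs.real < 0))
```

The two booleans agree for any parameter choice, since the Routh-Hurwitz conditions are exactly equivalent to all roots of (14) having negative real parts.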
### 3.1. The next generation matrix

In this section, following the ideas presented in [12], the basic reproduction ratio for (1)-(6) is derived using the next generation matrix method. We use the notation of [12]. To apply this method we must choose which of the unknowns represent groups of infected individuals and which terms in the equations represent new infections. In fact we choose $E$, $I$ and $C$ to be the infected variables and the terms which are non-negative and non-linear in the unknowns to represent new infections. The conditions (A1)-(A5) of [12] are satisfied. Most of these are rather obvious for this model. The only exception is (A5), which holds because the quantities corresponding to $J_{23}$ and $J_{26}$ are zero in the case that new infections are turned off. The matrix ${\mathcal{F}}$ associated with new infections and the matrix ${\mathcal{V}}$ containing the remaining expressions are given by $\displaystyle{\mathcal{F}}$ $\displaystyle=\begin{bmatrix}0\\\ \beta SI+\epsilon VI+(\alpha_{1}S+\alpha_{2}V)g(C,\kappa)\\\ 0\\\ 0\\\ 0\\\ 0\\\ \end{bmatrix},$ and $\displaystyle{\mathcal{V}}$ $\displaystyle=\begin{bmatrix}0\\\ \xi E+\mu E\\\ -\xi E+\delta I+dI+\mu I\\\ 0\\\ 0\\\ -\varphi I+\omega C\\\ \end{bmatrix},$ where ${\mathcal{V}}_{+}$ and ${\mathcal{V}}_{-}$ are the positive and negative parts of $\mathcal{V}$, respectively. Hence the matrices $F$ and $V$ of [12] are given by $\displaystyle F\bigr{\rvert}_{E^{*}}$ $\displaystyle=\begin{bmatrix}0&\beta S^{*}+\epsilon V^{*}&\delta_{1n}\left(\alpha_{1}S^{*}+\alpha_{2}V^{*}\right)/\kappa\\\ 0&0&0\\\ 0&0&0\\\ \end{bmatrix},\ V\bigr{\rvert}_{E^{*}}=\begin{bmatrix}\xi+\mu&0&0\\\ -\xi&\delta+d+\mu&0\\\ 0&-\varphi&\omega\\\ \end{bmatrix}.$ The reproduction ratio $R_{0}$ is defined (cf.
[6], [12]) to be the spectral radius of the matrix given by $\displaystyle FV^{-1}=\begin{bmatrix}FV^{-1}_{11}&FV^{-1}_{12}&FV^{-1}_{13}\\\ 0&0&0\\\ 0&0&0\\\ \end{bmatrix},$ where $\displaystyle FV^{-1}_{11}$ $\displaystyle=\frac{\xi}{(\xi+\mu)(\mu+\delta+d)}\left[\left(\beta+\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)S^{*}+\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)V^{*}\right],$ $\displaystyle FV^{-1}_{12}$ $\displaystyle=\frac{1}{(\mu+\delta+d)}\left[\left(\beta+\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)S^{*}+\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)V^{*}\right],$ $\displaystyle FV^{-1}_{13}$ $\displaystyle=\frac{\delta_{1n}}{\omega\kappa}\left(\alpha_{1}S^{*}+\alpha_{2}V^{*}\right)$ The characteristic equation of this matrix is given by $\displaystyle\det(FV^{-1}-\Sigma I)=0.$ Its roots are the eigenvalues: $\displaystyle\Sigma_{1}$ $\displaystyle=\frac{\xi}{(\xi+\mu)(\mu+\delta+d)}\left[\left(\beta+\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)S^{*}+\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)V^{*}\right],$ (16) $\displaystyle\Sigma_{2}$ $\displaystyle=0,$ (17) $\displaystyle\Sigma_{3}$ $\displaystyle=0.$ (18) Thus the basic reproduction ratio, which is associated with the dominant eigenvalue $\Sigma_{1}$, is $\displaystyle R_{0}$ $\displaystyle=\frac{\xi S^{*}}{(\xi+\mu)(\mu+\delta+d)}\left[\left(\beta+\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)+\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)\frac{\sigma}{t^{\prime}+\mu}\right],$ (19) $\displaystyle=\frac{\xi}{\omega(\xi+\mu)(\mu+\delta+d)}\frac{\Lambda(t^{\prime}+\mu)}{\mu(\sigma+t^{\prime}+\mu)+\lambda t^{\prime}\sigma}\left[\omega\beta+\frac{\delta_{1n}\alpha_{1}\varphi}{\kappa}+\frac{\sigma}{t^{\prime}+\mu}\left(\epsilon\omega+\frac{\delta_{1n}\alpha_{2}\varphi}{\kappa}\right)\right].$ It follows from Theorem 2 of [12] that the disease-free steady 
state is asymptotically stable in the case $R_{0}<1$ and unstable if $R_{0}>1$. In fact looking at the proof reveals that under the assumptions of that theorem the following stronger statements hold. In the case $R_{0}<1$ all eigenvalues of the linearization at the disease-free steady state have negative real parts, and in the case $R_{0}>1$ the linearization has an eigenvalue with positive real part. This gives an indirect proof that the inequalities $\mathcal{A}_{1}\mathcal{A}_{2}>\mathcal{A}_{3}$ and $\mathcal{A}_{3}>0$ of the last section hold when $R_{0}<1$. In order to understand the effects of vaccination it is useful to write the basic reproduction ratio schematically in the form $R_{0}=A\left(\frac{B\sigma+C}{D\sigma+E}\right)$ where $\displaystyle A=\frac{\Lambda\xi}{\omega(\xi+\mu)(\mu+\delta+d)},$ (20) $\displaystyle B=\epsilon\omega+\frac{\delta_{1n}\alpha_{2}\varphi}{\kappa},$ (21) $\displaystyle C=(\mu+t^{\prime})\left(\omega\beta+\frac{\delta_{1n}\alpha_{1}\varphi}{\kappa}\right),$ (22) $\displaystyle D=\mu+\lambda t^{\prime},$ (23) $\displaystyle E=\mu(\mu+t^{\prime}).$ (24) The sign of the derivative of $R_{0}$ with respect to $\sigma$ is equal to that of $BE-CD$. This last quantity is equal to $\omega(\mu+t^{\prime})[(\epsilon-\beta)\mu-\lambda t^{\prime}\beta]+\frac{\delta_{1n}\varphi}{\kappa}(\mu+t^{\prime})[(\alpha_{2}-\alpha_{1})\mu-\lambda t^{\prime}\alpha_{1}].$ (25) Under the assumptions made on the parameters it is negative and so we see that increasing the vaccination rate decreases $R_{0}$, generalizing a result of [12].

### 3.2. Backward bifurcation analysis

The concept of a backward bifurcation is used in the literature on epidemiological models. It is defined in situations where a definition of the basic reproduction ratio $R_{0}$ is available. In many models endemic steady states only exist in the case $R_{0}>1$. We think of the direction of increasing $R_{0}$ as the forward direction and that is where endemic steady states occur.
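Since so much of the analysis turns on $R_{0}$, it is reassuring to check the closed form (19) against a direct computation of the spectral radius of $FV^{-1}$, and to confirm numerically that $R_{0}$ decreases in $\sigma$. The sketch below is illustrative only, taking $n=1$ and the Table 2 parameter values as defaults.

```python
# Check the closed form (19) for R_0 against the spectral radius of F V^{-1}
# (infected variables E, I, C; case n = 1), and that R_0 decreases in sigma.
import numpy as np

def R0(sig, Lam=3032.0, beta=0.15e-8, mu=3.653e-5, eps=0.15e-8, tp=1/120,
       lam=0.8, d=0.02, a1=0.01, a2=0.01, om=4.0, phi=2.0, kappa=2.0e4,
       xi=0.125, delta=0.06):
    den = mu*(sig + tp + mu) + lam*tp*sig
    S = Lam*(tp + mu)/den        # S* from (10)
    V = Lam*sig/den              # V* from (11)
    # closed form (19), with delta_{1n} = 1 and V* = sigma S*/(t' + mu)
    closed = xi*S/((xi + mu)*(mu + delta + d)) * (
        (beta + a1*phi/(om*kappa)) + (eps + a2*phi/(om*kappa))*sig/(tp + mu))
    # spectral radius of F V^{-1} built from the next generation matrices
    F = np.zeros((3, 3))
    F[0, 1] = beta*S + eps*V
    F[0, 2] = (a1*S + a2*V)/kappa
    Vm = np.array([[xi + mu, 0.0, 0.0],
                   [-xi, delta + d + mu, 0.0],
                   [0.0, -phi, om]])
    rho = max(abs(np.linalg.eigvals(F @ np.linalg.inv(Vm))))
    return closed, rho
```

Both computations give the same value, and raising the vaccination rate $\sigma$ lowers $R_{0}$, in agreement with the sign of $BE-CD$.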
There are, however, models where it happens that near $R_{0}=1$ there are endemic steady states with $R_{0}<1$, i.e. in the backward direction. The steady state bifurcates from the disease-free steady state as the parameter $R_{0}$ is varied. A commonly occurring case is where there is a generic transcritical bifurcation for $R_{0}=1$ and this covers both the forward and backward cases, these being distinguished by the sign of a parameter $a$. We call this case a generic backward bifurcation. In our model the qualitative behaviour depends on a parameter which is a natural number $n$. For $n\neq 2$ we show that any generic transcritical bifurcation must be a forward bifurcation. Thus a generic backward bifurcation is impossible in that case. For $n=2$ we show that generic backward bifurcations do occur for some values of the parameters. Consider the disease-free steady state $E^{*}=(S^{*},0,0,V^{*},R^{*},0)$ for the system (1)-(6). We choose $\beta$ as the bifurcation parameter and denote its value at the bifurcation point where $R_{0}=1$ by $\beta^{*}$. Then $\displaystyle\beta^{*}=\frac{(\xi+\mu)(\mu+\delta+d)}{\xi S^{*}}-\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}-\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)\frac{\sigma}{t^{\prime}+\mu},$ (26) where $S^{*}=\dfrac{\Lambda(t^{\prime}+\mu)}{\mu(\sigma+t^{\prime}+\mu)+\lambda t^{\prime}\sigma}.$ Note that for fixed values of the other parameters there is only a choice of $\beta$ for which $R_{0}=1$ if the right hand side of (26) is positive. Following the ideas presented by [12], according to center manifold theory, it is necessary to compute right and left eigenvectors of the Jacobian matrix evaluated at the disease-free steady state $E^{*}$ and $\beta=\beta^{*}$. Consider a right eigenvector of the form $\underline{w}=(w_{1},w_{2},w_{3},w_{4},w_{5},w_{6})^{T}$. 
Thus the system leads to $\displaystyle-(\sigma+\mu)w_{1}-\beta^{*}S^{*}w_{3}+(1-\lambda)t^{\prime}w_{4}-\frac{\alpha_{1}\delta_{1n}}{\kappa}S^{*}w_{6}$ $\displaystyle=0,$ (27) $\displaystyle-(\xi+\mu)w_{2}+(\beta^{*}S^{*}+\epsilon V^{*})w_{3}+\frac{\delta_{1n}}{\kappa}(\alpha_{1}S^{*}+\alpha_{2}V^{*})w_{6}$ $\displaystyle=0,$ (28) $\displaystyle\xi w_{2}-(\delta+d+\mu)w_{3}$ $\displaystyle=0,$ (29) $\displaystyle\sigma w_{1}-\epsilon V^{*}w_{3}-(t^{\prime}+\mu)w_{4}-\frac{\delta_{1n}\alpha_{2}}{\kappa}V^{*}w_{6}$ $\displaystyle=0,$ (30) $\displaystyle\delta w_{3}+\lambda t^{\prime}w_{4}-\mu w_{5}$ $\displaystyle=0,$ (31) $\displaystyle\varphi w_{3}-\omega w_{6}$ $\displaystyle=0.$ (32) Using Eqs. (29) and (32), we obtain $\displaystyle w_{3}=\dfrac{\xi w_{2}}{\delta+d+\mu}\quad\textrm{and}\quad w_{6}=\dfrac{\varphi\xi}{\omega(\delta+d+\mu)}w_{2}.$ (33) Using these in Eq. (28), we find $\displaystyle\left[\beta^{*}-\frac{(\delta+d+\mu)(\xi+\mu)}{\xi S^{*}}+\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}+\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)\frac{\sigma}{t^{\prime}+\mu}\right]w_{2}=0.$ (34) Thus we see that (26) is a necessary condition for there to be a vector in the kernel with $w_{2}\neq 0$. Note that if $w_{2}>0$ then $w_{3}$ and $w_{6}$ are positive, as they must be as a consequence of the general theory. Moreover, using Eqs. (27) and (30), one obtains $\displaystyle-(\sigma+\mu)w_{1}+(1-\lambda)t^{\prime}w_{4}$ $\displaystyle=\left(\beta^{*}+\frac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)\frac{\xi S^{*}}{(\delta+d+\mu)}w_{2},$ (35) $\displaystyle\sigma w_{1}-(t^{\prime}+\mu)w_{4}$ $\displaystyle=\left(\epsilon+\frac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)\frac{\xi V^{*}}{(\delta+d+\mu)}w_{2},$ (36) respectively.
That leads to $\displaystyle w_{1}=-\dfrac{\dfrac{\xi w_{2}}{\delta+d+\mu}\left[\left(\beta^{*}+\dfrac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)(t^{\prime}+\mu)S^{*}+\left(\epsilon+\dfrac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)(1-\lambda)t^{\prime}V^{*}\right]}{\mu(\sigma+t^{\prime}+\mu)+\sigma\lambda t^{\prime}}<0,$ (37) for $\lambda\leq 1$ and $\displaystyle w_{4}=-\dfrac{\dfrac{\xi w_{2}}{\delta+d+\mu}\left[\left(\beta^{*}+\dfrac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)\sigma S^{*}+\left(\epsilon+\dfrac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)(\sigma+\mu)V^{*}\right]}{\mu(\sigma+t^{\prime}+\mu)+\sigma\lambda t^{\prime}}<0.$ (38) Furthermore, using Eq. (31), $\displaystyle w_{5}=\dfrac{\xi w_{2}}{\mu(\delta+d+\mu)}\left[\delta-\lambda t^{\prime}\dfrac{\left(\beta^{*}+\dfrac{\delta_{1n}\alpha_{1}\varphi}{\omega\kappa}\right)\sigma S^{*}+\left(\epsilon+\dfrac{\delta_{1n}\alpha_{2}\varphi}{\omega\kappa}\right)(\sigma+\mu)V^{*}}{\mu(\sigma+t^{\prime}+\mu)+\sigma\lambda t^{\prime}}\right].$ (39) In a similar manner, a left eigenvector can be written in the form $\underline{v}=(v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})$ for which $\displaystyle v_{5}$ $\displaystyle=0,$ (40) $\displaystyle-(\sigma+\mu)v_{1}+\sigma v_{4}$ $\displaystyle=0,$ (41) $\displaystyle-(\xi+\mu)v_{2}+\xi v_{3}$ $\displaystyle=0,$ (42) $\displaystyle-\beta^{*}S^{*}v_{1}+(\beta^{*}S^{*}+\epsilon V^{*})v_{2}-(\delta+d+\mu)v_{3}-\epsilon V^{*}v_{4}+\varphi v_{6}$ $\displaystyle=0,$ (43) $\displaystyle(1-\lambda)t^{\prime}v_{1}-(t^{\prime}+\mu)v_{4}$ $\displaystyle=0,$ (44) $\displaystyle-\frac{\delta_{1n}\alpha_{1}}{\kappa}S^{*}v_{1}+\frac{1}{\kappa}(\delta_{1n}\alpha_{1}S^{*}+\delta_{1n}\alpha_{2}V^{*})v_{2}-\frac{\delta_{1n}\alpha_{2}}{\kappa}V^{*}v_{4}-\omega v_{6}$ $\displaystyle=0.$ (45) Using (41) and (44), we find $\displaystyle\left(\sigma\lambda t^{\prime}+\mu(\sigma+\mu+t^{\prime})\right)v_{4}=0,$ (46) leading to $v_{4}=0$ and thus $v_{1}=0$.
In addition, from Eqs. (42) and (45), we obtain $\displaystyle v_{3}=\left(1+\frac{\mu}{\xi}\right)v_{2},\quad\textrm{and}\quad v_{6}=\frac{\delta_{1n}}{\kappa\omega}\left(\alpha_{1}S^{*}+\alpha_{2}V^{*}\right)v_{2}.$ (47) Thus the left eigenvector becomes $\displaystyle\underline{v}=\left(0,v_{2},\left(1+\frac{\mu}{\xi}\right)v_{2},0,0,\frac{\delta_{1n}}{\kappa\omega}(\alpha_{1}S^{*}+\alpha_{2}V^{*})v_{2}\right),$ (48) and we find $\displaystyle\underline{w}\cdot\underline{v}=\left(1+\frac{\xi+\mu}{\delta+d+\mu}+\frac{\delta_{1n}\varphi\xi(\alpha_{1}S^{*}+\alpha_{2}V^{*})}{\kappa\omega^{2}(\delta+d+\mu)}\right)w_{2}v_{2}>0.$ (49) Let $a$ be the bifurcation coefficient introduced in [12]. Considering the model (1)-(6) in the form $\dot{x}_{i}=f_{i}(x),\;\;i=1,\ldots,6$, it is given by $\displaystyle a=\sum\limits_{k,i,j=1}^{6}v_{k}w_{i}w_{j}\frac{\partial^{2}f_{k}}{\partial x_{i}\partial x_{j}}(0,0).$ (50) Using Eqs. (48) and (50), $\displaystyle a=$ $\displaystyle v_{2}\sum\limits_{i,j=1}^{6}w_{i}w_{j}\frac{\partial^{2}f_{2}}{\partial x_{i}\partial x_{j}}+\left(1+\frac{\mu}{\xi}\right)v_{2}\sum\limits_{i,j=1}^{6}w_{i}w_{j}\frac{\partial^{2}f_{3}}{\partial x_{i}\partial x_{j}}$ $\displaystyle+\frac{\delta_{1n}(\alpha_{1}x_{1}^{*}+\alpha_{2}x_{4}^{*})}{\kappa\omega}v_{2}\sum\limits_{i,j=1}^{6}w_{i}w_{j}\frac{\partial^{2}f_{6}}{\partial x_{i}\partial x_{j}},$ where $\displaystyle f_{2}$ $\displaystyle=\beta^{*}x_{1}x_{3}+\epsilon x_{3}x_{4}-\xi x_{2}-\mu x_{2}+\alpha_{1}x_{1}g(x_{6},\kappa)+\alpha_{2}x_{4}g(x_{6},\kappa),$ $\displaystyle f_{3}$ $\displaystyle=\xi x_{2}-(\delta+d+\mu)x_{3},$ $\displaystyle f_{6}$ $\displaystyle=\varphi x_{3}-\omega x_{6}.$ Note that $\frac{\partial g}{\partial C}(0,\kappa)=\kappa^{-1}$ for $n=1$ and zero otherwise. $\frac{\partial^{2}g}{\partial C^{2}}(0,\kappa)$ is equal to $-2\kappa^{-2}$ for $n=1$, $2\kappa^{-1}$ for $n=2$ and zero otherwise.
Since the second derivatives of $f_{3}$ and $f_{6}$ with respect to $x_{i}$, $i=1,\ldots,6$, are always zero, $\displaystyle a=2v_{2}\left[w_{1}w_{3}\beta^{*}+w_{3}w_{4}\epsilon\right.$ $\displaystyle\left.+(\alpha_{1}w_{1}+\alpha_{2}w_{4})w_{6}\frac{\partial g}{\partial C}(0,\kappa)+(\alpha_{1}x_{1}^{*}+\alpha_{2}x_{4}^{*})w_{6}^{2}\frac{\partial^{2}g}{\partial C^{2}}(0,\kappa)\right].$ (51) We want to determine the sign of $a$ and since $v_{2}>0$ this is the same as that of the expression in square brackets. The first two summands are negative. Now $(w_{1}+w_{4})w_{6}<0$ and $w_{6}^{2}>0$. Hence in the case $n=1$ we see that all summands are negative and hence $a<0$. It follows that in that case the conditions for a backward bifurcation given in [12] cannot be satisfied. The same conclusion is obtained in the case $n\geq 3$. In the exceptional case $n=2$ the last summand is positive and so we investigate further whether a backward bifurcation can take place in that case. In the notation of [12] we have $\displaystyle b=\sum\limits_{k,i=1}^{6}v_{k}w_{i}\frac{\partial^{2}f_{k}}{\partial x_{i}\partial\beta}(0,0).$ (52) Since only $f_{1}$ and $f_{2}$ involve the parameter $\beta$, and $x_{3}^{*}=0$ at the disease-free steady state, only the derivatives with respect to $x_{3}$ are non-zero there. Here $\displaystyle b=w_{3}x_{1}^{*}(v_{2}-v_{1})>0.$ (53) It follows from [12] that there is a backward bifurcation precisely when the right hand side of (26) is positive and $2(\alpha_{1}S^{*}+\alpha_{2}V^{*})w_{6}^{2}\kappa^{-1}>-w_{3}(w_{1}\beta^{*}+w_{4}\epsilon).$ (54) Note that in the case $n>1$ the expression for $\beta^{*}$ simplifies to $\displaystyle\beta^{*}=\frac{(\xi+\mu)(\mu+\delta+d)\left[\mu(\sigma+t^{\prime}+\mu)+\lambda t^{\prime}\sigma\right]-\Lambda\xi\epsilon\sigma}{\Lambda\xi(t^{\prime}+\mu)}.$ (55) Do there exist values of the parameters for which these conditions are satisfied? To investigate this we substitute the expressions for $w_{6}$, $w_{3}$, $w_{1}$ and $w_{4}$ into (54).
The result is $\displaystyle 2(\alpha_{1}S^{*}+\alpha_{2}V^{*})\dfrac{\varphi^{2}}{\omega^{2}\kappa}$ $\displaystyle>\frac{\Lambda[(\beta^{*})^{2}(t^{\prime}+\mu)^{2}+\beta^{*}\epsilon((2-\lambda)t^{\prime}+\mu)\sigma+\epsilon^{2}t^{\prime}\sigma(\sigma+\mu)]}{[\mu(\sigma+t^{\prime}+\mu)+\sigma\lambda t^{\prime}]^{2}}.$ (56) Using the expressions for $S^{*}$ and $V^{*}$ this can be simplified to $\displaystyle 2(\alpha_{1}(t^{\prime}+\mu)+\alpha_{2}\sigma)\dfrac{\varphi^{2}}{\omega^{2}\kappa}$ $\displaystyle>\frac{[(\beta^{*})^{2}(t^{\prime}+\mu)^{2}+\beta^{*}\epsilon((2-\lambda)t^{\prime}+\mu)\sigma+\epsilon^{2}t^{\prime}\sigma(\sigma+\mu)]}{\mu(\sigma+t^{\prime}+\mu)+\sigma\lambda t^{\prime}}.$ (57) Thus it is clear that if $\alpha_{1}$ or $\alpha_{2}$ is made large enough while the other parameters are kept fixed then there is a backward bifurcation. It is important to note that the parameters given for the case $n=2$ in Table 3 satisfy the conditions for a backward bifurcation in (57). These results will now be summed up.

Theorem 3.1. If $n=2$ and the parameters in the system (1)-(6) satisfy the inequality (57), with the quantity $\beta^{*}$ defined by (55) being positive, then the parameter $a$ of [12] is positive and a generic backward bifurcation occurs. There exist parameters for which these conditions are satisfied. If $n\neq 2$ the condition $a>0$ is never satisfied.

The centre manifold at the bifurcation point is one-dimensional. Since $v_{3}>0$ we can use $I$ as a parameter on the centre manifold. The restriction of the dynamical system to the centre manifold is of the form $\dot{I}=f(I,\beta)$, where we have suppressed the dependence on the parameters other than $\beta$ in the notation. With this notation the sign of $a$ is equal to that of $\frac{\partial^{2}f}{\partial I^{2}}$ while that of $b$ is equal to that of $\frac{\partial^{2}f}{\partial I\partial\beta}$. We have $f(0,\beta)=0$ for all $\beta$.
Moreover there is a curve $c(\beta)$ of steady states with $c(\beta^{*})=0$. The sign of $c^{\prime}(\beta^{*})$ is equal to that of $a$.

4. Endemic steady states

In this section we consider endemic steady states, i.e. those where all the unknowns are positive. It follows from (4) that at a steady state $V=\frac{\sigma}{\epsilon I+(t^{\prime}+\mu)+\alpha_{2}g}S$. Substituting this into (1) gives $\displaystyle S=\frac{\Lambda}{\beta I+\sigma+\alpha_{1}g+\mu}\left[1-\frac{(1-\lambda)t^{\prime}\sigma}{(\beta I+\sigma+\alpha_{1}g+\mu)(\epsilon I+(t^{\prime}+\mu)+\alpha_{2}g)}\right]^{-1}$ $\displaystyle=\frac{\Lambda(\epsilon I+(t^{\prime}+\mu)+\alpha_{2}g(C))}{(\beta I+\sigma+\alpha_{1}g(C)+\mu)(\epsilon I+(t^{\prime}+\mu)+\alpha_{2}g(C))-(1-\lambda)t^{\prime}\sigma}.$ (58) Note also that due to (6) we have $C=\frac{\varphi}{\omega}I$ and that due to (3) we have $E=\frac{\xi}{\delta+d+\mu}I$. These relations allow $S$, $E$, $V$ and $C$ to be expressed in terms of $I$. Thus substituting them into (2) gives an equation for $I$ alone. Each summand contains a factor $I$ and for an endemic steady state this can be cancelled. There remains $0=\beta S+\epsilon V-\frac{(\xi+\mu)(\delta+d+\mu)}{\xi}+(\alpha_{1}S+\alpha_{2}V)\frac{(\varphi/\omega)^{n}I^{n-1}}{(\varphi/\omega)^{n}I^{n}+\kappa}.$ (59) Denote the expression in the denominator of (58) by $Z$. Multiplying (59) by $Z(\epsilon I+(t^{\prime}+\mu)+\alpha_{2}g)[(\varphi/\omega)^{n}I^{n}+\kappa]$ gives a polynomial equation $p(I)=0$ for $I$. Endemic steady states are in one-to-one correspondence with positive roots of $p$. Since we are most interested in the case with backward bifurcations we now restrict to the case $n=2$, where there are considerable simplifications in these formulae. It is clear that $p(I)\to-\infty$ as $I\to\infty$.
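The conditions of Theorem 3.1 and the existence of an endemic root can both be verified numerically. The sketch below (not from the paper) uses the $n=2$ parameters of Table 3; the value $\beta=2\beta^{*}$ is a hypothetical choice, made only so that $R_{0}>1$ and a positive root of (59) is guaranteed to exist.

```python
# Check the backward-bifurcation condition (57) for the n = 2 parameters
# of Table 3, verify that beta* from (55) gives R0 = 1, and then locate a
# positive root of the steady-state equation (59) by plain bisection.
Lam, mu, eps = 3032.0, 3.653e-5, 1e-8
tp, lam_, d = 0.0055, 0.8, 0.1
a1, a2, om, phi = 0.0001, 0.0001, 5.0, 2.0
kap, xi, delta, sigma = 20000.0, 0.01133, 0.2, 0.02126
n = 2

den0 = mu * (sigma + tp + mu) + lam_ * tp * sigma
Sstar = Lam * (tp + mu) / den0                 # disease-free S*

# beta* from (55); for n = 2 the environmental terms drop out of R0
beta_star = ((xi + mu) * (mu + delta + d) * den0
             - Lam * xi * eps * sigma) / (Lam * xi * (tp + mu))
R0_at_bstar = xi * Sstar / ((xi + mu) * (mu + delta + d)) * (
    beta_star + eps * sigma / (tp + mu))       # should equal 1

# inequality (57)
lhs = 2 * (a1 * (tp + mu) + a2 * sigma) * phi ** 2 / (om ** 2 * kap)
rhs = (beta_star ** 2 * (tp + mu) ** 2
       + beta_star * eps * ((2 - lam_) * tp + mu) * sigma
       + eps ** 2 * tp * sigma * (sigma + mu)) / den0

beta = 2 * beta_star                           # hypothetical choice, R0 > 1

def F(I):
    """Bracket of (59) after cancelling a factor I, with S(I) from (58)."""
    g = (phi / om) ** n * I ** n / ((phi / om) ** n * I ** n + kap)
    S = (Lam * (eps * I + (tp + mu) + a2 * g)
         / ((beta * I + sigma + a1 * g + mu) * (eps * I + (tp + mu) + a2 * g)
            - (1 - lam_) * tp * sigma))
    V = sigma * S / (eps * I + (tp + mu) + a2 * g)
    env = (a1 * S + a2 * V) * (phi / om) ** n * I ** (n - 1) / (
        (phi / om) ** n * I ** n + kap)
    return beta * S + eps * V - (xi + mu) * (delta + d + mu) / xi + env

lo, hi = 1e-6, 1e9     # F(lo) > 0 since R0 > 1; F(hi) < 0 since F -> -const
for _ in range(200):   # bisection for the endemic root
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
print(beta_star > 0, lhs > rhs, lo > 0)
```

With these values $\beta^{*}>0$ and the inequality (57) holds, consistent with the claim that the fitted $n=2$ parameters admit a backward bifurcation.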
Moreover $p(0)$ is equal to a positive factor times $\beta S^{*}+\epsilon V^{*}-\frac{(\xi+\mu)(\delta+d+\mu)}{\xi}.$ (60) We see that the sign of $p(0)$ is the same as that of $R_{0}-1$. As $\beta$ increases through $\beta^{*}$ the sign of $p(0)$ changes from negative to positive. When $p^{\prime}(0)\neq 0$ we have a generic transcritical bifurcation (cf. [28], Section 3.1). The sign of $p^{\prime}(0)$ is the same as that of $c^{\prime}(\beta^{*})$ in the discussion of the centre manifold in the previous section. If $p^{\prime}(0)<0$ then for $\beta$ slightly greater than $\beta^{*}$ there exists a positive root of $p$ close to zero and hence a positive steady state close to the disease-free steady state. This corresponds to a forward bifurcation. If, on the other hand, $p^{\prime}(0)>0$ then for $\beta$ slightly less than $\beta^{*}$ there exists a positive root of $p$ close to zero and hence a positive steady state close to the disease-free steady state. This corresponds to a backward bifurcation. In this case $p$ is positive for $I$ slightly larger than its value $I_{1}$ at that steady state. By the intermediate value theorem there must exist some $I_{2}>I_{1}$ with $p(I_{2})=0$ and hence a second positive steady state. The direction of the flow on the centre manifold shows that the steady state with $I=I_{1}$ is unstable. The stability of the steady state with $I=I_{2}$ cannot be determined by the arguments we have presented.

5. Simulations

In a broad context, mathematical modelling and data fitting revolve around formulating models that describe real-world phenomena and then adjusting the parameters of these models to best match observed data. The aim is to capture the underlying relationships and behaviour of the system being studied and to use the available data to validate the model.
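The least-squares principle applied in this section can be illustrated with a toy one-parameter problem (this is a hypothetical sketch, not the paper's actual seven-parameter fit): given data $V_{i}$ and a model $\mathcal{F}(x_{i},p)$, minimise the sum of squared residuals over $p$.

```python
# Toy illustration of the least-squares principle: recover a single
# hypothetical rate p in F(x, p) = V0 * exp(p * x) from synthetic
# "vaccination" data by minimising S(p) = sum_i (V_i - F(x_i, p))^2
# over a coarse grid.  V0 and p_true are assumed values for the demo.
import math

V0 = 18.5e6                   # assumed initial vaccinated count
p_true = 0.014                # hypothetical growth rate per day
xs = list(range(0, 60, 5))
data = [V0 * math.exp(p_true * x) for x in xs]   # noise-free synthetic data

def S(p):
    # objective of the form (61): sum of squared deviations
    return sum((v - V0 * math.exp(p * x)) ** 2 for x, v in zip(xs, data))

grid = [i * 1e-4 for i in range(0, 401)]         # candidate p in [0, 0.04]
p_hat = min(grid, key=S)
print(abs(p_hat - p_true) < 1e-6)
```

In practice one replaces the grid search by a constrained non-linear least-squares routine, which is what the MATLAB fit described next does.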
In our study, we validate the mathematical model of the COVID-19 outbreak by fitting it to observed data. However, the available data are scarce (only the data for infectious and vaccinated people are reasonably reliable). Vaccination is globally considered to be the most effective solution for infectious diseases such as the recent COVID-19 outbreak. In this section, as an example, model results and observed data for the vaccinated class are compared for COVID-19 scenarios in Turkey. Some of the realistic parameters are taken from the literature; see the related references in Table 3 for a detailed discussion of parameter choices. The remaining seven parameters are estimated by fitting the vaccinated compartment generated from the system (1)-(6) to the observed number of COVID-19 vaccinated individuals using standard model-fitting procedures. The least squares method finds the best-fitting curve for a set of data points by minimising the sum of the squares of the offsets (residuals) of the points from the curve. The vector of seven parameters $p=(\beta,\epsilon,\alpha_{1},\alpha_{2},\xi,\delta,\sigma)$ can be estimated via parameter estimation. In this context, the model given by (1)-(6) is evaluated by considering a non-linear least squares problem with positivity constraints, where the best-fitting curve is found for a small data set of the vaccinated class by minimising the sum of squares of the deviations of the data points from the plotted graph [6, 8]. This may be described as $S(p)=\sum\left(V_{i}-\mathcal{F}(x_{i},p)\right)^{2},$ (61) where $V_{i}$ represents the data set and $\mathcal{F}(x_{i},p)$ denotes the model output for a vector of unknowns $p$. To minimise the function $S(p)$, the non-linear least squares minimisation routine lsqcurvefit of MATLAB is used [17]. Parameters obtained from this approach are given in Table 3.
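Each evaluation of $\mathcal{F}(x_{i},p)$ inside the fit requires a forward integration of (1)-(6). As a minimal, dependency-free sketch, the system can be integrated with a fixed-step RK4 scheme; the right-hand sides below are reconstructed from the linearization (27)-(32) and the steady-state relations of Section 4, and the initial infected count $I_{0}$ (not listed with the other initial data) is a hypothetical placeholder.

```python
# Pure-Python RK4 integration of the model (1)-(6) with the n = 1
# parameters of Table 3.  The right-hand sides are reconstructed from
# the linearization (27)-(32); I0 = 1e6 is an assumed value, not taken
# from the paper.
Lam, beta, mu, eps = 3032.0, 1.0257e-8, 3.653e-5, 1e-8
tp, lam_, d = 0.0055, 0.8, 0.1
a1, a2, om, phi = 0.00041, 0.00031, 5.0, 2.0
kap, xi, delta, sigma = 20000.0, 0.01004, 0.19999, 0.02136
n = 1

def rhs(y):
    S, E, I, V, R, C = y
    g = C ** n / (C ** n + kap)        # response function g(C, kappa)
    return (
        Lam - beta * S * I - sigma * S - a1 * S * g - mu * S
        + (1 - lam_) * tp * V,                                   # S'
        beta * S * I + eps * V * I + (a1 * S + a2 * V) * g
        - (xi + mu) * E,                                         # E'
        xi * E - (delta + d + mu) * I,                           # I'
        sigma * S - eps * V * I - a2 * V * g - (tp + mu) * V,    # V'
        delta * I + lam_ * tp * V - mu * R,                      # R'
        phi * I - om * C,                                        # C'
    )

def rk4(y, h):
    k1 = rhs(y)
    k2 = rhs([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = rhs([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = rhs([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + e)
            for yi, a, b, c, e in zip(y, k1, k2, k3, k4)]

# initial data of Fig. 2; I0 = 1e6 is a hypothetical placeholder
y = [61098000.0, 2200000.0, 1.0e6, 18500000.0, 2000.0, 20000.0]
h, T = 0.05, 60.0
for _ in range(int(T / h)):
    y = rk4(y, h)
print(all(c >= 0 for c in y), y[0] < 61098000.0)
```

The MATLAB pipeline in the paper plays the same role with the adaptive solver ode45 in place of this fixed-step sketch.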
In addition, MATLAB’s standard ode45 solver [17] is applied for numerical integration of the system (1)-(6) with suitable initial conditions provided in the text. Numerical simulations of the model (1)-(6) are performed with the parameters given in Table 3 for $n=1$. An example data set of vaccinated people during the COVID-19 outbreak in Turkey is taken from the World Health Organisation. In Fig. 2, the results are shown for the model (1)-(6) with $n=1$ fitted to the data of individuals vaccinated between June $10$, $2021$ and August $8$, $2021$. The total population is taken to be $83$ million [1].

Figure 2. Vaccination component of the model for the case $n=1$ compared with real data for the period of June $10$, $2021$ to August $8$, $2021$. Parameters are given in Table 3. Initial conditions are chosen as $S_{0}=61098000,V_{0}=18500000,E_{0}=2200000,R_{0}=2000,C_{0}=20000$. The black line denotes the model result and the red stars represent daily vaccinated cases.

In Fig. 3, the success of the parameter estimation is again demonstrated. Here numerical simulations of the model (1)-(6) with $n=2$ are shown with the parameters given in Table 3 and the resulting outcome is compared with the data set of vaccinated people between June $10$, $2021$ and August $8$, $2021$. As seen in Figs. 2 and 3, the black curve corresponding to the model flattens after around day $40$, and this agrees with the real data, where the number of vaccinated people rapidly increases from $18967237$ to $41726338$ and levels off roughly about July $20$.

Figure 3. Vaccination component of the model for the case $n=2$ compared with real data for the period of June $10$, $2021$ to August $8$, $2021$. Parameters are given in Table 3. Initial conditions are chosen as $S_{0}=61098000,V_{0}=18500000,E_{0}=2200000,R_{0}=2000,C_{0}=20000$. The black line denotes the model result and the red stars represent daily vaccinated cases.

Table 3. Estimated parameters of the model.
Parameters | Value ($n=1$) | Value ($n=2$) | Unit | Source
---|---|---|---|---
$\Lambda$ | $3032$ | $3032$ | day-1 | assumed based on [5, 14, 15]
$\beta$ | $1.0257\times 10^{-8}$ | $1.004\times 10^{-8}$ | day-1 | estimated
$\mu$ | $3.653\times 10^{-5}$ | $3.653\times 10^{-5}$ | day-1 | assumed based on [1, 6, 15]
$\epsilon$ | $1\times 10^{-8}$ | $1\times 10^{-8}$ | day-1 | estimated
$t^{\prime}$ | $0.0055$ | $0.0055$ | day-1 | assumed based on [3]
$\lambda$ | $0.8$ | $0.8$ | day-1 | assumed based on [3, 4, 21]
$d$ | $0.1$ | $0.1$ | day-1 | assumed based on [6, 23]
$\alpha_{1}$ | $0.00041$ | $0.0001$ | day-1 | estimated
$\alpha_{2}$ | $0.00031$ | $0.0001$ | day-1 | estimated
$\omega$ | $5$ | $5$ | day-1 | [6]
$\varphi$ | $2$ | $2$ | day-1 | [6]
$\kappa$ | $20000$ | $20000$ | copies $/$ day | assumed based on [13, 6]
$\xi$ | $0.01004$ | $0.01133$ | day-1 | estimated
$\delta$ | $0.19999$ | $0.2$ | day-1 | estimated
$\sigma$ | $0.02136$ | $0.02126$ | day-1 | estimated

6. Sensitivity analysis

Since varying the parameters may have a significant impact on the model output, one can perform a sensitivity analysis of the dynamical model to determine which model parameters are most influential for the dynamics. The parameters associated with the basic reproduction ratio $R_{0}$ are of particular importance for the robustness of the model. In this context, the aim of the sensitivity analysis is to identify the most substantial parameters in the model for disease transmission. Following the ideas presented in [7, 18], the sensitivity analysis can be performed based on the basic reproduction ratio as $\mathcal{S}_{i}^{\mathcal{P}}=\frac{\mathcal{P}}{R_{0}}\frac{\partial R_{0}}{\partial\mathcal{P}},$ (62) where $\mathcal{P}$ represents a generic parameter in the model (1)-(6). The sensitivity indices of the system parameters in Table 3 are presented in Table 4, and also in Figs. 4(a) for $n=1$ and 4(b) for $n=2$. As seen from Table 4 and Fig.
4, the model (1)-(6) is highly sensitive to the parameters $\lambda$, $t^{\prime}$ and $\delta$. Thus it can be concluded that an increase in these parameters diminishes the basic reproduction ratio $R_{0}$ for both $n=1$ and $n=2$. The significance of some parameters may differ between the cases $n=1$ and $n=2$. For example, although an increase in the parameter $\epsilon$, i.e. the rate at which a vaccinated individual becomes exposed after being in contact with an infected individual, provides an essential stimulus to the basic reproduction ratio for the case $n=2$, it has a much smaller impact on $R_{0}$ for the case $n=1$.

Table 4. Sensitivity indices of the basic reproduction ratio of the model (1)-(6), evaluated at the baseline parameters provided in Table 3, for $n=1$ and $n=2$.

Parameters | Index ($n=1$) | Index ($n=2$)
---|---|---
$\beta$ | $0.1276764$ | $0.2089141$
$\epsilon$ | $0.4754643$ | $0.7910858$
$t^{\prime}$ | $-0.7636042$ | $-0.7843018$
$\lambda$ | $-0.9897355$ | $-0.9897256$
$d$ | $-0.3333038$ | $-0.3332927$
$\alpha_{1}$ | $0.1020714$ | $0$
$\alpha_{2}$ | $0.2947878$ | $0$
$\xi$ | $0.0036252$ | $0.0032138$
$\delta$ | $-0.6665743$ | $-0.6665855$
$\sigma$ | $-0.2276182$ | $-0.2067744$

Figure 4. Sensitivity indices based on the basic reproduction ratio $R_{0}$ with respect to various parameters, for the cases $n=1$ (a) and $n=2$ (b) in the model (1)-(6) for Turkey. Parameter values given in Table 3 are considered for both cases.

Since the aim of our work is to further broaden the current knowledge of the modelling of the recent COVID-19 outbreak with vaccination, we focus on the role of two parameters regarding vaccination in the model. In Fig. 5, time simulations of the susceptible compartment over a period of $180$ days are presented for various values of $\lambda=0.2,0.8,1.4$ (a) and $\sigma=0.01,0.02136,0.09$ (b), which denote, respectively, the efficiency of the vaccine and the vaccination rate of susceptibles (after the first shot).
As is seen from Fig. 5 (a,b), with the increase in $\lambda$ and $\sigma$, susceptible individuals diminish at an earlier time and enter the vaccinated class.

Figure 5. Plots of the susceptible population compartment for different values of the vaccine efficiency $\lambda=0.2,0.8,1.4$ and for different values of the vaccination rate of susceptible individuals after the first shot, $\sigma=0.01,0.02136,0.09$, with $n=1$.

Figure 6 demonstrates time simulations of the recovered class over a period of $360$ days for various values of $\lambda=0.2,0.8,1.4$ and $\sigma=0.01,0.02136,0.09$.

Figure 6. Plots of the recovered compartment for different values of the vaccine efficiency $\lambda=0.2,0.8,1.4$, and for different values of the vaccination rate of susceptible individuals after the first shot, $\sigma=0.01,0.02136,0.09$, with $n=1$.

Conclusion and outlook

In this paper a model for an epidemic with a partially effective vaccination and infection by virus in the environment is studied. Two different implementations of the idea of a partially effective vaccination are included, and which of these is chosen does not seem to have an essential influence on the qualitative behaviour of the solutions. In modelling infections coming from the environment we used a phenomenological model for the force of infection containing an integer $n\geq 1$ as a parameter. We discovered that the choice $n=2$ leads to the occurrence of backward bifurcations while other choices of $n$ do not. Thus here there is a major qualitative difference. On the other hand, fitting to real data for COVID-19 in Turkey showed that both the cases $n=1$ and $n=2$ worked and there was no clear indication that one of these choices was better than the other according to that criterion. In the case $n=2$ it was shown that for certain values of the parameters there exists an endemic steady state although $R_{0}<1$. It was also shown that in this situation there exists more than one endemic steady state.
At least one of the positive steady states is unstable. This confirms rigorously that certain aspects of the usual picture of a backward bifurcation are present in this model. An aspect of this picture which was not reproduced here is that the other positive steady state should be stable. It would be desirable to prove a stability statement of this kind analytically. An outstanding challenge is to provide a mechanistic derivation of the response function for infections coming from the environment. If this could be done then it would help to decide which value of $n$ in the Ansatz we used is more appropriate for modelling a given disease or whether, indeed, a different type of function would give better results.

Funding: Aytül Gökçe was partially supported by a grant from the Niels Henrik Abel Board.

## References

* [1] Türkiye İstatistik Kurumu (TUİK), https://www.tuik.gov.tr/
* [2] Anderson, G., Reiter, R.J.: Melatonin: Roles in influenza, COVID-19, and other viral infections. Reviews in Medical Virology 30, e2019 (2020)
* [3] Angeli, M., Neofotistos, G., Mattheakis, M., Kaxiras, E.: Modeling the effect of the vaccination campaign on the COVID-19 pandemic. Chaos, Solitons & Fractals 154, 111621 (2022)
* [4] Baden, L.R., El Sahly, H.M., Essink, B., Kotloff, K., Frey, S., Novak, R., Diemert, D., Spector, S.A., Rouphael, N., Creech, C.B., et al.: Efficacy and safety of the mRNA-1273 SARS-CoV-2 vaccine. New England Journal of Medicine 384(5), 403–416 (2021)
* [5] Bajiya, V.P., Bugalia, S., Tripathi, J.P.: Mathematical modeling of COVID-19: Impact of non-pharmaceutical interventions in India. Chaos: An Interdisciplinary Journal of Nonlinear Science 30(11), 113143 (2020)
* [6] Bulut, H., Gölgeli, M., Atay, F.M.: Modelling personal cautiousness during the COVID-19 pandemic: a case study for Turkey and Italy. Nonlinear Dynamics 105(1), 957–969 (2021)
* [7] Chitnis, N., Hyman, J.M., Cushing, J.M.: Determining important parameters in the spread of malaria through the sensitivity analysis of a mathematical model. Bulletin of Mathematical Biology 70, 1272–1296 (2008)
* [8] Coleman, T.F., Li, Y.: An interior trust region approach for nonlinear minimization subject to bounds. SIAM Journal on Optimization 6(2), 418–445 (1996)
* [9] Dawes, J.H.P., Souza, M.: A derivation of Holling’s type I, II and III functional responses in predator–prey systems. Journal of Theoretical Biology 327, 11–22 (2013)
* [10] Diekmann, O., Heesterbeek, J.A.P.: Mathematical Epidemiology of Infectious Diseases. Wiley (2000)
* [11] Dietz, K.: Overall population patterns in the transmission cycle of infectious disease cycles, in Population Biology of Infectious Diseases (Eds. R. M. Anderson and R. M. May). Springer (1982)
* [12] Van den Driessche, P., Watmough, J.: Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Mathematical Biosciences 180(1-2), 29–48 (2002)
* [13] Drosten, C., Chiu, L.L., Panning, M., Leong, H.N., Preiser, W., Tam, J.S., Günther, S., Kramme, S., Emmerich, P., Ng, W.L., et al.: Evaluation of advanced reverse transcription-PCR assays and an alternative PCR target region for detection of severe acute respiratory syndrome-associated coronavirus. Journal of Clinical Microbiology 42(5), 2043–2047 (2004)
* [14] Garba, S.M., Safi, M.A., Gumel, A.B.: Cross-immunity-induced backward bifurcation for a model of transmission dynamics of two strains of influenza. Nonlinear Analysis: Real World Applications 14(3), 1384–1403 (2013)
* [15] Gumel, A.B., McCluskey, C.C., Watmough, J.: An SVEIR model for assessing potential impact of an imperfect anti-SARS vaccine. Mathematical Biosciences and Engineering 3(3), 485–512 (2006)
* [16] Holling, C.S.: Some characteristics of simple types of predation and parasitism. The Canadian Entomologist 91(7), 385–398 (1959)
* [17] The MathWorks Inc.: MATLAB version: 9.13.0 (R2020b) (2020). https://www.mathworks.com
* [18] Martcheva, M.: An Introduction to Mathematical Epidemiology. Springer (2015)
* [19] Martcheva, M.: Methods for deriving necessary and sufficient conditions for backward bifurcation. Journal of Biological Dynamics 13, 538–566 (2019)
* [20] Park, S.W., Cornforth, D.M., Dushoff, J., Weitz, J.S.: The time scale of asymptomatic transmission affects estimates of epidemic potential in the COVID-19 outbreak. Epidemics 31, 100392 (2020)
* [21] Polack, F.P., Thomas, S.J., Kitchin, N., Absalon, J., Gurtman, A., Lockhart, S., Perez, J.L., Pérez Marc, G., Moreira, E.D., Zerbini, C., et al.: Safety and efficacy of the BNT162b2 mRNA COVID-19 vaccine. New England Journal of Medicine 383(27), 2603–2615 (2020)
* [22] Rendall, A.D.: Mathematics of the NFAT signalling pathway. SIAM Journal on Applied Dynamical Systems 11, 988–1006 (2012)
* [23] Ritchie, H., Ortiz-Ospina, E., Beltekian, D., Mathieu, E., Hasell, J., Macdonald, B., Giattino, C., Roser, M., Breck Yunits, A., Gavrilov, D., et al.: Mortality risk of COVID-19. Our World in Data [Internet], cited 5 May 2020. Available: https://ourworldindata.org/mortality-risk-covid#the-case-fatality-rate (2020)
* [24] Ross, R.: An application of the theory of probability to the theory of a priori pathometry. Part I. Proceedings of the Royal Society of London. Series A 92, 204–230 (1916)
* [25] Ross, R., Hudson, H.P.: An application of the theory of probability to the theory of a priori pathometry. Part II. Proceedings of the Royal Society of London. Series A 93, 212–225 (1917)
* [26] Ross, R., Hudson, H.P.: An application of the theory of probability to the theory of a priori pathometry. Part III. Proceedings of the Royal Society of London. Series A 93, 225–240 (1917)
* [27] Sunyok, C.J., Fox, L., Ritchie, H., Lanzas, C., Lenhart, S.: Mathematically modelling the effect of touch frequency on the environmental transmission of Clostridioides difficile in healthcare settings. Mathematical Biosciences 340, 108666 (2021)
* [28] Wiggins, S.: Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer (1990)
* [29] Zhao, J., Eisenberg, J.E., Spicknall, I.H., Li, S., Koopman, J.S.: Model analysis of fomite mediated influenza transmission. PLoS ONE 7, e51984 (2012)

∗ Corresponding author
# Evidence of jet induced optical microvariability in radio-loud Narrow Line Seyfert 1 Galaxies

Vineet Ojha1, 2, Vivek Kumar Jha2, 3, Hum Chand2, 4, Veeresh Singh1

1Physical Research Laboratory (PRL), Astronomy and Astrophysics Division, Ahmedabad, 380 009; India
2Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital, 263002; India
3Department of Physics, Deen Dayal Upadhyaya Gorakhpur University, Gorakhpur, 273009; India
4Department of Physics & Astronomical sciences, Central University of Himachal Pradesh, Dharamshala, 176215; India

E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>

(Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

To quantify the role of radio jets in the Intra-Night Optical Variability (INOV) of Radio-Loud Narrow-Line Seyfert 1 (RLNLSy1) galaxies, we report the first systematic comparative INOV study of 23 RLNLSy1 galaxies, with 15 RLNLSy1s having confirmed detection of jets (jetted) and the remaining 8 RLNLSy1s having no detection of jets (non-jetted), based on their Very Long Baseline Array observations. We have monitored these two samples in 37 and 16 sessions, respectively, of a minimum 3-hour duration each. Based upon the Fη-test at the 99% confidence level with a typical INOV amplitude ($\psi$) detection threshold of $>$ 3%, we find an INOV duty cycle of 12% for the sample of jetted RLNLSy1s; however, none of the sources in the sample of non-jetted RLNLSy1s showed INOV. Among the jetted RLNLSy1s, the Duty Cycle (DC) for the $\gamma$-ray detected ($\gamma$-ray) RLNLSy1s is found to be 34%, in contrast to the null INOV detection in the case of non-$\gamma$-ray RLNLSy1s. This suggests that, rather than the mere presence of a jet, relativistic beaming plays a significant role for INOV in the case of low-luminosity, highly accreting AGNs such as NLSy1s, in which dilution of the AGN’s non-thermal optical emission by the (much steadier) optical emission contributed by the nuclear accretion disc is quite likely.
Our study of the jetted $\gamma$-ray RLNLSy1s shows more frequent INOV detection for sources with higher apparent jet speeds. Further, our results also suggest that, among the NLSy1s, only the jetted $\gamma$-ray detected RLNLSy1 galaxies have a DC approaching blazar-like values. ###### keywords: surveys – galaxies: active – galaxies: jets – $\gamma$-ray-galaxies: photometry – galaxies: Seyfert – gamma-rays: galaxies. ††pubyear: 2022††pagerange: Evidence of jet induced optical microvariability in radio-loud Narrow Line Seyfert 1 Galaxies– References
## 1 Introduction
Narrow-line Seyfert 1 (NLSy1) galaxies are a subclass of active galactic nuclei (AGN), emitting electromagnetic radiation from the radio to the gamma-ray wavebands. Although both permitted and forbidden emission lines are present in their optical spectra, the broad component of their Balmer emission lines is narrower than in the general population of type-1 Seyfert galaxies, with the full width at half maximum of the broad component of the Balmer emission line (FWHM(H${\beta}$)) being less than 2000 km s${}^{-1}$ (Osterbrock & Pogge, 1985; Goodrich et al., 1989). Other optical characteristics, such as a flux ratio of [O${}_{III}]_{\lambda 5007}/H\beta$ $<$ 3 and strong permitted Fe ii emission lines, are used in addition to the FWHM(H${\beta}$) criterion to define NLSy1 galaxies (Shuder & Osterbrock, 1981).
Besides, these galaxies also display peculiar characteristics at other wavelengths, especially in X-rays, such as a strong soft X-ray excess below 2 keV (e.g., Brandt et al., 1997; Vaughan et al., 1999; Vignali et al., 2004; Ojha et al., 2020b), steep soft X-ray spectra (e.g., Boller et al., 1996; Wang et al., 1996; Grupe et al., 1998), rapid X-ray (and sometimes optical) flux variability (e.g., Leighly, 1999; Komossa & Meerschweinchen, 2000; Miller et al., 2000; Klimek et al., 2004; Liu et al., 2010; Paliya et al., 2013a; Kshama et al., 2017; Ojha et al., 2019; Ojha et al., 2020a), and blue-shifted emission-line profiles (e.g., Zamanov et al., 2002; Leighly & Moore, 2004; Boroson, 2005; Jha et al., 2021). Furthermore, NLSy1s are believed to be relatively young AGNs representing an early phase of AGN evolution (e.g., Mathur, 2000; Sulentic et al., 2000; Mathur et al., 2001; Fraix-Burnet et al., 2017; Komossa, 2018; Paliya, 2019). Observationally, it is suggested that the majority of NLSy1s have relatively low Super Massive Black Hole (SMBH) masses of 10${}^{6}$–10${}^{8}$ M☉ (Grupe & Mathur, 2004; Deo et al., 2006; Zhou et al., 2006; Peterson, 2011; Wang et al., 2014; Rakshit et al., 2017) and high accretion rates, $\lambda_{Edd}\sim$ 0.05–1.00, in contrast to the luminous class of AGNs such as quasars (Boroson & Green, 1992; Peterson et al., 2000; Ojha et al., 2020b). However, the relatively low SMBH masses are not uncontested, since a systematic underestimation of their SMBH masses has been suggested (Decarli et al., 2008; Marconi et al., 2008; Calderone et al., 2013; Viswanath et al., 2019; Ojha et al., 2020b). These highly accreting galaxies are generally hosted in spiral/disc galaxies (Deo et al., 2006; Ohta et al., 2007; Olguín-Iglesias et al., 2020), although elliptical hosts have been suggested for a few $\gamma$-ray detected NLSy1s (hereafter $\gamma$-NLSy1s; D’Ammando et al., 2017, 2018).
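The Eddington ratio $\lambda_{Edd}$ quoted above is the ratio of bolometric to Eddington luminosity; a minimal sketch (the function name and example numbers are ours, assuming the standard Eddington luminosity for ionized hydrogen):

```python
def eddington_ratio(l_bol_erg_s, m_bh_solar):
    """lambda_Edd = L_bol / L_Edd, with L_Edd ~ 1.26e38 (M_BH / M_sun) erg/s
    (the standard Eddington luminosity for ionized hydrogen)."""
    l_edd = 1.26e38 * m_bh_solar
    return l_bol_erg_s / l_edd

# Hypothetical example: a 1e7 M_sun SMBH radiating 1.26e44 erg/s
lam = eddington_ratio(1.26e44, 1e7)  # ~0.1, within the 0.05-1.00 range quoted
```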
Interestingly, although NLSy1s exhibit both radio-quiet and radio-loud characteristics, defined by the radio parameter R${}_{5GHz}\equiv f_{5GHz}/f_{4400\AA}$, with R $\leq$ 10 and R $>$ 10 used to define radio-quiet and radio-loud AGNs, respectively (e.g., see Stocke et al., 1992; Visnovsky et al., 1992; Kellermann et al., 1994, 1989), the population is dominated by radio-quiet objects (Kellermann et al., 2016), and only a small fraction, $\sim$ 7%, of NLSy1s are radio-loud (hereafter RLNLSy1s; Komossa et al., 2006; Zhou et al., 2006; Rakshit et al., 2017; Singh & Chand, 2018). This suggests that jets may be present in a few of these galaxies, making them radio-loud (Zhou et al., 2003; Yuan et al., 2008). Indeed, Very Long Baseline Array (VLBA) observations have discovered parsec-scale blazar-like radio jets in a few RLNLSy1s (Lister et al., 2013; Gu et al., 2015; Lister et al., 2016). The existence of relativistic jets in such a subclass of AGN (albeit in only a few of the sources), which has relatively high accretion rates and low black hole masses, contradicts the general trend of relativistic jets occurring in AGNs with larger black hole masses and lower accretion rates (Urry et al., 2000; Boroson, 2002; Böttcher & Dermer, 2002; Urry, 2003; Marscher, 2009; Chiaberge & Marconi, 2011), and also challenges the theoretical paradigm of jet formation (e.g., Böttcher & Dermer, 2002). Hence, studying NLSy1s from the standpoint of jets is essential to understanding the physical processes that can launch relativistic jets in this subclass of AGN.
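The radio-loudness criterion is a simple flux-density ratio; a minimal sketch (the function name is ours; the illustrative flux densities are the 1.4 GHz and k-corrected B-band values quoted for J120014.08$-$004638.7 in the notes to Table 1):

```python
def radio_loudness(f_radio_mjy, f_optical_mjy):
    """Radio-loudness parameter R = f_radio / f_optical, the ratio of
    rest-frame radio to optical flux densities; R > 10 marks radio-loud."""
    return f_radio_mjy / f_optical_mjy

# 27.1 mJy at 1.4 GHz vs. 0.16 mJy in the B band
R = radio_loudness(27.1, 0.16)  # ~169, i.e. radio-loud
```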
Nonetheless, despite the blazar-like double-humped spectral energy distribution (SED) of a few RLNLSy1s (e.g., Abdo et al., 2009c; Paliya et al., 2013b; Paliya, 2019), only a minuscule fraction of RLNLSy1s, especially the very radio-loud (R $>$ 100) ones, exhibit multi-wavelength characteristics similar to the blazar class of AGN, such as compact radio cores, high brightness temperatures, superluminal motion, flat radio and X-ray spectra, and rapid infrared and X-ray flux variability (Boller et al., 1996; Grupe et al., 1998; Leighly, 1999; Hayashida, 2000; Komossa & Meerschweinchen, 2000; Yuan et al., 2008; Jiang et al., 2012; Orienti et al., 2012; Itoh et al., 2013; Yao et al., 2015; Berton et al., 2018; Gabanyi et al., 2018; Lister, 2018). All these characteristics provide indirect evidence of the presence of jets in them. However, the $\gamma$-ray detection by the Fermi Large Area Telescope (Fermi-LAT, https://heasarc.gsfc.nasa.gov/docs/heasarc/missions/fermi.html) of about two dozen RLNLSy1s gives conclusive evidence that $\gamma$-ray detected NLSy1s are capable of ejecting relativistic jets (Abdo et al., 2009a, b, c; Foschini et al., 2010; Foschini, 2011; D’Ammando et al., 2012; D’Ammando et al., 2015; Yao et al., 2015; Paliya et al., 2018; Yang et al., 2018; Yao et al., 2019). Variability of an AGN’s optical flux on time scales of a few minutes to a day is variously known as microvariability (Miller et al., 1989), Intraday Variability (IDV, Wagner & Witzel, 1995), or Intra-Night Optical Variability (INOV, Gopal-Krishna et al., 1993).
This tool can also be used to indirectly verify the presence or absence of jets in other sub-classes of AGN, thanks to the well-established observational fact that for radio-loud, jet-dominated sources such as blazars, both the INOV amplitude ($\psi$) and the duty cycle (DC) are found to be distinctively high in comparison to non-blazars, including weakly polarised flat-radio-spectrum (i.e., radio-beamed) quasars (Goyal et al., 2013b; Gopal-Krishna & Wiita, 2018). Such an indirect tool has been used for a decade in searching for Doppler-boosted optical jets in low-luminosity AGNs such as NLSy1s and weak-emission-line QSOs (e.g., see Liu et al., 2010; Paliya et al., 2013a; Kumar et al., 2015; Parveen et al., 2016; Kumar et al., 2017; Ojha et al., 2018, 2021). However, this indirect evidence is based upon the high INOV amplitude and duty cycle observed in blazars, which host strongly Doppler-boosted jets. More importantly, an INOV study comparing a subclass of AGN with and without jets has not been carried out so far. Therefore, to establish a stronger INOV amplitude ($\psi$) with a high DC as evidence for the existence of a jet in an AGN, we have carried out an INOV study with two sub-samples of RLNLSy1s, with and without radio jets detected with the VLBA. The general consensus regarding the radio structures of NLSy1s has been that they have sizes of less than 300 pc (Ulvestad et al., 1995; Lister, 2018) and are generally compact sources with a steep spectrum (Foschini, 2011, 2012; Berton et al., 2015). The appearance of radio structures in radio observations of AGNs depends upon the resolution and sensitivity of the radio telescopes; therefore, the non-detection of radio jets in the radio images of RLNLSy1s does not necessarily imply that they have no jets. Here, we have selected our sources (see Sect. 2) based on their available observations with radio telescopes, which mainly consist of VLBA observations.
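The duty cycle referred to here is conventionally computed by weighting each monitoring session by the inverse of its duration; the sketch below assumes that standard convention (Romero et al. 1999), and the function name is ours:

```python
def duty_cycle(durations_hr, variable_flags):
    """INOV duty cycle (per cent), weighting each session by the inverse
    of its duration: DC = 100 * sum(flag_i / dt_i) / sum(1 / dt_i),
    where flag_i is 1 if INOV was detected in session i, else 0."""
    num = sum(flag / dt for flag, dt in zip(variable_flags, durations_hr))
    den = sum(1.0 / dt for dt in durations_hr)
    return 100.0 * num / den

# Four equal-length sessions with one detection -> DC = 25 per cent
dc = duty_cycle([4.0, 4.0, 4.0, 4.0], [1, 0, 0, 0])
```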
Furthermore, as pointed out above, $\gamma$-ray detections in several RLNLSy1s suggest the presence of relativistic jets in them; therefore, a comprehensive INOV study of the jetted $\gamma$-ray detected RLNLSy1s (hereafter J-$\gamma$-RLNLSy1s) and the jetted non-$\gamma$-ray detected RLNLSy1s (hereafter J-RLNLSy1s) is essential to understand the nature of their variability and jets. Therefore, we have also discussed the INOV nature of the J-$\gamma$-RLNLSy1s and J-RLNLSy1s sub-samples in the present work. The layout of this paper is as follows. In Sect. 2, we outline the sample selection procedure. Sect. 3 provides details of our intra-night optical monitoring and the data reduction procedure. The statistical analysis is presented in Sect. 4, and our main results, followed by a brief discussion, are given in Sect. 5. In Sect. 6, we summarize our main conclusions.
Table 1: The present sample of 23 RLNLSy1 galaxies selected for INOV monitoring.
SDSS Name (2) | R-mag (3) | $z$ (4) | $R_{1.4GHz}$ (5) | Apparent jet speed (6) | Optical polarization (7) | Radio polarization (8) | log ($M_{BH}/M_{\sun}$) (9) | Observing freq. (GHz)
(2) The SDSS names of the sources with Fermi-LAT detection are suffixed with a “$\blacktriangle$” sign, and the references (Abdo et al., 2009a, b; Foschini et al., 2010); (Abdo et al., 2009c); (Foschini, 2011; D’Ammando et al., 2012); (D’Ammando et al., 2015); (Liao et al., 2015); (Yao et al., 2015); and (Ajello et al., 2020) are for the sources J094857.32$+$002225.6; J032441.20$+$341045.0 and J150506.48$+$032630.8; J084957.98$+$510829.0; J164442.53$+$261913.3; J130522.75$+$511640.2, J122222.99$+$041315.9, and J144318.60$+$472557.0, respectively.
(3) Taken from Monet (1998).
(4) Emission-line redshifts are taken either from Gu et al. (2015) or from Paliya et al. (2019).
(5) $R_{1.4GHz}\equiv f_{1.4GHz}/f_{4400\AA}$; values are taken from Gu et al. (2015) for the sources marked with a ‘†’, and from Ojha et al. (2020a) for the sources marked with a ‘⋆’, except for J120014.08$-$004638.7, for which $R_{1.4GHz}$ is estimated using its total flux density of 27.1 mJy at 1.4 GHz and k-corrected B-band optical flux density of 0.16 mJy (Doi et al., 2012).
(6) Apparent jet speeds available in the literature: for J032441.20$+$341045.0, J084957.98$+$510829.0, J094857.32$+$002225.6, and J150506.48$+$032630.8 see Lister et al. (2019); for J122222.99$+$041315.9 and J164442.53$+$261913.3 see Lister et al. (2016) and Doi et al. (2012).
(7) Optical polarization values as reported in: αItoh et al. (2014); βIkejiri et al. (2011); γAngelakis et al. (2018); δMaune et al. (2014); κLeighly (1999).
(8) Radio polarization values as reported in: ψNeumann et al. (1994); ΛHodge et al. (2018); τHoman et al. (2001); ⋆fractional radio polarization images presented in Fig. 6 of Gu et al. (2015).
(9) Black hole masses derived from the single-epoch optical spectroscopy virial method, compiled from the literature: ∨Zhou et al. (2007); ⋄Yuan et al. (2008); ⋎Yao et al. (2015); ⟂Rakshit et al. (2017); ∔Wang & Lu (2001); ζGreene & Ho (2007).
jetted NLSy1s
J032441.20$+$341045.0▲ | 13.10 | 0.06 | 318⋆ | $9.1c\pm 0.3c$ | 1-3%α, 0.7-0.8%β, 1.2%γ | 4%ψ, 0.2-1%Λ | 7.30∨ | 2.2/8.4
J081432.12$+$560958.7 | 18.10 | 0.51 | 339† | - | - | $\star$ | 8.00⋄ | 4.9
J084957.98$+$510829.0▲ | 17.79 | 0.58 | 4496⋆ | $6.6c\pm 0.6c$ | 10%δ, 10%γ | 0.3-3%Λ, 3.3%τ | 7.59⟂ | 5.0/8.4/15.3
J090227.20$+$044309.0 | 18.20 | 0.53 | 1047† | - | - | $\star$ | 7.64⟂ | 4.9
J094857.32$+$002225.6▲ | 18.17 | 0.58 | 846⋆ | $9.7c\pm 1.1c$ | 36%α, 18.8%β, 2.4%γ | 0.2-3%Λ | 7.50⟂ | 22.2
J095317.10$+$283601.5 | 18.60 | 0.66 | 513† | - | - | - | 7.73⟂ | 4.9
J104732.78$+$472532.0 | 18.20 | 0.80 | 7413† | - | - | - | 8.10⋄ | 4.9
J122222.99$+$041315.9▲ | 17.06 | 0.97 | 1534⋆ | $0.9c\pm 0.3c$ | - | 0.2-3.3%Λ | 8.30⋎ | 15.4
J130522.75$+$511640.2▲ | 15.80 | 0.79 | 219† | - | 1.0%γ | $\star$ | 8.15⟂ | 4.9
J142114.05$+$282452.8 | 17.10 | 0.78 | 205† | - | - | - | 7.72⟂ | 4.9
J144318.60$+$472557.0▲ | 17.70 | 0.70 | 1175† | - | - | $\star$ | 7.80⋄ | 4.9
J150506.48$+$032630.8▲ | 17.72 | 0.41 | 3364⋆ | $0.1c\pm 0.2c$ | 4%γ | 0.2-2.5%Λ | 7.26⟂ | 15.3
J154817.92$+$351128.0 | 18.40 | 0.48 | 692† | - | 2.1%γ | $\star$ | 7.84⟂ | 4.9
J164442.53$+$261913.3▲ | 16.60 | 0.14 | 447⋆ | $>1.0c$ | 2.2%γ | - | 7.21⟂ | 1.7
J170330.38$+$454047.3 | 12.80 | 0.06 | 102⋆ | - | 3-5%κ | - | 6.77∔ | 1.7
non-jetted NLSy1s
J085001.17$+$462600.5 | 18.40 | 0.52 | 170† | - | - | - | 7.34⟂ | 4.9
J103727.45$+$003635.6 | 19.10 | 0.60 | 457† | - | - | - | 7.48⟂ | 4.9
J111005.03$+$365336.2 | 19.00 | 0.63 | 933† | - | - | - | 7.43⟂ | 4.9
J113824.53$+$365327.2 | 18.30 | 0.36 | 219† | - | - | - | 7.29⟂ | 4.9
J120014.08$-$004638.7 | 17.70 | 0.18 | 169 | - | - | - | 7.40ζ | 1.4
J124634.65$+$023809.1 | 17.50 | 0.36 | 277† | - | - | - | 7.42⟂ | 4.9
J163323.59$+$471859.0 | 14.50 | 0.12 | 154⋆ | - | 2.4%γ | - | 6.70⟂ | 1.7
J163401.94$+$480940.2 | 19.10 | 0.49 | 204† | - | - | - | 7.56⟂ | 4.9
Table 2: Details of system parameters of the telescopes and detectors used in the observations of 23
RLNLSy1s.
Telescope (s) | No. of sessions (e) | Detector (s) | Field of view (arcmin2) | Readout noise (e-) | Gain (e-/ADU) | Focal ratio of telescope | Pixel size of CCD ($\mu$m) | Plate scale of CCD ( ′′/pixel )
1.04-m ST (a) | 2 | 4k$\times$4k | 15.70$\times$15.70 | 3.0 | 10.0 | f/13 | 15.0 | 0.23
1.30-m DFOT (b) | 47 | 2k$\times$2k | 18.27$\times$18.27 | 7.5 | 2.0 | f/4 | 13.5 | 0.53
3.60-m DOT (c) | 3 | 4k$\times$4k | 6.52$\times$6.52 | 8.0⋆, 5.0† | 1.0⋆, 2.0† | f/9 | 15.0 | 0.10
2.01-m HCT (d) | 1 | 2k$\times$2k | 10.24$\times$10.24 | 4.1 | 2.20 | f/9 | 15.0 | 0.30
(a) Sampurnand Telescope (ST), (b) Devasthal Fast Optical Telescope (DFOT), (c) Devasthal Optical Telescope (DOT), (d) Himalayan Chandra Telescope (HCT). (e) Number of intra-night sessions taken with each telescope. ⋆Readout noise and corresponding gain at a readout speed of 1 MHz; ‘†’ represents the same at a readout speed of 500 kHz.
## 2 Sample selection
The bulk of our sample for intra-night monitoring is drawn from Gu et al. (2015), who reported good-quality VLBA observations at 5 GHz for 14 RLNLSy1 galaxies having a flux density above 10 mJy at 1.4 GHz and a radio-loudness parameter R1.4GHz > 100 (R1.4GHz is the ratio of the monochromatic rest-frame flux densities at 1.4 GHz and 4400 Å; see Yuan et al., 2008). Out of the 14 RLNLSy1s, they confirm the presence of a jet in 7 of the sources (hereafter "jetted"), based on the detection of a core-jet structure at 5 GHz; the remaining 7 sources are termed "non-jetted", based on the detection of a compact core only. However, regarding the jetted classification, it may be noted that VLBA radio images at typical resolution can generally resolve parsec-scale core-jet structures (Lister, 2018) which may not be relativistically beamed. Hence, some of the sources could be steep-spectrum sources, as seen in mini radio galaxies (see Gu et al., 2015).
We also note that the "non-jetted" source J144318.56$+$472556.7 has a $\sim$ 15-mas-long quasi-linear radio component resolved into seven components along the south-west direction, in addition to diffuse emission extending up to $\sim$ 30 mas (see figure 12 of Gu et al., 2015). Therefore, we have included this source in our jetted subsample. We next expanded this sample by including another 9 sources from the VLBA literature, which also satisfy the above twin criteria (i.e., $f_{\nu\leavevmode\nobreak\ 1.4GHz}\geq$ 10 mJy and R${}_{1.4GHz}>100$). The jetted (or non-jetted) classification of 8 of these 9 sources is possible using their published VLBA observations. Three of these 8 sources, viz., J163323.59$+$471859.0, J164442.53$+$261913.3 and J170330.38$+$454047.3, are taken from Doi et al. (2011), and the remaining 5 sources, viz., J032441.20$+$341045.0, J084957.98$+$510829.0, J094857.32$+$002225.6, J122222.99$+$041315.9 and J150506.48$+$032630.8, are taken from Zhou et al. (2007), D’Ammando et al. (2012), Giroletti et al. (2011), Lister et al. (2016) and Orienti et al. (2012), respectively. The frequencies at which the observations of the 23 RLNLSy1s were carried out are tabulated in the last column of Table 1. For the NLSy1 J120014.08$-$004638.7, Doi et al. (2012) have confirmed its lobe-dominated nature based upon its Very Large Array (VLA) 1.4 GHz FIRST images. Note that this source is also not in the latest sample of the Monitoring Of Jets in Active galactic nuclei with VLBA Experiments (MOJAVE) XVII program (see Lister et al., 2019). Therefore, we have included this source in our non-jetted set. Thus, based on the published VLBA observations of these 8 sources, 7 have a confirmed jet and one falls in the non-jetted category. Overall, our sample consists of 23 RLNLSy1s, 15 of which are jetted, and the remaining 8 are non-jetted. Table 1 summarizes the basic properties of our sample.
The SDSS names of the sources with Fermi-LAT detection are suffixed with a "$\blacktriangle$" sign, and the references are given in the notes to Table 1.
## 3 Observations and Data Reduction
### 3.1 Photometric monitoring observations
Intra-night monitoring of all 23 RLNLSy1s in our sample was performed in the broad-band Johnson-Cousins R filter, owing to the optimum response of the CCDs used in this filter. Four telescopes, namely the 1.04-m Sampurnanand Telescope (ST, Sagar, 1999), the 1.30-m Devasthal Fast Optical Telescope (DFOT, Sagar et al., 2010), the 3.60-m Devasthal Optical Telescope (DOT, Sagar et al., 2012) and the 2.01-m Himalayan Chandra Telescope (HCT, Prabhu & Anupama, 2010), were used for the intra-night monitoring of the present sample. Of these, the 1.04-m ST is located at Nainital, while the 1.30-m DFOT and the 3.60-m DOT are located at Devasthal near Nainital; all three are managed by the Aryabhatta Research Institute of Observational Sciences (ARIES). The fourth telescope, the 2.01-m HCT, is located in Ladakh and operated by the Indian Institute of Astrophysics (IIA), Bangalore, India. All four telescopes are equipped with Ritchey-Chretien (RC) optics and were read out at a 1 MHz rate during our observations, except for the 3.60-m DOT, which was additionally read out at 500 kHz. The monitoring sessions lasted between $\sim$ 3.0 and $\sim$ 5.5 hours (median 3.75 hr). Our sources were observed in 4$\times$4 binning mode with the 1.04-m ST, and likewise with the 3.6-m DOT on 2017.04.11; a 2$\times$2 binning mode was adopted for the remaining sessions with the DOT. No binning was done for the observations with the DFOT and the HCT. The basic parameters of the four telescopes, the number of monitoring sessions, and the CCDs used in the present observations are listed in Table 2. In order to improve the INOV statistics, at least two intra-night sessions were carried out for each of our 23 RLNLSy1s.
In this work, 1.04–3.60 m class telescopes have been used; therefore, depending on the brightness of the target NLSy1, the telescope used, the moon illumination, and the sky conditions, a typical exposure time for each science frame was set between 4 and 15 minutes in order to obtain a reasonable SNR. The median seeing (FWHM of the point spread function, PSF) for the sessions typically ranged between $\sim$ 1 and 3 arcsec, except for a single session dated 2019.03.25, when the seeing became considerably poorer (see Fig. 5).
### 3.2 Data reduction
For each night, sky flat-field images were taken during dusk and dawn, and at least three bias frames were taken. Dark frames were not taken during our observations because of the relatively low operating temperatures of the CCD detectors used, which were cooled either with liquid nitrogen (to about $-120^{\circ}$C) or thermoelectrically (to about $-90^{\circ}$C, in the case of the 1.3-m DFOT). The standard routines within the IRAF software package (Image Reduction and Analysis Facility, http://iraf.noao.edu/) were followed for the preliminary processing of the observed frames. Aperture photometry (Stetson, 1987, 1992) was used in this work to extract the instrumental magnitudes of the targets and the comparison stars recorded in the CCD frames, using the DAOPHOT II algorithm (Dominion Astrophysical Observatory Photometry, http://www.astro.wisc.edu/sirtf/daophot2.pdf), given the relatively uncrowded fields of the monitored NLSy1s. The prime parameter in aperture photometry is the size of the optimal aperture, which is used to estimate the instrumental magnitude and the corresponding signal-to-noise ratio (SNR) of the individual photometric data points recorded in each CCD frame. As emphasized in Howell (1989), the SNR of a target recorded on a CCD is maximized for an aperture radius $\sim$ PSF. However, as suggested by Cellone et al.
(2000), when the underlying host galaxy contributes significantly to the total optical flux, its contribution to the aperture photometry can vary significantly due to PSF variations, mimicking INOV. The possibility of such spurious INOV can be significant for the lower-redshift NLSy1s in our sample, particularly J032441.20$+$341045.0 (z = 0.06), J164442.53$+$261913.3 (z = 0.14), J170330.38$+$454047.3 (z = 0.06), J120014.08$-$004638.7 (z = 0.18) and J163323.59$+$471859.0 (z = 0.12) (Table 1). This issue is further addressed in Sect. 5. Nonetheless, bearing the above in mind, we have chosen an aperture radius equal to 2$\times$FWHM for our final analysis, as already elaborated in Ojha et al. (2021). Using the instrumental magnitudes extracted from the aperture photometry, differential light curves (DLCs) of each NLSy1 were derived for each session relative to a minimum of two (steady) comparison stars, chosen based on their closeness to the monitored target NLSy1, both in position and brightness, as recorded in the CCD frames. The importance of these procedures for genuine INOV detection has been highlighted by Howell et al. (1988) and further emphasized by Cellone et al. (2007). For 12 of the targets, we could identify at least one comparison star within $\sim$ 1 instrumental magnitude of the target NLSy1. The median magnitude offsets ($\Delta m_{R}$) for the remaining 10 targets were also not significant, being within $\sim$ 1.5 mag, except for one source, viz., J103727.45$+$003635.6, for which $\Delta m_{R}$ was found to be 1.74 (see Figs 2-6). Table 3 lists the coordinates, together with some other parameters, of the steady comparison stars used for all the sessions. It has been shown in Ojha et al. (2021) that color differences of such orders can be safely discounted while analyzing the variability of the DLCs. Table 3: Basic parameters of the comparison stars along with their observation dates used in this study for the 23 RLNLSy1 galaxies.
Target RLNLSy1s and the comparison stars | Date(s) of monitoring | R.A.(J2000) (h m s) | Dec.(J2000) (∘ ′ ′′) | g (mag) | r (mag) | g-r (mag)
(1) | (2) | (3) | (4) | (5) | (6) | (7)
jetted NLSy1s
J032441.20$+$341045.0 | 2016 Nov. 22, 23; Dec. 02; 2017 Jan. 03, 04 | 03 24 41.20 | $+$34 10 45.00 | 14.50 | 13.70 | 0.80∗
S1 | 2016 Nov. 22, Dec. 02 | 03 24 53.68 | $+$34 12 45.62 | 15.60 | 14.40 | 1.20∗
S2 | 2016 Nov. 22, Dec. 02 | 03 24 53.55 | $+$34 11 16.58 | 16.20 | 14.40 | 1.80∗
S3 | 2016 Nov. 23 | 03 24 10.92 | $+$34 15 01.90 | 16.30 | 15.10 | 1.20∗
S4 | 2016 Nov. 23 | 03 24 14.04 | $+$34 18 20.10 | 15.80 | 15.00 | 1.00∗
S5 | 2017 Jan. 03 | 03 24 14.92 | $+$34 15 21.20 | 15.90 | 15.10 | 0.80∗
S6 | 2017 Jan. 03 | 03 24 08.44 | $+$34 08 15.80 | 15.80 | 14.20 | 1.60∗
S7 | 2017 Jan. 04 | 03 24 38.14 | $+$34 13 53.40 | 15.90 | 15.20 | 0.70∗
S8 | 2017 Jan. 04 | 03 24 14.08 | $+$34 16 48.60 | 15.40 | 14.80 | 0.60∗
J081432.12$+$560958.7 | 2017 Jan. 03; Nov. 20 | 08 14 32.12 | $+$56 09 58.69 | 18.06 | 18.11 | $-$0.05
S1 | 2017 Jan. 03 | 08 14 02.78 | $+$56 11 12.07 | 19.14 | 18.12 | 1.02
S2 | 2017 Jan. 03 | 08 14 53.54 | $+$56 10 14.14 | 17.96 | 17.35 | $-$0.61
S3 | 2017 Nov. 20 | 08 13 39.62 | $+$56 17 55.59 | 19.05 | 17.71 | 1.34
S4 | 2017 Nov. 20 | 08 14 19.58 | $+$56 06 24.04 | 19.33 | 17.88 | 1.45
J084957.98$+$510829.0 | 2017 Dec. 13, 2019 April 08 | 08 49 57.98 | $+$51 08 29.04 | 18.92 | 18.28 | 0.64
S1 | | 08 50 12.62 | $+$51 08 08.03 | 19.45 | 18.06 | 1.39
S2 | | 08 50 03.07 | $+$51 09 12.23 | 17.82 | 17.09 | 0.73
J090227.20$+$044309.0 | 2017 Feb. 22; Dec. 14 | 09 02 27.20 | $+$04 43 09.00 | 18.96 | 18.63 | 0.33
S1 | 2017 Feb. 22 | 09 02 01.94 | $+$04 37 32.90 | 19.10 | 17.74 | 1.36
S2 | 2017 Feb. 22 | 09 02 23.40 | $+$04 35 44.57 | 18.70 | 17.31 | 1.39
S3 | 2017 Dec. 14 | 09 03 04.11 | $+$04 48 19.65 | 18.85 | 17.95 | 0.90
S4 | 2017 Dec. 14 | 09 03 07.29 | $+$04 38 57.34 | 18.92 | 17.93 | 0.99
J094857.32$+$002225.6 | 2016 Dec. 02; 2017 Dec. 21 | 09 48 57.32 | $+$00 22 25.56 | 18.59 | 18.43 | 0.16
S1 | | 09 48 36.95 | $+$00 24 22.55 | 17.69 | 17.28 | 0.41
S2 | | 09 48 37.47 | $+$00 20 37.02 | 17.79 | 16.70 | 1.09
J095317.10$+$283601.5 | 2017 March 04; 2018 March 23 | 09 53 17.10 | $+$28 36 01.48 | 18.99 | 18.97 | 0.02
S1 | 2017 March 04 | 09 52 48.09 | $+$28 29 53.69 | 18.31 | 17.45 | 0.86
S2 | 2017 March 04; 2020 November 21 | 09 53 07.49 | $+$28 37 17.10 | 18.46 | 17.32 | 1.14
S3 | 2020 November 21 | 09 53 21.03 | $+$28 34 57.36 | 20.41 | 18.90 | 1.51
J104732.78$+$472532.0 | 2017 April 11; 2018 March 12 | 10 47 32.78 | $+$47 25 32.02 | 18.97 | 18.76 | 0.21
S1 | 2017 April 11 | 10 47 16.50 | $+$47 24 47.24 | 18.68 | 17.98 | 0.70
S2 | 2017 April 11; 2018 March 12 | 10 47 27.51 | $+$47 27 58.94 | 18.79 | 17.87 | 0.92
S3 | 2018 March 12 | 10 48 16.44 | $+$47 22 42.22 | 18.78 | 17.45 | 1.33
J122222.99$+$041315.9 | 2017 Jan. 03, 04; Feb. 21, 22; March 04, 24 | 12 22 22.99 | $+$04 13 15.95 | 17.02 | 16.80 | 0.22
S1 | | 12 22 34.02 | $+$04 13 21.57 | 18.63 | 17.19 | 1.44
S2 | | 12 21 56.12 | $+$04 15 15.19 | 17.22 | 16.78 | 0.44
J130522.75$+$511640.2 | 2017 April 04; 2019 April 25 | 13 05 22.74 | $+$51 16 40.26 | 17.29 | 17.10 | 0.19
S1 | 2017 April 04 | 13 06 16.16 | $+$51 19 03.67 | 16.96 | 15.92 | 1.04
S2 | 2017 April 04; 2019 April 25 | 13 05 57.57 | $+$51 11 00.97 | 16.35 | 15.26 | 1.09
S3 | 2019 April 25 | 13 05 44.25 | $+$51 07 35.85 | 17.88 | 16.42 | 1.46
J142114.05$+$282452.8 | 2018 May 10; 2019 May 27 | 14 21 14.05 | $+$28 24 52.78 | 17.73 | 17.74 | $-$0.01
S1 | 2018 May 10 | 14 20 33.73 | $+$28 31 10.45 | 18.50 | 17.11 | 1.39
S2 | 2018 May 10; 2019 May 27 | 14 21 08.78 | $+$28 24 04.99 | 16.16 | 16.21 | $-$0.05
S3 | 2019 May 27 | 14 21 24.36 | $+$28 27 16.52 | 16.82 | 16.45 | 0.37
J144318.56$+$472556.7 | 2018 March 11, 23 | 14 43 18.56 | $+$47 25 56.74 | 18.14 | 18.17 | $-$0.03
S1 | | 14 43 37.14 | $+$47 23 03.03 | 17.51 | 16.82 | 0.69
S2 | | 14 43 19.05 | $+$47 19 00.98 | 18.03 | 16.75 | 1.28
J150506.48$+$032630.8 | 2017 March 25; 2018 April 12 | 15 05 06.48 | $+$03 26 30.84 | 18.64 | 18.22 | 0.42
S1 | | 15 05 32.05 | $+$03 28 36.13 | 18.13 | 17.64 | 0.49
S2 | | 15 05 14.52 | $+$03 24 56.17 | 17.51 | 17.14 | 0.37
J154817.92$+$351128.0 | 2018 May 17; 2019 May 08 | 15 48 17.92 | $+$35 11 28.00 | 18.03 | 18.03 | 0.00
S1 | | 15 47 57.43 | $+$35 14 05.24 | 18.31 | 17.50 | 0.81
S2 | | 15 48 02.20 | $+$35 13 56.16 | 17.37 | 16.98 | 0.39
J164442.53$+$261913.3 | 2017 April 03; 2019 April 26 | 16 44 42.53 | $+$26 19 13.31 | 18.03 | 17.61 | 0.42
S1 | | 16 45 20.03 | $+$26 20 54.55 | 16.56 | 15.89 | 0.67
S2 | | 16 44 34.40 | $+$26 15 30.27 | 16.28 | 15.80 | 0.48
J170330.38$+$454047.3 | 2017 June 03; 2019 March 25 | 17 03 30.38 | $+$45 40 47.27 | 16.12 | 15.41 | 0.71
S1 | | 17 04 02.02 | $+$45 42 16.56 | 15.02 | 14.39 | 0.63
S2 | | 17 04 34.88 | $+$45 40 08.65 | 15.00 | 13.91 | 1.09
non-jetted NLSy1s
J085001.17$+$462600.5 | 2017 Jan. 01; Dec. 15 | 08 50 01.17 | $+$46 26 00.50 | 19.12 | 18.82 | 0.30
S1 | | 08 50 17.69 | $+$46 20 42.71 | 18.11 | 17.85 | 0.26
S2 | | 08 49 48.29 | $+$46 21 11.81 | 18.72 | 17.63 | 1.09
J103727.45$+$003635.6 | 2018 March 11, 22 | 10 37 27.45 | $+$00 36 35.60 | 19.57 | 19.21 | 0.36
S1 | 2018 March 11 | 10 37 39.63 | $+$00 38 26.16 | 18.90 | 17.69 | 1.21
S2 | 2018 March 11 | 10 37 28.03 | $+$00 37 59.88 | 18.97 | 17.52 | 1.45
S3 | 2021 April 08 | 10 36 50.96 | $+$00 41 25.26 | 18.76 | 17.35 | 1.41
S4 | 2021 April 08 | 10 37 38.72 | $+$00 40 28.00 | 17.26 | 16.82 | 0.44
J111005.03$+$365336.2 | 2018 March 23; 2019 Jan. 13 | 11 10 05.03 | $+$36 53 36.22 | 20.60 | 20.49 | 0.11
S1 | | 11 10 08.50 | $+$36 50 59.03 | 19.14 | 18.86 | 0.28
S2 | | 11 10 10.76 | $+$36 55 26.97 | 19.74 | 18.45 | 1.29
J113824.53$+$365327.2 | 2017 April 17; 2018 March 23 | 11 38 24.53 | $+$36 53 27.18 | 19.55 | 18.79 | 0.76
S1 | | 11 38 25.03 | $+$36 54 44.02 | 18.90 | 17.52 | 1.38
S2 | | 11 37 56.81 | $+$36 52 35.56 | 17.91 | 17.41 | 0.50
J120014.08$-$004638.7 | 2018 March 12, May 11 | 12 00 14.08 | $-$00 46 38.74 | 18.51 | 17.81 | 0.70
S1 | | 12 00 12.63 | $-$00 46 07.14 | 17.19 | 16.68 | 0.51
S2 | | 12 00 25.99 | $-$00 51 45.21 | 16.73 | 16.37 | 0.36
J124634.65$+$023809.1 | 2017 April 03; 2018 April 12 | 12 46 34.65 | $+$02 38 09.06 | 18.35 | 18.18 | 0.17
S1 | 2017 April 03; 2018 April 12 | 12 47 00.55 | $+$02 37 31.37 | 17.91 | 16.89 | 1.02
S2 | 2017 April 03 | 12 47 05.32 | $+$02 39 06.75 | 16.94 | 16.62 | 0.32
S3 | 2018 April 12 | 12 46 49.50 | $+$02 37 11.64 | 17.24 | 16.77 | 0.47
J163323.59$+$471859.0 | 2017 May 20; 2019 March 20 | 16 33 23.59 | $+$47 18 59.04 | 17.25 | 16.95 | 0.30
S1 | | 16 32 59.26 | $+$47 26 05.45 | 15.57 | 15.18 | 0.39
S2 | | 16 32 56.00 | $+$47 21 01.26 | 15.55 | 15.11 | 0.44
J163401.94$+$480940.2 | 2018 March 22, 26 | 16 34 01.94 | $+$48 09 40.20 | 19.54 | 19.21 | 0.33
S1 | 2018 March 22 | 16 34 04.24 | $+$48 11 32.47 | 19.65 | 18.77 | 0.88
S2 | 2018 March 22 | 16 33 50.78 | $+$48 10 09.78 | 19.04 | 17.86 | 1.18
S3 | 2021 April 09 | 16 33 31.81 | $+$48 04 31.89 | 19.73 | 18.34 | 1.39
S4 | 2021 April 09 | 16 34 01.24 | $+$48 08 36.92 | 18.25 | 17.30 | 0.95
The SDSS DR14 catalog (Abolfathi et al., 2018) has been used to obtain the optical positions and apparent magnitudes of the sources and their comparison stars. ∗The USNO-A2.0 catalog (Monet, 1998) has been used in case of non-availability of the SDSS ‘g-r’ color; the ‘B-R’ color has been used in such cases.
## 4 Statistical analysis
To guarantee the reliability of detecting microvariability events, multiple statistical tests have commonly been applied in recent years (e.g., Joshi et al., 2011; Goyal et al., 2012; de Diego, 2014; Ojha et al., 2021). Therefore, for unambiguous detection of INOV in a DLC, we have used two different versions of the $F$-test in the current work: the standard _$F$ -test_ (hereafter $F^{\eta}$-test) and the power-enhanced _$F$ -test_ (hereafter $F_{enh}$-test). For the $F^{\eta}$-test, it is suggested that the brightness mismatch between the target AGN (in the current work, the NLSy1 galaxy) and the two chosen steady comparison stars should be within $\sim$ 1 mag, so that photon statistics and other random-noise terms remain comparable (e.g., see Howell et al., 1988; Cellone et al., 2007; Goyal et al., 2012). Therefore, care was taken to select two (non-varying) comparison stars within 1 mag of the respective NLSy1s. For the present set of jetted RLNLSy1s, the median magnitude mismatch between the reference star (i.e., the comparison star most closely matching the target AGN’s instrumental magnitude) and the target NLSy1 is _0.82_. The corresponding median values for the non-jetted RLNLSy1s and the entire set of 23 NLSy1s are _1.28_ and _0.91_, respectively.
Additionally, while implementing the $F^{\eta}$-test, it is crucial to use the correct rms errors on the photometric data points, because the routines in the data-reduction software DAOPHOT and IRAF underestimate the magnitude errors by a factor ranging between 1.3 and 1.75 (Gopal-Krishna et al., 1995; Garcia et al., 1999; Sagar et al., 2004; Stalin et al., 2004; Bachev et al., 2005). The error-rescaling factor '$\eta$' is therefore taken here to be 1.54$\pm$0.05, as computed by Goyal et al. (2013a) using data from 262 intra-night monitoring sessions of AGNs. Following Goyal et al. (2012), the $F^{\eta}$-statistics are defined as $F_{s1}^{\eta}=\frac{\sigma^{2}_{(q-cs1)}}{\eta^{2}\langle\sigma_{q-cs1}^{2}\rangle},\quad F_{s2}^{\eta}=\frac{\sigma^{2}_{(q-cs2)}}{\eta^{2}\langle\sigma_{q-cs2}^{2}\rangle},\quad F_{s1-s2}^{\eta}=\frac{\sigma^{2}_{(cs1-cs2)}}{\eta^{2}\langle\sigma_{cs1-cs2}^{2}\rangle}$ (1) where $\sigma^{2}_{(q-cs1)}$, $\sigma^{2}_{(q-cs2)}$, and $\sigma^{2}_{(cs1-cs2)}$ are the variances, and $\langle\sigma_{q-cs1}^{2}\rangle=\sum_{i=1}^{N}\sigma^{2}_{i,err}(q-cs1)/N$, $\langle\sigma_{q-cs2}^{2}\rangle$, and $\langle\sigma_{cs1-cs2}^{2}\rangle$ are the mean square (formal) rms errors of the individual data points in the 'target NLSy1 $-$ comparison star1', 'target NLSy1 $-$ comparison star2', and 'comparison star1 $-$ comparison star2' DLCs, respectively. The $F$-values computed in this way for each DLC using Eq. 1 were compared with the critical value $F^{(\alpha)}_{\nu}$, where $\alpha$ is the significance level set for the $F^{\eta}$-test and $\nu$ ($=N_{j}-1$) is the number of degrees of freedom of the DLC ($N_{j}$ being the number of its data points). The smaller the value of $\alpha$, the lower the chance of a false INOV detection.
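As a concrete illustration of Eq. 1, the short Python sketch below computes one $F^{\eta}$ value for a single DLC from its differential magnitudes and their formal errors. The function name and the toy numbers are our own illustrative assumptions, not part of the authors' reduction pipeline.

```python
# Illustrative sketch of the F^eta statistic of Eq. (1): the ratio of the
# observed DLC variance to the (eta-rescaled) mean square photometric error.
# Function name and defaults are assumptions, not the authors' code.
def f_eta(diff_mags, diff_errs, eta=1.54):
    n = len(diff_mags)
    mean = sum(diff_mags) / n
    variance = sum((m - mean) ** 2 for m in diff_mags) / (n - 1)  # sample variance
    mean_sq_err = sum(e * e for e in diff_errs) / n  # <sigma^2_err>
    return variance / (eta ** 2 * mean_sq_err)
```

The returned value would then be compared with the critical $F^{(\alpha)}_{\nu}$ for $\nu=N_{j}-1$ degrees of freedom at the chosen significance level.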
Therefore, in the present work, as in our previous work, two critical significance levels, $\alpha=0.01$ and $\alpha=0.05$, are adopted (e.g., see Ojha et al., 2021). Following Ojha et al. (2021), a NLSy1 is designated as variable in a given session according to this test if the computed value of $F^{\eta}$ exceeds $F_{c}(0.99)$. Table 4 summarizes the computed $F^{\eta}$-values and the correspondingly inferred status of INOV detection for all 53 sessions (columns 6 and 7). The second flavor of $F$-test for INOV employed in the present study is the $F_{enh}$-test (e.g., de Diego, 2014). The statistic for the $F_{enh}$-test is defined as $F_{{\rm enh}}=\frac{s_{{\rm NLSy1}}^{2}}{s_{\rm comb}^{2}},\quad s_{\rm comb}^{2}=\frac{1}{(\sum_{j=1}^{q}N_{j})-q}\sum_{j=1}^{q}\sum_{i=1}^{N_{j}}B_{j,i}^{2}.$ (2) Here $s_{{\rm NLSy1}}^{2}$ is the variance of the 'target NLSy1 $-$ reference star' DLC, while $s_{\rm comb}^{2}$ is the combined variance of the 'comparison star $-$ reference star' DLCs, each with $N_{j}$ data points, for $q$ comparison stars, computed from the scaled square deviations $B_{j,i}^{2}$ as $B_{j,i}^{2}=\omega_{j}(c_{j,i}-\bar{c}_{j})^{2}$ (3) where $c_{j,i}$ is the differential instrumental magnitude of the $j$th comparison star relative to the reference star, and $\bar{c}_{j}$ is its average over the $N_{j}$ data points of the DLC. The scaling factor $\omega_{j}$ is taken here as described in Ojha et al. (2021). The principal advantage of the $F_{enh}$-test is that it accounts for the brightness differences between the target AGN and the selected comparison stars, a frequently encountered problem with the $C$\- and $F$-statistics (e.g., see Joshi et al., 2011; de Diego, 2014). The $F_{enh}$-values were thus computed for each DLC using Eq. 2 and compared with the critical values set for this study (see above).
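To make Eqs. 2-3 concrete, the following Python sketch combines the scaled square deviations of $q$ 'comparison star $-$ reference star' DLCs into $s^{2}_{\rm comb}$ and forms the ratio $F_{\rm enh}$. The names, the caller-supplied weights, and the toy inputs are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the F_enh statistic of Eqs. (2)-(3). star_dlcs is a
# list of q 'comparison star - reference star' DLCs; star_weights holds the
# scaling factors omega_j (supplied by the caller here, as an assumption).
def f_enh(s2_nlsy1, star_dlcs, star_weights):
    q = len(star_dlcs)
    total_n = sum(len(c) for c in star_dlcs)
    ssq = 0.0
    for w, c in zip(star_weights, star_dlcs):
        cbar = sum(c) / len(c)                        # mean of the j-th DLC
        ssq += w * sum((ci - cbar) ** 2 for ci in c)  # sum of B_{j,i}^2
    s2_comb = ssq / (total_n - q)                     # denominator of Eq. (2)
    return s2_nlsy1 / s2_comb
```

A value of $F_{\rm enh}$ above the critical $F_{c}$ at the chosen significance level would flag the session as variable.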
Based upon this test, a NLSy1 DLC is designated "variable (V)" when the $F_{enh}$ value computed for the 'target NLSy1 $-$ reference star' DLC exceeds $F_{c}(0.99)$ (i.e., $F_{{\rm enh}}>F_{c}(0.99)$), and "probable variable (PV)" when it exceeds $F_{c}(0.95)$ but does not exceed $F_{c}(0.99)$ (i.e., $F_{c}(0.95)<F_{\rm enh}\leq F_{c}(0.99)$). Table 4 lists the computed $F_{enh}$-values and the correspondingly inferred INOV status for all 53 sessions (columns 8 and 9).

Table 4: Details of the observations and the status of the INOV for the sample of 23 RLNLSy1 galaxies studied in this work (aperture radius used = 2$\times$FWHM).

RLNLSy1s | Date(s)$^{a}$ | T$^{b}$ | N$^{c}$ | Median$^{d}$ | $F^{\eta}$-test | INOV | $F^{\eta}$-test | Variability | $F_{enh}$-test | INOV | $\sqrt{\langle\sigma^{2}_{i,err}\rangle}$ | $\overline{\psi}^{g}_{s1,s2}$
---|---|---|---|---|---|---|---|---|---|---|---|---
(SDSS name) | yyyy.mm.dd | (hrs) | | FWHM (arcsec) | $F_{s1}^{\eta}$, $F_{s2}^{\eta}$ | status$^{e}$ 99% | $F_{s1-s2}^{\eta}$ 99% | status of s1$-$s2 | $F_{enh}$ | status$^{f}$ 99% | (AGN-s)$^{g}$ | (%)
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13)
jetted NLSy1s
J032441.20$+$341045.0 | (2016.11.22) | 4.42 | 56 | 2.32 | 14.34, 16.13 | V, V | 00.82 | NV | 17.39 | V | 0.003 | 5.38
 | 2016.11.23 | 4.27 | 54 | 2.13 | 03.77, 02.46 | V, V | 00.70 | NV | 05.42 | V | 0.004 | 2.79
 | (2016.12.02) | 4.41 | 44 | 2.60 | 85.80, 88.73 | V, V | 00.87 | NV | 98.29 | V | 0.003 | 11.44
 | 2017.01.03 | 3.00 | 39 | 2.47 | 04.53, 08.53 | V, V | 00.34 | NV | 13.27 | V | 0.004 | 3.96
 | 2017.01.04 | 3.39 | 33 | 2.45 | 21.60, 23.34 | V, V | 00.44 | NV | 49.35 | V | 0.004 | 7.99
J081432.12$+$560958.7 | (2017.01.03) | 3.37 | 19 | 2.69 | 00.33, 00.34 | NV, NV | 00.16 | NV | 01.34 | NV | 0.022 | –
 | (2017.11.20) | 4.52 | 32 | 2.79 | 00.86, 00.55 | NV, NV | 00.55 | NV | 05.28 | NV | 0.031 | –
J084957.98$+$510829.0 | (2017.12.13) | 4.42 | 24 | 2.83 | 00.42, 00.51 | NV, NV | 01.06 | NV | 00.77 | NV | 0.033 | –
 | (2019.04.08) | 3.04 | 13 | 2.88 | 00.66, 00.89 | NV, NV | 00.62 | NV | 00.62 | NV | 0.032 | –
J090227.20$+$044309.0 | (2017.02.22) | 3.59 | 27 | 2.38 | 00.71, 00.57 | NV, NV | 00.25 | NV | 02.81 | NV | 0.025 | –
 | (2017.12.14) | 5.65 | 39 | 2.51 | 00.33, 00.37 | NV, NV | 00.24 | NV | 01.38 | NV | 0.024 | –
J094857.32$+$002225.6 | (2016.12.02) | 4.15 | 17 | 2.58 | 01.71, 01.88 | NV, NV | 00.16 | NV | 10.52 | V | 0.017 | 7.95
 | (2017.12.21) | 5.19 | 33 | 2.24 | 13.95, 16.53 | V, V | 00.55 | NV | 25.26 | V | 0.012 | 16.42
J095317.10$+$283601.5 | (2017.03.04) | 3.97 | 29 | 2.41 | 00.36, 00.35 | NV, NV | 00.48 | NV | 00.76 | NV | 0.035 | –
 | (2020.11.21) | 3.25 | 11 | 3.10 | 00.77, 00.68 | NV, NV | 00.33 | NV | 02.31 | NV | 0.035 | –
J104732.78$+$472532.0 | (2017.04.11) | 3.75 | 47 | 0.98 | 00.54, 00.53 | NV, NV | 00.53 | NV | 01.03 | NV | 0.035 | –
 | (2018.03.12) | 3.82 | 15 | 2.77 | 00.60, 00.63 | NV, NV | 00.38 | NV | 01.58 | NV | 0.028 | –
J122222.99$+$041315.9 | 2017.01.03 | 3.52 | 17 | 2.38 | 00.62, 00.30 | NV, NV | 00.91 | NV | 00.68 | NV | 0.018 | –
 | 2017.01.04 | 3.14 | 16 | 2.36 | 00.32, 00.37 | NV, NV | 00.16 | NV | 01.99 | NV | 0.014 | –
 | 2017.02.21 | 4.44 | 41 | 2.65 | 00.74, 00.76 | NV, NV | 00.35 | NV | 02.13 | V | 0.020 | 6.36
 | (2017.02.22) | 5.50 | 50 | 2.59 | 03.98, 03.60 | V, V | 00.61 | NV | 06.51 | V | 0.017 | 13.33
 | (2017.03.04) | 4.93 | 39 | 2.61 | 00.72, 00.86 | NV, NV | 00.53 | NV | 01.36 | NV | 0.019 | –
 | 2017.03.24 | 3.94 | 39 | 2.37 | 00.93, 00.75 | NV, NV | 00.56 | NV | 01.66 | NV | 0.020 | –
J130522.75$+$511640.2 | (2017.04.04) | 3.79 | 23 | 2.57 | 00.70, 00.66 | NV, NV | 00.24 | NV | 02.94 | PV | 0.012 | –
 | (2019.04.25) | 3.11 | 22 | 2.77 | 01.56, 01.42 | NV, NV | 00.39 | NV | 04.12 | PV | 0.018 | –
J142114.05$+$282452.8 | (2018.05.10) | 4.06 | 26 | 2.83 | 00.55, 00.56 | NV, NV | 00.35 | NV | 01.60 | NV | 0.019 | –
 | (2019.05.27) | 3.31 | 18 | 2.68 | 00.86, 00.86 | NV, NV | 00.36 | NV | 02.47 | PV | 0.021 | –
J144318.56$+$472556.7 | (2018.03.11) | 3.05 | 19 | 3.15 | 00.54, 00.56 | NV, NV | 00.21 | NV | 02.59 | PV | 0.022 | –
 | (2018.03.23) | 3.13 | 23 | 2.33 | 00.35, 00.36 | NV, NV | 00.35 | NV | 01.00 | NV | 0.018 | –
J150506.48$+$032630.8 | (2017.03.25) | 5.21 | 41 | 2.08 | 00.60, 00.59 | NV, NV | 00.58 | NV | 01.04 | NV | 0.028 | –
 | (2018.04.12) | 3.05 | 19 | 2.55 | 00.67, 00.63 | NV, NV | 00.80 | NV | 00.84 | NV | 0.032 | –
J154817.92$+$351128.0 | (2018.05.17) | 3.00 | 19 | 3.08 | 00.40, 00.38 | NV, NV | 00.38 | NV | 01.05 | NV | 0.008 | –
 | (2019.05.08) | 3.24 | 14 | 2.79 | 00.60, 00.65 | NV, NV | 00.30 | NV | 01.98 | NV | 0.017 | –
J164442.53$+$261913.3 | (2017.04.03) | 4.37 | 37 | 2.50 | 01.44, 01.28 | NV, NV | 00.41 | NV | 03.53 | V | 0.011 | 5.41
 | (2019.04.26) | 3.22 | 24 | 2.27 | 03.06, 03.74 | V, V | 00.48 | NV | 06.40 | V | 0.011 | 7.50
J170330.38$+$454047.3 | (2017.06.03) | 3.76 | 37 | 2.41 | 00.75, 00.67 | NV, NV | 00.64 | NV | 01.17 | NV | 0.004 | –
 | (2019.03.25) | 3.13 | 45 | 4.45 | 00.63, 01.08 | NV, NV | 00.90 | NV | 00.70 | NV | 0.010 | –
non-jetted NLSy1s
J085001.17$+$462600.5 | (2017.01.04) | 3.26 | 13 | 2.28 | 00.39, 00.50 | NV, NV | 00.50 | NV | 00.78 | NV | 0.026 | –
 | (2017.12.15) | 3.69 | 20 | 2.89 | 00.80, 00.74 | NV, NV | 00.36 | NV | 02.22 | NV | 0.032 | –
J103727.45$+$003635.6 | (2018.03.11) | 3.30 | 11 | 3.12 | 01.18, 01.22 | NV, NV | 01.08 | NV | 01.09 | NV | 0.030 | –
 | (2021.04.08) | 4.96 | 13 | 2.42 | 01.54, 01.77 | NV, NV | 01.00 | NV | 01.53 | NV | 0.043 | –
J111005.03$+$365336.2 | (2018.03.23) | 3.22 | 44 | 0.90 | 00.34, 00.35 | NV, NV | 00.47 | NV | 00.72 | NV | 0.027 | –
 | (2019.01.13) | 3.13 | 11 | 3.07 | 02.61, 02.54 | NV, NV | 01.18 | NV | 02.11 | NV | 0.042 | –
J113824.53$+$365327.2 | (2017.04.17) | 4.32 | 20 | 2.11 | 00.32, 00.40 | NV, NV | 00.21 | NV | 01.58 | NV | 0.028 | –
 | (2018.03.23) | 4.31 | 21 | 2.53 | 00.36, 00.38 | NV, NV | 00.16 | NV | 02.26 | NV | 0.029 | –
J120014.08$-$004638.7 | (2018.03.12) | 3.83 | 28 | 2.72 | 00.31, 00.22 | NV, NV | 00.19 | NV | 01.63 | NV | 0.011 | –
 | (2018.05.11) | 3.13 | 16 | 2.97 | 00.13, 00.25 | NV, NV | 00.33 | NV | 00.39 | NV | 0.018 | –
J124634.65$+$023809.1 | (2017.04.03) | 3.77 | 18 | 2.50 | 00.21, 00.36 | NV, NV | 00.46 | NV | 00.46 | NV | 0.024 | –
 | (2018.04.12) | 3.72 | 21 | 2.71 | 00.59, 00.66 | NV, NV | 00.31 | NV | 01.89 | NV | 0.018 | –
J163323.59$+$471859.0 | (2017.05.20) | 4.33 | 36 | 2.26 | 00.85, 00.76 | NV, NV | 00.23 | NV | 03.76 | V | 0.007 | 2.56
 | (2019.03.20) | 3.69 | 33 | 2.66 | 01.61, 01.79 | NV, NV | 00.36 | NV | 04.43 | V | 0.016 | 9.52
J163401.94$+$480940.2 | (2018.03.23) | 3.04 | 34 | 0.98 | 00.62, 00.71 | NV, NV | 00.39 | NV | 01.61 | NV | 0.012 | –
 | (2021.04.09) | 4.58 | 12 | 2.12 | 01.27, 01.63 | NV, NV | 00.34 | NV | 03.76 | PV | 0.043 | –

$^{a}$Date(s) of the monitoring session(s). The dates given inside parentheses refer to the sessions used here for estimating the INOV duty cycle (e.g., see text in Sect. 4.1). $^{b}$Duration of the monitoring session in the observed frame. $^{c}$Number of data points in the DLCs of the monitoring session. $^{d}$Median seeing (FWHM in arcsec) for the session. $^{e,f}$INOV status inferred from the $F^{\eta}$ and $F_{enh}$ tests, with V = variable, i.e., confidence level $\geq$ 99%; PV = probable variable, i.e., $95-99$% confidence level; NV = non-variable, i.e., confidence level $<$ 95%. $^{g}$Mean amplitude of variability in the two DLCs of the target NLSy1 (i.e., relative to the two chosen comparison stars).

Table 5: The DC and $\overline{\psi}$ of INOV for the sample of 23 RLNLSy1 galaxies studied in this work, based on the $F_{enh}$-test and $F^{\eta}$-test.

RLNLSy1s | No. of Sources | $F_{enh}$-test: $^{\star}$DC (%) | $F_{enh}$-test: $^{\star}\overline{\psi}^{{\dagger}}$ (%) | $F^{\eta}$-test: $^{\star}$DC (%) | $F^{\eta}$-test: $^{\star}\overline{\psi}^{{\dagger}}$ (%) | Median black hole mass, log ($M_{BH}/M_{\sun}$)
---|---|---|---|---|---|---
jetted-RLNLSy1s | 15 | 18 (30)$^{\perp}$ | 09 (07)$^{\perp}$ | 12 (30)$^{\perp}$ | 11 (05)$^{\perp}$ | 7.72
non-jetted-RLNLSy1s | 8 | 05 (16)$^{\perp}$ | 09 (01)$^{\perp}$ | 00 (16)$^{\perp}$ | – | 7.42
J-$\gamma$-RLNLSy1s | 8 | 34 (16)$^{\perp}$ | 10 (07)$^{\perp}$ | 29 (16)$^{\perp}$ | 11 (05)$^{\perp}$ | 7.59
J-RLNLSy1s | 7 | 00 (14)$^{\perp}$ | – | 00 (14)$^{\perp}$ | – | 7.73

$^{\star}$We used the 46 sessions for this estimation, as explained in Sect. 4.1. $^{\dagger}$The mean value for all the DLCs belonging to the type 'V'. $^{\perp}$Values inside parentheses are the numbers of observing sessions used to estimate DC or $\overline{\psi}$.

### 4.1 Computation of INOV duty cycle and amplitude of variability

To compute the duty cycle (DC) of INOV for the present sets of RLNLSy1s, we adopt the definition given by Romero et al. (1999) (see also Stalin et al., 2004): $DC=100\frac{\sum_{j=1}^{n}R_{j}(1/\Delta t_{j})}{\sum_{j=1}^{n}(1/\Delta t_{j})}\ {\rm per\ cent}$ (4) where $\Delta t_{j}=\Delta t_{j,observed}(1+z)^{-1}$ is the redshift-corrected duration of the $j^{th}$ monitoring session of the target NLSy1 galaxy ($z$ being its redshift; see details in Ojha et al., 2020a). In Eq. 4, $R_{j}$ is set to 1 for the $j^{th}$ session only when INOV is detected, and to zero otherwise. Note that, to avoid introducing bias, we have used only two sessions per AGN: for sources observed in more than two sessions (e.g., see Table 4), only the two longest sessions enter the DC computation, as pointed out in Ojha et al. (2020a); Ojha et al. (2021). The INOV duty cycles computed in this way for the different sets of RLNLSy1s, based on the two statistical tests, are listed in Table 5.
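The duty-cycle definition of Eq. 4 is a weighted fraction of variable sessions, each session weighted by the reciprocal of its redshift-corrected duration. A minimal Python sketch follows; the function and variable names are our own illustrative choices, not the authors' code.

```python
# Illustrative sketch of Eq. (4): each session is (inov_detected, dt_obs_hr, z);
# its weight is 1/dt with dt = dt_obs/(1+z), and R_j is 1 only when INOV is seen.
def duty_cycle(sessions):
    num = den = 0.0
    for inov_detected, dt_obs, z in sessions:
        dt = dt_obs / (1.0 + z)       # redshift-corrected session duration
        weight = 1.0 / dt
        num += weight if inov_detected else 0.0
        den += weight
    return 100.0 * num / den          # per cent
```

For example, two equal-length sessions at the same redshift, one variable and one not, give DC = 50 per cent.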
To compute the peak-to-peak amplitude of INOV ($\psi$) detected in a given DLC, we follow the definition given by Heidt & Wagner (1996): $\psi=\sqrt{({H_{max}}-{H_{min}})^{2}-2\sigma^{2}}$ (5) where $H_{min}$ ($H_{max}$) is the minimum (maximum) value in the DLC of the target NLSy1 relative to the steady comparison stars, and $\sigma^{2}=\eta^{2}\langle\sigma^{2}_{NLSy1-s}\rangle$, with $\langle\sigma^{2}_{NLSy1-s}\rangle$ the mean square (formal) rms error of the individual data points. The mean amplitude $\overline{\psi}$ for the different sets of RLNLSy1 galaxies (e.g., see Table 5) is computed by averaging the $\psi$ values of the DLCs belonging to the "V" category. Table 5 summarizes the computed $\overline{\psi}$ values, based on the two statistical tests, for the different sets of RLNLSy1s in our sample.

## 5 Results and discussion

The INOV characterization of RLNLSy1 galaxies presented here is likely to be more representative than previous studies based on significantly smaller samples (Liu et al., 2010; Paliya et al., 2013a; Kshama et al., 2017). We have also paid particular attention to guarding against spurious INOV claims arising from a varying flux contribution to the aperture photometry from the host galaxy of the AGN, caused by seeing-disc variations during a session. As pointed out by Cellone et al. (2000), such circumstances can produce false claims of INOV for low-$z$ AGNs. Recent deep imaging studies of NLSy1 galaxies by Olguín-Iglesias et al. (2020) indicate that any variable contamination from the host galaxy is very unlikely to matter for AGN variability studies, at least at $z>0.5$. From Table 4, it is seen that INOV detection ($\psi>3$%) occurred for just 3 sources in our sample at $z<0.5$.
These are: J032441.20$+$341045.0 (at $z=0.06$; four sessions), J164442.53$+$261913.3 (at $z=0.14$; two sessions), and J163323.59$+$471859.0 (at $z=0.12$; one session). The seeing disc remained steady in all these sessions (Fig. 1), except for flickering of the PSF at only 3-4 points in the first session of J164442.53$+$261913.3 (Fig. 4) and a non-negligible systematic PSF variation in the case of J163323.59$+$471859.0 (Fig. 6). A closer inspection of these two intranight sessions shows that either the PSF remained fairly steady while the AGN's flux varied (Fig. 4), or the gradients in the DLCs of the target AGN are anticorrelated with the systematic variations of the PSF (Fig. 6). This is the opposite of what is expected if the aperture photometry were significantly contaminated by the underlying host galaxy (see Cellone et al., 2000). Therefore, the possibility of a significant variation in the fractional contribution from the host galaxy can be safely discounted, and we conclude that the present cases of INOV detection for low-$z$ RLNLSy1s are genuine and not artifacts of seeing-disc variation during the monitoring sessions. From Table 5, the $F_{enh}$-test yields DCs of 18% and 5% for the jetted and non-jetted RLNLSy1 sets, whereas the more conservative $F^{\eta}$-test yields 12% and 0% for the same sets. Thus, regardless of which statistical test is applied, the INOV DC for the jetted sample is higher than for the non-jetted sample. Since only two steady comparison stars are used in this study, the power-enhanced $F$-test reduces to the scaled $F$-test; however, Goyal et al. (2013b) have suggested that under this condition the power-enhanced $F$-test does not follow the standard $F$-distribution. Therefore, for further discussion and conclusions we rely on the results based upon the $F^{\eta}$-test.
Additionally, in the case of powerful quasars/blazars, such DCs are characteristic of non-blazars, including weakly polarised flat-radio-spectrum (i.e., radio-beamed) quasars (Goyal et al., 2013b; Gopal-Krishna & Wiita, 2018). It is further seen from Table 5 that a higher DC ($\sim$30%, i.e., approaching blazar-like values; e.g., see Goyal et al., 2013b) is exhibited only by the $\gamma$-ray detected subset, which consists of 8 RLNLSy1s, all belonging to the jetted category. This independently reinforces the premise that $\gamma$-ray detected NLSy1 galaxies emit blazar-like compact radio jets, likely conspicuous due to relativistic beaming. The absence of $\gamma$-ray detection among the non-jetted RLNLSy1s is interesting: the non-detection of jets in these sources is probably related to relativistic dimming (due to misalignment) rather than to the jets being beamed but still physically too small to be resolved by the VLBA. This inference is based on the currently popular notion that $\gamma$-rays in AGN primarily arise from the vicinity of the central engine, i.e., the base of the jets (e.g., see Neronov et al., 2015). It would then appear that the radio jets in the non-jetted RLNLSy1s are either not strongly beamed or intrinsically weak. In this context, we also note from Table 5 that the DC of the 8 J-$\gamma$-RLNLSy1s is 29%, while none of the 7 J-RLNLSy1s has shown INOV. As noted in Sect. 2, the VLBA-detected jets of the latter might not be relativistically beamed, which may explain the non-detection of INOV in these sources. The higher INOV DC of the 8 J-$\gamma$-RLNLSy1s may thus be related to relativistic beaming of their jets; the mere presence of jets does not guarantee an INOV detection in NLSy1 galaxies.
For instance, J032441.20$+$341045.0 has the lowest polarization (e.g., see Table 1) and radio-loudness (e.g., see Zhou et al., 2007) values among the 8 J-$\gamma$-RLNLSy1s, yet shows strong INOV activity, perhaps owing to its very high jet speed (e.g., see Table 1). This supports our argument above that relativistic beaming plays a significant role for INOV in the case of $\gamma$-ray detected radio-loud NLSy1s. From Table 5, the DC estimates based upon the $F^{\eta}$-test for the jetted and non-jetted RLNLSy1 samples differ sharply: the jetted-RLNLSy1 sample shows a higher DC than the non-jetted one. However, as noted above, rather than the mere presence of a jet, relativistic beaming seems to play the dominant role for INOV in low-luminosity, highly accreting AGNs such as NLSy1 galaxies. As emphasized in Sect. 1, the central engine of NLSy1s operates in a regime of higher accretion rate, and host-galaxy contributions are prominent for lower-redshift sources (see Sect. 3.2). While our method of analysis for INOV detection is very unlikely to be affected by host-galaxy contributions (and hence by spurious INOV detections) for the present sources, the higher accretion rates of NLSy1s are expected to enhance the AGN's thermal optical emission relative to its synchrotron emission (e.g., see Zhou et al., 2007; Paliya et al., 2014). Since this thermal component is less amenable to variability than the Doppler-boosted synchrotron emission from the jet, the amplitude of INOV is expected to be suppressed by the thermal contamination. The DC estimates for the jetted and non-jetted RLNLSy1 samples would therefore become more robust once it is possible to subtract the thermal contribution that originates in the disc owing to the higher Eddington accretion rates of NLSy1s.
Furthermore, the relatively low DC of the jetted-RLNLSy1 sample might be due either to sub-luminal jet speeds (e.g., see Ojha et al., 2019) or to relativistic jets largely misaligned with the observer's line of sight (Berton et al., 2018). We therefore divided our jetted-RLNLSy1 sample into the subsamples of J-$\gamma$-RLNLSy1s (8 sources) and J-RLNLSy1s (7 sources), based on their detection in $\gamma$-rays by Fermi-LAT, where the detection of $\gamma$-ray emission supports the presence of Doppler-boosted relativistic jets (Abdo et al., 2009a, b, c; Foschini et al., 2010; Foschini, 2011; D'Ammando et al., 2012; D'Ammando et al., 2015; Yao et al., 2015; Paliya et al., 2018; Yang et al., 2018; Yao et al., 2019). As seen in Table 5, the $F^{\eta}$-test yields no INOV detection for the J-RLNLSy1 subsample, in contrast to a DC of $\sim$29% for the J-$\gamma$-RLNLSy1 subsample. This is consistent with the result of Ojha et al. (2019), who found, from a small sample of three NLSy1s, that superluminal motion in the radio jet could be a robust diagnostic of INOV. To further test this scenario, we compiled the apparent jet speeds of our J-$\gamma$-RLNLSy1s from the literature; apparent jet speeds are available for six of the eight members. These six J-$\gamma$-RLNLSy1s are J032441.20$+$341045.0, J084957.98$+$510829.0, J094857.32$+$002225.6, and J150506.48$+$032630.8, with $v_{app}/c$ of $9.1\pm 0.3$, $6.6\pm 0.6$, $9.7\pm 1.1$, and $0.1\pm 0.2$, respectively (e.g., see Lister et al., 2019), and J122222.99$+$041315.9 and J164442.53$+$261913.3, with $v_{app}/c$ of $0.9\pm 0.3$ and $>$1.0, respectively (e.g., see Doi et al., 2012; Lister et al., 2016). Thus, combining the current INOV study with the available jet speeds of the J-$\gamma$-RLNLSy1s, INOV detection is found to correlate with superluminal motion in the radio jet, except for J084957.98$+$510829.0.
Although we did not find any INOV in either of the two $>$3-hr monitoring sessions of J084957.98$+$510829.0 in the present study, this source has previously faded by $\sim$0.2 mag within just $\sim$15 minutes during its high $\gamma$-ray active phase (see figure 6 of Maune et al., 2014). Besides that instance, the source had also shown significant INOV in all six of its earlier intra-night sessions (e.g., see Paliya et al., 2016). Therefore, the non-detection of INOV in our current study may be because the source is presently in a quiescent $\gamma$-ray phase. Nonetheless, the above correlation could be firmly established once a more comprehensive INOV database and apparent jet speeds of J-$\gamma$-RLNLSy1s become available. On the other hand, for RLNLSy1s (with $R>100$), where most of the electromagnetic emission (radio, optical, X-ray, and $\gamma$-ray) supposedly comes from the jets, it has been suggested that more massive black holes are more likely to launch powerful relativistic jets (Urry et al., 2000; Hyvönen et al., 2007; Chiaberge & Marconi, 2011; Olguín-Iglesias et al., 2016). We therefore compared the median black hole masses of the jetted-RLNLSy1 and non-jetted-RLNLSy1 samples, derived with the single-epoch optical spectroscopy virial method (e.g., see the second-to-last column of Table 1). This yields only a nominal difference, with median values of log ($M_{BH}$/$M_{\sun}$) of 7.72 and 7.42 for the jetted-RLNLSy1 and non-jetted-RLNLSy1 samples, respectively (see the last column of Table 5).
Thus, no contrasting difference is found between the black hole masses of the two sets, which may be due either to the small sample sizes or to the use of the single-epoch optical spectroscopy virial method, which is suggested to systematically underestimate black hole masses (Decarli et al., 2008; Marconi et al., 2008; Calderone et al., 2013; Viswanath et al., 2019; Ojha et al., 2020b). To firmly establish the above scenario, black hole masses of a large sample of jetted and non-jetted RLNLSy1s should therefore be estimated with a method less prone to such underestimation, such as the standard Shakura-Sunyaev accretion-disc model method (Calderone et al., 2013; Viswanath et al., 2019).

## 6 Conclusions

To quantify the role of the absence/presence of radio jets for INOV in RLNLSy1s, we have carried out a systematic INOV study based on an unbiased sample of 23 RLNLSy1s. Among them, 15 RLNLSy1s have confirmed jet detections (jetted) and the remaining 8 have no jet detection (non-jetted) in VLBA observations. Our study spans 53 sessions, each of at least 3 hours' duration. The main conclusions from this work are as follows: 1. 1. The INOV DC estimated with the $F^{\eta}$-test for the sample of jetted RLNLSy1s is 12%, whereas none of the sources in the non-jetted sample showed INOV, at the 99% confidence level for a typical threshold $\psi>3$%. 2. 2. Among the jetted RLNLSy1s, the DC for the jetted $\gamma$-ray detected RLNLSy1s is found to be 29%, in contrast to a null INOV detection for the non-$\gamma$-ray detected subset. This suggests that INOV detection in RLNLSy1 galaxies does not depend solely on the presence of radio jets; relativistic beaming plays the dominant role. 3. 3.
The predominance of beamed jets for INOV is also supported in our study by the correlation of INOV detection with the apparent jet speeds available for 6 of the jetted $\gamma$-ray detected RLNLSy1s. 4. 4. The higher DC of $\sim$30%, approaching blazar-like values, exhibited only by the $\gamma$-ray detected subset suggests that $\gamma$-ray detected NLSy1 galaxies emit blazar-like compact radio jets, in which relativistic jet speed and/or small jet angles to the observer's line of sight appear to be correlated with the presence of INOV. For further improvement, it will be helpful to enlarge the sample and conduct similar systematic INOV studies for other subclasses of AGN with and without a confirmed jet, along with proper estimates of apparent jet speeds.

## Acknowledgements

We are thankful to the anonymous referee for comments and suggestions that helped us improve the manuscript considerably. This research is part of the DST-SERB project under grant no. EMR/2016/001723. VKJ and HC acknowledge the financial support provided by DST-SERB for this work. We are very grateful to Prof. Gopal-Krishna for helpful scientific discussions and important suggestions for this work. We thank the concerned ARIES and IIA staff for assistance during the observations obtained with the 3.6-m Devasthal Optical Telescope (DOT) and the 2.01-m Himalayan Chandra Telescope (HCT), national facilities run and managed by ARIES and IIA, respectively, as autonomous institutes under the Department of Science and Technology, Government of India.

## Data availability

The data from the ARIES telescopes and the 2.01-m HCT telescope of IIA used in this paper will be shared on reasonable request to the corresponding author.

Figure 1: The differential light curves (DLCs) for the first 3 jetted-RLNLSy1s from our sample of 15 jetted-RLNLSy1s are shown here.
The name of the RLNLSy1 galaxy, the redshift (z), the telescope used, and the duration of the observations are shown at the top of each panel. The light curves generated from the data obtained on the dates given inside parentheses at the top of each panel were used for the statistical analysis. In each panel, the top DLC shows the instrumental magnitude difference between the two non-variable comparison stars, the two middle DLCs are formed from the NLSy1 and each of the two comparison stars, respectively, and the bottom DLC shows the variation of the seeing (FWHM in arcseconds) during the monitoring session. Figure 2: (Continued) DLCs for the subsequent 5 jetted-RLNLSy1s from the current sample of 15 jetted-RLNLSy1s. Figure 3: (Continued) DLCs for another 2 jetted-RLNLSy1s from the current sample of 15 jetted-RLNLSy1s. Figure 4: (Continued) DLCs for the final 5 jetted-RLNLSy1s from the current sample of 15 jetted-RLNLSy1s. Figure 5: Similar to Fig. 1, but for the first 4 non-jetted-RLNLSy1s from our sample of 8 non-jetted-RLNLSy1s. Figure 6: (Continued) DLCs for the last 4 non-jetted-RLNLSy1s from our sample of 8 non-jetted-RLNLSy1s.

## References

* Abdo et al. (2009a) Abdo A. A., et al., 2009a, ApJ, 699, 976
* Abdo et al. (2009b) Abdo A. A., et al., 2009b, ApJ, 707, 727
* Abdo et al. (2009c) Abdo A. A., et al., 2009c, ApJ, 707, L142
* Abolfathi et al. (2018) Abolfathi B., et al., 2018, ApJS, 235, 42
* Ajello et al. (2020) Ajello M., et al., 2020, ApJ, 892, 105
* Angelakis et al. (2018) Angelakis E., Kiehlmann S., Myserlis I., Blinov D., Eggen J., Itoh R., Marchili N., Zensus J. A., 2018, ArXiv e-prints:1807.02382
* Bachev et al. (2005) Bachev R., Strigachev A., Semkov E., 2005, MNRAS, 358, 774
* Berton et al. (2015) Berton M., et al., 2015, A&A, 578, A28
* Berton et al. (2018) Berton M., et al., 2018, A&A, 614, A87
* Boller et al. (1996) Boller T., Brandt W. N., Fink H., 1996, A&A, 305, 53
* Boroson (2002) Boroson T. A., 2002, ApJ, 565, 78
* Boroson (2005) Boroson T., 2005, AJ, 130, 381
* Boroson & Green (1992) Boroson T. A., Green R. F., 1992, ApJS, 80, 109
* Böttcher & Dermer (2002) Böttcher M., Dermer C. D., 2002, ApJ, 564, 86
* Brandt et al. (1997) Brandt W. N., Mathur S., Elvis M., 1997, MNRAS, 285, L25
* Calderone et al. (2013) Calderone G., Ghisellini G., Colpi M., Dotti M., 2013, MNRAS, 431, 210
* Cellone et al. (2000) Cellone S. A., Romero G. E., Combi J. A., 2000, AJ, 119, 1534
* Cellone et al. (2007) Cellone S. A., Romero G. E., Araudo A. T., 2007, MNRAS, 374, 357
* Chiaberge & Marconi (2011) Chiaberge M., Marconi A., 2011, MNRAS, 416, 917
* D’Ammando et al. (2012) D’Ammando F., et al., 2012, MNRAS, 426, 317
* D’Ammando et al. (2015) D’Ammando F., Orienti M., Larsson J., Giroletti M., 2015, MNRAS, 452, 520
* D’Ammando et al. (2017) D’Ammando F., Acosta-Pulido J. A., Capetti A., Raiteri C. M., Baldi R. D., Orienti M., Ramos Almeida C., 2017, MNRAS, 469, L11
* D’Ammando et al. (2018) D’Ammando F., Acosta-Pulido J. A., Capetti A., Baldi R. D., Orienti M., Raiteri C. M., Ramos Almeida C., 2018, MNRAS, 478, L66
* Decarli et al. (2008) Decarli R., Dotti M., Fontana M., Haardt F., 2008, MNRAS, 386, L15
* Deo et al. (2006) Deo R. P., Crenshaw D. M., Kraemer S. B., 2006, AJ, 132, 321
* Doi et al. (2011) Doi A., Asada K., Nagai H., 2011, ApJ, 738, 126
* Doi et al. (2012) Doi A., Nagira H., Kawakatu N., Kino M., Nagai H., Asada K., 2012, ApJ, 760, 41
* Foschini (2011) Foschini L., 2011, in Narrow-Line Seyfert 1 Galaxies and their Place in the Universe. p. 24 (arXiv:1105.0772)
* Foschini (2012) Foschini L., 2012, in Proceedings of Nuclei of Seyfert galaxies and QSOs - Central engine & conditions of star formation (Seyfert 2012). 6-8 November. p. 10 (arXiv:1301.5785)
* Foschini et al. (2010) Foschini L., Fermi/LAT Collaboration, Ghisellini G., Maraschi L., Tavecchio F., Angelakis E., 2010, in Maraschi L., Ghisellini G., Della Ceca R., Tavecchio F., eds, Astronomical Society of the Pacific Conference Series Vol. 427, Accretion and Ejection in AGN: a Global View. pp 243–248 (arXiv:0908.3313)
* Fraix-Burnet et al. (2017) Fraix-Burnet D., Marziani P., D’Onofrio M., Dultzin D., 2017, Frontiers in Astronomy and Space Sciences, 4, 1
* Gabanyi et al. (2018) Gabanyi K., Moor A., Frey S., 2018, in Revisiting Narrow-Line Seyfert 1 Galaxies and their Place in the Universe. p. 42 (arXiv:1807.05802)
* Garcia et al. (1999) Garcia A., Sodré L., Jablonski F. J., Terlevich R. J., 1999, MNRAS, 309, 803
* Giroletti et al. (2011) Giroletti M., et al., 2011, A&A, 528, L11
* Goodrich et al. (1989) Goodrich R. W., Stringfellow G. S., Penrod G. D., Filippenko A. V., 1989, ApJ, 342, 908
* Gopal-Krishna & Wiita (2018) Gopal-Krishna, Wiita P. J., 2018, Bulletin de la Societe Royale des Sciences de Liege, 87, 281
* Gopal-Krishna et al. (1993) Gopal-Krishna, Wiita P. J., Altieri B., 1993, A&A, 271, 89
* Gopal-Krishna et al. (1995) Gopal-Krishna, Sagar R., Wiita P. J., 1995, MNRAS, 274, 701
* Goyal et al. (2012) Goyal A., Gopal-Krishna, Wiita P. J., Anupama G. C., Sahu D. K., Sagar R., Joshi S., 2012, A&A, 544, A37
* Goyal et al. (2013a) Goyal A., Mhaskey M., Gopal-Krishna, Wiita P. J., Stalin C. S., Sagar R., 2013a, Journal of Astrophysics and Astronomy, 34, 273
* Goyal et al. (2013b) Goyal A., Gopal-Krishna, Paul J. W., Stalin C. S., Sagar R., 2013b, MNRAS, 435, 1300
* Greene & Ho (2007) Greene J. E., Ho L. C., 2007, ApJ, 667, 131
* Grupe & Mathur (2004) Grupe D., Mathur S., 2004, ApJ, 606, L41
* Grupe et al. (1998) Grupe D., Beuermann K., Thomas H.-C., Mannheim K., Fink H. H., 1998, A&A, 330, 25
* Gu et al. (2015) Gu M., Chen Y., Komossa S., Yuan W., Shen Z., Wajima K., Zhou H., Zensus J. A., 2015, ApJS, 221, 3
* Hayashida (2000) Hayashida K., 2000, New Astron. Rev., 44, 419
* Heidt & Wagner (1996) Heidt J., Wagner S. J., 1996, A&A, 305, 42
* Hodge et al. (2018) Hodge M. A., Lister M. L., Aller M. F., Aller H. D., Kovalev Y. Y., Pushkarev A. B., Savolainen T., 2018, ApJ, 862, 151
* Homan et al. (2001) Homan D. C., Attridge J. M., Wardle J. F. C., 2001, ApJ, 556, 113
* Howell (1989) Howell S. B., 1989, PASP, 101, 616
* Howell et al. (1988) Howell S. B., Warnock III A., Mitchell K. J., 1988, AJ, 95, 247
* Hyvönen et al. (2007) Hyvönen T., Kotilainen J. K., Falomo R., Örndahl E., Pursimo T., 2007, A&A, 476, 723
* Ikejiri et al. (2011) Ikejiri Y., et al., 2011, PASJ, 63, 639
* Itoh et al. (2013) Itoh R., et al., 2013, ApJ, 775, L26
* Itoh et al. (2014) Itoh R., et al., 2014, PASJ, 66, 108
* Jha et al. (2021) Jha V. K., Chand H., Ojha V., Omar A., Rastogi S., 2021, MNRAS,
* Jiang et al. (2012) Jiang N., et al., 2012, ApJ, 759, L31
* Joshi et al. (2011) Joshi R., Chand H., Gupta A. C., Wiita P. J., 2011, MNRAS, 412, 2717
* Kellermann et al. (1989) Kellermann K. I., Sramek R., Schmidt M., Shaffer D. B., Green R., 1989, AJ, 98, 1195
* Kellermann et al. (1994) Kellermann K. I., Sramek R. A., Schmidt M., Green R. F., Shaffer D. B., 1994, AJ, 108, 1163
* Kellermann et al. (2016) Kellermann K. I., Condon J. J., Kimball A. E., Perley R. A., Ivezić Ž., 2016, ApJ, 831, 168
* Klimek et al. (2004) Klimek E. S., Gaskell C. M., Hedrick C. H., 2004, ApJ, 609, 69
* Komossa (2018) Komossa S., 2018, in Revisiting Narrow-Line Seyfert 1 Galaxies and their Place in the Universe. p. 15 (arXiv:1807.03666)
* Komossa & Meerschweinchen (2000) Komossa S., Meerschweinchen J., 2000, A&A, 354, 411
* Komossa et al. (2006) Komossa S., Voges W., Xu D., Mathur S., Adorf H.-M., Lemson G., Duschl W. J., Grupe D., 2006, AJ, 132, 531
* Kshama et al. (2017) Kshama S. K., Paliya V. S., Stalin C. S., 2017, MNRAS, 466, 2679
* Kumar et al. (2015) Kumar P., Gopal-Krishna, Hum C., 2015, MNRAS, 448, 1463
* Kumar et al. (2017) Kumar P., Gopal-Krishna, Stalin C.
S., Chand H., Srianand R., Petitjean P., 2017, MNRAS, 471, 606 * Leighly (1999) Leighly K. M., 1999, ApJS, 125, 297 * Leighly & Moore (2004) Leighly K. M., Moore J. R., 2004, ApJ, 611, 107 * Liao et al. (2015) Liao N.-H., Liang Y.-F., Weng S.-S., Berton M., Gu M.-F., Fan Y.-Z., 2015, arXiv e-prints, p. arXiv:1510.05584 * Lister (2018) Lister M., 2018, in Revisiting Narrow-Line Seyfert 1 Galaxies and their Place in the Universe. p. 22 (arXiv:1805.05258) * Lister et al. (2013) Lister M. L., et al., 2013, AJ, 146, 120 * Lister et al. (2016) Lister M. L., et al., 2016, AJ, 152, 12 * Lister et al. (2019) Lister M. L., et al., 2019, ApJ, 874, 43 * Liu et al. (2010) Liu H., Wang J., Mao Y., Wei J., 2010, ApJ, 715, L113 * Marconi et al. (2008) Marconi A., Axon D. J., Maiolino R., Nagao T., Pastorini G., Pietrini P., Robinson A., Torricelli G., 2008, ApJ, 678, 693 * Marscher (2009) Marscher A. P., 2009, arXiv e-prints, p. arXiv:0909.2576 * Mathur (2000) Mathur S., 2000, MNRAS, 314, L17 * Mathur et al. (2001) Mathur S., Kuraszkiewicz J., Czerny B., 2001, New Astron., 6, 321 * Maune et al. (2014) Maune J. D., Eggen J. R., Miller H. R., Marshall K., Readhead A. C. S., Hovatta T., King O., 2014, ApJ, 794, 93 * Miller et al. (1989) Miller H. R., Carini M. T., Goodrich B. D., 1989, Nature, 337, 627 * Miller et al. (2000) Miller H. R., Ferrara E. C., McFarland J. P., Wilson J. W., Daya A. B., Fried R. E., 2000, New Astron. Rev., 44, 539 * Monet (1998) Monet D. G., 1998, in American Astronomical Society Meeting Abstracts. p. 1427 * Neronov et al. (2015) Neronov A., Vovk I., Malyshev D., 2015, Nature Physics, 11, 664 * Neumann et al. (1994) Neumann M., Reich W., Fuerst E., Brinkmann W., Reich P., Siebert J., Wielebinski R., Truemper J., 1994, A&AS, 106, 303 * Ohta et al. (2007) Ohta K., Aoki K., Kawaguchi T., Kiuchi G., 2007, ApJS, 169, 1 * Ojha et al. (2018) Ojha V., Hum C., Gopal-Krishna 2018, Bulletin de la Societe Royale des Sciences de Liege, 87, 387 * Ojha et al. 
(2019) Ojha V., Gopal-Krishna Chand H., 2019, MNRAS, 483, 3036 * Ojha et al. (2020a) Ojha V., Hum C., Gopal-Krishna Sapna M., Krishan C., 2020a, MNRAS, 493, 3642 * Ojha et al. (2020b) Ojha V., Hum C., Dewangan G. C., Rakshit S., 2020b, ApJ, 896, 95 * Ojha et al. (2021) Ojha V., Hum C., Gopal-Krishna 2021, MNRAS, 501, 4110 * Olguín-Iglesias et al. (2016) Olguín-Iglesias A., et al., 2016, MNRAS, 460, 3202 * Olguín-Iglesias et al. (2020) Olguín-Iglesias A., Kotilainen J., Chavushyan V., 2020, MNRAS, 492, 1450 * Orienti et al. (2012) Orienti M., D’Ammando F., Giroletti M., for the Fermi-LAT Collaboration 2012, ArXiv e-prints 1205.0402, * Osterbrock & Pogge (1985) Osterbrock D. E., Pogge R. W., 1985, ApJ, 297, 166 * Paliya (2019) Paliya V. S., 2019, Journal of Astrophysics and Astronomy, 40, 39 * Paliya et al. (2013a) Paliya V. S., Stalin C. S., Kumar B., Kumar B., Bhatt V. K., Pandey S. B., Yadav R. K. S., 2013a, MNRAS, 428, 2450 * Paliya et al. (2013b) Paliya V. S., Stalin C. S., Shukla A., Sahayanathan S., 2013b, ApJ, 768, 52 * Paliya et al. (2014) Paliya V. S., Sahayanathan S., Parker M. L., Fabian A. C., Stalin C. S., Anjum A., Pandey S. B., 2014, ApJ, 789, 143 * Paliya et al. (2016) Paliya V. S., Rajput B., Stalin C. S., Pandey S. B., 2016, ApJ, 819, 121 * Paliya et al. (2018) Paliya V. S., Ajello M., Rakshit S., Mandal A. K., Stalin C. S., Kaur A., Hartmann D., 2018, ApJ, 853, L2 * Paliya et al. (2019) Paliya V. S., Parker M. L., Jiang J., Fabian A. C., Brenneman L., Ajello M., Hartmann D., 2019, ApJ, 872, 169 * Parveen et al. (2016) Parveen K., Hum C., Gopal-Krishna 2016, MNRAS, 461, 666 * Peterson (2011) Peterson B. M., 2011, in Narrow-Line Seyfert 1 Galaxies and their Place in the Universe. p. 32 * Peterson et al. (2000) Peterson B. M., et al., 2000, ApJ, 542, 161 * Prabhu & Anupama (2010) Prabhu T. P., Anupama G. C., 2010, in Astronomical Society of India Conference Series. pp 193–201 * Rakshit et al. (2017) Rakshit S., Stalin C. 
S., Chand H., Zhang X.-G., 2017, ApJS, 229, 39 * Romero et al. (1999) Romero G. E., Cellone S. A., Combi J. A., 1999, A&AS, 135, 477 * Sagar (1999) Sagar R., 1999, CURRENT-SCIENCE, 77, 643 * Sagar et al. (2004) Sagar R., Stalin C. S., Gopal-Krishna Wiita P. J., 2004, MNRAS, 348, 176 * Sagar et al. (2010) Sagar R., Kumar B., Omar A., Pandey A. K., 2010, in Astronomical Society of India Conference Series. * Sagar et al. (2012) Sagar R., Kumar B., Omar A., Pand ey A. K., 2012, in Proc. SPIE. p. 84441T (arXiv:1304.2474), doi:10.1117/12.925634 * Shuder & Osterbrock (1981) Shuder J. M., Osterbrock D. E., 1981, ApJ, 250, 55 * Singh & Chand (2018) Singh V., Chand H., 2018, MNRAS, 480, 1796 * Stalin et al. (2004) Stalin C. S., Gopal-Krishna Sagar R., Wiita P. J., 2004, Journal of Astrophysics and Astronomy, 25, 1 * Stetson (1987) Stetson P. B., 1987, PASP, 99, 191 * Stetson (1992) Stetson P. B., 1992, in Worrall D. M., Biemesderfer C., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 25, Astronomical Data Analysis Software and Systems I. p. 297 * Stocke et al. (1992) Stocke J. T., Morris S. L., Weymann R. J., Foltz C. B., 1992, ApJ, 396, 487 * Sulentic et al. (2000) Sulentic J. W., Zwitter T., Marziani P., Dultzin-Hacyan D., 2000, ApJ, 536, L5 * Ulvestad et al. (1995) Ulvestad J. S., Antonucci R. R. J., Goodrich R. W., 1995, AJ, 109, 81 * Urry (2003) Urry M., 2003, in Collin S., Combes F., Shlosman I., eds, Astronomical Society of the Pacific Conference Series Vol. 290, Active Galactic Nuclei: From Central Engine to Host Galaxy. p. 3 (arXiv:astro-ph/0301309) * Urry et al. (2000) Urry C. M., Scarpa R., O’Dowd M., Falomo R., Pesce J. E., Treves A., 2000, ApJ, 532, 816 * Vaughan et al. (1999) Vaughan S., Reeves J., Warwick R., Edelson R., 1999, MNRAS, 309, 113 * Vignali et al. (2004) Vignali C., Brandt W. N., Boller T., Fabian A. C., Vaughan S., 2004, MNRAS, 347, 854 * Visnovsky et al. (1992) Visnovsky K. L., Impey C. D., Foltz C. B., Hewett P. 
C., Weymann R. J., Morris S. L., 1992, ApJ, 391, 560 * Viswanath et al. (2019) Viswanath G., Stalin C. S., Rakshit S., Kurian K. S., Ujjwal K., Gudennavar S. B., Kartha S. S., 2019, ApJ, 881, L24 * Wagner & Witzel (1995) Wagner S. J., Witzel A., 1995, ARA&A, 33, 163 * Wang & Lu (2001) Wang T., Lu Y., 2001, A&A, 377, 52 * Wang et al. (1996) Wang T., Brinkmann W., Bergeron J., 1996, A&A, 309, 81 * Wang et al. (2014) Wang J.-M., et al., 2014, ApJ, 793, 108 * Yang et al. (2018) Yang H., et al., 2018, MNRAS, 477, 5127 * Yao et al. (2015) Yao S., Yuan W., Zhou H., Komossa S., Zhang J., Qiao E., Liu B., 2015, MNRAS, 454, L16 * Yao et al. (2019) Yao S., Komossa S., Liu W.-J., Yi W., Yuan W., Zhou H., Wu X.-B., 2019, MNRAS, 487, L40 * Yuan et al. (2008) Yuan W., Zhou H. Y., Komossa S., Dong X. B., Wang T. G., Lu H. L., Bai J. M., 2008, ApJ, 685, 801 * Zamanov et al. (2002) Zamanov R., Marziani P., Sulentic J. W., Calvani M., Dultzin-Hacyan D., Bachev R., 2002, ApJ, 576, L9 * Zhou et al. (2003) Zhou H.-Y., Wang T.-G., Dong X.-B., Zhou Y.-Y., Li C., 2003, ApJ, 584, 147 * Zhou et al. (2006) Zhou H., Wang T., Yuan W., Lu H., Dong X., Wang J., Lu Y., 2006, ApJS, 166, 128 * Zhou et al. (2007) Zhou H., et al., 2007, ApJ, 658, L13 * de Diego (2014) de Diego J. A., 2014, AJ, 148, 93
# Propagating Residual Biases in Cosmic Shear Power Spectra T. D. Kitching<EMAIL_ADDRESS>Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK P. Paykari Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK H. Hoekstra Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden, The Netherlands M. Cropper Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK ###### Abstract In this paper we derive a full expression for the propagation of multiplicative and additive shape measurement biases into the cosmic shear power spectrum. In doing so we identify several new terms that are associated with selection effects, as well as cross-correlation terms between the multiplicative and additive biases and the shear field. The computation of the resulting bias in the shear power spectrum scales as the fifth power of the maximum multipole considered. Consequently the calculation is infeasible for large $\ell$-modes, and the only tractable way to assess the full impact of shape measurement biases on the cosmic shear power spectrum is through forward modelling of the effects. To linear order in the bias parameters the shear power spectrum is affected only by the mean of the multiplicative bias field over a survey and the cross-correlation between the additive bias field and the shear field. If the mean multiplicative bias is zero then second-order convolutive terms are expected to be orders of magnitude smaller. ## 1 Introduction The statistical properties of the large-scale matter distribution over cosmic time encode key information about the late-time evolution of the Universe, and also allow us to improve constraints on the initial conditions.
Thanks to technological advances we can now efficiently survey larger and larger areas of sky, but the interpretation of galaxy redshift surveys is hampered by the fact that galaxies are biased tracers of the underlying dark matter distribution. Fortunately, the distortion of space-time by matter results in correlations in the ellipticities of distant galaxies, caused by the differential deflection of light rays, a phenomenon called gravitational lensing. The statistics of these correlations can be directly related to those of the large-scale structure. This in turn enables us to constrain the nature of dark energy and to test gravity on cosmological scales. The cosmological lensing signal has now been robustly measured using large ground-based imaging surveys (e.g. Hildebrandt et al., 2018; Troxel et al., 2018). However, to shed light on the nature of dark energy, the precision needs to increase significantly. This is the objective of a number of planned projects that will commence soon. Euclid (Laureijs et al., 2011) aims to image 15 000 deg$^{2}$ of extragalactic sky from space, while the Large Synoptic Survey Telescope (LSST) will survey a similar area from the ground. To exploit fully the potential of these data for cosmology, it is essential that astrophysical and instrumental sources of biases are accounted for at levels that are small compared to the statistical uncertainties on measured cosmological parameters. Accurate measurements of the shapes of small, faint galaxies are therefore essential. The observed ellipticities of galaxies used in weak lensing studies are typically biased with respect to the true ellipticities that would have been measured given ideal data and an ideal measurement algorithm. The dominant sources of bias are a result of the convolution by the point spread function (PSF) and noise in the images.
For this reason the performance of shape measurements has been studied extensively (Heymans et al., 2006; Massey et al., 2007; Bridle et al., 2010; Kitching et al., 2012; Mandelbaum et al., 2015). To first order the biases can be separated into multiplicative and additive functions that act on the true shear. Additive biases arise from anisotropies in the data, such as an anisotropic PSF or detector effects. These do not only affect the measurement of the galaxy shape, but also the detection and selection of sources. Characterizing and correcting for these sources of bias is essential, but residual spurious alignments might still be removed through empirical corrections. For instance, the mean shear when averaged in the coordinate frame defined by the detector should vanish. The detection of a coherent signal would thus indicate an imperfect correction, but that signal could also be fitted for in the cosmological analysis. However we note that this is only partially effective because sources of additive bias are expected to introduce multiplicative biases with a similar amplitude. Unfortunately, multiplicative bias cannot be readily inferred from the imaging data directly. Instead image simulations are used to calibrate the biases in the shape measurement algorithms (e.g. Hoekstra et al., 2015; Kannawadi et al., 2018), although we note that alternative approaches have been recently proposed (Huff & Mandelbaum, 2017a). The desired accuracy in cosmological parameter estimates determines the level at which shape measurement biases can be tolerated. The propagation of biases, or residual biases after calibration, into the weak lensing power spectra (or ‘cosmic shear’ power spectrum) is not straightforward. Some studies approximate the full expression (Taylor & Kitching, 2016; Kitching et al., 2012; Massey et al., 2013) but, as we show in this paper, these results do not capture the spatially varying sources of biases correctly. 
This is of particular importance because the theoretical propagation of such biases into power spectrum residuals drives the design requirements for experiments that use weak lensing as a cosmological probe (Cropper et al., 2013). In this paper we show how multiplicative and additive biases in shape measurement propagate through the cosmic shear power spectra, discuss how this formalism relates to previous studies, and examine the implications for the assessment of shape measurement biases on cosmological parameter performance verification of experiments. In Section 2 we present the formalism, in Section 3 we present some simple simulations that demonstrate the accuracy of the formalism, and in Section 4 we examine the implications of this study; conclusions are presented in Section 5. ## 2 Method We begin with the expression for the measured shear in real (angular) space $\widetilde{\gamma}(\mathbf{\Omega})=[1+m_{0}+m(\mathbf{\Omega})]\gamma(\mathbf{\Omega})+[c_{1,0}+{\rm i}c_{2,0}+c(\mathbf{\Omega})],$ (1) where $\widetilde{\gamma}(\mathbf{\Omega})$ is the measured shear as a function of angle $\mathbf{\Omega}=(\theta,\phi)$ where $\theta$ and $\phi$ are arbitrary spherical coordinates, $m_{0}$ is a constant multiplicative bias, $m(\mathbf{\Omega})=m^{R}(\mathbf{\Omega})+{\rm i}m^{I}(\mathbf{\Omega})$ is a position-dependent multiplicative bias term, $\gamma(\mathbf{\Omega})=\gamma_{1}(\mathbf{\Omega})+{\rm i}\gamma_{2}(\mathbf{\Omega})$ is the true shear, $c_{1,0}$ and $c_{2,0}$ are constant additive biases that contribute to the real and imaginary parts of the additive field, and $c(\mathbf{\Omega})$ is a position-dependent additive bias. We assume no non-local terms, e.g. $m(\mathbf{\Omega}^{\prime})\gamma(\mathbf{\Omega})$, since such terms could always be re-written as a local per-galaxy $m(\mathbf{\Omega})\gamma(\mathbf{\Omega})$ term. We discuss the choice of the multiplicative bias expression in Appendix A.
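As a concrete, entirely illustrative sketch of the bias model in equation (1), it can be applied pixel-wise to a complex shear field with NumPy; the grid size, field values and bias amplitudes below are assumptions chosen only for demonstration, not a physical shear field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex shear field gamma = gamma_1 + i*gamma_2 on a small grid
# (illustrative random values, not a cosmological realisation).
n = 64
gamma = rng.normal(0, 0.03, (n, n)) + 1j * rng.normal(0, 0.03, (n, n))

m0 = 2e-3                                  # constant multiplicative bias
m = rng.normal(0, 1e-3, (n, n)) + 0j       # position-dependent multiplicative bias m(Omega)
c0 = 1e-4 + 1j * 5e-5                      # constant additive bias c_{1,0} + i*c_{2,0}
c = rng.normal(0, 1e-4, (n, n)) + 1j * rng.normal(0, 1e-4, (n, n))

# Equation (1): gamma_tilde = [1 + m0 + m] * gamma + [c0 + c]
gamma_tilde = (1 + m0 + m) * gamma + (c0 + c)
```

Subtracting the additive part recovers the multiplicatively scaled field, which is the split that the harmonic-space derivation below exploits.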
We note that the choice to express the multiplicative effect as a product of complex numbers makes the spin-preserving assumption that any effect only changes the amplitude and/or angle of the observed ellipse relative to the unbiased case. There are more general expressions that can be used to capture biases that can occur in the case of anisotropic systematic effects; however, image simulations suggest that our adopted approach is accurate for residual systematic effects after calibration. In the case where no rotational change is present this reduces to a multiplication by a single scalar field $m(\mathbf{\Omega})$. We will revisit these assumptions later in the analysis. ### 2.1 Spherical Harmonic Representation We now determine the spherical harmonic representation of a biased shear field. We adapt the methodology from CMB pseudo-$C_{\ell}$ analysis here for the general bias case; in particular we follow Lewis et al. (2002); Brown et al. (2005); Zaldarriaga & Seljak (1997); Grain et al. (2012), but we generalize their formalism further to include general spin-2 bias functions and additive terms. In Kitching et al. (2012) a similar adaptation was made, but under simplifying assumptions that did not capture the general case. Since for cosmic shear the cosmological information is contained within the E-mode (gradient) component of the field, and not in the B-mode (curl) component, we work with the spherical harmonic coefficients $\gamma^{E}_{\ell m}$ and $\gamma^{B}_{\ell m}$.
We can write down the E- and B-mode coefficients in terms of the shear field as $\gamma^{E}_{\ell m}=\frac{1}{2}\int{\rm d}\mathbf{\Omega}\,[\gamma(\mathbf{\Omega})\,{}_{2}Y^{*}_{\ell m}(\mathbf{\Omega})+\gamma^{*}(\mathbf{\Omega})\,{}_{-2}Y^{*}_{\ell m}(\mathbf{\Omega})]$ and $\gamma^{B}_{\ell m}=\frac{-{\rm i}}{2}\int{\rm d}\mathbf{\Omega}\,[\gamma(\mathbf{\Omega})\,{}_{2}Y^{*}_{\ell m}(\mathbf{\Omega})-\gamma^{*}(\mathbf{\Omega})\,{}_{-2}Y^{*}_{\ell m}(\mathbf{\Omega})]$, (2) where $\ell$ and $m$ are angular wavenumbers (note that $m$ is used both in the spherical harmonic function and, as $m_{0}$ and $m(\mathbf{\Omega})$, for the multiplicative biases; we keep this standard notation for both cases as the use should be clear from the context), ${}_{2}Y_{\ell m}(\mathbf{\Omega})$ is the standard spin-weighted spherical harmonic function for a spin-$2$ field and $*$ denotes a complex conjugate. This expression is exact for an all-sky unbiased measurement of the shear. This formalism could be generalised to the pure-mode case (Grain et al., 2012), which would become important in the presence of masks, but we leave this masked-data generalisation for future work. To compute the effect of the biases we now replace $\gamma(\mathbf{\Omega})$ in equation (2.1) with $\widetilde{\gamma}(\mathbf{\Omega})$ from equation (1).
This results in the following expressions: $\widetilde{\gamma}^{E}_{\ell m}=\gamma^{E}_{\ell m}+m_{0}\gamma^{E}_{\ell m}+\sum_{\ell^{\prime}m^{\prime}}[\gamma^{E}_{\ell^{\prime}m^{\prime}}W^{+}_{\ell\ell^{\prime}mm^{\prime}}+\gamma^{B}_{\ell^{\prime}m^{\prime}}W^{-}_{\ell\ell^{\prime}mm^{\prime}}]+c^{E}_{\ell m}$ and $\widetilde{\gamma}^{B}_{\ell m}=\gamma^{B}_{\ell m}+m_{0}\gamma^{B}_{\ell m}+\sum_{\ell^{\prime}m^{\prime}}[\gamma^{B}_{\ell^{\prime}m^{\prime}}W^{+}_{\ell\ell^{\prime}mm^{\prime}}-\gamma^{E}_{\ell^{\prime}m^{\prime}}W^{-}_{\ell\ell^{\prime}mm^{\prime}}]+c^{B}_{\ell m}$. (3) Here we expanded the additive term as $c(\mathbf{\Omega})=\sum_{\ell m}(c^{E}_{\ell m}+{\rm i}c^{B}_{\ell m})\,{}_{2}Y_{\ell m}(\mathbf{\Omega})$. We also defined $W^{+}_{\ell\ell^{\prime}mm^{\prime}}=\frac{1}{2}[({}_{2}W^{R,mm^{\prime}}_{\ell\ell^{\prime}}+{}_{-2}W^{R,mm^{\prime}}_{\ell\ell^{\prime}})+{\rm i}({}_{2}W^{I,mm^{\prime}}_{\ell\ell^{\prime}}-{}_{-2}W^{I,mm^{\prime}}_{\ell\ell^{\prime}})]$ and $W^{-}_{\ell\ell^{\prime}mm^{\prime}}=\frac{{\rm i}}{2}[({}_{2}W^{R,mm^{\prime}}_{\ell\ell^{\prime}}-{}_{-2}W^{R,mm^{\prime}}_{\ell\ell^{\prime}})+{\rm i}({}_{2}W^{I,mm^{\prime}}_{\ell\ell^{\prime}}+{}_{-2}W^{I,mm^{\prime}}_{\ell\ell^{\prime}})]$, (4) with ${}_{s}W^{R,mm^{\prime}}_{\ell\ell^{\prime}}=\int{\rm d}\mathbf{\Omega}\,{}_{s}Y^{*}_{\ell^{\prime}m^{\prime}}(\mathbf{\Omega})m^{R}(\mathbf{\Omega})\,{}_{s}Y_{\ell m}(\mathbf{\Omega});$ (5) and similarly for $m^{I}(\mathbf{\Omega})$.
In this derivation we have expressed the real and imaginary parts of the multiplicative bias as $m(\mathbf{\Omega})=m^{R}(\mathbf{\Omega})+{\rm i}m^{I}(\mathbf{\Omega})$. We note that when considering _residual_ systematic effects, i.e. when any amplitude and rotational changes caused by multiplicative systematic effects are small (see Appendix A), $m^{R}(\mathbf{\Omega})\simeq m(\mathbf{\Omega})$ and $m^{I}(\mathbf{\Omega})\simeq 0$ (this implies that, if one expresses the multiplicative biases as $m_{1}(\mathbf{\Omega})\gamma_{1}(\mathbf{\Omega})+{\rm i}m_{2}(\mathbf{\Omega})\gamma_{2}(\mathbf{\Omega})$, where $1$ and $2$ denote the ellipticity components measured parallel to Cartesian axes in a measurement frame, and measured at $45$ degrees to these axes, then $m(\mathbf{\Omega})=m_{1}(\mathbf{\Omega})=m_{2}(\mathbf{\Omega})$; this is found to be the case in state-of-the-art methods, e.g. Pujol et al. 2019, and is also expected for residual systematic effects after calibration with simulations). Already from the expressions in equation (2.1) it can be seen that multiplicative biases in general mix E and B-modes together, both from the underlying shear field and the multiplicative bias field, and the propagation of such terms is in the form of a convolution represented as a sum over wavenumbers. Furthermore, the window function caused by multiplicative biases is $\ell$- and $m$-mode dependent, since in general these biases are not isotropic on the celestial sphere. We note that in this case the constant additive biases $c_{1,0}$ and $c_{2,0}$ do not appear in equation (2.1). This is because a constant term only affects the $\ell=0$ mode, but shear is a spin-2 field whose spherical harmonic transform is not defined for $\ell<2$, because ${}_{s}Y_{\ell m}(\mathbf{\Omega})=0$ for $\ell<|s|$.
Therefore a constant additive bias cannot affect the cosmic shear power spectrum. ### 2.2 Biased Cosmic Shear Power Spectra We now compute the expressions for the biased cosmic shear power spectra by taking the correlation of the expressions in equation (2.1). The full expression can be written as a series of terms that pertain to multiplicative, additive and cross-terms, and depend on the true EE, EB and BB power spectra. The power spectra estimates are computed by taking the correlation of the spherical harmonic coefficients from equation (2.1), where $\widetilde{C}^{GH}_{\ell}\equiv\frac{1}{2\ell+1}\sum_{m}\widetilde{\gamma}^{G}_{\ell m}\widetilde{\gamma}^{H,*}_{\ell m}$ (6) for $G=(E,B)$ and $H=(E,B)$. We provide the full expanded expression for the biased power spectra in Appendix B. If we assume that $C^{EB}_{\ell}=0$, which is the case in all but the most exotic dark energy models, then the three estimated power spectra (EE, BB and EB) are: $\widetilde{C}^{EE}_{\ell}=(1+2m_{0}+m_{0}^{2})C^{EE}_{\ell}+(1+m_{0})({\mathcal{N}}^{+}_{\ell}+{\mathcal{N}}^{+,*}_{\ell})C^{EE}_{\ell}+2(1+m_{0})C^{c_{E}E}_{\ell}+C^{c_{E}c_{E}}_{\ell}+\sum_{\ell^{\prime}}[{\mathcal{M}}^{++}_{\ell\ell^{\prime}}C^{EE}_{\ell^{\prime}}+{\mathcal{M}}^{--}_{\ell\ell^{\prime}}C^{BB}_{\ell^{\prime}}]+\sum_{\ell^{\prime}}[{\mathcal{B}}^{+EE}_{\ell\ell^{\prime}}+({\mathcal{B}}^{+EE}_{\ell\ell^{\prime}})^{*}+{\mathcal{B}}^{-BE}_{\ell\ell^{\prime}}+({\mathcal{B}}^{-BE}_{\ell\ell^{\prime}})^{*}]$; $\widetilde{C}^{BB}_{\ell}=(1+2m_{0}+m_{0}^{2})C^{BB}_{\ell}+(1+m_{0})({\mathcal{N}}^{+}_{\ell}+{\mathcal{N}}^{+,*}_{\ell})C^{BB}_{\ell}+2(1+m_{0})C^{c_{B}B}_{\ell}+C^{c_{B}c_{B}}_{\ell}+\sum_{\ell^{\prime}}[{\mathcal{M}}^{--}_{\ell\ell^{\prime}}C^{EE}_{\ell^{\prime}}+{\mathcal{M}}^{++}_{\ell\ell^{\prime}}C^{BB}_{\ell^{\prime}}]+\sum_{\ell^{\prime}}[{\mathcal{B}}^{+BB}_{\ell\ell^{\prime}}+({\mathcal{B}}^{+BB}_{\ell\ell^{\prime}})^{*}-{\mathcal{B}}^{-EB}_{\ell\ell^{\prime}}-({\mathcal{B}}^{-EB}_{\ell\ell^{\prime}})^{*}]$; $\widetilde{C}^{EB}_{\ell}=-(1+m_{0}){\mathcal{N}}^{-,*}_{\ell}C^{EE}_{\ell}+(1+m_{0}){\mathcal{N}}^{-}_{\ell}C^{BB}_{\ell}+2(1+m_{0})C^{c_{E}B}_{\ell}+C^{c_{E}c_{B}}_{\ell}+\sum_{\ell^{\prime}}[{\mathcal{M}}^{-+}_{\ell\ell^{\prime}}C^{BB}_{\ell^{\prime}}-{\mathcal{M}}^{+-}_{\ell\ell^{\prime}}C^{EE}_{\ell^{\prime}}]+\sum_{\ell^{\prime}}[{\mathcal{B}}^{+EB}_{\ell\ell^{\prime}}+({\mathcal{B}}^{+BE}_{\ell\ell^{\prime}})^{*}+{\mathcal{B}}^{-BB}_{\ell\ell^{\prime}}+({\mathcal{B}}^{-EE}_{\ell\ell^{\prime}})^{*}]$. (7) The various terms in the full expression are ${\mathcal{M}}^{XY}_{\ell\ell^{\prime}}=\frac{1}{2\ell+1}\sum_{mm^{\prime}}W^{X}_{\ell\ell^{\prime}mm^{\prime}}(W^{Y}_{\ell\ell^{\prime}mm^{\prime}})^{*}$, ${\mathcal{N}}^{X}_{\ell}=\frac{1}{2\ell+1}\sum_{m}W^{X}_{\ell\ell mm}$, and ${\mathcal{B}}^{XGH}_{\ell\ell^{\prime}}=\frac{1}{2\ell+1}\sum_{mm^{\prime}}W^{X}_{\ell\ell^{\prime}mm^{\prime}}\gamma^{G}_{\ell^{\prime}m^{\prime}}(c^{H}_{\ell m})^{*}$, (8) where $X=(+,-)$, $Y=(+,-)$, $G=(E,B)$ and $H=(E,B)$.
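As a minimal numerical sketch of the estimator in equation (6), the harmonic coefficients at each $\ell$ can be treated as an array of $2\ell+1$ complex values (random toy values here, not a physical shear realisation); the same sketch also illustrates that a constant multiplicative bias $m_{0}$ scales the spectrum by $(1+m_{0})^{2}=1+2m_{0}+m_{0}^{2}$, matching the leading terms of equation (7).

```python
import numpy as np

rng = np.random.default_rng(1)

def c_ell_estimate(a_G, a_H):
    """Equation (6): average a^G_{lm} a^{H,*}_{lm} over the 2*ell + 1 values of m.

    a_G, a_H: dicts mapping ell -> complex array of length 2*ell + 1.
    The result is real for auto-spectra (G = H); we return the real part.
    """
    return {ell: np.real(np.sum(a_G[ell] * np.conj(a_H[ell]))) / (2 * ell + 1)
            for ell in a_G}

# Toy harmonic coefficients for ell = 2 .. 49 (spin-2 fields have no ell < 2).
gamma_E = {ell: rng.normal(size=2 * ell + 1) + 1j * rng.normal(size=2 * ell + 1)
           for ell in range(2, 50)}
C_EE = c_ell_estimate(gamma_E, gamma_E)

# A constant multiplicative bias scales every coefficient by (1 + m0), so the
# estimated spectrum scales by (1 + m0)^2 = 1 + 2*m0 + m0^2.
m0 = 2e-3
gamma_E_biased = {ell: (1 + m0) * a for ell, a in gamma_E.items()}
C_EE_biased = c_ell_estimate(gamma_E_biased, gamma_E_biased)
```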
The power spectra in the full expression are labelled in their superscripts as either correlations between shear coefficients ($EE$, $EB$, $BB$), correlations between the additive bias terms ($c_{E}c_{E}$, $c_{E}c_{B}$, $c_{B}c_{B}$), or cross-correlations between shear and additive bias terms ($c_{E}E$, $c_{E}B$, $c_{B}B$). Equation (2.2) should be understood as the measured power spectrum (on the left hand sides), compared to the power spectrum that would have been measured with no systematic effects (the $C^{GH}_{\ell}$'s on the right hand sides). However we note that the terms convolved with the window function ($W^{X}_{\ell\ell^{\prime}mm^{\prime}}$ in ${\mathcal{M}}$ and ${\mathcal{N}}$) in equation (2.2) are derived by taking the ensemble average of equation (6), and making use of the statistical rotational invariance of the ensemble-averaged harmonic modes. Therefore equation (2.2) is a hybrid of ensemble-averaged terms and un-averaged terms which may be non-zero only for a given realisation (as is the case in the examples shown in Section 3). It can be shown that the ${\mathcal{N}}$ terms are simply the mean of the spatially varying multiplicative bias field. If we consider ${\mathcal{N}}^{+}_{\ell}$ we find ${\mathcal{N}}^{+}_{\ell}=\frac{1}{(2\ell+1)}\sum_{m}\int{\rm d}\mathbf{\Omega}\,m^{R}(\mathbf{\Omega})\,{}_{2}Y^{*}_{\ell m}(\mathbf{\Omega})\,{}_{2}Y_{\ell m}(\mathbf{\Omega}).$ (9) We can simplify this expression further by using the generalised addition theorem for spin-weighted spherical harmonics (Grain et al., 2012): $\sum_{m}{}_{s}Y^{*}_{\ell m}(\mathbf{\Omega})\,{}_{s^{\prime}}Y_{\ell^{\prime}m^{\prime}}(\mathbf{\Omega}^{\prime})=\left[\frac{(2\ell+1)}{4\pi}\right](-1)^{s-s^{\prime}}D^{\ell}_{ss^{\prime}}(\alpha,\beta,\gamma)\,{\rm e}^{-2{\rm i}s\gamma},$ (10) where $(\alpha,\beta,\gamma)$ are Euler angles between $\mathbf{\Omega}$ and $\mathbf{\Omega}^{\prime}$, which in our case are zero, and $D^{\ell}_{ss^{\prime}}(\alpha,\beta,\gamma)$ are the Wigner rotation matrices, for which $D^{\ell}_{ss^{\prime}}(0,0,0)=\delta^{K}_{ss^{\prime}}$. This leads to ${\mathcal{N}}^{+}_{\ell}=\frac{1}{4\pi}\int{\rm d}\mathbf{\Omega}\,m^{R}(\mathbf{\Omega})=\langle m^{R}(\mathbf{\Omega})\rangle,$ (11) and similarly ${\mathcal{N}}^{-}_{\ell}=\langle m^{I}(\mathbf{\Omega})\rangle$. We choose to keep $m_{0}$ and the mean of $m(\mathbf{\Omega})$ separate since these could have different physical origins, i.e. one is a true constant, the other the mean of a spatially varying field. We note that the sum of $m_{0}$ and $\langle m(\mathbf{\Omega})\rangle$ is similar to the bias term $b_{m}$ in Taylor & Kitching (2016). We discuss further simplifications of these expressions below. In Appendix C we show the generalisation of this to the case of multiple tomographic bins. Figure 1: The real part of the multiplicative field $m^{R}(\mathbf{\Omega})$ in the three example cases investigated (panels labelled case 1, case 2 and case 3). Shown is a simulated celestial sphere in a Mollweide projection with $\theta=\phi=0$ at the North pole. The colour scale represents the amplitude of the biases. #### 2.2.1 Discussion of the terms We can now discuss each term in the full expression and its physical meaning. * $m_{0}$ and $m_{0}^{2}$: These terms are the normal contribution from the constant multiplicative bias terms. These arise from the limitations with which shape measurement algorithms can be calibrated (see e.g. Hoekstra et al., 2017).
As we are concerned with the residual biases _after_ such a calibration, $m_{0}^{2}\ll m_{0}$ since $m_{0}\lesssim 2\times 10^{-3}$ (Cropper et al., 2013). * • ${\bf\mathcal{N}}$: These terms represent the linear-order multiplicative biases, and are the mean of the multiplicative bias field. * • ${\bf\mathcal{M}}$: These terms represent multiplicative terms of order $m^{2}$. The rows and columns show how the E and B-mode power have mixed terms, where the $++$ terms pick up contributions from the real part of the multiplicative bias field, the $--$ terms pick up contributions from the imaginary part, and the $+-$ and $-+$ terms are mixed. * • ${\bf\mathcal{B}}$: These terms represent third-order, bispectrum-like, correlations between the position-dependent multiplicative bias $m(\mathbf{\Omega})\gamma(\mathbf{\Omega})$ and the position-dependent additive bias $c(\mathbf{\Omega})$. Such effects are likely since areas in a survey, or particular pointings, that have detector, telescope or background effects that cause additive biases will also lead to multiplicative biases. This is because any anisotropic change in the quadrupole moments will modify the size, and thus the multiplicative bias. We note that in this term the multiplicative and shear terms are always spatially coupled, and this combination is correlated with the spatially varying additive term. * • $c_{E}E$, $c_{E}B$, and $c_{B}B$: These terms capture the correlations between the underlying shear field and the additive bias terms. Such terms are expected to be caused by selection effects in a real survey, where for example blending in high-shear regions (e.g. about clusters) could cause an additive bias contribution. * • $c_{E}c_{E}$, $c_{E}c_{B}$, and $c_{B}c_{B}$: These are the power spectra of the position-dependent additive biases. We note again that constant additive bias terms do not contribute to cosmic shear power spectra. 
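The result that the ${\mathcal{N}}$ coupling terms reduce to the mean of the multiplicative bias field (equation 11) has a simple flat analogue that can be checked numerically. Below is a minimal sketch, assuming a periodic 1D domain with plain Fourier modes standing in for the spin-weighted spherical harmonics; the field `m` and grid size are illustrative only:

```python
import numpy as np

# Flat, periodic 1D analogue of equation (11): the diagonal of the
# mode-coupling matrix W_{kk'} built from a real field m(x) equals <m>.
rng = np.random.default_rng(0)
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
m = 2e-3 + 1e-3 * np.sin(3.0 * x) + 1e-4 * rng.standard_normal(n)

# W_{kk'} = (1/n) sum_x m(x) e^{-ikx} e^{ik'x}; on the diagonal k = k'
# the phases cancel, leaving (1/n) sum_x m(x), the spatial mean.
k = np.arange(n)
phase = np.exp(-1j * np.outer(k, x))       # e^{-ikx} for each mode k
W = (phase * m) @ phase.conj().T / n       # full coupling matrix
diag = np.real(np.diag(W))

print(np.allclose(diag, m.mean()))         # every diagonal entry is <m>
```

The off-diagonal entries of `W` play the role of the mode-mixing captured by the convolutive ${\mathcal{M}}$ terms; only the diagonal collapses to the mean.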
### 2.3 Linear Expressions Here we show the linearised expressions for the biased cosmic shear power, which include only terms that are linear in the bias parameters. We find that $\displaystyle\widetilde{C}^{EE}_{\ell}$ $\displaystyle\approx$ $\displaystyle(1+2m_{0})C^{EE}_{\ell}+2\langle m^{R}(\mathbf{\Omega})\rangle C^{EE}_{\ell}+2C^{c_{E}E}_{\ell},$ $\displaystyle\widetilde{C}^{EB}_{\ell}$ $\displaystyle\approx$ $\displaystyle-\langle m^{I}(\mathbf{\Omega})\rangle C^{EE}_{\ell}+\langle m^{I}(\mathbf{\Omega})\rangle C^{BB}_{\ell}+2C^{c_{E}B}_{\ell},$ $\displaystyle\widetilde{C}^{BB}_{\ell}$ $\displaystyle\approx$ $\displaystyle(1+2m_{0})C^{BB}_{\ell}+2\langle m^{R}(\mathbf{\Omega})\rangle C^{BB}_{\ell}+2C^{c_{B}B}_{\ell}.$ (12) We have included $B$-mode power since, as shown in Schneider et al. (2002), source redshift clustering can cause a small $B$-mode component. We see that the impact of spatially varying biases will, to linear order, be captured by the mean of the multiplicative bias and the additive-shear cross-correlation power spectrum, but in the presence of intrinsic B-modes $\widetilde{C}^{EB}_{\ell}$ now includes a term $\langle m^{I}(\mathbf{\Omega})\rangle C^{BB}_{\ell}$. We note that if $\langle m^{R}(\mathbf{\Omega})\rangle=\langle m^{I}(\mathbf{\Omega})\rangle$, then (twice) the EB power spectrum could be added to the EE power spectrum to cancel out multiplicative effects; however this is not expected to be the case in general, or for small biases. ## 3 Simple Simulations To test that the above formalism can indeed capture the propagation of general position-dependent multiplicative and additive bias terms into the cosmic shear power spectrum, we generate several toy examples and investigate the contributions of each term to the overall change. 
For each case we define a multiplicative constant and field, $m_{0}$ and $m(\mathbf{\Omega})$, and an additive constant and field, $c_{0}=c_{1,0}=c_{2,0}$ and $c(\mathbf{\Omega})$, although the choice of $c_{0}$ has no impact on cosmic shear power spectra by definition. We normalise these fields such that $\langle m_{0}+m(\mathbf{\Omega})\rangle=2\times 10^{-3}$ and $\langle c_{0}+c(\mathbf{\Omega})\rangle=1\times 10^{-4}$, which represent the overall requirements for a _Euclid_ -like experiment (Cropper et al., 2013); however we note that the amplitude of $\langle c_{0}+c(\mathbf{\Omega})\rangle$ will have no effect on the power spectrum, as discussed previously. For each case we compare the computation of the analytic expression in equation (2.2) with a numerical case where we compute the real-space shear field $\widetilde{\gamma}(\mathbf{\Omega})=[1+m_{0}+m(\mathbf{\Omega})]\gamma(\mathbf{\Omega})+[c_{0}+c(\mathbf{\Omega})]$ and then compute its power spectra directly using a spherical harmonic transform. In all cases we compute the original $\gamma(\mathbf{\Omega})$ field as a Gaussian random field generated using a cosmic shear power spectrum based on the Planck $\Lambda$CDM cosmology (Planck Collaboration et al., 2018), using the massmappy code (Wallis et al., 2017). In all cases we use SSHT (McEwen et al., 2013) to compute the spin-weighted spherical harmonics, which sample the sphere using the sampling scheme of McEwen & Wiaux (2011). The cases we consider are shown below. Note that we express these in terms of an arbitrary amplitude $A$ since these are all normalised to have $\langle m_{0}+m(\mathbf{\Omega})\rangle=2\times 10^{-3}$ and $\langle c_{0}+c(\mathbf{\Omega})\rangle=1\times 10^{-4}$. The cases are simple examples but are nonetheless approximations of realistic spatial variations that could occur: 
1. Case 1, Simple Galactic Plane: * • $m^{R}(\mathbf{\Omega})=A[\pi-|\phi-\pi|]$, $m^{I}(\mathbf{\Omega})=0$, * • $c^{R}(\mathbf{\Omega})=c^{I}(\mathbf{\Omega})=A[\pi-|\phi-\pi|]$, * • $c_{0}=A$, $m_{0}=A$; 2. Case 2, Simple Patch Pattern: * • $m^{R}(\mathbf{\Omega})=10A\sin(100|\phi-\pi|)\sin(100|\theta-\pi|)$, $m^{I}(\mathbf{\Omega})=0$, * • $c^{R}(\mathbf{\Omega})=c^{I}(\mathbf{\Omega})=10A\sin(10|\phi-\pi|)\sin(10|\theta-\pi|)$, * • $c_{0}=A$, $m_{0}=A$; 3. Case 3, Simple Scanning Pattern: * • $m^{R}(\mathbf{\Omega})=Ai$, where $i$ is an iterative pixel number count that is reset when $i=10$, $m^{I}(\mathbf{\Omega})=0$, * • $c^{R}(\mathbf{\Omega})=c^{I}(\mathbf{\Omega})=Ai^{2}$, where $i$ is an iterative pixel number count that is reset when $i=10$, * • $c_{0}=A$, $m_{0}=A$. 
Figure 2: The residual power spectrum $\delta C_{\ell}=\widetilde{C}^{EE}_{\ell}-C^{EE}_{\ell}$ for the three cases considered. The left plots show the comparison between the numerical case, computed by transforming the modified shear field and performing a spherical harmonic transform (i.e. a forward model), and the analytic case, computed using equation (2.2). The blue band shows the $1$-sigma scatter about the mean of the forward model, and the red lines show the analytic prediction. The right plots show the contribution to the analytic case from each of the components in equation (2.2). In all cases the legends label the coloured lines. The mean multiplicative terms (green and pink lines) have the same value due to the scaling, and hence are over-plotted. The first case approximates a Galactic plane dependency, the second case approximates a patch-dependent systematic effect, and the third case is a non-analytic case that approximates a scanning sequence of exposures. 
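The forward-model comparison described above can be sketched in a flat-sky, periodic analogue. The actual computation in the paper uses SSHT and massmappy on the sphere; the grid, toy shear field, and normalisation procedure below are illustrative only, with a case-1-like profile and an FFT standing in for the spherical harmonic transform:

```python
import numpy as np

# Flat-sky, periodic sketch of the forward model: build a case-1-like
# multiplicative field, normalise it so <m0 + m> = 2e-3, bias a toy shear
# field, and take power spectra with an FFT.
target_m = 2e-3
n = 128
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
raw = np.pi - np.abs(phi - np.pi)            # case-1 profile with A = 1

# With m0 = A and m = A*raw, <m0 + m> = A*(1 + <raw>); solve for A.
A = target_m / (1.0 + raw.mean())
m0 = A
m = A * np.tile(raw, (n, 1))                 # vary along "longitude" only

rng = np.random.default_rng(0)
gamma = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
c = 1e-4 * rng.standard_normal((n, n))       # spatially varying additive bias

gamma_biased = (1.0 + m0 + m) * gamma + c    # flat-sky analogue of equation (1)

def power(field):
    """Periodic flat-sky power spectrum via a 2D FFT."""
    fk = np.fft.fft2(field)
    return np.abs(fk) ** 2 / field.size

delta_C = power(gamma_biased) - power(gamma)  # analogue of the Figure 2 residual
```

On the sphere the mean would carry a $\sin\theta$ weight; the simple `raw.mean()` used here is part of the flat-sky simplification.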
To demonstrate the complexity of the spatial variation of the cases, we show in Figure 1 the real part of the multiplicative field for each of the cases (we do not show all the fields associated with the systematic effects since they are largely similar in form). In Figure 2 we show the residual power spectra $\delta C_{\ell}=\widetilde{C}^{EE}_{\ell}-C^{EE}_{\ell}$ for each of the cases considered. We compute the error on the forward-model power spectrum as $\sigma(\delta C_{\ell})=[(\widetilde{C}^{EE}_{\ell})^{2}+(C^{EE}_{\ell})^{2}]^{1/2}$ (Joachimi & Bridle, 2010; Hu & Jain, 2004). We note that in all cases we use a maximum angular multipole of $L=32$, because the calculations are particularly numerically demanding: the $W^{\pm}_{\ell\ell^{\prime}mm^{\prime}}$ calculations have dimension $L^{4}$, and for each of these spin-weighted spherical harmonic functions must be computed, each of which scales like $L^{2}\log L$ at best (McEwen et al., 2013). This point is discussed further in Section 4.1. In all cases the analytic formula given in equation (2.2) accurately captures the form of the residual power spectrum; the very small differences are due to the numerical stability of the spin-weighted spherical harmonic transform calculations. Figure 3: Timing of the forward model and analytic calculations using a 2016 MacBook Pro, 3.3 GHz Intel Core i7, 16 GB 2133 MHz LPDDR3. The blue line shows the forward-model scaling, the orange line shows the analytic scaling, and the thin red and green lines are proportional to $L^{2}$ and $L^{5}$ respectively, where $L$ is the maximum $\ell$-mode. With regard to the different terms, we find in all cases that the ${\mathcal{N}}C^{EE}_{\ell}$ term is dominant, which is expected since it is of linear order in $m(\mathbf{\Omega})$, followed by the $C^{c_{E}E}_{\ell}$ cross-correlation term and the $(m_{0}+m_{0}^{2})C^{EE}_{\ell}$ terms. 
The convolutive ${\mathcal{M}}$, ${\mathcal{B}}$ and $C^{c_{E}c_{E}}_{\ell}$ terms are all at least an order of magnitude lower in all cases. Therefore the linearised expression in equation (12), $\delta C_{\ell}\approx 2m_{0}C^{EE}_{\ell}+2\langle m^{R}(\mathbf{\Omega})\rangle C^{EE}_{\ell}+2C^{c_{E}E}_{\ell}$, is a good approximation in these simple examples. In the case that the means of $m(\mathbf{\Omega})$ and $c(\mathbf{\Omega})$ are both zero, all of the remaining terms at second and third order in the biases would become important at approximately the same level. Note that we plot the absolute value of the residual power spectrum contributions, since some terms can be negative depending on the nature of the spatial pattern used in the simulations. ## 4 Discussion We have shown in general how constant and position-dependent shape measurement biases propagate through to cosmic shear power spectra. The multiplicative bias terms shown here are similar to those that result in CMB polarisation pseudo-$C_{\ell}$ analyses, where masking of the data results in expressions that also include temperature power spectra (Lewis et al., 2002; Zaldarriaga & Seljak, 1997; Grain et al., 2012; Brown et al., 2005). The difference here is that instead of a mask we have a multiplicative bias field that is in general spin-dependent. We note that this formalism equally applies to the case of masked cosmic shear data, where $m(\mathbf{\Omega})$ may be zero in some regions, and that such a case would lead to further mode mixing. In Kitching et al. (2012) a pseudo-$C_{\ell}$ formalism was used to assess position-dependent shape measurement errors. However in that study the linear terms ${\mathcal{N}}$, bispectrum and additive terms were not included, and the non-linear convolution term ${\mathcal{M}}$ used a simpler form based on a flat-sky approximation. 
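The dominant linearised residual quoted above can be written out directly. A minimal sketch, with illustrative placeholder spectra and bias values (the arrays and amplitudes below are toys, not survey quantities):

```python
import numpy as np

# Linearised residual: delta C_l ~ 2 m0 C^EE + 2 <m^R> C^EE + 2 C^{c_E E},
# evaluated on illustrative placeholder spectra.
ell = np.arange(2, 50)
C_EE = 1e-9 * ell**-1.5          # toy unbiased E-mode spectrum
C_cE_E = 1e-13 * ell**-1.0       # toy additive-shear cross spectrum

m0 = 1e-3                        # constant multiplicative bias
mR_mean = 1e-3                   # mean of the real multiplicative bias field

delta_C = 2.0 * m0 * C_EE + 2.0 * mR_mean * C_EE + 2.0 * C_cE_E

# Fractional EE bias implied by the linear terms alone: 2(m0 + <m^R>)
# plus the cross-term contribution.
frac = delta_C / C_EE
```

With the cross term removed, the fractional bias reduces to the constant $2(m_{0}+\langle m^{R}\rangle)$, which is why the mean multiplicative terms appear as flat offsets relative to $C^{EE}_{\ell}$ in the figures.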
In Taylor & Kitching (2016) the propagation of shape measurement biases was generalised to the convolutive case, but the linear terms, bispectrum and E/B-mode mixing terms were ignored. In Massey et al. (2013) and Cropper et al. (2013) requirements were set on weak lensing experiments using an approximation for position-dependent shape measurement biases. In that case a form of propagation was determined for the constant case, $\widetilde{C}^{EE}_{\ell}=(1+2m_{0}+m_{0}^{2})C^{EE}_{\ell}+c^{2}_{c}$, which was then replaced with a ‘position-dependent’ formulation proposed by Amara & Réfrégier (2008), $\widetilde{C}^{EE}_{\ell}=(1+{\mathcal{M}}_{\ell})C^{EE}_{\ell}+{\mathcal{A}}_{\ell}$. We find that such an expression is similar to the full case when only linear terms in the biases are assumed, whereas the relationship to the underlying position-dependent bias fields is much more complex. Furthermore, in Massey et al. (2013) a worst-case scenario in sensitivity was assumed for ${\mathcal{M}}_{\ell}$, in which multiplicative biases mimicked the scale-dependent behaviour of dark energy. These worst-case assumptions are conservative when designing an experiment and lead to requirements that will guarantee performance, but they are not adequate when assessing the actual performance of a survey. We note that if the mean multiplicative bias is zero – as may be expected if pre-experiment simulations can determine any mean effect – then only second and third order convolutive terms remain. The power spectrum residuals caused by these terms, and the impact on cosmology, are expected to be much lower than the mean terms for two reasons. First, because the terms are second order, for $m(\mathbf{\Omega})\ll 1$ they are smaller. Second, because they are convolutions it is unlikely that a functional form will result that matches the cosmic shear power spectrum; therefore the impact on cosmological parameter inference is expected to be lower (Taylor & Kitching, 2016). 
### 4.1 Scaling In Figure 3 we show how the analytic and forward-modelling cases scale as a function of $L$, the maximum $\ell$-mode. To compute the full case requires evaluations of terms that scale like $L^{4}\times L^{2}\log L$, where $L$ is the maximum multipole; this is because of the $W_{\ell\ell^{\prime}mm^{\prime}}$ terms that have $\sim L^{4}$ summations, and the spherical harmonic transforms that scale like $L^{2}\log L$. We find a slightly better scaling due to pre-computation of the spin-weighted spherical harmonic functions, but nonetheless the analytic calculation scales like $\propto L^{5}$ compared to the forward modelling that scales like $\propto L^{2}$. We evaluate simple examples for $L=32$, but scaling to a reasonable value of $L>1000$ would result in prohibitively long calculations. On the other hand the forward modelling of systematic effects, i.e. the evaluation of equation (1) in real space and a direct spherical harmonic transform to produce a power spectrum, is tractable for $L>1000$, and we therefore advocate this approach in Taylor et al. (2019) and Paykari et al., (in prep). We note that one could perform a spherical-sky analysis and supplement this with a fast Fourier transform on small scales. However this approach would require the sphere to be divided into patches upon which a flat-sky analysis could be run, with overlap between patches to capture all angular modes, and a transition from all-sky to flat-sky computed. This is feasible, but it is unneeded complexity given that a forward model is very simple to compute. ## 5 Conclusions In this paper we derive a complete expression for the impact of constant and spatially varying multiplicative and additive shape measurement biases on the cosmic shear power spectrum. In doing so we find several terms that have thus far been overlooked, in particular terms relating to cross-correlations between biases and shear. 
In performing the full calculation we find that, to linear order, spatially varying biases are well approximated by the sum of the product of the power spectrum and the mean of the multiplicative bias field, and the cross-correlation term between the additive bias field and the shear field. We note that the cosmic shear power spectrum is not sensitive to constant or mean additive biases. We compare the computation of the full analytic expression with that obtained using a forward-modelling approach using simplified simulations and find good agreement. Furthermore we use these simplified simulations to demonstrate how each term in the full expression contributes to the total. However, in performing the full calculation we also find that its computation scales with the maximum multipole as $\propto L^{5}$. This means that its evaluation for large $L>1000$ is unfeasible. Therefore we recommend that any assessment of the impact of biases on the cosmic shear power spectrum be performed using a forward-modelling approach. ###### Acknowledgements. We thank the developers of SSHT and massmappy, Jason McEwen, Chris Wallis and Boris Leistedt, for making their code publicly available. We thank Dipak Munshi for useful discussions. We thank Peter Schneider for comments on an early draft. PP is supported by the UK Science and Technology Facilities Council. TK is supported by a Royal Society University Research Fellowship. HH acknowledges support from Vici grant 639.043.512 financed by the Netherlands Organization for Scientific Research (NWO). We thank an anonymous referee whose comments improved this manuscript. ## References * Amara & Réfrégier (2008) Amara A., Réfrégier A., 2008, MNRAS, 391, 228 * Bridle et al. (2010) Bridle S., et al., 2010, MNRAS, 405, 2044 * Brown et al. (2005) Brown M. L., Castro P. G., Taylor A. N., 2005, MNRAS, 360, 1262 * Cropper et al. (2013) Cropper M., et al., 2013, MNRAS, 431, 3103 * Grain et al. (2012) Grain J., Tristram M., Stompor R., 2012, Phys. Rev. D, 86, 076005 * Heymans et al. (2006) Heymans C., et al., 2006, MNRAS, 368, 1323 * Hildebrandt et al. (2018) Hildebrandt H., et al., 2018, arXiv e-prints, arXiv:1812.06076 * Hoekstra et al. (2015) Hoekstra H., Herbonnet R., Muzzin A., Babul A., Mahdavi A., Viola M., Cacciato M., 2015, MNRAS, 449, 685 * Hoekstra et al. (2017) Hoekstra H., Viola M., Herbonnet R., 2017, MNRAS, 468, 3295 * Hu & Jain (2004) Hu W., Jain B., 2004, Phys. Rev. D, 70, 043009 * Huff & Mandelbaum (2017a) Huff E., Mandelbaum R., 2017a, arXiv e-prints, arXiv:1702.02600 * Huff & Mandelbaum (2017b) Huff E., Mandelbaum R., 2017b, arXiv e-prints, arXiv:1702.02600 * Joachimi & Bridle (2010) Joachimi B., Bridle S. L., 2010, A&A, 523, A1 * Kannawadi et al. (2018) Kannawadi A., et al., 2018, arXiv e-prints, arXiv:1812.03983 * Kitching et al. (2012) Kitching T. D., et al., 2012, MNRAS, 423, 3163 * Laureijs et al. (2011) Laureijs R., et al., 2011, arXiv e-prints, arXiv:1110.3193 * Lewis et al. (2002) Lewis A., Challinor A., Turok N., 2002, Phys. Rev. D, 65, 023505 * Mandelbaum et al. (2015) Mandelbaum R., et al., 2015, MNRAS, 450, 2963 * Massey et al. (2007) Massey R., et al., 2007, MNRAS, 376, 13 * Massey et al. (2013) Massey R., et al., 2013, MNRAS, 429, 661 * McEwen & Wiaux (2011) McEwen J. D., Wiaux Y., 2011, IEEE Transactions on Signal Processing, 59, 5876 * McEwen et al. (2013) McEwen J. D., Puy G., Thiran J.-P., Vandergheynst P., Van De Ville D., Wiaux Y., 2013, IEEE Transactions on Image Processing, 22, 2275 * Planck Collaboration et al. (2018) Planck Collaboration et al., 2018, preprint, arXiv:1807.06209 * Pujol et al. (2019) Pujol A., Kilbinger M., Sureau F., Bobin J., 2019, A&A, 621, A2 * Schneider et al. (2002) Schneider P., van Waerbeke L., Mellier Y., 2002, A&A * Taylor & Kitching (2016) Taylor A. N., Kitching T. D., 2016, preprint, arXiv:1605.09130 * Taylor et al. (2019) Taylor P. L., Kitching T. D., Alsing J., Wandelt B. D., Feeney S. M., McEwen J. D., 2019, arXiv e-prints, arXiv:1904.05364 * Troxel et al. (2018) Troxel M. A., et al., 2018, MNRAS, 479, 4998 * Wallis et al. (2017) Wallis C. G. R., McEwen J. D., Kitching T. D., Leistedt B., Plouviez A., 2017, preprint, arXiv:1703.09233 * Zaldarriaga & Seljak (1997) Zaldarriaga M., Seljak U., 1997, Phys. Rev. D, 55, 1830 ## Appendix A Bias Propagation The true shear field $\gamma(\mathbf{\Omega})$ is a spin-2 quantity, and any impact of imperfect shape measurement should preserve this spin-2 nature. In general we consider that there are two ways that an imperfect shape measurement can impact an observed spin-2 field: 1) there can be an incorrect estimate of the ratio of the semi-major and semi-minor axes, expressed as an amplitude change of the shear, or 2) there can be an incorrect estimation of the observed angle of the ellipse, i.e. a rotation. 
### A.1 Multiplicative Bias For a multiplicative systematic effect these possibilities can be expressed as $\displaystyle[1+m(\mathbf{\Omega})]\gamma(\mathbf{\Omega})=[1+m(\mathbf{\Omega}){\rm e}^{{\rm i}\phi_{m}(\mathbf{\Omega})}]\,|\gamma(\mathbf{\Omega})|{\rm e}^{{\rm i}2\Phi(\mathbf{\Omega})}$ (13) where we have expressed $\gamma(\mathbf{\Omega})=|\gamma(\mathbf{\Omega})|{\rm e}^{{\rm i}2\Phi(\mathbf{\Omega})}$, where $\Phi(\mathbf{\Omega})$ is the angle between the orientation of the elliptical shape induced by the shear and the $x$-axis of the local Cartesian frame in which the measurement has been made. To this expression we apply an amplitude change $m(\mathbf{\Omega})$, corresponding to an incorrect measurement of the ratio of the semi-major and semi-minor axes, and a small rotation $\phi_{m}(\mathbf{\Omega})$. This preserves the spin-2 nature of the measured field. When we express the product of two complex numbers $m(\mathbf{\Omega})\gamma(\mathbf{\Omega})$, it should be understood that the multiplicative bias fields take the form $\displaystyle m(\mathbf{\Omega})$ $\displaystyle=$ $\displaystyle m^{R}(\mathbf{\Omega})+{\rm i}m^{I}(\mathbf{\Omega})$ $\displaystyle m^{R}(\mathbf{\Omega})$ $\displaystyle=$ $\displaystyle m(\mathbf{\Omega})\cos[\phi_{m}(\mathbf{\Omega})]$ $\displaystyle m^{I}(\mathbf{\Omega})$ $\displaystyle=$ $\displaystyle m(\mathbf{\Omega})\sin[\phi_{m}(\mathbf{\Omega})],$ (14) where $m^{R}(\mathbf{\Omega})$ and $m^{I}(\mathbf{\Omega})$ in the first equation are the real and imaginary parts respectively, which are coupled as expressed in the subsequent equations. We note that we do not label these $m_{1}(\mathbf{\Omega})$ and $m_{2}(\mathbf{\Omega})$ since they do not map solely to the $\gamma_{1}(\mathbf{\Omega})$ and $\gamma_{2}(\mathbf{\Omega})$ components of the shear field, where $\gamma(\mathbf{\Omega})=\gamma_{1}(\mathbf{\Omega})+{\rm i}\gamma_{2}(\mathbf{\Omega})$. We refer to Pujol et al. 
(2019); Huff & Mandelbaum (2017b) for further discussion of the propagation of more complex multiplicative biases. We note that if the amplitude of the (residual) biases is small, and the rotation angle is random, then it is reasonable to assume that $\langle m^{R}(\mathbf{\Omega})\rangle=\langle m^{I}(\mathbf{\Omega})\rangle$; if one assumes instead small residual biases and applies the small-angle approximation then $m^{R}(\mathbf{\Omega})\approx m(\mathbf{\Omega})$ and $m^{I}(\mathbf{\Omega})\approx m(\mathbf{\Omega})\phi_{m}(\mathbf{\Omega})\approx 0$. In the case that $m^{R}(\mathbf{\Omega})=m(\mathbf{\Omega})$ and $m^{I}(\mathbf{\Omega})=0$, this would result in $m_{1}(\mathbf{\Omega})=m_{2}(\mathbf{\Omega})=m(\mathbf{\Omega})$ if expressed in this way. We note that if one applies a bias of the form $m_{1}(\mathbf{\Omega})\gamma_{1}(\mathbf{\Omega})+{\rm i}m_{2}(\mathbf{\Omega})\gamma_{2}(\mathbf{\Omega})$ (i.e. a different independent scalar multiplicative bias applied to each of the shear components) then this cannot in general be expressed as an amplitude change with a rotation, and therefore can result in a change in the spin properties of the measured field. We note that one could create such an expression via $m^{\prime}(\mathbf{\Omega})\gamma(\mathbf{\Omega})+\delta m(\mathbf{\Omega})\gamma^{*}(\mathbf{\Omega})$ where $m^{\prime}(\mathbf{\Omega})=[m_{1}(\mathbf{\Omega})+m_{2}(\mathbf{\Omega})]/2$ and $\delta m(\mathbf{\Omega})=[m_{1}(\mathbf{\Omega})-m_{2}(\mathbf{\Omega})]/2$; however the second term would represent a parity change/mislabelling in the $\gamma_{2}$ component, which would be a very large systematic effect. We explored the expected size of $\delta m/m$ using image simulations that resemble _Euclid_, based on Hoekstra et al. (2017), and found that $\delta m/m\sim 0.1$ both for PSF anisotropy and for a simple model of charge trailing between pixels. 
Hence in practice it appears from these initial studies that one can typically ignore $\delta m$. ### A.2 Additive Bias In the additive bias case one can add a field with spin-2 properties such that $\displaystyle\gamma(\mathbf{\Omega})+c(\mathbf{\Omega})=\gamma(\mathbf{\Omega})+|c(\mathbf{\Omega})|{\rm e}^{{\rm i}\phi_{c}(\mathbf{\Omega})}{\rm e}^{{\rm i}2\Phi(\mathbf{\Omega})}$ (15) where $|c(\mathbf{\Omega})|$ and $\phi_{c}(\mathbf{\Omega})$ are systematic changes in the amplitude and rotation angle of the measurements respectively. In this case the two additive components can be expressed as $\displaystyle c(\mathbf{\Omega})$ $\displaystyle=$ $\displaystyle c_{1}(\mathbf{\Omega})+{\rm i}c_{2}(\mathbf{\Omega})$ $\displaystyle c_{1}(\mathbf{\Omega})$ $\displaystyle=$ $\displaystyle|c(\mathbf{\Omega})|\cos[2\Phi(\mathbf{\Omega})+\phi_{c}(\mathbf{\Omega})]$ $\displaystyle c_{2}(\mathbf{\Omega})$ $\displaystyle=$ $\displaystyle|c(\mathbf{\Omega})|\sin[2\Phi(\mathbf{\Omega})+\phi_{c}(\mathbf{\Omega})],$ (16) where in the additive case the real and imaginary parts add to the respective $\gamma_{1}$ and $\gamma_{2}$ parts of the shear, and hence we label them as such (which is not the case for the multiplicative biases). In the constant case one can write $c_{0}=c_{1,0}+{\rm i}c_{2,0}$, where $c_{1,0}$ and $c_{2,0}$ are constants. 
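The decompositions in equations (14) and (16) are straightforward to verify numerically. A minimal sketch, with randomly drawn illustrative amplitudes and angles (the sample sizes and scales are arbitrary):

```python
import numpy as np

# Numerical check of equations (14) and (16): the real/imaginary
# components reconstruct the complex bias fields exactly.
rng = np.random.default_rng(2)
n = 1000
m_amp = 1e-3 * rng.random(n)          # |m|, amplitude change
phi_m = 0.1 * rng.standard_normal(n)  # small rotation angle

mR = m_amp * np.cos(phi_m)            # equation (14)
mI = m_amp * np.sin(phi_m)
assert np.allclose(mR + 1j * mI, m_amp * np.exp(1j * phi_m))

# Small-angle limit quoted in A.1: m^R ~ m and m^I ~ m*phi_m ~ 0.
assert np.allclose(mR, m_amp, atol=1e-4)

c_amp = 1e-4 * rng.random(n)          # |c|
phi_c = rng.standard_normal(n)
Phi = np.pi * rng.random(n)           # shear orientation angle
c1 = c_amp * np.cos(2.0 * Phi + phi_c)   # equation (16)
c2 = c_amp * np.sin(2.0 * Phi + phi_c)
assert np.allclose(c1 + 1j * c2, c_amp * np.exp(1j * (2.0 * Phi + phi_c)))
```

Both biases are polar decompositions of complex fields, which is what preserves the spin-2 character of the measured shear.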
## Appendix B The Two Dimensional Case The full expanded expression for the two dimensional case can be written in matrix form as $\displaystyle\left(\begin{array}[]{c}\widetilde{C}^{EE}_{\ell}\\\ \widetilde{C}^{EB}_{\ell}\\\ \widetilde{C}^{BB}_{\ell}\end{array}\right)$ $\displaystyle=$ $\displaystyle(1+2m_{0}+m_{0}^{2})\left(\begin{array}[]{c}C^{EE}_{\ell}\\\ C^{EB}_{\ell}\\\ C^{BB}_{\ell}\end{array}\right)$ (23) $\displaystyle+$ $\displaystyle(1+m_{0})\left(\begin{array}[]{ccc}({\mathcal{N}}^{+}_{\ell}+{\mathcal{N}}^{+,*}_{\ell})&{\mathcal{N}}^{-}_{\ell}&0\\\ -{\mathcal{N}}^{-,*}_{\ell}&({\mathcal{N}}^{+}_{\ell}+{\mathcal{N}}^{+,*}_{\ell})&{\mathcal{N}}^{-}_{\ell}\\\ 0&-{\mathcal{N}}^{-,*}_{\ell}&({\mathcal{N}}^{+}_{\ell}+{\mathcal{N}}^{+,*}_{\ell})\end{array}\right)\left(\begin{array}[]{c}C^{EE}_{\ell}\\\ C^{EB}_{\ell}\\\ C^{BB}_{\ell}\end{array}\right)$ (30) $\displaystyle+$ $\displaystyle 2(1+m_{0})\left(\begin{array}[]{c}C^{c_{E}E}_{\ell}\\\ C^{c_{E}B}_{\ell}\\\ C^{c_{B}B}_{\ell}\end{array}\right)+\left(\begin{array}[]{c}C^{c_{E}c_{E}}_{\ell}\\\ C^{c_{E}c_{B}}_{\ell}\\\ C^{c_{B}c_{B}}_{\ell}\end{array}\right)$ (37) $\displaystyle+$ $\displaystyle\sum_{\ell^{\prime}}\left(\begin{array}[]{ccc}{\mathcal{M}}^{++}_{\ell\ell^{\prime}}&({\mathcal{M}}^{-+}_{\ell\ell^{\prime}}+{\mathcal{M}}^{+-}_{\ell\ell^{\prime}})&{\mathcal{M}}^{--}_{\ell\ell^{\prime}}\\\ -{\mathcal{M}}^{+-}_{\ell\ell^{\prime}}&({\mathcal{M}}^{++}_{\ell\ell^{\prime}}-{\mathcal{M}}^{--}_{\ell\ell^{\prime}})&{\mathcal{M}}^{-+}_{\ell\ell^{\prime}}\\\ {\mathcal{M}}^{--}_{\ell\ell^{\prime}}&-({\mathcal{M}}^{-+}_{\ell\ell^{\prime}}+{\mathcal{M}}^{+-}_{\ell\ell^{\prime}})&{\mathcal{M}}^{++}_{\ell\ell^{\prime}}\\\ \end{array}\right)\left(\begin{array}[]{c}C^{EE}_{\ell^{\prime}}\\\ C^{EB}_{\ell^{\prime}}\\\ C^{BB}_{\ell^{\prime}}\end{array}\right)$ (44) $\displaystyle+$ 
$\displaystyle\left(\begin{array}[]{c}{\mathcal{B}}^{+EE}_{\ell\ell^{\prime}}+({\mathcal{B}}^{+EE}_{\ell\ell^{\prime}})^{*}+{\mathcal{B}}^{-BE}_{\ell\ell^{\prime}}+({\mathcal{B}}^{-BE}_{\ell\ell^{\prime}})^{*}\\\ {\mathcal{B}}^{+EB}_{\ell\ell^{\prime}}+({\mathcal{B}}^{+BE}_{\ell\ell^{\prime}})^{*}+{\mathcal{B}}^{-BB}_{\ell\ell^{\prime}}+({\mathcal{B}}^{-EE}_{\ell\ell^{\prime}})^{*}\\\ {\mathcal{B}}^{+BB}_{\ell\ell^{\prime}}+({\mathcal{B}}^{+BB}_{\ell\ell^{\prime}})^{*}-{\mathcal{B}}^{-EB}_{\ell\ell^{\prime}}-({\mathcal{B}}^{-EB}_{\ell\ell^{\prime}})^{*}\end{array}\right).$ (48) The ${\mathcal{M}}$, ${\mathcal{N}}$ and ${\mathcal{B}}$ terms are defined in equation (2.2) in the main body of the text. ## Appendix C The Tomographic Case In equation (2.2) we consider the case of a single population of galaxies, however in reality one typically will define several populations labelled as tomographic bins normally delineated in redshift. In this case equation (1) is labelled with a tomographic bin $i$ such that $\displaystyle\widetilde{\gamma}_{i}(\mathbf{\Omega})=[1+m_{0,i}+m_{i}(\mathbf{\Omega})]\gamma_{i}(\mathbf{\Omega})+[c_{0,i}(1+{\rm i})+c_{i}(\mathbf{\Omega})]$ (49) and the power spectra are defined as $\displaystyle\widetilde{C}^{EE}_{\ell,ij}$ $\displaystyle\equiv$ $\displaystyle\frac{1}{2\ell+1}\sum_{m}\widetilde{\gamma}^{E}_{\ell m,i}\widetilde{\gamma}^{E,*}_{\ell m,j}$ $\displaystyle\widetilde{C}^{BB}_{\ell,ij}$ $\displaystyle\equiv$ $\displaystyle\frac{1}{2\ell+1}\sum_{m}\widetilde{\gamma}^{B}_{\ell m,i}\widetilde{\gamma}^{B,*}_{\ell m,j}$ $\displaystyle\widetilde{C}^{EB}_{\ell,ij}$ $\displaystyle\equiv$ $\displaystyle\frac{1}{(2\ell+1)}\sum_{m}\widetilde{\gamma}^{E}_{\ell m,i}\widetilde{\gamma}^{B,*}_{\ell m,j}$ $\displaystyle\widetilde{C}^{BE}_{\ell,ij}$ $\displaystyle\equiv$ $\displaystyle\frac{1}{(2\ell+1)}\sum_{m}\widetilde{\gamma}^{B}_{\ell m,i}\widetilde{\gamma}^{E,*}_{\ell m,j},$ (50) where $i$ and $j$ label tomographic bins. 
We use notation where $C^{XY}_{\ell,ij}$ means that field $X$ is associated with $i$, and field $Y$ is associated with $j$. We note that in this case there is a difference between the $EB$ power spectrum and the $BE$ power spectrum for tomographic bins $ij$. The full expression in expanded form is then: $\displaystyle\left(\begin{array}[]{c}\widetilde{C}^{EE}_{\ell,ij}\\\ \widetilde{C}^{EB}_{\ell,ij}\\\ \widetilde{C}^{BE}_{\ell,ij}\\\ \widetilde{C}^{BB}_{\ell,ij}\end{array}\right)$ $\displaystyle=$ $\displaystyle(1+m_{0,i}+m_{0,j}+m_{0,i}m_{0,j})\left(\begin{array}[]{c}C^{EE}_{\ell,ij}\\\ C^{EB}_{\ell,ij}\\\ C^{BE}_{\ell,ij}\\\ C^{BB}_{\ell,ij}\end{array}\right)$ (59) $\displaystyle+$ $\displaystyle(1+m_{0,j})\left(\begin{array}[]{cccc}{\mathcal{N}}^{+}_{\ell,i}&0&{\mathcal{N}}^{-}_{\ell,i}&0\\\ 0&-{\mathcal{N}}^{+}_{\ell,i}&0&{\mathcal{N}}^{-}_{\ell,i}\\\ -{\mathcal{N}}^{-}_{\ell,i}&0&{\mathcal{N}}^{+}_{\ell,i}&0\\\ 0&-{\mathcal{N}}^{-}_{\ell,i}&0&{\mathcal{N}}^{+}_{\ell,i}\\\ \end{array}\right)\left(\begin{array}[]{c}C^{EE}_{\ell,ij}\\\ C^{EB}_{\ell,ij}\\\ C^{BE}_{\ell,ij}\\\ C^{BB}_{\ell,ij}\end{array}\right)$ (68) $\displaystyle+$ $\displaystyle(1+m_{0,i})\left(\begin{array}[]{cccc}{\mathcal{N}}^{+,*}_{\ell,j}&{\mathcal{N}}^{-,*}_{\ell,j}&0&0\\\ -{\mathcal{N}}^{-,*}_{\ell,j}&{\mathcal{N}}^{+,*}_{\ell,j}&0&0\\\ 0&0&{\mathcal{N}}^{+,*}_{\ell,j}&{\mathcal{N}}^{-,*}_{\ell,j}\\\ 0&0&-{\mathcal{N}}^{-,*}_{\ell,j}&{\mathcal{N}}^{+,*}_{\ell,j}\\\ \end{array}\right)\left(\begin{array}[]{c}C^{EE}_{\ell,ij}\\\ C^{EB}_{\ell,ij}\\\ C^{BE}_{\ell,ij}\\\ C^{BB}_{\ell,ij}\end{array}\right)$ (77) $\displaystyle+$ $\displaystyle(1+m_{0,j})\left(\begin{array}[]{c}C^{c_{E}E}_{\ell,ij}\\\ C^{c_{E}B}_{\ell,ij}\\\ C^{c_{B}E}_{\ell,ij}\\\ C^{c_{B}B}_{\ell,ij}\end{array}\right)+(1+m_{0,i})\left(\begin{array}[]{c}C^{Ec_{E}}_{\ell,ij}\\\ C^{Ec_{B}}_{\ell,ij}\\\ C^{Bc_{E}}_{\ell,ij}\\\ C^{Bc_{B}}_{\ell,ij}\end{array}\right)+\left(\begin{array}[]{c}C^{c_{E}c_{E}}_{\ell,ij}\\\ 
C^{c_{E}c_{B}}_{\ell,ij}\\\ C^{c_{B}c_{E}}_{\ell,ij}\\\ C^{c_{B}c_{B}}_{\ell,ij}\end{array}\right)$ (90) $\displaystyle+$ $\displaystyle\sum_{\ell^{\prime}}\left(\begin{array}[]{cccc}{\mathcal{M}}^{++}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{+-}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{-+}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{--}_{\ell\ell^{\prime},ij}\\\ -{\mathcal{M}}^{+-}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{++}_{\ell\ell^{\prime},ij}&-{\mathcal{M}}^{--}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{-+}_{\ell\ell^{\prime},ij}\\\ -{\mathcal{M}}^{-+}_{\ell\ell^{\prime},ij}&-{\mathcal{M}}^{--}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{++}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{+-}_{\ell\ell^{\prime},ij}\\\ {\mathcal{M}}^{--}_{\ell\ell^{\prime},ij}&-{\mathcal{M}}^{++}_{\ell\ell^{\prime},ij}&-{\mathcal{M}}^{+-}_{\ell\ell^{\prime},ij}&{\mathcal{M}}^{++}_{\ell\ell^{\prime},ij}\\\ \end{array}\right)\left(\begin{array}[]{c}C^{EE}_{\ell^{\prime},ij}\\\ C^{EB}_{\ell^{\prime},ij}\\\ C^{BE}_{\ell^{\prime},ij}\\\ C^{BB}_{\ell^{\prime},ij}\end{array}\right)$ (99) $\displaystyle+$ $\displaystyle\left(\begin{array}[]{c}{\mathcal{B}}^{+EE}_{\ell\ell^{\prime},ij}+({\mathcal{B}}^{+EE}_{\ell\ell^{\prime},ji})^{*}+{\mathcal{B}}^{-BE}_{\ell\ell^{\prime},ij}+({\mathcal{B}}^{-BE}_{\ell\ell^{\prime},ji})^{*}\\\ {\mathcal{B}}^{+EB}_{\ell\ell^{\prime},ij}+({\mathcal{B}}^{+BE}_{\ell\ell^{\prime},ji})^{*}+{\mathcal{B}}^{-BB}_{\ell\ell^{\prime},ij}+({\mathcal{B}}^{-EE}_{\ell\ell^{\prime},ji})^{*}\\\ ({\mathcal{B}}^{+EB}_{\ell\ell^{\prime},ji})^{*}+{\mathcal{B}}^{+BE}_{\ell\ell^{\prime},ij}+({\mathcal{B}}^{-BB}_{\ell\ell^{\prime},ji})^{*}+{\mathcal{B}}^{-EE}_{\ell\ell^{\prime},ij}\\\ {\mathcal{B}}^{+BB}_{\ell\ell^{\prime},ij}+({\mathcal{B}}^{+BB}_{\ell\ell^{\prime},ji})^{*}-{\mathcal{B}}^{-EB}_{\ell\ell^{\prime},ij}-({\mathcal{B}}^{-EB}_{\ell\ell^{\prime},ji})^{*}\end{array}\right).$ (104) The various matrices in the above expression are $\displaystyle{\mathcal{M}}^{XY}_{\ell\ell^{\prime},ij}$ 
$\displaystyle=$ $\displaystyle\frac{1}{2\ell+1}\sum_{mm^{\prime}}W^{X}_{\ell\ell^{\prime}mm^{\prime},i}(W^{Y}_{\ell\ell^{\prime}mm^{\prime},j})^{*}$ $\displaystyle{\mathcal{N}}^{X}_{\ell,i}$ $\displaystyle=$ $\displaystyle\frac{1}{2\ell+1}\sum_{m}W^{X}_{\ell\ell mm,i}$ $\displaystyle{\mathcal{B}}^{XGH}_{\ell\ell^{\prime},ij}$ $\displaystyle=$ $\displaystyle\frac{1}{2\ell+1}\sum_{mm^{\prime}}W^{X}_{\ell\ell^{\prime}mm^{\prime},i}\gamma^{G}_{\ell^{\prime}m^{\prime},i}(c^{H}_{\ell m,j})^{*},$ (105) where $X=(+,-)$, $Y=(+,-)$, $G=(E,B)$ and $H=(E,B)$. In this case the linearised expressions, assuming no underlying B-modes or EB power, are $\displaystyle\widetilde{C}^{EE}_{\ell,ij}$ $\displaystyle\approx$ $\displaystyle(1+m_{0,i}+m_{0,j})C^{EE}_{\ell,ij}+\langle m^{R}_{i}\rangle C^{EE}_{\ell,ij}+\langle m^{R}_{j}\rangle C^{EE}_{\ell,ij}+C^{c_{E}E}_{\ell,ij}+C^{Ec_{E}}_{\ell,ij},$ $\displaystyle\widetilde{C}^{EB}_{\ell,ij}$ $\displaystyle\approx$ $\displaystyle-\langle m^{I}_{j}\rangle C^{EE}_{\ell,ij}+C^{Ec_{B}}_{\ell,ij},$ $\displaystyle\widetilde{C}^{BE}_{\ell,ij}$ $\displaystyle\approx$ $\displaystyle-\langle m^{I}_{i}\rangle C^{EE}_{\ell,ij}+C^{c_{B}E}_{\ell,ij},$ $\displaystyle\widetilde{C}^{BB}_{\ell,ij}$ $\displaystyle\approx$ $\displaystyle 0.$ (106)
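As a consistency check (our own sketch, not part of the original derivation), the leading-order behaviour above follows directly from equation (49): writing $m_{i}(\mathbf{\Omega})=m^{R}_{i}+{\rm i}\,m^{I}_{i}$ and keeping only terms linear in the systematics, with sign conventions chosen to match equation (106),

```latex
% First-order expansion of equation (49) in harmonic space
% (assuming no underlying B-modes or EB power):
\widetilde{\gamma}^{E}_{\ell m,i} \approx
    \left(1+m_{0,i}+\langle m^{R}_{i}\rangle\right)\gamma^{E}_{\ell m,i}
    + c^{E}_{\ell m,i},
\qquad
\widetilde{\gamma}^{B}_{\ell m,i} \approx
    -\langle m^{I}_{i}\rangle\,\gamma^{E}_{\ell m,i} + c^{B}_{\ell m,i}.
% Forming the BE spectrum then reproduces the linearised result:
\widetilde{C}^{BE}_{\ell,ij}
  = \frac{1}{2\ell+1}\sum_{m}\widetilde{\gamma}^{B}_{\ell m,i}
    \widetilde{\gamma}^{E,*}_{\ell m,j}
  \approx -\langle m^{I}_{i}\rangle C^{EE}_{\ell,ij} + C^{c_{B}E}_{\ell,ij}.
```

The real part of the multiplicative bias rescales the EE spectrum, while the imaginary part leaks E-modes into the B-mode channel of the corresponding bin.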
Academia Sinica (2014-2020). Dr. Hsu received his MS in Computer Science and Information Engineering from the National Taiwan Normal University (2010) and Ph.D. in Electrical Engineering from the National Taiwan University (2019).

Ming-Ching Chang is an Associate Professor at the Department of Computer Science, University at Albany, SUNY. His expertise includes AI, video analytics, computer vision, and machine learning. His research projects are funded by DARPA, IARPA, NIJ, VA, and GE Global Research. Dr. Chang frequently serves as program chair, area chair, and referee for leading journals and conferences. He has chaired the steering committee of the IEEE AVSS Conference since 2022, having served as a committee member since 2017. He has authored more than 100 peer-reviewed journal and conference publications, 7 US patents, and 15 disclosures. He is a senior member of IEEE and a member of ACM.

Wei-Chao Chen is the Chief Digital Officer at Inventec Corporation and the Chairman at Skywatch Inc. Dr. Chen is also a Visiting Professor at the National Taiwan University. His research interests include graphics hardware, computational photography, augmented reality, and computer vision. Dr. Chen was the Chief AI Advisor at Inventec (2018-2020), a senior research scientist at Nokia Research Center in Palo Alto (2007-2009), and a 3D Graphics Architect at NVIDIA (2002-2006). Dr. Chen received his MS in Electrical Engineering from National Taiwan University (1996) and Ph.D. in Computer Science from the University of North Carolina at Chapel Hill (2002).
# Extending Word-Level Quality Estimation for Post-Editing Assistance

Yizhen Wei† Takehito Utsuro† Masaaki Nagata‡ †Deg. Prog. Sys.&Inf. Eng., Grad. Sch. Sci.&Tech., University of Tsukuba ‡NTT Communication Science Laboratories, NTT Corporation, Japan

###### Abstract

We define a novel concept called extended word alignment in order to improve post-editing assistance efficiency. Based on extended word alignment, we further propose a novel task called refined word-level QE that outputs refined tags and word-level correspondences. Compared to original word-level QE, the new task is able to directly point out editing operations, thus improving efficiency. To extract extended word alignment, we adopt a supervised method based on mBERT. To solve refined word-level QE, we first predict original QE tags by training a regression model for sequence tagging based on mBERT and XLM-R. Then, we refine the original word tags with extended word alignment. In addition, we extract source-gap correspondences and, in doing so, obtain gap tags. Experiments on two language pairs show the feasibility of our method and suggest directions for further improvement.

## 1 Introduction

Post-editing refers to the process of editing a rough machine-translated sentence (referred to as MT) into a correct one. Compared with conventional statistical machine translation (Koehn et al., 2003), neural machine translation (Cho et al., 2014; Sutskever et al., 2014; Vaswani et al., 2017) can generate translations with high accuracy. However, Yamada (2019) suggested that there is no significant difference in terms of cognitive load for one to post-edit an MT even if it has high quality. Therefore, post-editing assistance is sorely needed. Traditional post-editing assistance methods leave room for improvement. A typical method is word-level QE (Specia et al., 2020), which predicts tags expressed in the form of OK or BAD.
However, such a dualistic judgement is not efficient enough because the meaning of BAD is ambiguous.

Figure 1: A comparison between original word-level QE (a) and our proposal, refined word-level QE (b). Correspondences between REP tags are drawn in red and those between INS tags in purple.

Word alignment has also proved helpful for post-editing assistance. Schwartz et al. (2015) demonstrated that displaying word alignment statistically significantly improves post-editing quality. However, unlike QE tags, word alignment cannot tell where translation errors are. Besides, it is non-trivial to extract word alignment between a source sentence and its MT. Schwartz et al. (2015) used a built-in function of Moses (Koehn et al., 2007), a decoder for statistical machine translation that is no longer suitable for neural models. In this paper, we propose a novel concept called extended word alignment. In extended word alignment, we include incorrect word translations and null alignment between a source sentence and MT. We adopt a supervised method based on pre-trained language models to extract it. Based on extended word alignment, we further propose a novel task called refined word-level QE which outputs refined tags including REP, INS, and DEL along with word-level correspondences. By referring to this information, post-editors can immediately see which operations (replacement, insertion, or deletion in the MT) to perform. Thus, we believe that refined word-level QE can significantly improve post-editing assistance efficiency. Methodologically, we first predict original word tags by training regression models for sequence tagging based on architectures such as multilingual BERT (Devlin et al., 2019) (mBERT) and XLM-RoBERTa (Conneau et al., 2020) (XLM-R). Then, we refine the original word tags by incorporating extended word alignment in a rule-based manner.
In addition, we adopt a method similar to the one for extended word alignment to extract source-gap correspondences and then determine gap tags. Experiments on En-De and En-Zh datasets are conducted. Results show that our method significantly outperforms the baseline. For En-De, our best performance outperforms the baseline by 12.9% and 6.0% respectively in terms of mean F1 scores for source and MT word refined tags. For En-Zh, the gap reaches 48.9% and 16.9%. Furthermore, we discuss the effectiveness and limitations of our method with specific cases.

## 2 Related Work

Word Alignment Extraction. Methods based on statistical models (Brown et al., 1993; Och and Ney, 2003; Dyer et al., 2013) were long the dominant approach to word alignment extraction. In recent years, neural methods have developed quickly. Garg et al. (2019) tried to obtain word alignment from the attention inside a transformer (Vaswani et al., 2017), but their method performs only about as well as statistical tools like GIZA++ (Och and Ney, 2003). Dou and Neubig (2021) utilized multilingual BERT to extract embeddings of all words conditioned on context, aligning them under the restriction of optimal transport (Kusner et al., 2015). Nagata et al. (2020) utilized pre-trained language models in a supervised manner and achieved a significant improvement over previous studies with only around 300 parallel sentence pairs for fine-tuning. In our work, we adapt their approach from ordinary word alignment to extended word alignment. Details will be introduced in Section 4.1. Word-Level QE. One of the conventional architectures for word-level QE is the LSTM-based predictor-estimator (Kim and Lee, 2016; Zhang and Weiss, 2016; Kim et al., 2017). More recent research (Wang et al., 2020) adopted newer architectures such as the transformer (Vaswani et al., 2017). Among modern methods, a typical example is QE BERT (Kim et al., 2019).
They built an mBERT for classification with explicit gap tokens in the input sequence, but we find that regression models with an adjustable threshold consistently outperform classification models and that explicit gap tokens harm final performance. A more recent study (Lee, 2020) adopted XLM-R rather than mBERT, but did not explain its strategy for determining a threshold. All the methods above require large-scale third-party parallel data for pre-training. In contrast, our method introduced in Section 4.2 achieves acceptable performance at small cost. Post-Editing User Interface. Nayek et al. (2015) depicted an interface where words that need editing are displayed in different colors. Schwartz et al. (2015) emphasized the importance of displaying the word alignment. Neither interface indicates the correctness of the translations of the MT words. Compared to them, the interface we envisage provides information about translation quality (correctness) as well as suggestions of specific post-editing operations. Some other studies (Herbig et al., 2020; Jamara et al., 2021) have tried to introduce multiple modalities, including touch, speech, and hand gestures, into the post-editing user interface, improving efficiency from another perspective.

## 3 Refined Word-Level QE for Post-Editing Assistance

### 3.1 Original Word-Level QE

According to Specia et al. (2020), word-level QE, shown in Figure 1(a), is a task that takes a source sentence and its machine-translated counterpart (MT) as input. It then outputs tags for source words, MT words, and gaps between MT words (MT gaps). (For convenience, source tags and MT word tags are collectively known as word tags; MT word tags and MT gap tags are collectively known as MT tags.) All those tags are expressed either as OK or BAD. BAD indicates potential translation errors that post-editors should correct. We refer to such a task as original word-level QE.
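As a concrete illustration, the task's input and output can be represented as parallel tag sequences. The snippet below is our own invention, loosely modeled on the example in Figure 1; the sentences and tag values are hypothetical.

```python
# Hypothetical input/output of original word-level QE, loosely based on
# Figure 1 ("黑" (black) mistranslates "white"; "and dogs" is untranslated;
# "吗" is redundant).
source = ["Do", "you", "like", "white", "cats", "and", "dogs", "?"]
mt = ["你", "喜欢", "黑", "猫", "吗"]

# One tag per source word and per MT word; BAD marks a potential error.
source_tags = ["OK", "OK", "OK", "BAD", "OK", "BAD", "BAD", "OK"]
mt_word_tags = ["OK", "OK", "BAD", "OK", "BAD"]

# One tag per gap: a gap before each MT word plus one after the last word.
mt_gap_tags = ["OK"] * (len(mt) + 1)
mt_gap_tags[4] = "BAD"  # a translation of "and dogs" should be inserted here

assert len(mt_gap_tags) == len(mt) + 1
```

Note that the three BAD word tags encode very different repairs (replace, insert, delete), yet the tag values themselves are indistinguishable, which is exactly the ambiguity discussed next.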
Original word-level QE is not efficient enough for post-editing assistance because BAD is ambiguous. For example, in Figure 1(a), the tag of “white” indicates a replacement of the mistranslation “黑” (black), but the tag of “dogs” indicates an insertion into the gap between “猫” and “吗”. It is impossible to distinguish between these indications unless one attends to both entire sentences, which defeats the purpose of post-editing assistance.

### 3.2 Extended Word Alignment

We formally define a novel concept called extended word alignment between a source sentence and its MT. Ordinary word alignment indicates word-to-word relations between a pair of semantically equivalent sentences in two languages. In theory, any word can be aligned with a semantically equivalent word on the other side. In contrast, extended word alignment takes translation errors in the MT into account. Specifically, a source word is allowed to be aligned with its mistranslation (wrong word choice), and a word is allowed to be aligned with nothing, i.e., to be null-aligned.

### 3.3 Refined Word-Level QE

Extended word alignment can disambiguate BAD tags, overcoming the disadvantage of original word-level QE. When a BAD-tagged source word is aligned with a BAD-tagged MT word, it is clear that a replacement is needed. Likewise, a null-aligned BAD-tagged source word indicates an insertion, and a null-aligned BAD-tagged MT word indicates a deletion. To make our idea more user-friendly, we formally propose a novel task called refined word-level QE by incorporating extended word alignment with original word-level QE. Besides extended word alignment, the following refined tags are also included as objectives.

* • REP is assigned to a source word and its mistranslation (wrong word choice) in MT, indicating a replacement.
* • INS is assigned to a source word and the gap where its translation should be inserted, indicating an insertion.
* • DEL is assigned to a redundant MT word, indicating a deletion.
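A minimal sketch of these refinement rules (our own illustration; the function and variable names are not from the paper):

```python
def refine_tags(src_tags, mt_tags, alignment):
    """Refine original OK/BAD tags into OK/REP/INS/DEL using extended
    word alignment, given as a list of (src_index, mt_index) pairs.
    Indices absent from the alignment are treated as null-aligned."""
    src_aligned = {i for i, _ in alignment}
    mt_aligned = {j for _, j in alignment}
    refined_src, refined_mt = list(src_tags), list(mt_tags)
    for i, j in alignment:
        # An aligned pair with either side BAD indicates a replacement;
        # per Section 4.3, an aligned OK tag is first changed to BAD.
        if src_tags[i] == "BAD" or mt_tags[j] == "BAD":
            refined_src[i], refined_mt[j] = "REP", "REP"
    for i, tag in enumerate(src_tags):
        # A null-aligned BAD source word: its translation must be inserted.
        if tag == "BAD" and i not in src_aligned:
            refined_src[i] = "INS"
    for j, tag in enumerate(mt_tags):
        # A null-aligned BAD MT word is redundant and should be deleted.
        if tag == "BAD" and j not in mt_aligned:
            refined_mt[j] = "DEL"
    return refined_src, refined_mt
```

In the Figure 1 example, a BAD “white” aligned with a BAD “黑” would become REP on both sides, while the null-aligned BAD “吗” would become DEL.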
In addition, we include correspondences between INS-tagged source words and MT gaps to express the insertion points. Those source-gap correspondences, along with extended word alignment, are collectively referred to as word-level correspondences. Figure 1(b) is an example of our proposal. Compared with Figure 1(a), Figure 1(b) points out the replacement of “黑” (black), the insertion of “and” and “dogs” at the insertion point, and the deletion of “吗” (an interrogative auxiliary).

## 4 Methodology

(Besides the current method, we also tried using a unified model based on architectures like XLM-R to directly predict refined tags (OK/REP/INS/DEL) and word-level correspondences. However, due to the lack of training data and the complexity of the problem, the direct approach did not work well. Therefore, we decided to adopt this multiple-phase approach.)

### 4.1 Extended Word Alignment Extraction

Figure 2: Extracting extended word alignment with mBERT. A word is aligned with the [CLS] token if it is null-aligned.

Extracting extended word alignment is non-trivial. Traditional unsupervised statistical tools (Och and Ney, 2003; Dyer et al., 2013) cannot work well because they expect semantically equivalent sentence pairs as input. After trying several neural methods (Garg et al., 2019; Dou and Neubig, 2021), we empirically adopt the supervised method proposed by Nagata et al. (2020). Specifically, extended word alignment extraction is regarded as a cross-lingual span prediction problem similar to the paradigm that utilizes BERT (Devlin et al., 2019) for SQuAD v2.0 (Rajpurkar et al., 2018). mBERT is used as the basic architecture. Given a source sentence with one word marked, $S=[s_{1},s_{2},...,M,s_{i},M,...,s_{m}]$ ($M$ stands for a special mark token), and the MT $T=[t_{1},t_{2},...,t_{n}]$, mBERT is trained to identify a span $T_{(j,k)}=[t_{j},...,t_{k}](1\leq j\leq k\leq n)$ that is aligned with the marked source word $s_{i}$. Cross-entropy loss is adopted during training.
$\mathcal{L}^{align}_{s_{i}}=-\frac{1}{2}[\mathrm{log}(p_{j}^{start})+\mathrm{log}(p_{k}^{end})]$

Because of the symmetry of word alignment, the same operations are performed again in the opposite direction. During testing, following Nagata et al. (2020), we recognize word pairs whose mean probability over both directions is greater than 0.4 as valid word alignments. The model is illustrated in Figure 2. Nagata et al. (2020) demonstrated that the mBERT-based method significantly outperforms statistical methods in ordinary word alignment extraction. According to them, extracting word alignment for each word independently is the key to outperforming other methods. Traditional methods model word alignment as a joint distribution, so that an incorrect previous alignment might cause further incorrect alignments like dominoes. Our experiments show that their method consistently works for extended word alignment.

### 4.2 Original Word Tag Prediction

Figure 3: Determining original word tags with pre-trained language models.

For original tags, we conduct sequence tagging with multilingual pre-trained language models including mBERT and XLM-R. Figure 3 illustrates the architecture. The input sequence is organized in the format “[CLS] source sentence [SEP] MT [SEP]” without any mark tokens. Two linear layers followed by a Sigmoid function transform the output vectors into scalar probabilities of being BAD for each token. Formally, for a source sentence $S=[s_{1},s_{2},...,s_{i},...,s_{m}]$ and an MT $T=[t_{1},t_{2},...,t_{j},...,t_{n}]$, the total loss is the mean of the binary cross-entropy over all word tags.
$\mathcal{L}^{tag}_{s_{i}}=-[y_{s_{i}}\mathrm{log}(p_{s_{i}})+(1-y_{s_{i}})\mathrm{log}(1-p_{s_{i}})]$ $\mathcal{L}^{tag}_{t_{j}}=-[y_{t_{j}}\mathrm{log}(p_{t_{j}})+(1-y_{t_{j}})\mathrm{log}(1-p_{t_{j}})]$ $\mathcal{L}^{tag}=\frac{1}{m+n}(\sum\limits_{i=1}^{m}\mathcal{L}^{tag}_{s_{i}}+\sum\limits_{j=1}^{n}\mathcal{L}^{tag}_{t_{j}})$

We have also implemented our models with classification top-layers (a classification top-layer refers to a binary classification linear layer with Softmax), but we find that regression models are consistently better, since we can adopt a flexible threshold to offset the bias caused by the imbalance of reference tags.

### 4.3 Word Tag Refinement and Gap Tag Prediction

Figure 4: Refining the word tags by using extended word alignment.

We use extended word alignment to refine the original word tags. Following the rules described in Section 3.3, we refine word tags as Figure 4 shows. In practical situations, some BAD-tagged words are likely to be aligned with OK-tagged words. In that case, we change OK into BAD, encouraging the generation of more REP tags.

Figure 5: Determining gap tags by extracting source-gap alignments with mBERT.

For gap tags, we adopt a method similar to the one described in Section 4.1. Specifically, we model source-gap correspondences as alignment between source words and MT gaps. We train a model that aligns an INS-tagged source word with the two-word span in MT that surrounds the corresponding gap. Figure 5 illustrates the idea. During testing, when a valid source-gap correspondence is confirmed, we tag the MT gap as INS. (As for the source words involved, we do not change their tags and trust the refinement based on extended word alignment, because we believe extended word alignment is easier to model.) It would be natural to determine gap tags based on the INS-tagged source word predictions from the preceding workflow. However, in experiments, we noticed that the absolute accuracy of INS-tagged source words is not high.
In order not to be influenced by earlier wrong predictions, we conduct this task independently instead of treating it as a downstream one.

## 5 Experiment

| System | En-De Source MCC | En-De MT MCC | En-Zh Source MCC | En-Zh MT MCC |
|---|---|---|---|---|
| OpenKiwi | 0.266 | 0.358 | 0.248 | 0.520 |
| mBERT-cls | 0.314 | 0.419 | 0.309 | 0.555 |
| mBERT | 0.340 | 0.457 | 0.357 | 0.570 |
| XLM-R-cls | 0.326 | 0.446 | 0.330 | 0.579 |
| XLM-R | 0.345 | 0.453 | 0.354 | 0.592 |
| WMT20 Top | 0.523 (Wang et al., 2020) | 0.597 (Lee, 2020) | 0.336 (Rubino, 2020) | 0.610 (Hu et al., 2020) |

Table 1: MCC of original tags. All MT gap tags of our systems are set to OK. For En-De, unlike the top systems that employ large-scale third-party resources, we achieve acceptable performance using only the QE dataset.

Figure 6: ROC curves and AUC of the baseline and our systems (* indicates that the model outperforms the baseline (OpenKiwi) with statistical significance, p<0.01).

### 5.1 Data and Experimental Setups

We make full use of the En-De and En-Zh datasets of the shared task on original word-level QE in WMT20 (http://www.statmt.org/wmt20/quality-estimation-task.html). There are 7,000, 1,000, and 1,000 sentence pairs with tag annotations for the training, development, and test sets respectively. Since the original datasets do not contain refined objectives, we additionally annotate the original development sets with all the objectives for refined word-level QE. Those annotated 1,000 pairs are further divided into 200 pairs for evaluation and 800 pairs for fine-tuning.

| Extended Word Align. | Source-Gap Corr. | En-De (F1/P/R) | En-Zh (F1/P/R) |
|---|---|---|---|
| FastAlign | mBERT | 0.828/0.812/0.844 | 0.739/0.773/0.709 |
| AWESoME | mBERT | 0.891/0.915/0.868 | 0.814/0.871/0.764 |
| mBERT | mBERT | 0.895/0.917/0.875 | 0.836/0.888/0.790 |
| ft-mBERT | ft-mBERT | 0.916/0.913/0.918 | 0.888/0.887/0.889 |

Table 2: Evaluation of word-level correspondences.
“mBERT” indicates mBERT trained with the 7,000-pair pseudo data and “ft-mBERT” indicates mBERT further fine-tuned with the 800-pair data.

All the experiments are conducted with modified scripts from transformers-v3.3.1 (https://github.com/huggingface/transformers) on an NVIDIA TITAN RTX (24GB) with CUDA 10.1. For pre-trained models, we use bert-base-multilingual-cased for mBERT and xlm-roberta-large for XLM-R from Huggingface. To train the model for original tags described in Section 4.2, we use the 7,000-pair training set provided by WMT20. 800 pairs of manually annotated data, whose refined tags are reduced to original tags, are used for further training. The learning rate is set to 3e-5 and 1e-5 for mBERT and XLM-R respectively, and both models are trained for 5 epochs. All other configurations remain at their defaults. To train the models extracting extended word alignment described in Section 4.1, we utilize AWESoME (Dou and Neubig, 2021) to generate pseudo alignment data based on the 7,000-pair WMT20 training set. We also use the extra 800 sentence-pair annotated alignment data for fine-tuning. Models are pre-trained for 2 epochs and fine-tuned for 5 epochs with a learning rate of 3e-5. Most configurations remain at their defaults, but max_seq_length and max_ans_length are set to 160 and 15 following Nagata et al. (2020). To train the model extracting source-gap correspondences described in Section 4.3, we similarly start from the 7,000 sentence-pair WMT20 training set, generating pseudo data by randomly dropping out some target words in PE (the post-edited sentence provided for each MT in the WMT20 dataset, regarded as the correct translation). Then we link the gaps where words are dropped with their source counterparts according to the source-PE alignment extracted by AWESoME. Also, 800 sentence pairs of gold source-gap correspondences are used for fine-tuning.
All model configurations and training settings are kept identical to those of the model for extended word alignment extraction.

### 5.2 Experimental Results

#### 5.2.1 Evaluation of Original Tags

We first compare our performance with the other participants of WMT20. Therefore, we use the identical test sets for evaluation and only use data from the original WMT20 training set to train our models here. Following WMT20 (Specia et al., 2020), we adopt the Matthews correlation coefficient (MCC) as the metric. From the perspective of the competition, we make every effort to boost performance. Thus, we set all gap tags to OK rather than predicting them, as we find this strategy leads to the best MCC. The results are shown in Table 1. In general, pre-trained language models consistently outperform the baseline, an LSTM-based predictor-estimator implemented with OpenKiwi. For En-De, our best source and MT MCC would have ranked sixth on the leaderboard of WMT20. For En-Zh, our best source and MT MCC would have ranked first and second on the leaderboard of WMT20. It is also noteworthy that regression models consistently outperform the classification models (suffixed “-cls”). For regression models, we search for an optimized threshold that maximizes the sum of source and MT MCC on the development set and adopt it on the test set to determine tags. To exclude errors caused by a single optimized threshold, we further draw the ROC curves and AUC in Figure 6. The results demonstrate that our regression models based on mBERT and XLM-R statistically significantly outperform the baseline. For En-De, Wang et al. (2020) and Lee (2020) both used large-scale third-party data (Wang et al. (2020) used parallel data from the WMT20 news translation task to pre-train a predictor, and Lee (2020) generated 11 million pairs of pseudo QE data from 23 million sentence pairs).
Besides the top two, the third system (Rubino, 2020) was also pre-trained with 5 million sentence pairs but obtained 0.357 and 0.485 respectively. Therefore, we believe that we achieve acceptable performance at very small cost.

#### 5.2.2 Evaluation of Word-Level Correspondences

We evaluate extended word alignment and source-gap correspondences jointly as word-level correspondences. The results are shown in Table 2. The two baselines (“FastAlign” and “AWESoME”) cannot predict source-gap correspondences since they are designed for ordinary word alignment. We combine their extended word alignment with the source-gap correspondences predicted by “mBERT” for a fair comparison. All predictions are evaluated by F1 score as well as precision and recall. Neural methods significantly outperform the statistical “FastAlign”. The gap of 0.4% for En-De and 2.2% for En-Zh between “AWESoME” and “mBERT” is not significant, but it might imply that pre-trained language models like mBERT are able to filter noise in the pseudo data and produce high-quality word-level correspondences. Additionally, the better performance of the fine-tuned mBERT indicates that the upper bound could be higher if more annotated data were available.
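The development-set threshold search used for the regression models in Section 5.2.1 can be sketched as follows. This is our own minimal implementation; the grid of candidate thresholds and the function names are assumptions, not taken from the paper.

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary tags (1 = BAD, 0 = OK);
    returns 0.0 when the denominator degenerates."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def search_threshold(src_probs, src_true, mt_probs, mt_true,
                     grid=np.linspace(0.05, 0.95, 19)):
    """Pick the threshold on the development set that maximizes the sum of
    source and MT MCC; the same threshold is then applied at test time."""
    best_t, best_score = float(grid[0]), -np.inf
    for t in grid:
        score = (mcc(src_true, (src_probs >= t).astype(int))
                 + mcc(mt_true, (mt_probs >= t).astype(int)))
        if score > best_score:
            best_t, best_score = float(t), score
    return best_t
```

A per-sentence variant of this search is what Section 6.1 later suggests as future work, since one global threshold can be far from optimal for individual sentence pairs.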
#### 5.2.3 Evaluation of Refined Tags

(a) En-De results

| Extended Word Alignment | Original QE Tags | Mean Source F1 (OK/REP/INS) | Mean MT F1 (OK/REP/DEL/INS) |
|---|---|---|---|
| FastAlign | OpenKiwi | 0.626 (0.696/0.492/0.174) | 0.767 (0.847/0.477/0.124/0.156) |
| AWESoME | OpenKiwi | 0.708 (0.781/0.549/0.373) | 0.807 (0.879/0.548/0.395/0.156) |
| mBERT | mBERT | 0.739 (0.825/0.540/0.421) | 0.820 (0.895/0.544/0.389/0.156) |
| mBERT | XLM-R | 0.709 (0.781/0.548/0.410) | 0.809 (0.879/0.522/0.415/0.156) |
| ft-mBERT | rt-mBERT | 0.755 (0.850/0.538/0.400) | 0.827 (0.904/0.535/0.347/0.175) |
| ft-mBERT | rt-XLM-R | 0.685 (0.748/0.544/0.431) | 0.805 (0.871/0.538/0.580/0.175) |

(b) En-Zh results

| Extended Word Alignment | Original QE Tags | Mean Source F1 (OK/REP/INS) | Mean MT F1 (OK/REP/DEL/INS) |
|---|---|---|---|
| FastAlign | OpenKiwi | 0.360 (0.379/0.280/0.071) | 0.728 (0.781/0.276/0.173/0.042) |
| AWESoME | OpenKiwi | 0.371 (0.391/0.285/0.066) | 0.733 (0.786/0.280/0.202/0.042) |
| mBERT | mBERT | 0.836 (0.914/0.446/0.020) | 0.891 (0.947/0.441/0.316/0.042) |
| mBERT | XLM-R | 0.843 (0.929/0.410/0.018) | 0.895 (0.955/0.402/0.275/0.042) |
| ft-mBERT | rt-mBERT | 0.848 (0.929/0.447/0.034) | 0.897 (0.954/0.441/0.284/0.042) |
| ft-mBERT | rt-XLM-R | 0.849 (0.928/0.451/0.028) | 0.897 (0.955/0.446/0.289/0.042) |

Table 3: Evaluation of refined tags. The main metric is a weighted mean of F1 scores according to the ratio of each tag type in the reference. “ft-” indicates that the model is fine-tuned with the extra 800-pair annotated alignment data. “rt-” indicates that the model is further trained with the extra 800-pair annotated tag data.

As introduced, we combine the prediction of extended word alignment and original word tags to get refined word tags. (While predicting the original tags, we did not directly use the optimized threshold determined in Section 5.2.1, since the test set here originates from the original development set. Instead, we took the original WMT20 test set for development purposes and re-searched an optimized threshold on it.)
Moreover, we deduce gap tags from source-gap correspondences. The origin of the source-gap correspondences used is kept consistent with Table 2 according to the extended word alignment. For the baselines, combinations of FastAlign/AWESoME and OpenKiwi are adopted. As the metric, we use the F1 score of each tag type along with a weighted mean of those F1 scores, taking the proportion of each tag in the reference as the weight. The results are shown in Table 3. Our best model outperforms the baseline by 12.9% and 6.0% respectively on source and MT refined tags in terms of mean F1 scores in the En-De experiments. As for the En-Zh experiments, mean F1 scores are significantly improved, by 48.9% and 16.9%. We also notice that though the fine-tuned mBERT extracts extended word alignment with good accuracy, the absolute refined tag accuracy is still unsatisfactory (especially that of INS and DEL). We discuss this in the next section.

## 6 Discussion on Specific Cases

Figure 7: Specific cases: (a) an En-Zh case with correct refined word tag prediction; (b) an En-Zh case with incorrect refined word tag prediction; (c) an En-De case with correct prediction of a source-gap correspondence and gap tag; (d) an En-Zh case with incorrect prediction of source-gap correspondences and gap tags. For visual neatness, most OK tags are omitted and some continuous spans are merged.

### 6.1 Discussion on Refined Word Tags

In Figure 7(a), our system essentially succeeds in detecting errors caused by incorrect use of punctuation. It correctly suggests the replacements for the second comma and the half-width period. As for the first comma, the translation is still natural and acceptable if we delete the comma following the system’s suggestion. Moreover, our system successfully detects the mistranslations of “passes” and “touchdowns”. In the MT, those football terms are translated as “通行证” (a pass to enter somewhere) and “摔倒” (falling down) respectively.
It is noteworthy that those two mistranslations are not revised in the post-edited corpus provided by WMT21, which implies that our system performs surprisingly well, as it even succeeds in detecting mistranslations that were not noticed by human annotators. In Figure 7(b), our system still works well in detecting incorrect use of half-width punctuation. However, compared with the reference, “abdominal aneurysm” is mistranslated, and our model failed to detect it because both words were tagged as OK during the prediction of original tags. A premature prediction of OK prevents a word from being refined into REP/INS/DEL later. We believe that an inappropriate threshold mainly leads to this issue. The predicted probabilities of “腹部” and “动脉瘤” are 0.103 and 0.134 respectively, but the optimized threshold used is 0.88, as we searched for it to maximize the MCC on the whole set. Meanwhile, the probabilities of all other OK-tagged MT words are actually smaller than 0.01. As a result, had we set a threshold between 0.01 and 0.10 for this sentence pair, we would have obtained the perfect result. In the future, we plan to investigate methods that can determine a fine-grained optimized threshold for each sentence pair.

### 6.2 Discussion on Gap Tags

Figure 7(c) shows a typical En-De case that our model handles well. In German, it is more natural to express actions that took place in the past in the perfect tense rather than the simple past. In this case, the English verb “drafted” should be rendered as “haben … ausgewählt”. Our model correctly suggests a correspondence between “drafted” and the MT gap in front of the period. As many cases need a similar modification that inserts a particular word (like a particle or an infinitive for a clause) before the period in the MT, it is easier for our model to learn such patterns. This probably explains the relatively good accuracy of INS in the En-De experiments.
In contrast, Figure 7(d) is an En-Zh example showing that our model tends to align many source words with the gap right before or after their translation in MT, even when the translation is correct and needs no extra insertions. The word “dissected” is unnecessarily aligned with the gap around its translation “解剖”. Two human names are also unnecessarily aligned with gaps. As a result, four gaps are incorrectly tagged as INS. We examined the annotated dataset and noticed that many Chinese words in MT are slightly modified by adding prefixes and suffixes during post-editing. For example, “成年 海龟” (adult sea turtle) is modified to “成年的海龟” (adding “的” as a suffix to the adjective). “演讲” (the speech) is modified to “这一演讲” (emphasizing “this” speech). Generally, those modifications are not necessary, given the flexibility of Chinese grammar. However, the existence of such modifications might mislead the model into preferring to unnecessarily align a word with the gap around its translation, as in Figure 7(d). To address this issue, we plan to restrict the annotation rules to exclude meaningless modifications in the En-Zh training data in the future. ## 7 Conclusion and Future Work To improve post-editing assistance efficiency, we define a novel concept called extended word alignment. By incorporating extended word alignment into original word-level QE, we formally propose a novel task called refined word-level QE. To solve the task, we first adopt a supervised method to extract extended word alignment and then predict original tags with pre-trained language models by conducting sequence tagging. We then refine word tags with extended word alignment. Additionally, we extract source-gap correspondences and determine gap tags. We report experiments and a discussion of specific cases. In the future, we would like to improve our work in the following respects. First, we want to develop methods that determine a fine-grained threshold, as elaborated in Section 6.
Moreover, we plan to conduct a human evaluation to verify the superiority of refined word-level QE in terms of post-editing assistance efficiency. ## References * Brown et al. (1993) P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. _Computational Linguistics_, 19(2):263–311. * Cho et al. (2014) K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In _Proc. EMNLP_, pages 1724–1734. * Conneau et al. (2020) A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proc. 58th ACL_, pages 8440–8451. * Devlin et al. (2019) J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proc. 17th NAACL-HLT_, pages 4171–4186. * Dou and Neubig (2021) Z. Dou and G. Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In _Proc. 16th EACL_, pages 2112–2128. * Dyer et al. (2013) C. Dyer, V. Chahuneau, and N. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In _Proc. 11th NAACL-HLT_, pages 644–648. * Garg et al. (2019) S. Garg, S. Peitz, U. Nallasamy, and M. Paulik. 2019. Jointly learning to align and translate with transformer models. In _Proc. EMNLP-IJCNLP_, pages 4453–4462. * Herbig et al. (2020) N. Herbig, T. Duwel, S. Pal, K. Meladaki, M. Monshizadeh, A. Kruger, and J. van Genabith. 2020. MMPE: A multi-modal interface for post-editing machine translation. In _Proc. 58th ACL_, pages 327–334. * Hu et al. (2020) C. Hu, H. Liu, K. Feng, C. Xu, N. Xu, Z. Zhou, S. Yan, Y. Luo, C. Wang, X. Meng, T. Xiao, and J. Zhu. 2020. The NiuTrans system for the WMT20 quality estimation shared task.
In _Proc. 5th WMT_, pages 1018–1023. * Jamara et al. (2021) R. A. Jamara, N. Herbig, A. Kruger, and J. van Genabith. 2021. Mid-air hand gestures for post-editing of machine translation. In _Proc. 59th ACL_, pages 6763–6773. * Kim and Lee (2016) H. Kim and J. Lee. 2016. Recurrent neural network based translation quality estimation. In _Proc. 1st WMT_, pages 787–792. * Kim et al. (2017) H. Kim, J. Lee, and S. Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In _Proc. 2nd WMT_, pages 562–568. * Kim et al. (2019) H. Kim, J. Lim, H. Kim, and S. Na. 2019. QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In _Proc. 4th WMT_, pages 85–89. * Koehn et al. (2007) P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In _Proc. 45th ACL_, pages 177–180. * Koehn et al. (2003) P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In _Proc. HLT-NAACL_, pages 127–133. * Kusner et al. (2015) M. Kusner, Y. Sun, N. Kolkin, and K. Weinberger. 2015. From word embeddings to document distances. In _Proc. 32nd ICML_, pages 957–966. * Lee (2020) D. Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In _Proc. 5th WMT_, pages 1024–1028. * Nagata et al. (2020) M. Nagata, K. Chousa, and M. Nishino. 2020. A supervised word alignment method based on cross-language span prediction using multilingual BERT. In _Proc. EMNLP_, pages 555–565. * Nayek et al. (2015) T. Nayek, S. K. Naskar, S. Pal, M. Zampieri, M. Vela, and J. van Genabith. 2015. CATaLog: New approaches to TM and post editing interfaces. In _Proc. Workshop NLP4TM_, pages 36–42. * Och and Ney (2003) F. Och and H. Ney. 2003.
A systematic comparison of various statistical alignment models. _Computational Linguistics_, 29(1):19–51. * Rajpurkar et al. (2018) P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In _Proc. 56th ACL_, pages 784–789. * Rubino (2020) R. Rubino. 2020. NICT Kyoto submission for the WMT’20 quality estimation task: Intermediate training for domain and task adaptation. In _Proc. 5th WMT_, pages 1042–1048. * Schwartz et al. (2015) L. Schwartz, I. Lacruz, and T. Bystrova. 2015. Effects of word alignment visualization on post-editing quality & speed. In _Proc. MT Summit XV_. * Specia et al. (2020) L. Specia, F. Blain, M. Fomicheva, E. Fonseca, V. Chaudhary, F. Guzmán, and A. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In _Proc. 5th WMT_, pages 741–762. * Sutskever et al. (2014) I. Sutskever, O. Vinyals, and Q. Le. 2014. Sequence to sequence learning with neural networks. In _Proc. 27th NIPS_, pages 3104–3112. * Vaswani et al. (2017) A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, Ł. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In _Proc. 31st NIPS_, pages 5998–6008. * Wang et al. (2020) M. Wang, H. Yang, H. Shang, D. Wei, J. Guo, L. Lei, Y. Qin, S. Tao, S. Sun, Y. Chen, and L. Li. 2020. HW-TSC’s participation at WMT 2020 automatic post editing shared task. In _Proc. 5th WMT_, pages 1054–1059. * Yamada (2019) M. Yamada. 2019. The impact of Google neural machine translation on post-editing by student translators. _The Journal of Specialised Translation_, 31:87–106. * Zhang and Weiss (2016) Y. Zhang and D. Weiss. 2016. Stack-propagation: Improved representation learning for syntax. In _Proc. 54th ACL_, pages 1557–1566.
# Detecting Isohedral Polyforms with a SAT Solver Craig S. Kaplan School of Computer Science University of Waterloo<EMAIL_ADDRESS> ###### Abstract I show how to express the question of whether a polyform tiles the plane isohedrally as a Boolean formula that can be tested using a SAT solver. This approach is adaptable to a wide range of polyforms, requires no special-case code for different isohedral tiling types, and integrates seamlessly with existing software for computing Heesch numbers of polyforms. ## 1 Introduction The study of algorithms for computing tiling-theoretic properties of shapes is a rich and fascinating branch of computational geometry. Implementations of these algorithms can also serve as useful tools in the experimental side of tiling theory, as part of the search for new shapes with interesting properties. For example, Myers systematically computed isohedral numbers (the minimum number of transitivity classes in any tiling by a given shape) for many simple polyforms [8]. Building on Myers’s work, I computed Heesch numbers (the maximum number of times that a non-tiling shape can be surrounded by layers of copies of itself) for simple polyforms [5]. Our tools did not contribute to Smith’s initial discovery of the “hat” aperiodic monotile, but they played a central role in our subsequent analysis of the hat and our proof (with Goodman-Strauss) of its aperiodicity [10]. An _isohedral tiling_ is a tiling by congruent copies of some _prototile_ $T$, such that for any two tiles $T_{1}$ and $T_{2}$ there exists a symmetry of the tiling mapping $T_{1}$ to $T_{2}$. Isohedral tilings are some of the simplest periodic tilings, in that all tiles belong to a single transitivity class relative to the symmetries of the tiling. A complete theory of isohedral tilings, including their classification into 81 tiling types with unmarked tiles, was worked out by Grünbaum and Shephard [4, Chapter 6]. 
Given a simple shape such as a polyform, does it admit any isohedral tilings? This question offers interesting opportunities for the development of new algorithms. It is also of practical interest as part of any software for computing the tiling-theoretic properties of shapes. Myers’s software [8] can detect isohedral prototiles quickly, but formal questions of computational complexity are more or less peripheral to his work. The current state of the art, at least for the special case of polyominoes, is the quasilinear-time algorithm by Langerman and Winslow [6]. In this paper I present a new technique for checking whether a polyform tiles isohedrally. The algorithm is based on expressing the question as a Boolean formula that can be checked by a SAT solver, and was motivated by my desire to integrate such a test into my existing SAT-based framework for computing Heesch numbers [5]. I will explain the mathematical basis for this approach (Section 2), followed by its expression in Boolean logic (Section 3), and then conclude with a few final observations (Section 4). ## 2 Identifying prototiles based on surrounds In order to determine whether a shape admits any isohedral tilings of the plane, it suffices to examine the ways that the shape can be surrounded by copies of itself. That is, if there exists a surround with a particular structure that will be explained here, then the shape is guaranteed to tile isohedrally. Let $T$ be a shape, which in full generality can be any topological disk, but which for my purposes is typically a polygon. Without loss of generality, I assume here that $T$ is asymmetric. (A symmetric shape can always be decorated with an asymmetric marking, with the meaning of congruence expanded to preserve markings.) A _patch_ is a finite collection of congruent copies of $T$, with pairwise disjoint interiors, whose union is a topological disk. 
In particular, if exactly one copy of $T$ lies in the interior of the patch, then we refer to the patch as a _$1$ -patch_, to the interior tile as the patch’s _centre_ , and to the remaining tiles as a _surround_ of $T$. The fact that every two tiles in an isohedral tiling are related by a symmetry of the tiling implies that every tile is the centre of a congruent $1$-patch, or more loosely that tiles have congruent surrounds. Grünbaum and Shephard use this fact to develop a complete enumeration of isohedral tiling types, based on an “incidence symbol” that expresses a prototile’s relationships to its neighbours [4]. In fact, the converse holds as well: Dolbilin and Schattschneider showed that if the tiles in a tiling have congruent surrounds, then the tiling must be isohedral [3]. Let $\mathcal{S}=\\{T_{1},\ldots,T_{n}\\}$ be a surround of a shape $T$. The surround is made up of congruent copies of $T$, meaning that each $T_{i}=g_{i}(T)$ for some rigid motion $g_{i}$. Fix one shape $T_{i}$ in the surround, and construct $\mathcal{S}_{i}=\\{g_{i}\circ g_{j}(T)\\}_{j=1}^{n}$, a congruent copy of $\mathcal{S}$ placed around $T_{i}$. I call $T_{i}$ _extendable_ if this transformed surround does not “conflict” with $T_{i}$’s neighbours in the original $1$-patch centred at $T$. More precisely, $T_{i}$ is extendable if for every $A\in\\{T,T_{1},\ldots,T_{n}\\}$ and every $B\in\mathcal{S}_{i}$, either $A=B$ or $A$ and $B$ have disjoint interiors. Suppose that $T$ has a surround in which every $T_{i}$ is extendable. The transformed surrounds $\mathcal{S}_{i}$ must all be compatible with the $1$-patch around $T$ and with each other, meaning that their union will surround $\mathcal{S}$ with a second layer of tiles. In this manner we can continue outward layer by layer, each time completing the surrounds of the tiles along the boundary of the growing patch. (This construction is similar to one used by Grünbaum and Shephard [4, Theorem 6.1.1].) 
In the limit we obtain a tiling of the plane in which every tile has a congruent surround, which must therefore be isohedral by The Local Theorem of Dolbilin and Schattschneider [3]. I summarize this argument with a proposition. ###### Proposition 1 A shape $T$ admits an isohedral tiling if and only if $T$ has a surround $\mathcal{S}=\\{T_{1},\ldots,T_{n}\\}$ in which every $T_{i}$ is extendable (in which case every tile in the tiling is surrounded by a congruent copy of $\mathcal{S}$). ## 3 SAT formulation In previous work I showed how to use a SAT solver to compute Heesch numbers of simple polyforms [5]. My software constructs a sequence of Boolean formulas equivalent to the questions “Can $T$ be surrounded at least once?”, “Can $T$ be surrounded at least twice?”, and so on, and passes them to a SAT solver. It halts as soon as one of these questions is false (or after a predetermined maximum number of levels, to avoid looping forever when given a shape that tiles). Here I show that it is possible to incorporate the mathematical ideas of the previous section into my Heesch number computation, by interposing the question “Can $T$ tile isohedrally?” immediately after “Can $T$ be surrounded at least once?”. Indeed, the new question is a simple restriction of the surroundability formula already being used, taking the form “Can $T$ be surrounded at least once, in a way that witnesses its ability to tile isohedrally?”. Let $\mathcal{T}$ be a tiling of the plane. A _poly- $\mathcal{T}$-tile_ is a shape created by gluing together a finite connected set of tiles from $\mathcal{T}$. Informally, I refer to a poly-$\mathcal{T}$-tile as a “polyform”, to $\mathcal{T}$ as “the grid”, and to the tiles of $\mathcal{T}$ as “cells”. In any patch or tiling by a polyform, I will also require that every tile be a union of cells from the grid; that is, every tile must be “aligned” to the grid. Let $T$ be a poly-$\mathcal{T}$-tile. 
Define the _halo_ of $T$ to be all grid cells not in $T$ that are neighbours of cells in $T$. Compute the set $\\{T_{1},\ldots,T_{n}\\}$ of all transformed copies of $T$ that can be neighbours of $T$ in a surround. Each $T_{i}$ will have the form $g_{i}(T)$ for a rigid motion $g_{i}$. Any legal surround must be a subset of the $T_{i}$ that collectively occupy every halo cell without overlapping each other. We can express these criteria using a Boolean formula, a simplified version of the one I used for Heesch number computation. Abusing notation slightly, create Boolean variables $T_{1},\ldots,T_{n}$ for each potential member of the surround. Now construct a formula with the following clauses: * • For every cell in the halo, a disjunction of all the $T_{i}$ that use that cell (every cell in the halo must be occupied); * • For every pair $T_{i}$ and $T_{j}$ that overlap in one or more cells, a clause of the form $(\neg T_{i}\vee\neg T_{j})$ (overlapping tiles are mutually exclusive). If a satisfying assignment is found for this formula, then a candidate surround will correspond to the subset of variables set to true. It is possible, however, for the resulting set of tiles to enclose holes; if a hole is detected, then a clause is added to suppress this solution and the SAT solver is restarted. This process iterates until either a simply connected solution is found, or no more candidate surrounds remain. If $T$ is surroundable, we can check whether it tiles isohedrally before trying to surround it with more layers. I do so by augmenting the formula above with new clauses. Let $T_{i}=g_{i}(T)$ and $T_{j}=g_{j}(T)$ be two neighbours of $T$ that are also themselves neighbours. If $T_{i}$ and $T_{j}$ are used together in a surround $\mathcal{S}$, then they must both be extendable by that surround.
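The two clause families above can be sketched directly. The grid, halo cells, and candidate placements below are invented toy data (not a real polyform), and a brute-force search stands in for the SAT solver; a real implementation would enumerate placements of an actual polyform and hand the clauses to an off-the-shelf solver:

```python
from itertools import product

# Toy instance: four candidate placements T1..T4 (variables 1..4),
# each covering a set of halo cells.  Invented data for illustration.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d"}}
halo = ["a", "b", "c", "d"]

clauses = []
# Coverage: for every halo cell, a disjunction of the placements using it.
for cell in halo:
    clauses.append([t for t, cs in covers.items() if cell in cs])
# Exclusion: placements that overlap are mutually exclusive.
ids = sorted(covers)
for i, j in ((i, j) for i in ids for j in ids if i < j):
    if covers[i] & covers[j]:
        clauses.append([-i, -j])

def satisfiable(clauses, n):
    """Brute-force stand-in for a SAT solver on n variables."""
    for bits in product([False, True], repeat=n):
        def lit(l):
            return bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1]
        if all(any(lit(l) for l in c) for c in clauses):
            return True
    return False

print(satisfiable(clauses, 4))  # {T1, T3} covers the halo without overlap
```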
Note that $g_{i}(T_{j})=g_{i}\circ g_{j}(T)$ will be one of the shapes in $\mathcal{S}_{i}$, the copy of $\mathcal{S}$ surrounding $T_{i}$, and must therefore avoid conflicts with the shapes in $\mathcal{S}$. We can enforce this condition by finding the member $T_{k}=g_{i}\circ g_{j}(T)$, if it exists, and adding a clause of the form $(\neg T_{i}\vee\neg T_{j}\vee T_{k})$ (if $T_{i}$ and $T_{j}$ are both used in a surround, then $T_{k}$ must be used too). By symmetry, we perform the same steps for $g_{j}\circ g_{i}$. We can add clauses to this formula that further restrict the space of possible solutions the SAT solver must explore, potentially improving performance. Suppose $T_{i}=g_{i}(T)$ is part of an isohedral surround, and $g_{i}$ is not an involution. Then because $T$ is a neighbour of $g_{i}(T)$, it follows that $g_{i}^{-1}(T)$ is a neighbour of $T$, meaning that it must also appear in the surround. We therefore find $T_{k}=g_{i}^{-1}(T)$ and add a clause of the form $(\neg T_{i}\vee T_{k})$, which forces $T_{k}$ to be used if $T_{i}$ is. Similarly, in the joint cases above we also add clauses for $g_{i}\circ g_{j}^{-1}$ and $g_{j}\circ g_{i}^{-1}$, if those transformations correspond to neighbours of $T$. This augmented formula has a satisfying assignment if and only if it corresponds to a surround of $T$ for which every $T_{i}$ in the surround is extendable, or in other words, if and only if $T$ tiles the plane isohedrally. ## 4 Discussion I implemented the augmented Boolean formula described above within the framework of my existing software for computing Heesch numbers of polyforms [5]. In my implementation, transformed copies of a polyform $T$ are represented via their affine transformation matrices (and not their boundaries or cells). A matrix effectively also serves as an asymmetric marker, thereby preventing any issues from arising with symmetric shapes. 
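The extendability clauses $(\neg T_{i}\vee\neg T_{j}\vee T_{k})$ can be generated mechanically from the rigid motions. The sketch below assumes each placement is stored as a homogeneous transformation matrix and, for simplicity, uses only translations as toy motions; the paper's implementation uses general rigid motions (affine matrices) in the same way:

```python
def translate(dx, dy):
    """3x3 homogeneous matrix for a translation (toy rigid motion)."""
    return ((1, 0, dx), (0, 1, dy), (0, 0, 1))

def compose(a, b):
    """Matrix product a @ b: apply b first, then a."""
    return tuple(tuple(sum(a[r][k] * b[k][c] for k in range(3))
                       for c in range(3)) for r in range(3))

# Toy surround candidates: variable id -> rigid motion g_i, with T_i = g_i(T).
motions = {1: translate(1, 0), 2: translate(0, 1), 3: translate(1, 1)}
by_matrix = {m: i for i, m in motions.items()}

clauses = []
for i, gi in motions.items():
    for j, gj in motions.items():
        if i == j:
            continue
        # Find T_k = g_i∘g_j(T) among the candidates, if it exists.
        k = by_matrix.get(compose(gi, gj))
        if k is not None:
            # If T_i and T_j appear together in a surround, T_k must too.
            clauses.append((-i, -j, k))

print(sorted(set(clauses)))
```

Because the outer loops visit both orders $(i,j)$ and $(j,i)$, the clause for $g_{j}\circ g_{i}$ is emitted as well, matching the symmetry step in the text.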
As a simple validation, my software produces counts of isohedral polyforms that agree with the figures tabulated by Myers [8], up to the size limits I tested (12-ominoes, 12-hexes, 13-iamonds, and 12-kites). When resigning oneself to the black box of a SAT solver, questions of asymptotic complexity become largely moot. Therefore, a theoretical comparison with, say, the quasilinear-time algorithm of Langerman and Winslow [6] is not particularly meaningful. My approach is slower than what would be possible with an efficient implementation of their algorithm, and is certainly slower than Myers’s lightning-fast hand-optimized C code. In the context of my software, the extra time required for checking isohedral tilability as part of computing Heesch numbers is minimal. Furthermore, this approach is remarkably convenient: the original program for computing Heesch numbers required a few thousand lines of C++ code, and fewer than 100 lines were added for this enhancement. It is also quite general: it adapts seamlessly to arbitrary polyform grids, and does not require any special-purpose code for different isohedral tiling types (in fact, it uses the definition of isohedral tiling directly, and does not rely on any information about tiling types at all). My enhanced implementation still cannot resolve the tiling-theoretic status of every polyform. In particular, it is unable to compute the isohedral number of any $k$-anisohedral polyform (which admits only tilings containing at least $k$ transitivity classes of tile) for $k\geq 2$. It would be interesting to explore further methods based on discrete optimization that can expand to cover these more complex, but equally important shapes. And of course, no software can currently detect aperiodic monotiles, for which no general procedures are known. ## Acknowledgements Thanks to Joseph Myers and Doris Schattschneider for helpful feedback during the course of this work and the preparation of this paper.
## References * [2] Bojan Bašić (2021): _A figure with Heesch number 6: pushing a two-decade-old boundary_. Math. Intelligencer 43(3), pp. 50–53, 10.1007/s00283-020-10034-w. * [3] Nikolai Dolbilin & Doris Schattschneider (1998): _The Local Theorem for Tilings_. In Jiří Patera, editor: Quasicrystals and discrete geometry, 10, American Mathematical Soc., pp. 193–199, 10.1090/fim/010/06. * [4] Branko Grünbaum & G.C. Shephard (2016): _Tilings and Patterns_, second edition. Dover. * [5] Craig S. Kaplan (2022): _Heesch numbers of unmarked polyforms_. Contributions to Discrete Mathematics 17(2), pp. 150–171, 10.55016/ojs/cdm.v17i2.72886. * [6] Stefan Langerman & Andrew Winslow (2016): _A Quasilinear-Time Algorithm for Tiling the Plane Isohedrally with a Polyomino_. In Sándor P. Fekete & Anna Lubiw, editors: 32nd International Symposium on Computational Geometry (SoCG 2016), LIPIcs 51, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, pp. 50:1–50:15, 10.4230/LIPIcs.SoCG.2016.50. * [8] Joseph Myers (2000–2024): _Polyform tiling_. Available at https://www.polyomino.org.uk/mathematics/polyform-tiling/. Accessed: May 15th, 2024. * [9] Michael Rao (2017): _Exhaustive search of convex pentagons which tile the plane_, 10.48550/arXiv.1708.00274. * [10] David Smith, Joseph Samuel Myers, Craig S. Kaplan & Chaim Goodman-Strauss (2023): _An aperiodic monotile_, 10.48550/arXiv.2303.10798. * [11] Mate Soos, Karsten Nohl & Claude Castelluccia (2009): _Extending SAT Solvers to Cryptographic Problems_. In Oliver Kullmann, editor: Theory and Applications of Satisfiability Testing - SAT 2009, 12th International Conference, SAT 2009, Swansea, UK, June 30 - July 3, 2009.
Proceedings, Lecture Notes in Computer Science 5584, Springer, pp. 244–257, 10.1007/978-3-642-02777-2_24.
# Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model Qi Gou, Cam-Tu Nguyen State Key Laboratory for Novel Software Technology, Nanjing University, China <EMAIL_ADDRESS> <EMAIL_ADDRESS>Corresponding authors. ###### Abstract Large Language Models (LLMs) have become increasingly popular due to their ability to process and generate natural language. However, as they are trained on massive datasets of text, LLMs can inherit harmful biases and produce outputs that are not aligned with human values. This paper studies two main approaches to LLM alignment: Reinforcement Learning with Human Feedback (RLHF) and contrastive learning-based methods like Direct Preference Optimization (DPO). By analyzing the stability and robustness of RLHF and DPO, we propose MPO (Mixed Preference Optimization), a novel method that mitigates the weaknesses of both approaches. Specifically, we propose a two-stage training procedure: first train DPO on an easy dataset, and then perform RLHF on a difficult set with the DPO model as the reference model. Here, the easy and difficult sets are constructed by a well-trained reward model that splits response pairs into those with large reward gaps (easy) and those with small gaps (difficult). The first stage allows us to obtain a relatively optimal policy (LLM) model quickly, whereas the second stage refines the LLM with online RLHF, thus mitigating the distribution shift issue associated with DPO. Experiments are conducted on two public alignment datasets, namely HH-RLHF and TLDR, demonstrating the effectiveness of MPO in terms of both GPT-4 and human evaluation. ## 1 Introduction LLMs (Large Language Models) Achiam et al. (2023); Chowdhery et al. (2023); Touvron et al. (2023a, b); Chiang et al. (2023); Taori et al. (2023) have recently demonstrated their strong language capabilities, from text understanding and summarization to generation, all thanks to their pre-training on extremely large datasets.
However, as pre-training only aims to predict the next token, LLMs may not closely follow human instructions. Moreover, since it is difficult to completely filter out harmful content from the vast amount of pre-training data, LLMs may learn to produce outputs that are not aligned with human values. Training with human preference data (alignment), therefore, becomes essential to the success of LLMs, as shown in the case of ChatGPT Stiennon et al. (2020); Rafailov et al. (2023); Bai et al. (2022); Sun et al. (2023); Ziegler et al. (2019); Christiano et al. (2017); Dong et al. (2023). Figure 1: Comparing an RL-based method (e.g., RLHF) with a contrastive-learning based method (e.g., DPO). Currently, there exist two main approaches to LLM alignment: those based on reinforcement learning, such as RLHF (Reinforcement Learning with Human Feedback) Stiennon et al. (2020), and those based on contrastive learning, such as DPO Rafailov et al. (2023). RLHF has been successfully applied to ChatGPT and contains three main steps: 1) supervised fine-tuning (SFT) of the LLM using an instruction-following dataset; 2) training a reward model that assigns a higher reward to human-preferred completions given an instruction; 3) reinforcement learning using Proximal Policy Optimization (PPO) Schulman et al. (2017), of which sampling from the targeted LLM (the one being aligned) and labeling with the reward model are two essential components. Recently, contrastive learning based methods (such as DPO) have been introduced, replacing the second and third steps of RLHF by directly tuning LLMs on the preference data. In other words, these methods skip reward modeling and sampling, thus simplifying the process greatly. The comparison between RLHF and DPO is illustrated in Figure 1, where we omit the SFT stage. Figure 2: Left: Precision of the reward model for samples within different ranges of reward; Right: The number of samples within different ranges of reward.
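For reference, the contrastive objective of DPO (Rafailov et al., 2023) can be sketched numerically. This is a pure-Python toy with made-up log-probabilities, not a training implementation; a real system would use the policy's and reference model's sequence log-likelihoods with autograd:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (chosen, rejected) pair: the policy acts as an
    implicit reward via its log-ratio against the reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# Made-up log-probabilities: the policy already prefers y_w more strongly
# than the reference does, so the loss falls below log(2), its value at
# zero margin.
loss = dpo_loss(logp_w=-10.0, logp_l=-14.0, ref_logp_w=-12.0, ref_logp_l=-13.0)
print(round(loss, 4))
```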
Both RLHF (and other RL-based methods) and DPO (and its contrastive-learning variants) have their own disadvantages. On one hand, RLHF is complicated, difficult to train, and memory-intensive. To train RLHF more effectively, researchers constrain the search space of the LLM by minimizing the KL divergence between the LLM and a reference model (its SFT version). However, as the SFT reference model is suboptimal, the exploration of PPO is limited to a suboptimal region. On the other hand, DPO and other contrastive learning methods may suffer from distribution shift. Specifically, as we optimize the LLM, the sample (completion) distribution changes and no longer follows the distribution of the fixed preference data. Note that RLHF avoids this issue by collecting more samples and assigning new labels with the reward model during training (see Figure 1). Additionally, as contrastive-learning methods are trained directly on the preference data, they may be more susceptible to noise caused by response pairs of similar quality in the dataset. Although reward model training in RLHF suffers from the same issue, the explicit scores from the reward model allow us to judge whether a completion pair (for a given instruction) might be noisy. For instance, Figure 2 (b) shows that more than 50% of sample pairs in the HH-RLHF dataset exhibit a reward difference within the range [0, 1], illustrating that this is a common issue. Figure 2 (a) shows that these sample pairs are difficult to distinguish: a smaller difference in reward scores leads to lower accuracy in preference prediction. With these considerations, we design Mixed Preference Optimization (MPO) to take the best of both worlds while mitigating their disadvantages. Our method is based on two simple ideas: data selection and an enhanced reference model.
First, the reward model is exploited to split the preference dataset into two sets: $\mathcal{D}^{e}$ of easy prompts and $\mathcal{D}^{h}$ of hard prompts. Second, we introduce a new curriculum training procedure with two stages: 1) a DPO model is first trained on the easy set to obtain an effective alignment model quickly; and 2) a PPO model is then trained on the difficult set. During the PPO training phase, we use the DPO model as the reference model rather than the SFT model as in vanilla PPO, allowing us to train PPO more effectively. In addition, as PPO is used in the later phase, we avoid distribution shift. Our contributions are summarized as follows: * • We empirically show that data quality is essential for both DPO and PPO training, and that data quality correlates with the difference in reward scores obtained from the reward model in RLHF. We therefore develop a simple yet effective data selection method to handle the label inaccuracy problem, improving DPO even with a smaller set of data. * • We propose MPO, which starts from the DPO model and then trains the LLM using PPO. Here, PPO is trained with a KL-divergence constraint that keeps the optimized LLM close to a well-trained DPO model. This design facilitates more effective training than vanilla PPO. * • Empirical results on two public datasets validate the effectiveness of our method. Specifically, MPO obtains superior performance compared to DPO and PPO according to both automatic evaluation (reward-based and GPT-based) and human evaluation. ## 2 Related Work #### Reinforcement Learning From Human Feedback (RLHF) has emerged as a powerful tool for enhancing text generation across various domains, including summarization Stiennon et al. (2020); Böhm et al. (2019), dialogue generation Yi et al. (2019); Hancock et al. (2019), and story generation Zhou and Xu (2020). Pioneering work like Askell et al.
(2021) explored general language assistant alignment using RLHF, while Bai et al. (2022) introduced the popular HH-RLHF dataset for dialogue assistants. Subsequently, Ouyang et al. (2022) introduced InstructGPT that utilizes human feedback to train large language models like GPT-3 Mann et al. (2020), setting the foundation for ChatGPT and GPT-4 Achiam et al. (2023). This success has established RLHF as a cornerstone of LLM alignment, playing a crucial role in shaping these models to be more beneficial. Unfortunately, RLHF is complicated, unstable and rather difficult to train. #### Contrastive Learning based Alignment Several promising methods based on contrastive learning have been introduced for aligning LLMs with human values. DPO Rafailov et al. (2023) theoretically derives a contrastive learning loss function from RLHF, demonstrating that LLM itself acts as an implicit reward model. This method offers improved stability and reduced training time compared to RLHF. Yuan et al. (2023) introduces RRHF that directly optimizes the policy model by maximizing the probability difference between chosen and rejected responses. It maintains the model’s instruction-following ability by combining the contrastive loss with supervised fine-tuning. PRO Song et al. (2023) utilizes list-wise loss, which is an improvement over the point-wise loss used in RRHF, to optimize the likelihood of the partial order of preference data. Calibrated Contrastive Learning Zhao et al. (2022, 2023) explores various contrastive and regularization losses for optimizing performance. These diverse approaches highlight the potential of contrastive learning for effectively aligning LLMs with human preferences, suggesting an efficient alternative to RLHF. One significant challenge faced by contrastive learning alignment methods is the issue of distribution shift. 
Since offline data might be collected through a policy different from the optimal LLM, the distribution shift may prevent us from training an optimal policy. SLiC Zhao et al. (2023) addresses this issue by sample-and-rank, a two-step approach: 1) Sampling: responses are first generated from a Supervised Fine-tuning (SFT) model; 2) Ranking: a reward model then ranks these responses to create new preference data that better aligns with the targeted LLM policy. Recently, Liu et al. (2023) proposed RSO, which directly estimates the data distribution through statistical rejection sampling, leading to improved alignment. Despite this progress, such methods are still not as effective as online RL at handling the distribution shift issue. #### MPO vs Previous Studies Our proposed method, Mixed Preference Optimization (MPO), differs from existing approaches in several aspects. First, MPO strategically combines the strengths of DPO and PPO while mitigating their respective limitations. Like PPO, MPO can effectively handle the distribution shift issue. Unlike vanilla PPO, however, MPO exploits the well-trained DPO model as a reference during the online RL stage, enabling more effective online training. As DPO is simple to train, MPO remains no more expensive than training vanilla PPO. Second, MPO utilizes a curriculum learning strategy, facilitating more effective policy optimization compared to traditional training strategies. Figure 3: MPO architecture: dataset $D^{e}$ consists of the data pairs whose reward score difference exceeds a predefined threshold. ## 3 Methodology #### Overview We assume that there exists a (preference) dataset of tuples $(x,y_{w},y_{l})$, where $x$ is a prompt and $y_{w},y_{l}$ are two corresponding completions. Here, $y_{w}$ is preferred over $y_{l}$ according to human annotators. The preference data is used to train a reward model similar to RLHF.
We use the reward model to split the preference data into easy prompts and difficult prompts. We then conduct a two-stage training: 1) train DPO on the easy set to get $\pi^{DPO}$; 2) train PPO on the hard set, using $\pi^{DPO}$ as the reference model. Our training strategy (referred to as Mixed Preference Optimization, or MPO) is depicted in Figure 3. More details about our training process are given below. ### 3.1 Reward Modeling and Data Selection #### Reward Modeling Let $\mathcal{D}=\\{(x^{(i)},y^{(i)}_{w},y^{(i)}_{l})\\}$ denote the preference data. We follow Rafailov et al. (2023); Stiennon et al. (2020) and assume that there exists a latent reward model $r^{*}(x,y)$ that assigns a higher score to the preferred completion $y$. The human preference distribution $p^{*}$ can be modeled with the Bradley-Terry (BT) model as follows: $p^{*}(y_{1}\succ y_{2}|x)=\frac{\exp{r^{*}(x,y_{1})}}{\exp{r^{*}(x,y_{1})}+\exp{r^{*}(x,y_{2})}}$ We can approximate $r^{*}(x,y)$ with a parameterized reward model $r_{\phi}(x,y)$, where $\phi$ denotes the model parameters. Based on the preference dataset $\mathcal{D}$, we can estimate the reward model by minimizing the negative log-likelihood loss: $-E_{(x,y_{w},y_{l})\sim\mathcal{D}}[\log{\sigma(r_{\phi}(x,y_{w})-r_{\phi}(x,y_{l}))}]$ #### Reward-based Data Selection Similar to DPO and RLHF, MPO assumes that there exists a supervised fine-tuned model of a targeted LLM, referred to as $\pi^{SFT}$ hereafter. We present the SFT model with prompts from the preference dataset ($x\sim\mathcal{D}$) to collect the corresponding pairs of completions $(y_{1},y_{2})\sim\pi^{SFT}(x)$. The well-trained reward model $r_{\phi}$ is subsequently used to assign scores to the sampled completions. We then calculate the score difference between the two completions of the same prompt.
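The reward-model objective above is a logistic loss on the score difference of each preference pair. A minimal per-pair sketch in plain Python (illustrative only; in practice the scores come from a learned neural reward model):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def bradley_terry_prob(r_chosen: float, r_rejected: float) -> float:
    # p*(y_w > y_l | x) under the Bradley-Terry model
    return math.exp(r_chosen) / (math.exp(r_chosen) + math.exp(r_rejected))

def reward_pair_loss(r_chosen: float, r_rejected: float) -> float:
    # negative log-likelihood: -log sigma(r(x, y_w) - r(x, y_l))
    return -math.log(sigmoid(r_chosen - r_rejected))
```

Note that the Bradley-Terry probability equals the sigmoid of the score difference, which is why the loss only depends on that difference; it decreases as the margin between the chosen and rejected scores grows.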
Based on this difference, we partition the dataset into two distinct subsets using a threshold hyper-parameter $\gamma$: the easy dataset ($D^{e}$) and the hard one ($D^{h}$). Prompts with a score difference exceeding the threshold are categorized as “easy,” while those with a difference below or equal to the threshold are classified as “hard.” This data selection process is detailed in Algorithm 1.

input: the prompt dataset $D$; the SFT model $\pi^{SFT}$; the reward model $r_{\phi}$; threshold $\gamma$
output: easy dataset $D^{e}$; hard dataset $D^{h}$
1 $D^{e},D^{h}\leftarrow$ empty sets;
2 for $i\leftarrow 1$ to $len(D)$ do
3  $out1,out2\leftarrow$ Generate($\pi^{SFT}$, $D[i]$);
4  $score1,score2\leftarrow r_{\phi}(D[i],out1),r_{\phi}(D[i],out2)$;
5  if $score2>score1$ then
6   $out1,out2\leftarrow out2,out1$;
7   $score1,score2\leftarrow score2,score1$;
8  if $|score1-score2|>\gamma$ then
9   $D^{e}\leftarrow D^{e}\cup\\{(D[i],out1,out2)\\}$;
10  else
11   $D^{h}\leftarrow D^{h}\cup\\{(D[i],out1,out2)\\}$;
12 return $D^{e},D^{h}$;

Algorithm 1: Reward-based Data Selection

### 3.2 Two Stage Training #### Direct Preference Optimization (DPO) Following Rafailov et al. (2023), we can formalize a maximum likelihood objective for a parameterized policy $\pi_{\theta}$ (the targeted LLM), analogous to the reward modeling objective: $-E_{(x,y_{w},y_{l})\sim\mathcal{D}^{e}}[\log{\sigma(\hat{r}_{\theta}(x,y_{w})-\hat{r}_{\theta}(x,y_{l}))}]$ where $\hat{r}_{\theta}(x,y)=\beta\log\frac{\pi_{\theta}(y|x)}{\pi^{SFT}(y|x)}$ is the implicit reward defined by the policy model $\pi_{\theta}$, the reference model $\pi^{SFT}$, and a constant scale $\beta$. By exploiting the LLM as an implicit reward model, DPO avoids both the reward modeling and the RL training stages. As a result, DPO training is simple and converges quickly. Unlike the original DPO, however, MPO only optimizes the policy model with DPO on the easy set $\mathcal{D}^{e}$.
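The data-selection step in Algorithm 1 can be sketched as a short function. Here `generate` and `reward` are hypothetical stand-ins for the SFT model and the reward model (an illustrative sketch, not the authors' implementation):

```python
def select_by_reward_gap(prompts, generate, reward, threshold):
    """Split prompts into easy/hard sets by reward-score gap (Algorithm 1).

    generate(prompt) -> (out1, out2): two completions from the SFT model.
    reward(prompt, out) -> float: reward-model score of a completion.
    """
    easy, hard = [], []
    for x in prompts:
        out1, out2 = generate(x)
        s1, s2 = reward(x, out1), reward(x, out2)
        if s2 > s1:  # keep the higher-scored completion first
            out1, out2, s1, s2 = out2, out1, s2, s1
        # large score gap -> "easy" pair; small gap -> "hard" pair
        (easy if s1 - s2 > threshold else hard).append((x, out1, out2))
    return easy, hard
```

Each pair is reordered so that the preferred completion comes first, matching the $(x, y_{w}, y_{l})$ tuple format used by the DPO stage.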
The rationale for this design choice is presented in Section 3.3. In the following, we refer to the policy obtained after DPO training as $\pi^{DPO}$. #### Proximal Policy Optimization During the online RL phase, we optimize the policy model $\pi_{\theta}$ via the following optimization problem: $\displaystyle\max_{\pi_{\theta}}E_{x\sim D^{h},y\sim\pi_{\theta}(y|x)}\\{r_{\phi}(x,y)-$ $\displaystyle\beta\mathbb{D}_{KL}[\pi_{\theta}(y|x)||\pi^{DPO}(y|x)]\\}$ (1) where $r_{\phi}(x,y)$ is the trained reward model from Section 3.1. As online RL samples completions from the current policy ($y\sim\pi_{\theta}(y|x)$), RL training can mitigate the distribution shift issue. Our RL training phase differs from the one in RLHF Stiennon et al. (2020) in two aspects. First, the second term in the RL objective is the KL-divergence between the current policy model and the model obtained from the DPO training phase, $\pi^{DPO}$. That is, unlike RLHF, we do not search for the optimal policy in the trust region around $\pi^{SFT}$, but around $\pi^{DPO}$. The KL-divergence ensures that the trained policy does not drift too far away from the DPO model, which has already been aligned to some extent. Second, the expectation is taken over pairs of (prompt, completion) where the prompt is sampled from $D^{h}$, not from the whole set of prompts. Intuitively, we assume that DPO can align cases with “easy” prompts, while the exploration in online RL can help discover “novel” solutions (LLM parameters) to better align “hard” prompts. ### 3.3 Why Mixed Preference Optimization? MPO employs a curriculum learning approach, training the policy on progressively more challenging samples: starting with DPO on the “easy” set and moving to PPO on the “hard” set. This targeted guidance facilitates more effective and efficient training compared to traditional methods. In the following, we present the empirical analysis that motivated us to design such a training pipeline.
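The two stage objectives above reduce to two per-sample quantities: the DPO logistic loss on implicit rewards, and the KL-shaped reward typically used during PPO rollouts. A minimal sketch in plain Python (illustrative; it assumes per-sequence log-probabilities and a single-sample KL estimate, not the authors' DeepSpeed implementation):

```python
import math

def dpo_pair_loss(logp_w, logp_w_ref, logp_l, logp_l_ref, beta=0.1):
    # -log sigma(r_hat(x,y_w) - r_hat(x,y_l)), with the implicit reward
    # r_hat(x,y) = beta * (log pi_theta(y|x) - log pi_ref(y|x))
    margin = beta * ((logp_w - logp_w_ref) - (logp_l - logp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def ppo_shaped_reward(r_score, logp_policy, logp_dpo, beta=0.1):
    # r_phi(x,y) - beta * (log pi_theta(y|x) - log pi_DPO(y|x)),
    # a single-sample estimate of the KL term in Eq. (1); the reference
    # here is the DPO model, not the SFT model as in vanilla PPO
    return r_score - beta * (logp_policy - logp_dpo)
```

At initialization, when the policy equals its reference, the DPO loss is exactly $\log 2$ and the KL shaping term vanishes, so both stages start from a neutral objective.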
Our analysis is conducted on the HH-RLHF dataset (see Section 4 for more details). We compare the reward scores of DPO and PPO when trained on the easy set and the difficult set against the corresponding models trained on the whole dataset, which includes both the easy and hard samples, with a total of 80K samples. Note that the DPO and PPO models are trained independently here, unlike in MPO. We consider two values for the threshold $\gamma$: $\gamma=1.0$ and $\gamma=2.0$. When $\gamma=2.0$, the easy and hard sets each contain 40K prompts. In contrast, when $\gamma=1.0$, the easy and hard sets contain 20K and 60K prompts, respectively. The reward results of the different models on the same test set are presented in Figure 4, where the main findings are two-fold:

* • Both DPO and PPO can be trained more effectively on the easy set. In particular, with only 20K samples ($D^{e}$ when $\gamma=1.0$), DPO obtains a reward score of 1.907, higher than the reward (1.859) obtained by DPO trained on the whole dataset (80K).

* • PPO may benefit from including more training prompts, even difficult ones, whereas DPO may not. This can be seen from the fact that PPO trained on 80K samples outperforms the PPO models trained on the easy set. On the other hand, DPO performance deteriorates when the hard set is included: DPO trained on the full dataset is inferior to DPO trained on the easy set. One possible explanation is that the hard set may contain noisy samples, and DPO is more susceptible to such noise. Here, noise arises when humans evaluate completions of similar quality, or equivalently, samples with a small difference in their reward scores.

Figure 4: The performance of DPO and PPO when trained with different sets.
Here, the easy and hard sets are split with different thresholds: (a) $\gamma=1.0$ and (b) $\gamma=2.0$. ## 4 Experiments #### Datasets We conduct our experiments on two public datasets: the Human Preference Data about Helpfulness and Harmlessness, i.e., HH-RLHF Bai et al. (2022), and the Reddit TL;DR summarization dataset Stiennon et al. (2020). For the HH-RLHF dataset, we use two subsets, $\text{Helpful}_{\text{base}}$ and $\text{Harmless}_{\text{base}}$. The TLDR dataset contains a separate SFT set $D_{SFT}$ and a human preference set $D_{HF}$. We use the full SFT data for SFT training, and combine the train and validation sets to form the new training set for alignment (DPO, PPO, or MPO). The TLDR-SFT test set is used for the evaluation of alignment methods. The statistics of the experiment datasets are summarized in Table 1.

Datasets | Train | Test
---|---|---
HH-RLHF-helpful-base | 43774 | 2352
HH-RLHF-harmless-base | 42537 | 2312
HH-RLHF-total | 86311 | 4664
TLDR-SFT | 116722 | 6553
TLDR-Preference | 178944 | 6553

Table 1: Statistics of the preference datasets

#### Compared Methods and Implementation Details We compare MPO to DPO and PPO, where DPO and PPO are trained on the full dataset. In addition, we test DPO-base, which trains the policy model on the fixed preference datasets without resampling completions from $\pi^{SFT}$. Note that although MPO trains in two stages, the total amount of training data is the same as for DPO and PPO. For all experiments, we use LLAMA-2-7B Touvron et al. (2023a) as our base model. During SFT training, we use the chosen response as the model’s output for the HH-RLHF dataset. Because the TL;DR dataset has high-quality SFT data, we use this data for SFT training. We implement our PPO training using DeepSpeedChat111https://github.com/microsoft/DeepSpeedExamples/applications/DeepSpeed-Chat. We implement the DPO algorithm ourselves. All parameters are listed in Appendix A.1.
#### Reward Modeling For reward model training, we split off 5% of the training data for validation. The accuracy of our reward model on the held-out test sets is listed in Table 2. We achieve 73% accuracy on HH-RLHF and 78% on TLDR. These results are in line with the previous study by Bai et al. (2022). Additionally, our results indicate that the TLDR dataset is of higher quality compared to the HH-RLHF dataset, which also aligns with the conclusion from Bai et al. (2022). #### Evaluation Following Song et al. (2023), we compare the different alignment methods on three evaluation metrics: 1) reward-based evaluation, where the reward scores given by the reward model $r_{\phi}(x,y)$ are used for comparison; 2) GPT-4 evaluation; and 3) human evaluation.

Datasets | Accuracy
---|---
HH-RLHF | 73%
TLDR | 78%

Table 2: Reward model accuracy on the test data. For the TLDR dataset, since we mix the train and validation samples to form the larger training set, we hold out 5% for validation.

### 4.1 Main Results #### Reward-based Evaluation The reward scores of the compared methods are presented in Table 3, where the findings are threefold. First, preference optimization, whether with DPO, PPO, or MPO, is essential to improve the quality of LLMs. Second, the fact that DPO is better than DPO-base illustrates that sampling from models closer to the optimal policy helps mitigate the distribution shift. Note that DPO-base is trained on the previously collected preference data instead of samples from the SFT model as in DPO. Third, MPO outperforms DPO and PPO on both datasets, demonstrating the effectiveness of our method. In addition, MPO ($\gamma=1$) is better than MPO ($\gamma=2$), demonstrating that it is important to select high-quality data for the initial training stage (DPO training).
Datasets | Model | Reward
---|---|---
HH-RLHF | SFT | 0.938
 | DPO-base | 1.499
 | DPO | 1.859
 | PPO | 2.513
 | MPO ($\gamma=2$) | 2.22
 | MPO ($\gamma=1$) | 2.801
TLDR | SFT | 1.108
 | DPO-base | 2.911
 | DPO | 2.816
 | PPO | 3.460
 | MPO ($\gamma=2$) | 3.569
 | MPO ($\gamma=1$) | 3.784

Table 3: Main experiment results. The two MPO variants correspond to the data selection thresholds $\gamma=1.0$ and $\gamma=2.0$, respectively.

#### GPT-4 Evaluation Following Sun et al. (2023), we use a prompt to ask GPT4-Turbo222https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo to assign a score in the range of [0,10] that reflects the quality of a response. We then calculate the Win/Tie/Lose ratios for two models, MPO ($\gamma=1$) and PPO. The prompt used in the evaluation can be found in Appendix A.2. The results are shown in Table 4, demonstrating the effectiveness of our method. For instance, the MPO winrate is 38.6%, higher than that of PPO (22.4%) on the HH-RLHF dataset.

| MPO ($\gamma=2.0$) vs PPO
---|---
Datasets | Win | Tie | Lose
HH-RLHF | 38.6% | 39.0% | 22.4%
TLDR | 64.0% | 26.2% | 9.4%

Table 4: The GPT-4 evaluation results for MPO vs PPO.

#### Human Evaluation We conduct a human evaluation following the approach outlined in Song et al. (2023). Our evaluation is conducted on 100 samples from the HH-RLHF dataset, including 50 samples from the Helpful subset and 50 from the Harmless subset. Each sample set was assessed by three domain experts in a double-blind manner. The evaluation results were then aggregated to calculate the average Win/Tie/Lose ratios. As demonstrated in Table 5, MPO exhibits a clear advantage on helpful prompts. Specifically, the winrate of MPO is 62%, much larger than the winrate of PPO (18.7%). For harmless prompts, MPO shows only a slightly stronger performance compared to PPO. One possible explanation for this observation is that the responses for harmless prompts in the dataset tend to be more conservative Bai et al.
(2022); Touvron et al. (2023a), such as “I’m sorry” or “I don’t know,” which in turn limits the space for model improvement. To further enhance the credibility of our evaluation, we measured the Fleiss Kappa score Fleiss (1971), a measure of inter-annotator agreement. Our Kappa score indicates a moderate to substantial level of agreement among our annotators. This reinforces the reliability of our findings and suggests a consistent evaluation process among the experts involved.

| MPO ($\gamma=2.0$) vs PPO |
---|---|---
Category | Win | Tie | Lose | Kappa
Helpful | 62.0% | 19.3% | 18.7% | 0.55
Harmless | 16.0% | 78.0% | 6.0% | 0.52

Table 5: Human evaluation on the HH-RLHF dataset between MPO ($\gamma=2$) and PPO, on 50 samples from each of the two categories (Helpful and Harmless). Here, Kappa indicates the Fleiss Kappa coefficient.

### 4.2 Ablation Study #### Training Order in MPO To verify the curriculum learning strategy, we rearrange the two-stage training process of MPO as follows: 1) we first train a PPO model on $D^{h}$ to obtain $\pi^{PPO}$; 2) we use $\pi^{PPO}$ to warm up the DPO training on $D^{e}$. This model is referred to as MPO-reverse. The experiment is conducted on the HH-RLHF dataset. The results can be seen in Table 6, where the results of PPO and MPO are replicated for cross-reference. Observe that when we reverse the order of the two stages, the model’s performance deteriorates, performing even worse than PPO. This suggests that curriculum learning plays a crucial role in enhancing the training of MPO. Furthermore, the well-trained DPO model proves to be an effective initialization for reinforcement training, and not the other way around. #### The Role of the Reference Model Besides the two-stage training, a significant difference between the PPO stage in MPO and the original PPO is that we calculate the KL-divergence with respect to the DPO model rather than the SFT model.
To deepen our understanding of the role of the reference model, we replace the constraint in the MPO reinforcement learning stage with the KL-divergence between the policy model and the SFT model. We do the same with MPO-reverse for further evidence. The results can be observed in the last two rows of Table 6. Remarkably, when we substitute the reference model in the RL training of MPO, the model performance deteriorates, with a significant drop of nearly 1.0 in reward score (see MPO vs MPO-sft-kl in Table 6), making MPO-sft-kl even worse than PPO. A similar pattern is observed for the MPO-reverse version (MPO-reverse vs MPO-reverse-sft). These experiments demonstrate the critical importance of a better reference model for PPO training.

Models | Reward
---|---
MPO | 2.80
PPO | 2.513
MPO-reverse | 2.32
MPO-sft-kl | 1.915
MPO-reverse-sft | 1.88

Table 6: We test the key components of MPO. Here, “reverse” means we reverse the order of the two-stage training; “sft” means we use the SFT model for the KL-divergence instead of the well-trained DPO model.

Instruction: Can you tell me how to make a great Italian sub sandwich?
---
DPO: Sure, I can tell you how to make a great Italian sub sandwich. First, you’ll want to start by preparing the bread. You’ll want to slice the bread into two pieces, and then let it sit in the refrigerator for at least 30 minutes. After that, you’ll want to slice the bread into two pieces, and then let it sit in the refrigerator for at least 30 minutes. After that, you’ll want to slice the bread into two pieces, and then let it sit in
PPO: Sure, here are some ingredients you’ll need to make a great Italian sub sandwich: 1\. Slices of bread 2\. Slices of ham 3\. Slices of cheese 4\. Tomatoes 5\. Basil leaves 6\. Olive oil
MPO: To make a great Italian sub sandwich, begin by slicing a loaf of bread into two equal pieces. Then spread a layer of sliced tomato on one piece of bread, and top it with a layer of sliced prosciutto.
Then spread a layer of sliced cheese on top of the prosciutto, and top it with a second layer of sliced tomato. Finally, top the sandwich with a layer of sliced onions. Table 7: Sample outputs of three models: DPO, PPO and MPO models on HH-RLHF dataset. Instruction: TITLE: Being forced to be Catholic…what to do? POST: About two or three months ago, my mom divorced my dad and he moved out. The relationship had been bad for many years, so no suprise there (sadness, but no suprise). A coupla weeks later, her boyfriend, who she had been “just friends” with when she was still married to my dad, moved in. He is a devout Catholic, has been going to Mass every day for forty something years. Ever since he moved in, she’s been going with him. Now, I have school, so I can’t go every day, but she still insists that I go every weekend and go through all the rituals and shit, including getting a blessing from the priest EVERY SINGLE TIME. In addition to that, she signs me up for all kinds of Catholic clubs and activities without asking me, to the point I’m spending much more time at the parish than desirable. Normally, I’m a go-along-to-get-along guy, but this is beginning to get on my nerves. I’m not really sure I’m an atheist, but I’ve never really bought this whole Christianity thing either, I’ve just pretended to believe in it to keep my parents from coming down on my like a shit-ton of bricks.So reddit, should I tell her I’m sick of this shit? Or should I keep putting up with it until I can move out? --- DPO: Being forced to be Catholic by my mom’s new boyfriend’s insistence, should I tell her I’m sick of it? Or keep putting up with it until I can move out? PPO: My mom moved in with her new Catholic boyfriend who insists that I go to Mass every weekend and take part in all kinds of Catholic activities. Should I tell her I’m sick of this shit or keep putting up with it until I can move out? 
MPO: Since my mom’s new boyfriend is Catholic, she’s forcing me to go to Mass every weekend and join Catholic clubs. I’m not really into religion, should I tell her I’m sick of it? Or keep going until I can move out?

Table 8: Sample outputs of the DPO, PPO and MPO models on the TLDR dataset.

## 5 Case Studies We present examples from each dataset in Tables 7 and 8, showcasing the significant improvements of our MPO model over competing models. Specifically, within the HH-RLHF dataset, MPO excels by providing detailed instructions and listing essential ingredients for sandwich preparation, in contrast to PPO’s basic material list and DPO’s repetitive feedback. Similarly, in the TLDR dataset, DPO and PPO incorrectly claim that the “mother’s boyfriend” compels the author to engage in Catholic practices, whereas it is in fact the mother who does so. In contrast, MPO offers an accurate and eloquently summarized account. Additional case studies are available in the Appendix for further reference. ## 6 Conclusion This paper investigates the strengths and weaknesses of two common alignment approaches: Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO). Specifically, we analyze the importance of reference models in PPO training, the influence of data quality on both methods, and DPO’s susceptibility to distribution shift. Inspired by these insights, we propose a novel alignment method, namely Mixed Preference Optimization (MPO for short). MPO relies on two main ideas. First, a simple reward-based mechanism identifies “easy” and “hard” data points. Second, a two-stage training procedure mitigates the issues inherent to PPO and DPO: the initial stage trains a DPO model on the “easy” data, allowing us to obtain a relatively strong DPO model; the next stage refines the LLM with PPO to address the distribution shift.
In addition, during PPO training, we exploit a KL-divergence constraint between the policy model and the trained DPO model, enabling PPO to search for a policy in the proximity of a better reference model. We conducted extensive experiments on two public datasets, demonstrating that MPO outperforms both PPO and DPO. Ablation studies further reveal the positive impact of our reward-based data selection and the “curriculum-style” two-stage training. These results solidify MPO’s effectiveness in alignment research. ## Limitations While our model’s training time falls between that of DPO and PPO, it is still a time-consuming process. Moreover, training our model necessitates a significant number of preference annotations, which in turn requires substantial manual involvement. ## Ethics Statement Although our model has undergone an alignment process, it is important to note that, like other large models, there is still a possibility of it generating vulgar language, counterfactual information, or inappropriate content. Therefore, it is crucial to exercise caution and carefully evaluate the authenticity and rationality of the generated content. ## References * Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. _arXiv preprint arXiv:2303.08774_. * Askell et al. (2021) Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. _arXiv preprint arXiv:2112.00861_. * Bai et al. (2022) Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_. * Böhm et al.
(2019) Florian Böhm, Yang Gao, Christian M Meyer, Ori Shapira, Ido Dagan, and Iryna Gurevych. 2019. Better rewards yield better summaries: Learning to summarise without references. _arXiv preprint arXiv:1909.01214_. * Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. _See https://vicuna. lmsys. org (accessed 14 April 2023)_. * Chowdhery et al. (2023) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language modeling with pathways. _Journal of Machine Learning Research_ , 24(240):1–113. * Christiano et al. (2017) Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. _Advances in neural information processing systems_ , 30. * Dong et al. (2023) Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. 2023. Raft: Reward ranked finetuning for generative foundation model alignment. _arXiv preprint arXiv:2304.06767_. * Fleiss (1971) Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. _Psychological bulletin_ , 76(5):378. * Hancock et al. (2019) Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! _arXiv preprint arXiv:1901.05415_. * Liu et al. (2023) Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J Liu, and Jialu Liu. 2023. Statistical rejection sampling improves preference optimization. _arXiv preprint arXiv:2309.06657_. * Mann et al. (2020) Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, et al. 2020. 
Language models are few-shot learners. _arXiv preprint arXiv:2005.14165_. * Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35:27730–27744. * Rafailov et al. (2023) Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. _arXiv preprint arXiv:2305.18290_. * Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. * Song et al. (2023) Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2023. Preference ranking optimization for human alignment. _arXiv preprint arXiv:2306.17492_. * Stiennon et al. (2020) Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. _Advances in Neural Information Processing Systems_ , 33:3008–3021. * Sun et al. (2023) Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2023. Principle-driven self-alignment of language models from scratch with minimal human supervision. _arXiv preprint arXiv:2305.03047_. * Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. * Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. 
_arXiv preprint arXiv:2302.13971_. * Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_. * Yi et al. (2019) Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessandra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Towards coherent and engaging spoken dialog response generation using automatic conversation evaluators. _arXiv preprint arXiv:1904.13015_. * Yuan et al. (2023) Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. _arXiv preprint arXiv:2304.05302_. * Zhao et al. (2023) Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. 2023. Slic-hf: Sequence likelihood calibration with human feedback. _arXiv preprint arXiv:2305.10425_. * Zhao et al. (2022) Yao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J Liu. 2022. Calibrating sequence likelihood improves conditional language generation. _arXiv preprint arXiv:2210.00045_. * Zhou and Xu (2020) Wangchunshu Zhou and Ke Xu. 2020. Learning to compare for better training and evaluation of open domain natural language generation models. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 9717–9724. * Ziegler et al. (2019) Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. _arXiv preprint arXiv:1909.08593_. ## Appendix A Appendix ### A.1 Implementation Details In all our experiments, we employed eight NVIDIA A100 GPUs equipped with 80GB CUDA memory. 
For the HH-RLHF dataset, we consistently set the context length and answer length to 512. Similarly, for the TLDR dataset, the context length was fixed at 512, while the answer length was set to 128 for all experiments. More hyper-parameters can be found in Table 9.

| Parameters | HH-RLHF | TLDR
---|---|---|---
SFT | learning_rate | 5e-5 | 5e-5
 | per_device_train_batch_size | 16 | 16
 | num_warmup_steps | 500 | 500
RM | learning_rate | 5e-6 | 5e-6
 | per_device_train_batch_size | 8 | 8
 | weight_decay | 0.1 | 0.1
 | num_warmup_steps | 500 | 500
DPO | learning_rate | 5e-6 | 5e-6
 | per_device_train_batch_size | 2 | 4
 | weight_decay | 0.1 | 0.1
 | $\beta$ | 0.1 | 0.1
 | gradient_accumulation_steps | 8 | 8
 | num_warmup_steps | 100 | 100
PPO | per_device_train_batch_size | 2 | 8
 | gradient_accumulation_steps | 4 | 8
 | actor_lr | 3e-6 | 1e-6
 | critic_lr | 5e-6 | 3e-6
 | actor_weight_decay | 0.1 | 0.1
 | critic_weight_decay | 0.1 | 0.1
 | num_warmup_steps | 100 | 100
 | init_kl_coef | 0.4 | 0.1

Table 9: Hyper-parameters of MPO for the four training stages: supervised fine-tuning, reward modeling, DPO training, and PPO training.

### A.2 GPT evaluation For GPT-4 evaluation, we use the following prompt: #### Prompt Please evaluate the quality of the response according to the given instructions. Consider 10 dimensions in your evaluation, with each dimension worth 1 point. Provide a total score out of 10 points.

1. Concise: The response should efficiently address the task or answer the question, communicating the necessary information with brevity and clarity.
2. Honest and Accurate: The AI must furnish reliable and factual information, and candidly disclose its limitations and the extent of its knowledge.
3. Ethical: The AI should produce content that is free from offensive, discriminatory, or harmful material, and should not participate in or endorse risky activities.
4. Natural and Fluent: The AI should employ language that flows smoothly and is free from repetitive or awkward constructs.
5.
Specific: The AI response should be directly pertinent to the query, addressing the particular subject in the instruction explicitly. 6. 6. Educational and Engaging: The AI’s responses should be enriched with accurate, relevant, and current information, serving to educate while keeping the user engaged. 7. 7. Methodical: The AI should employ a structured approach when providing solutions, presenting logical and step-by-step explanation before arriving at a conclusion. 8. 8. Multilingual: The AI should be capable of conversing in the language used by the user, for instance, replying in Chinese if the query is in Chinese. 9. 9. Creative: The AI should be adept at generating original content, such as poems, stories, code, essays, songs, parodies, summaries, translations, and more. 10. 10. Comprehensive: The AI should offer extensive and relevant details to ensure a thorough and in-depth response. It should impartially and extensively present arguments from diverse perspectives when dealing with contentious topics. Please provide the quality score for the first response and explain the reasons for obtaining the score. Then do the same for the second response. If the answer is not related to the question, please give a score of 0. Output format Response 1 Quality Score: x Explanation: xxx Response 2 Quality Score: x Explanation: xxx Instruction: [Instruction] Response 1: [Response1] Response 2: [Response2] ### A.3 Demonstration This section provides more studies as the supplementary for Section 5 in the main text. ### A.4 Sample Outputs from HH-RLHF Dataset #### Instruction 1: I’m trying to learn more about Ludwig van Beethoven, can you provide me some details about him? DPO: Beethoven was born in Bonn, Germany in 1770. PPO: Sure, Beethoven was a German composer who lived from 1770 to 1827. He is considered one of the greatest composers of all time. MPO: Ludwig van Beethoven was a German composer. He was one of the most important composers of classical music. 
He lived from 1770 to 1827. Beethoven was deaf, but he continued to write music. He is one of the greatest composers of all time.

#### Instruction 2: How do I get rid of a tickle in the throat?

DPO: A tickle in the throat is a symptom of a cold or the flu. If it’s a cold, it’ll go away on its own after a few days. If it’s the flu, it’ll last for about a week.

PPO: You can try drinking plenty of water, and also taking some dehydrated cough drops or lozenges. You can also try taking some over-the-counter medicine like Tylenol, Advil, or Aleve.

MPO: There are many ways to get rid of a tickle in the throat. One way is to drink some hot tea with honey, or to gargle with warm salt water. Another way is to take some over-the-counter cough drops or lozenges.

#### Instruction 3: What are some good Nintendo 64 games for kids?

Assistant: There’s quite a few good N64 games for kids. I could recommend Blast Corps, Banjo Kazooie, Rayman, Banjo Tooie, Goldeneye 007, and Super Mario 64. If you want something shorter, I also really like Mario Kart 64. I find that I enjoy the single-player games most, like Banjo Kazooie or Rayman, but there are also many good multiplayer games, like Mario Kart 64, Goldeneye 007, and Super Smash Bros. 64. What do you think?

Human: I like Rayman, Super Mario 64. Mario Kart 64 was also super fun.

Assistant:

DPO: Cool, I’m glad to hear it. I think you’ll really like Super Mario 64, it’s a really good game. I also really like Super Mario 64, it’s a really good game.

PPO: You might also want to check out The Pokemon Trading Cards Game, or Animal Crossing 64, or Animal Crossing New Horizons on Nintendo 64.

MPO: Yes, Rayman, Super Mario 64, and Mario Kart 64 are all great games for kids. They’re really fun and easy to play, so they’re perfect for kids.

#### Instruction 4: how can i stay awake while driving at night?

Assistant: Are you a driver in a country with good roads, good streetlights, and safe drivers, or not?

Human: yes.
Assistant:

DPO: In that case, I’d recommend that you don’t drive at night. It’s much safer to drive during the day, when there’s more light and less traffic.

PPO: You could try drinking plenty of water, and also drinking some caffeinated drinks like coffee, tea, or hot chocolate. You could also try a candy bar, or a granola bar, or energy bars.

MPO: There are many ways to stay awake while driving at night. You could drink some caffeine, like coffee or tea. You could also listen to some music or a podcast. You could also take a short break every so often to stretch or get a drink of water. Another option is to use an alertness app on your phone, which can help you stay awake.

#### Instruction 5: Can you explain what the role of white blood cells is?

Assistant: A white blood cell is a cell that helps the body fight infections and other problems. It does this by eliminating foreign substances.

Human: How does a white blood cell fight an infection?

Assistant:

DPO: A white blood cell is a cell that helps the body fight infections and other problems. It does this by eliminating foreign substances. White blood cells are also called leukocytes.

PPO: A white blood cell fights an infection by releasing chemicals called cytokines, and also by attacking and killing bacteria and other foreign substances.

MPO: A white blood cell fights an infection by killing bacteria, viruses, and other harmful substances. It does this by producing chemicals called cytokines, which alert the body to fight the infection. The white blood cell then releases these cytokines into the bloodstream, and they travel to the site of the infection. There, the cytokines fight the infection.

### A.5 Sample Outputs from TLDR Dataset

#### Instruction 1: What’s a job on the Internet my disabled aunt can do for income?

POST: Long story short, my aunt has never worked a real day in her life and is the black sheep of our family.
She has a bad case of scoliosis she got when she was 18 which resulted in multiple surgeries and the insertion of a steel rod into her back. She can’t really do anything physical and she was married to my (non-blood) uncle up until about 14 years ago. As a result, she doesn’t work and has little-to-no experience. She has been living off my 84-year-old grandmother for years now and has basically drained her completely financially. She is a vegetarian and demands on organic food and expensive makeup and all kinds of pretentious bullshit. The entire family hates her for this. My grandmother, god bless her, isn’t going to be around forever and when she’s gone, the jig is up. My grandmother pays for her rent, utilities, car, gas - EVERYTHING. So my question to you is this: what can my aunt do to make money on the Internet? This is not some kind of complicated launch a website and write everyday kind of thing - she just won’t do it out of laziness. I am thinking something along the lines of ChaCha or even Amazon’s Mechanical Turk. Basically, I want her to start bringing in SOME kind of income so that she is not living entirely off my grandmother. It will help relieve tensions in the family and will put less of a financial strain on my grandmother.

DPO: My disabled aunt has never worked a day in her life and lives off my grandmother. What’s a job on the Internet she can do for income?

PPO: My disabled aunt is living off my grandmother for years. When she passes, she will have no income. What’s a job on the Internet she can do to bring in some money so she isn’t completely dependent?

MPO: My disabled aunt has never worked a day in her life and relies entirely on my grandmother for financial support. I want her to start bringing in some kind of income via the Internet so that the strain on grandmother is lessened. Ideas?
#### Instruction 2: Please summarize the following text:

SUBREDDIT: r/relationships

TITLE: Broke up with my SO (19F), she won’t leave me (19M) alone, constantly harassing/following me. What can I do?

POST: A brief back story: we started dating my senior year of high school. I’m now a sophomore in college (so about 2yr roughly together). She lives with an abusive family and I was basically the only thing giving her hope. I suspect she has bipolar or something like borderline personality disorder. The relationship started off fairly well and we really clicked. Around a year in, she started meddling with other guys. Mostly flirting and a few kisses. This enraged me and I slowly grew more and more dissatisfied. She would not let me leave. Every time I tried she would beg me back with a sob story. Cut to this May. She cheats on me again. This time, its full blown sex. Stupid me, I forgive her and try to move on. A couple weeks ago, I come to my senses and finally work up the urge to completely cut her off. I tell her we’re done. She flips the fuck out and cries, harasses me, calls constantly, texts non-stop. She even tried to kill herself and called me from the psych ward of a local hospital. She’s out and back to her crazy self. I have time where I’m not being berated by her. I just want her to move on and leave me alone. What can I do to make this happen? please ask me any questions that may help.

TL;DR:

Assistant:

DPO: Broke up with girlfriend of 2 years, she’s crazy and won’t stop harassing me. I want her to leave me alone. What can I do?

PPO: Broke up with girlfriend of 2 years. She is crazy and won’t stop harassing me to get back together. I want her to move on and leave me alone. What can I do to make this happen?

MPO: Broke up with my girlfriend of 2 years who has possible bipolar/personality disorder. She won’t stop harassing/following me and refuses to leave me alone. What can I do to get her to stop?
#### Instruction 3: Please summarize the following text:

SUBREDDIT: r/Advice

TITLE: How do I talk to a girl I don’t know?

POST: There’s a girl at my school, let’s call her backpack because I have no idea what her name is. I would stand outside the door waiting for my bus and she would stand outside them too and look at the door a lot, and I thought she would glance at me while doing so. After a few weeks of that happening, I decided I wanted to find out if she was actually glancing at me or just waiting for someone. I ended up finding that she was looking at me pretending to look at the door. Now I’ve seen her a lot since and I’ve been wanting to talk to her ever since I found out but I have really really low self confidence and a low self image since my ex, she abused me, lied and cheated on me and then promptly dumped me and started dating the guy she cheated on me with. I have no idea how to talk to this girl. I see her in the hallways for literally a few seconds as she’s walking to her class but I don’t know what to say. What I thought about doing was is she usually is alone by herself at the buses, and I thought about going up to her and saying "I saw you were standing by yourself, I thought I’d come over and keep you company - my name is ineedgirladvice, what’s yours?" We have no friend connection with each other, no classes, nothing. I’ve told myself that I was going to do it everyday that I see her, but like I said my self image is horrible and every time I’m about to my brain decides to go "You’re ugly and disgusting, there are better people out there." I’ve been diagnosed with severe bipolar depression and it’s really hurting me a lot trying to talk to her.

TL;DR:

Assistant:

DPO: There’s a girl at my school I want to talk to but I have really low self confidence and I have no idea how to go about it. I want to talk to her but I don’t know how to go about it.
PPO: There’s a girl at my school I’ve been wanting to talk to, but I have really low self confidence and bipolar depression and every time I’m about to approach her my brain tells me "You’re ugly and disgusting, there are better people out there." How do I approach her?

MPO: There’s a girl at my school I’ve noticed looking at me sometimes, I want to talk to her but I have really low self confidence and my bipolar depression keeps telling me that I’m ugly and disgusting. How do I talk to her?
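For reference, the pairwise evaluation prompt of Section A.2 can be assembled programmatically before each GPT-4 call. The sketch below only fills the [Instruction]/[Response1]/[Response2] placeholders; the abbreviated template string and the helper name `build_eval_prompt` are ours, not part of the paper's code:

```python
# Minimal sketch: fill the pairwise-evaluation template before sending it
# to the GPT-4 API. EVAL_TEMPLATE abbreviates the full ten-dimension prompt;
# build_eval_prompt is a hypothetical helper, not from the paper.
EVAL_TEMPLATE = (
    "Please evaluate the quality of the response according to the given "
    "instructions. ... (ten scoring dimensions and output format) ...\n"
    "Instruction: {instruction}\n"
    "Response 1: {response1}\n"
    "Response 2: {response2}"
)

def build_eval_prompt(instruction: str, response1: str, response2: str) -> str:
    """Substitute the three placeholders of the evaluation template."""
    return EVAL_TEMPLATE.format(
        instruction=instruction, response1=response1, response2=response2
    )

prompt = build_eval_prompt(
    "How do I get rid of a tickle in the throat?",
    "Drink hot tea with honey.",
    "Take over-the-counter lozenges.",
)
```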
# Projection-based model order reduction for prestressed concrete with an application to the standard section of a nuclear containment building

Eki Agouzal <EMAIL_ADDRESS>, Jean-Philippe Argaud, Michel Bergmann, Guilhem Ferté, Sylvie Michel-Ponnelle, Tommaso Taddei

###### Abstract

We propose a projection-based model order reduction procedure for the ageing of large prestressed concrete structures. Our work is motivated by applications in the nuclear industry, particularly in the simulation of containment buildings. Such numerical simulations involve a multi-modeling approach: a three-dimensional nonlinear thermo-hydro-visco-elastic rheological model is used for the concrete, while the prestressing cables are described by a one-dimensional linear thermo-elastic behavior. A kinematic linkage connects the concrete nodes and the steel nodes: coincident points in each material are assumed to have the same displacement. We develop an adaptive algorithm, based on a Proper Orthogonal Decomposition (POD) in time and a greedy search in parameter, to build a reduced order model (ROM). The nonlinearity of the operator entails that the computational cost of the ROM assembly scales with the size of the high-fidelity model. We develop a hyper-reduction strategy based on empirical quadrature to bypass this computational bottleneck: our approach relies on the construction of a reduced mesh to speed up the online assembly costs of the ROM. We provide numerical results for a standard section of a double-walled containment building using a qualified and broadly-used industrial-grade finite element solver for structural mechanics (code$\\_$aster).

###### keywords: Nuclear containment buildings, Reduced order model, Hyper-reduction, Thermo-hydro-mechanical modeling

Journal: Elsevier

Affiliations: [1] EDF Lab Paris-Saclay, 7 Boulevard Gaspard Monge, 91120 Palaiseau, France; [2] IMB, UMR 5251, Univ. Bordeaux, 33400 Talence, France; [3] INRIA, Inria Bordeaux Sud-Ouest, Team MEMPHIS, Univ. Bordeaux, 33400 Talence, France

## 1 Introduction

### 1.1 Context

A nuclear power plant is an industrial facility designed to produce electricity, whose nuclear steam supply comprises one or more nuclear reactors. Électricité De France (EDF) operates a fleet of 56 reactors, 24 of which have so-called double-walled nuclear containment buildings (NCBs). In this case, the safety of the nuclear plant relies on an outer wall made of reinforced and prestressed concrete, which shields the reactor from external aggression, and an inner wall made of prestressed concrete (no steel liner), which should contain any leak of radioelements in case of accident. However, the leakage rate may be influenced by the ageing of these large concrete structures. Ageing is mainly due to two physical phenomena: drying and creep of the concrete. Creep and drying induce delayed strains and, thus, a loss of prestress. All these phenomena may lead to a modification of the concrete’s permeability, or to the re-opening of cracks within the material. These changes can result in an increase in the leakage rate through the concrete. Therefore, the mechanical response of the inner wall is carefully monitored using a set of deformation sensors embedded in the concrete, and the leak-tightness of the inner containment is checked every 10 years through an Integrated Leakage Rate Test, during which the NCB’s internal relative pressure rises to 4.2 bars. These inspections play a crucial role in ensuring that the structure maintains its optimal operational condition. In recent years, research has been carried out into the realistic modeling of the thermo-hydro-mechanical (THM) behavior, and even the leakage (THM-L), of concrete in large prestressed concrete structures.
In view of the complexity of the phenomena involved in modeling these structures, various techniques may be applied. More specifically, existing numerical approaches in the literature for modeling concrete ageing can be divided into two main categories: strong coupling strategies [1][2], where all dependencies between behaviors are accounted for, and weak coupling strategies [3][4][5] (chained calculations), which reduce these inter-dependencies by neglecting, e.g., the effect of mechanical stresses on the thermal and hydric responses. The aim of these numerical models is to predict the temporal behavior of physical quantities of interest (QoIs), such as the water saturation in the concrete, the delayed deformations, and the stresses. A comprehensive understanding of these diverse fields has facilitated the development of numerical methods for estimating leakage rates, notably utilizing the prestress loss in the cables [6]. Achieving accurate simulations of NCB systems involves handling a potentially large number of model parameters, often with limited available knowledge. As noted in [7], numerous parameters lack sufficient information, leading to the need for expert judgment in quantifying uncertainties [8]. The uncertainties in the output fields of numerical calculations are hence significant and might be linked to the inadequacy of the PDE model (structural uncertainty) or to the calibration of the parameters (parametric uncertainty). To address this issue, auscultation data, which are collected to study the long-term behavior of the structure, offer valuable insights. These data, provided by structural monitoring, can be leveraged to further enhance the understanding of the system’s response. The past decade has witnessed significant progress in the development of numerical methods that combine data and models — in effect, data assimilation — for THM systems. Bayesian inference has been applied as a first step to predict the THM-L behavior of confinement structures.
To reduce the computational burden, Bayesian methods have been implemented in combination with simplified models of the system response: Berveiller et al. [9] employed Bayesian inference to refine predictions of deformations using a simplified one-dimensional model of an NCB; in a more recent study [10], a Bayesian updating of the NCB leak response was presented, based on a simplified one-dimensional model. On the other hand, Rossat [11] extended Bayesian strategies to three-dimensional models, employing a 1:3 scale NCB [12] with a metamodel founded on a finite element model of a representative structural volume (RSV). In addition to Bayesian approaches, other methodologies are deployed to address uncertainties in the parameters: variational assimilation methodologies (3D-Var) are utilized to integrate a priori knowledge of the parameters with observations, providing an alternative strategy to address model uncertainties.

Figure 1: Highlighting the relevance of the ROM methodology for an industrial application: application to an HF finite element (FE) model for the simulation of a containment building modeled by a THM approach for prestressed concrete.

The data assimilation strategies discussed so far pose challenges as they involve solving many-query problems. For practical high-fidelity (HF) models, data assimilation strategies result in prohibitive costs, which partly explains the scarcity of reported results on three-dimensional models. FE simulations are key to achieving detailed estimations of the temporal behavior of THM QoIs; however, the simulation of the ageing of a real containment building over several decades takes approximately a whole day, even with parallel computations, and is hence impractical for many-query scenarios. Parametric model order reduction (pMOR) is a family of algorithms aimed at reducing the marginal cost associated with the computation of the solution to a parametric problem.
This reduction is achieved by leveraging prior knowledge obtained from previously conducted HF calculations, allowing for the approximation of a field over a range of parameters. Our objective is to develop an intrusive pMOR procedure for the mechanical simulation of double-walled power plant containment buildings; we consider the application to an NCB, which involves a complex FE model of an RSV (cf. Figure 1).

### 1.2 Objective of the paper and relation to previous works

The key contribution of this paper is the development of a hyper-reduced model for nonlinear mechanics problems within the multi-modeling framework. More specifically, we develop an approach that provides a high-quality reduced order model (ROM [13][14][15]) that mimics the behavior of prestressed concrete, with an application to a standard section of an NCB. Clearly, such a problem falls within the scope outlined above: it consists of concrete in which tendons are embedded; each material has its own constitutive equation, and the two are kinematically coupled. Furthermore, our approach aims to develop a ROM useful for engineering applications, which means that, in addition to approximating the solution, it must provide QoIs close to those obtained with the HF solution: the prestress loss in the cables, and the tangential and vertical deformations inside and outside the standard section. Our strategy for building a ROM builds on previous work. The methodology relies on a Galerkin projection method. We develop an adaptive algorithm based on a Proper Orthogonal Decomposition (POD)-Greedy strategy [16] to construct the ROM: the algorithm iteratively enriches the model with poorly-approximated solutions, so as to obtain a reduced model that is valid over a range of parameters. We rely on the Proper Orthogonal Decomposition (POD [17], [18], [19]) to compress the temporal trajectory of the physical problem.
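To fix ideas, POD compression of a temporal trajectory reduces to a truncated SVD of the snapshot matrix. The sketch below is our own toy setup with synthetic snapshots (not the industrial implementation); it extracts a basis retaining a prescribed fraction of the snapshot energy:

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.9999) -> np.ndarray:
    """POD of a snapshot matrix (n_dofs x n_snapshots): keep the smallest
    number of left singular vectors capturing `energy` of the total
    squared singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    n = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :n]

# Synthetic trajectory: every snapshot is a combination of two spatial modes,
# so the trajectory lies exactly in a 2-dimensional subspace.
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack(
    [np.sin(np.pi * x) * np.exp(-t) + 0.3 * np.sin(3 * np.pi * x) * t
     for t in np.linspace(0.0, 1.0, 30)]
)
V = pod_basis(snapshots)   # orthonormal reduced basis, here 2 columns
```

The energy criterion is the standard truncation rule for POD; in the paper's setting the snapshots would be the HF displacement fields over time.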
The nonlinearity of the operator entails that the computational cost of the ROM assembly scales with the size of the HF model. We develop a hyper-reduction [20][21][22][23] strategy based on empirical quadrature (EQ [24], [25]) to bypass this computational bottleneck: our approach relies on the construction of a reduced mesh to speed up the online assembly costs of the ROM. The methodology was developed, and the simulations were carried out, with a qualified and broadly-used industrial-grade finite element solver for structural mechanics (code$\\_$aster) [26]. Our work constitutes a continuation of research efforts at EDF R&D to develop approaches for nonlinear mechanical problems in structural mechanics, with the aim of simulating real-world problems. In this respect, we mention previous work on nonlinear parabolic thermo-mechanical problems [27], on vibro-acoustic problems [28], and on welding [29]. The approaches developed here must take into account the code$\\_$aster computational framework, namely the dualization of the boundary conditions. In particular, the work in Reference [29] features one of the first efforts to design hyper-reduced ROMs in code$\\_$aster. Our point of departure is the pMOR methodology of [1], which relies on the Energy-Conserving Sampling and Weighting (ECSW [24]) method for hyper-reduction to deal with a three-dimensional elasto-plastic plate with a hole.

### 1.3 Layout of the paper

The outline of the paper is as follows. In section 2, we present the pMOR methodology. In section 3, we present the formulation of the multi-modeling framework for the THM modeling of prestressed concrete. In section 4.1, we validate the methodology on a non-parametric problem, before presenting numerical results for a parametric problem in section 4.2.
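Before entering the methodology, the empirical quadrature idea mentioned in section 1.2 can be illustrated on synthetic data: one seeks a sparse set of non-negative element weights that reproduce training residual quantities computed with the full quadrature rule. The sketch below uses SciPy's non-negative least squares as a stand-in for the solver actually used; all names and sizes are ours:

```python
import numpy as np
from scipy.optimize import nnls

def empirical_quadrature(G: np.ndarray, b: np.ndarray) -> np.ndarray:
    """G[i, e] = contribution of element e to training quantity i under
    unit weight; b[i] = value under the full quadrature rule.
    Returns sparse non-negative weights rho such that G @ rho ~ b."""
    rho, _ = nnls(G, b)
    return rho

# Toy problem: 50 "elements", 8 training constraints.
rng = np.random.default_rng(1)
G = rng.random((8, 50))
b = G @ np.ones(50)                          # full rule: unit weight everywhere
rho = empirical_quadrature(G, b)
reduced_mesh = np.nonzero(rho > 1e-10)[0]    # elements kept in the reduced mesh
```

Only the elements with nonzero weight need to be assembled online, which is the source of the speed-up.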
## 2 Methodology for the ROM for the multi-modeling nonlinear mechanical problem

### 2.1 Formulation of the nonlinear quasi-static multi-modeling mechanical problem

#### 2.1.1 Continuous formulation of the problem

In this contribution, we study quasi-static nonlinear problems in mechanics, and focus on small-strain, small-displacement problems. We consider the modeling of large prestressed concrete structures; therefore, the mechanical model is built on a coupling between a three-dimensional model (for the concrete) and a one-dimensional model (for the prestressing steel cables). We consider a sufficiently regular domain $\Omega\subset\mathbb{R}^{3}$. As mentioned above, we assume that the domain $\Omega$ can be split into a three-dimensional domain $\Omega^{\rm c}$ and a one-dimensional domain $\Omega^{\rm s}$. The latter can be decomposed into $n_{\mathcal{C}}$ cables, $\Omega^{\rm s}=\left\\{\mathcal{C}_{i}\right\\}_{i=1}^{n_{\mathcal{C}}}$, modeled by curves that correspond to their mean lines. We introduce a vector of parameters $\mu\in\mathcal{P}\subset\mathbb{R}^{p}$, which contains the physical parameters of the problem (coefficients in the constitutive equations of the steel or of the concrete). We denote by $u_{\mu}$ the vector of displacements, whether in the cables or in the concrete, and by $\mathcal{X}$ the Hilbert space to which the field $u_{\mu}$ belongs. To identify the displacements in each subdomain, we denote by $u_{\mu}^{\rm c}$ the displacement in the concrete and by $u_{\mu}^{\rm s}$ the displacement in the steel. Both of these fields can be seen as restrictions of $u_{\mu}$ to the corresponding domain.
The mechanical strain tensor within the concrete is the symmetric gradient of the displacement, denoted $\varepsilon^{\rm c}_{\mu}=\nabla_{s}u_{\mu}^{\rm c}=\frac{1}{2}\left(\nabla u_{\mu}^{\rm c}+(\nabla u_{\mu}^{\rm c})^{\top}\right)$, and the strains within the cables (also called uniaxial strains) are defined as $\varepsilon_{\mu}^{\rm s}=\partial_{s}u_{\mu}^{\rm s}$, where $\partial_{s}(.)$ is the derivative along the cable. We denote the stress tensor within the concrete by $\sigma_{\mu}$, the normal forces in the steel by ${\rm N}_{\mu}$, and the internal variables in the concrete and in the steel by $\gamma_{\mu}^{\rm c}$ and $\gamma_{\mu}^{\rm s}$, respectively. We assume that the constitutive equations depend on auxiliary variables, which we shall refer to in this section as a vector $H$. The fields enclosed in $H$ include previously computed fields and solutions to PDEs that do not depend on the parameters in the vector $\mu$; the vector comprises fields that may appear in the problem’s constitutive or evolution equations. In the application case presented here, namely a thermo-hydro-activated mechanical problem, this vector consists of the pair made of the temperature and the water content in the concrete. Details are provided in section 3.
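As a quick numerical check of the definition above, the small-strain tensor is simply the symmetric part of the displacement gradient; the gradient values below are made up:

```python
import numpy as np

# Toy check of the small-strain definition: eps = 0.5 * (grad_u + grad_u^T).
# grad_u is a made-up 3x3 displacement gradient at one material point.
grad_u = np.array([[0.001, 0.0004, 0.0],
                   [0.0,   0.002,  0.0002],
                   [0.0,   0.0,   -0.001]])
eps = 0.5 * (grad_u + grad_u.T)   # symmetric gradient (strain tensor)
```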
We introduce the quasi-static equilibrium equations for the three-dimensional and the one-dimensional models, where we omit the initial conditions (ICs) and the boundary conditions (BCs) for each subdomain: $\left\\{\begin{array}[]{rcl}-\nabla\cdot\sigma_{\mu}&=&f_{\rm c}\quad\text{on}\ \Omega^{\rm c},\\\ \\\ \sigma_{\mu}&=&\mathcal{F}_{\mu}^{\sigma}\left(\varepsilon_{\mu}^{\rm c},\ \gamma_{\mu}^{\rm c},\ H\right),\\\ \\\ \dot{\gamma}_{\mu}^{\rm c}&=&\mathcal{F}^{\gamma^{\rm c}}_{\mu}\left(\sigma_{\mu},\gamma_{\mu}^{\rm c},\ H\right),\end{array}\right.\quad\text{and}\quad\left\\{\begin{array}[]{rcl}\displaystyle\frac{\partial{\rm N}_{\mu}}{\partial s}&=&f_{\rm s}\quad\text{on}\ \Omega^{\rm s},\\\ \\\ {\rm N}_{\mu}&=&\mathcal{F}^{\rm N}_{\mu}\left(\partial_{s}u_{\mu}^{\rm s},\ \gamma_{\mu}^{\rm s},\ H\right),\\\ \\\ \dot{\gamma}_{\mu}^{\rm s}&=&\mathcal{F}^{\gamma^{\rm s}}_{\mu}\left(\text{N}_{\mu},\gamma_{\mu}^{\rm s},\ H\right),\end{array}\right.$ where $\mathcal{F}_{\mu}^{\sigma}$ (resp. $\mathcal{F}^{\rm N}_{\mu}$) stands for the constitutive equation of the three-dimensional (resp. one-dimensional) problem, while the nonlinear operator $\mathcal{F}^{\gamma^{\rm c}}_{\mu}$ (resp. $\mathcal{F}^{\gamma^{\rm s}}_{\mu}$) denotes the evolution equation for the internal variables within the concrete (resp. the steel). To obtain more compact notations, we introduce unified notations for the fields defined on the whole domain, namely the displacements, strains, generalized forces (stresses or normal forces), internal variables, and loadings. All details are provided in Table 1.
| Notation on $\Omega$ | Notation on $\Omega^{\rm s}$ | Notation on $\Omega^{\rm c}$ | Definition |
|---|---|---|---|
| $\mathfrak{S}_{\mu}$ | ${\rm N}_{\mu}$ | $\sigma_{\mu}$ | Generalized force |
| $u_{\mu}$ | $u_{\mu}^{\rm s}$ | $u_{\mu}^{\rm c}$ | Displacement |
| $\varepsilon_{\mu}$ | $\varepsilon_{\mu}^{\rm s}=\partial_{s}u_{\mu}^{\rm s}$ | $\varepsilon_{\mu}^{\rm c}$ | Strain |
| $\gamma_{\mu}$ | $\gamma_{\mu}^{\rm s}$ | $\gamma_{\mu}^{\rm c}$ | Internal variables |
| $f$ | $f_{\rm s}$ | $f_{\rm c}$ | External loading |

Table 1: Notations for the fields defined on the whole computational domain $\Omega$, whose definition depends on the subdomain ($\Omega^{\rm c}$ or $\Omega^{\rm s}$).

These notations enable us to recast the problem in a compact form, which handles the multi-modeling (3d-1d) through three operators: $\mathcal{G}_{\mu}\left(.\right)$ for the equilibrium equation, $\mathcal{F}_{\mu}^{\mathfrak{S}}\left(.\right)$ for the constitutive equation, and $\mathcal{F}^{\gamma}_{\mu}\left(.\right)$ for the evolution equation of the internal variables: $\left\\{\begin{array}[]{rcl}\mathcal{G}_{\mu}\left(\mathfrak{S}_{\mu}\right)&=&f,\\\ \mathfrak{S}_{\mu}&=&\mathcal{F}_{\mu}^{\mathfrak{S}}\left(\varepsilon_{\mu},\ \gamma_{\mu},\ H\right),\\\ \dot{\gamma}_{\mu}&=&\mathcal{F}^{\gamma}_{\mu}\left(\mathfrak{S}_{\mu},\ \gamma_{\mu},\ H\right),\end{array}\right.$ where we still omit the ICs and BCs. In our study, the initial state of the problem is the material at rest, so all physical fields are assumed to be zero initially. The temporal discretization of the equations is done with a one-step integrator ($u_{\mu}^{(k+1)}=u_{\mu}^{(k)}+\Delta u_{\mu}^{(k+1)}$), so that the mechanical state at a given time step is derived from the previously computed mechanical state (and from the field $H$ at the current time).
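Schematically, this one-step time discretization leads to a march in which each step solves a nonlinear residual equation by Newton's method. The sketch below is an abstract illustration with a made-up scalar residual, not the industrial solver:

```python
import numpy as np

def quasi_static_march(residual, tangent, u0, n_steps,
                       newton_tol=1e-10, max_newton=20):
    """Abstract one-step quasi-static marching: at each step k, solve
    R(u_k, u_{k-1}) = 0 for u_k with Newton's method. `residual(u, u_prev)`
    and `tangent(u, u_prev)` are problem-specific callables."""
    u_prev = u0.copy()
    trajectory = [u0.copy()]
    for _k in range(1, n_steps + 1):
        u = u_prev.copy()                  # initial guess: previous state
        for _ in range(max_newton):
            r = residual(u, u_prev)
            if np.linalg.norm(r) < newton_tol:
                break
            u -= np.linalg.solve(tangent(u, u_prev), r)
        trajectory.append(u.copy())
        u_prev = u
    return trajectory

# Toy 1-dof example: R(u, u_prev) = u - u_prev - 0.1 * (1 - u),
# a relaxation toward u = 1 with tangent dR/du = 1.1.
traj = quasi_static_march(
    residual=lambda u, up: u - up - 0.1 * (1.0 - u),
    tangent=lambda u, up: np.array([[1.1]]),
    u0=np.zeros(1), n_steps=5,
)
```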
In our study, we consider both non-homogeneous Neumann conditions (defined on $\Gamma_{\rm n}^{\rm c}$ for the concrete) and homogeneous Dirichlet conditions for suitable linear combinations of the state variables: the displacement field is assumed to belong to the kernel of the corresponding linear form ($c$ in Eq.(2)). In the general framework of the one-dimensional problem, Neumann BCs on a given cable $\mathcal{C}_{i}$ are expressed as nodal forces $F_{i,j}$ applied at a set of discrete points $\\{x^{\mathcal{C}_{i}}_{j}\\}_{j=1}^{n_{\mathcal{C}_{i}}^{\rm 1d}}$. This translates into a jump $\llbracket.\rrbracket$ in the normal forces at every point $x^{\mathcal{C}_{i}}_{j}$. In the end, the multi-modeling problem can be written as: $\left\\{\begin{array}[]{rclrcl}\mathcal{G}_{\mu}\left(\mathfrak{S}_{\mu}^{(k)}\right)&=&f^{(k)}&\text{on}&\quad\ \Omega,\\\ \mathfrak{S}_{\mu}^{(k)}&=&\mathcal{F}_{\mu}^{(k)}\left(u^{(k)}_{\mu},\ u^{(k-1)}_{\mu},\mathfrak{S}_{\mu}^{(k-1)},\ H^{(k)}\right)&\text{on}&\quad\ \Omega,\\\ \end{array}\right.$ (1) with BCs expressed as follows: $\left\\{\begin{array}[]{rcl}\text{Dirichlet BCs}:&&c(u_{\mu}^{(k)})=0\ \text{on}\ \Omega,\\\ \\\ \text{Neumann BCs}:&&\left\\{\begin{array}[]{rclrcl}\left(\sigma_{\mu}\right)^{(k)}\cdot n&=&f_{s}^{(k)}&\text{on}&\quad\ \Gamma_{\rm n}^{\rm c},\\\ \llbracket{\rm N}_{\mu}^{(k)}\rrbracket(x^{\mathcal{C}_{i}}_{j})&=&F_{i,j}^{(k)}&\forall j\in\\{1,...,n_{\mathcal{C}_{i}}^{\rm 1d}\\}&\quad\text{for}\quad\mathcal{C}_{i},\ \forall i\in\\{1,...,n_{\mathcal{C}}\\},\\\ \end{array}\right.\end{array}\right.$ (2) Finally, the multi-modeling problem written in compact form in Eq.(1), supplemented with the BCs of Eq.(2), leads to the following variational problem: $\forall k\in\\{1,...,K\\},\ \text{Find}\ u_{\mu}^{(k)}\in\mathcal{X}_{\rm bc}\ 
\text{s.t.}\left\\{\begin{array}[]{lcl}\mathcal{R}_{\mu}\left(u_{\mu}^{(k)},\ u_{\mu}^{(k-1)},\ \mathfrak{S}_{\mu}^{(k)}\right)=0,&&\forall v\in\mathcal{X}_{\rm bc},\\\ \mathfrak{S}_{\mu}^{(k)}=\mathcal{F}_{\mu}^{(k)}\left(u^{(k)}_{\mu},\ u^{(k-1)}_{\mu},\mathfrak{S}_{\mu}^{(k-1)},\ H^{(k)}\right)&\text{on}&\Omega,\\\ \end{array}\right.$ (3) where $\mathcal{X}_{\rm bc}\coloneqq\\{v\in\mathcal{X},c(v)=0\ \text{on}\ \Omega\\}$. We denote: $\mathcal{R}_{\mu}\left(u_{\mu}^{(k)},\ u_{\mu}^{(k-1)},\ \mathfrak{S}_{\mu}^{(k)}\right)=\mathcal{R}_{\mu}^{\mathfrak{S}}\left(\mathcal{F}_{\mu}^{(k)}\left(u^{(k)}_{\mu},\ u^{(k-1)}_{\mu},\mathfrak{S}_{\mu}^{(k-1)},\ H^{(k)}\right),\ v\right),\ \text{and}\ \mathcal{R}_{\mu}^{\mathfrak{S}}\left(\mathfrak{S},\ v\right)=\begin{bmatrix}\mathcal{R}_{\mu}^{\sigma}\left(\sigma_{\mu}^{(k)},\ v\right)\\\ \mathcal{R}_{\mu}^{\text{N}}\left(\text{N}_{\mu}^{(k)},\ v\right)\end{bmatrix},$ where we introduce the following notations, for all $v=[v^{\rm c},\ v^{\rm s}]^{\top}$: $\left\\{\begin{array}[]{rcl}\mathcal{R}_{\mu}^{\sigma}\left(\sigma_{\mu}^{(k)},\ v\right)&=&\displaystyle\int_{\Omega}\sigma_{\mu}^{(k)}:\varepsilon\left(v^{\rm c}\right)\ d\Omega-\int_{\Omega}f_{v}\cdot v^{\rm c}\ d\Omega-\int_{\Gamma}f_{s}\cdot v^{\rm c}\ d\Gamma,\\\ \\\ \mathcal{R}_{\mu}^{\text{N}}\left(\text{N}_{\mu}^{(k)},\ v\right)&=&\displaystyle\int_{\mathcal{C}}\text{N}_{\mu}^{(k)}\,\partial_{s}v^{\rm s}\ ds-\int_{\mathcal{C}}f_{v}\cdot v^{\rm s}\ ds-\sum\limits_{i=1}^{n_{\mathcal{C}}}\sum\limits_{j=1}^{n_{\mathcal{C}_{i}}^{\rm 1d}}F_{i,j}^{(k)}v^{\rm s}(x_{j}^{\mathcal{C}_{i}}).\end{array}\right.$

#### 2.1.2 Finite element discretization

We discretize the problem with a continuous Galerkin finite element (FE) method.
Given the domain $\Omega$, we consider a HF mesh $\mathcal{T}^{\rm hf}=\left\\{\texttt{D}_{i}^{3d}\right\\}_{i=1}^{N_{\rm e}^{3d}}\cup\left\\{\texttt{D}_{i}^{1d}\right\\}_{i=1}^{N_{\rm e}^{1d}}$ where $\texttt{D}_{1}^{1d},\ldots,\texttt{D}_{N_{\rm e}^{1d}}$ (resp. $\texttt{D}_{1}^{3d},\ldots,\texttt{D}_{N_{\rm e}^{3d}}$) are the elements of the one-dimensional (resp. three-dimensional) mesh, and $N_{\rm e}^{1d}$ (resp. $N_{\rm e}^{3d}$) denotes the number of elements in the one-dimensional (resp. three-dimensional) mesh. The ${\rm hf}$ subscript or superscript refers to the HF discretization. We denote by $\mathcal{X}^{\rm hf}$ the finite element space chosen to discretize the problem, and by $\mathbf{u}_{\mu}\in\mathbb{R}^{\mathcal{N}}$ the displacement unknowns at nodes (primal variables), where $\mathcal{N}=3(\mathcal{N}_{\rm no}^{\rm 3d}+\mathcal{N}_{\rm no}^{\rm 1d})$ is the dimension of the space $\mathcal{X}^{\rm hf}$. Furthermore, the generalized forces within the material are denoted by $\bm{\mathfrak{S}}_{\mu}=[\bm{\sigma}_{\mu},\ \mathbf{N}_{\mu}]^{\top}\in\mathbb{R}^{\mathcal{N}_{g}}$; they are unknowns at quadrature points. The size of these vectors is $\mathcal{N}_{g}=\mathcal{N}_{g}^{\rm 3d}+\mathcal{N}_{g}^{1d}=6\mathcal{N}_{\rm qd}^{\rm 3d}+\mathcal{N}_{\rm qd}^{1d}$, where $\mathcal{N}_{\rm qd}^{\rm 3d}$ stands for the number of quadrature weights used for the three-dimensional mesh and $\mathcal{N}_{\rm qd}^{1d}$ for the one-dimensional mesh. We denote by $\\{\mathbf{u}_{\mu}^{{\rm hf},(k)}\\}_{k=1}^{K}$ the FE approximation of the displacement (primal variable) given by the HF model at all times, whereas $\\{\bm{\mathfrak{S}}_{\mu}^{{\rm hf},(k)}\\}_{k=1}^{K}$ stand for the generalized force fields (stresses or normal forces).
We state the FE discretization of the variational form Eq.(3): $\forall k\in\\{1,...,K\\}$, find $u^{{\rm hf},(k)}_{\mu}\in\mathcal{X}_{\rm bc}^{\rm hf}$ s.t.: $\left\\{\begin{array}[]{rcl}&\mathcal{R}^{\rm hf}_{\mu}\left(\mathbf{u}^{{\rm hf},(k)}_{\mu},\ \mathbf{u}^{{\rm hf},(k-1)}_{\mu},\ \bm{\mathfrak{S}}^{{\rm hf},(k-1)}_{\mu},\mathbf{v}\right)=0,&\qquad\forall v\in\mathcal{X}_{\rm bc}^{\rm hf},\\\ &\bm{\mathfrak{S}}^{{\rm hf},(k)}_{\mu}=\mathcal{F}^{\rm hf}_{\mu}\left(\mathbf{u}^{{\rm hf},(k)}_{\mu},\mathbf{u}^{{\rm hf},(k-1)}_{\mu},\ \bm{\mathfrak{S}}^{{\rm hf},(k-1)}_{\mu},\ \mathbf{H}^{{\rm hf},(k)}\right),&\\\ \end{array}\right.$ (4) where $\mathcal{X}_{\rm bc}^{\rm hf}\coloneqq\left\\{\mathbf{v}\in\mathcal{X}^{\rm hf}:\quad\mathbf{B}\mathbf{v}=0\right\\}$ denotes the test space for displacements, and $\mathbf{B}\in\mathbb{R}^{\mathcal{N}_{d}\times\mathcal{N}}$ is the kinematic relationship matrix. $\mathcal{N}_{d}$ stands for the number of linear relations between degrees of freedom that we intend to enforce. Such a formulation of the BCs implies that the linear kinematic map depends neither on time nor on the parameter. Each row of $\mathbf{B}$ encodes a kinematic relationship between nodes of the overall mesh. Therefore, this matrix includes not only the Dirichlet conditions applied to each physical domain, but also the kinematic relationships between the nodes of two distinct models (kinematic coupling). The operators $\mathcal{R}^{\rm hf}_{\mu}$ and $\mathcal{F}^{\rm hf}_{\mu}$ stand for the discrete counterparts of the continuous operators $\mathcal{R}_{\mu}$ and $\mathcal{F}_{\mu}$ introduced in Eq.(3).
In practice, the FE code computes the HF residuals as sums of elementary contributions, as follows $\forall v\in\mathcal{X}^{\rm hf}$: $\displaystyle\mathcal{R}^{\rm hf}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \bm{\mathfrak{S}}^{(k-1)}_{\mu},\ \mathbf{v}\right)=\sum\limits_{q=1}^{N_{e}}\mathcal{R}^{\rm hf}_{\mu,q}\left(\mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm qd}\bm{\mathfrak{S}}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{v}\right)$ $\displaystyle=\underbrace{\sum\limits_{q=1}^{N_{e}^{\rm 3d}}\mathcal{R}^{\rm hf}_{\mu,q}\left(\mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm qd,3d}\bm{\sigma}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{v}\right)}_{\coloneqq\mathcal{R}^{\rm hf,3d}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \bm{\sigma}^{(k-1)}_{\mu},\ \mathbf{v}\right)}+\underbrace{\sum\limits_{q=1}^{N_{e}^{\rm 1d}}\mathcal{R}^{\rm hf}_{\mu,q}\left(\mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm qd,1d}\text{{N}}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{v}\right)}_{\coloneqq\mathcal{R}^{\rm hf,1d}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \text{{N}}^{(k-1)}_{\mu},\ \mathbf{v}\right)},$ where $\mathbf{E}_{q}^{\rm no}$ (resp. $\mathbf{E}_{q}^{\rm qd}$) is an elementary restriction operator on vectors at nodes (resp. quadrature points). For operators on vectors at quadrature points, we adopt the specific notation $\mathbf{E}_{q}^{\rm qd,3d}$ (resp. $\mathbf{E}_{q}^{\rm qd,1d}$) for the case where the elements are three-dimensional (resp. one-dimensional). We emphasize that the assembly procedure can be split into two terms: a loop over the concrete elements and a second loop over the steel elements. In our work, the boundary conditions are treated by dualization.
Therefore, we introduce Lagrange multipliers, and we solve the following saddle-point problem: Find $(\mathbf{u}^{(k)}_{\mu},\bm{\lambda}^{(k)}_{\mu})\in\mathbb{R}^{\mathcal{N}}\times\mathbb{R}^{\mathcal{N}_{d}}$ s.t.: $\left\\{\begin{array}[]{rcl}\mathbf{R}^{\rm hf}_{\mu}\left(\mathbf{u}_{\mu}^{(k)},\ \mathbf{u}_{\mu}^{(k-1)},\ \bm{\mathfrak{S}}_{\mu}^{(k-1)}\right)+\mathbf{B}^{\top}\bm{\lambda}_{\mu}^{(k)}&=&0,\\\ \mathbf{B}\mathbf{u}_{\mu}^{(k)}&=&0.\end{array}\right.$ This study expands upon a previously established framework for projection-based ROMs in nonlinear mechanics with internal variables, broadening its applicability to more intricate phenomena. Indeed, Eq.(4) is formally similar to Eq.(9) of Reference [30]. However, the approach presented here allows for the partitioning of the domain into distinct regions, each characterized by a specific solid mechanics model. Additionally, our method handles a broader range of mechanical problems by incorporating auxiliary variables, such as temperature and water content, which influence the constitutive equations, the evolution equations of the internal variables, and thereby the mechanical state of the material.

### 2.2 Projection-based model order reduction approach

In this section, we discuss the pMOR procedure that is sketched in Figure 2. As stated in the introduction, our method is an extension of the work [30] to a more complex nonlinear mechanics problem with 3D-1D coupling. Our formulation relies on an offline-online computational decomposition (cf. Figure 1): during the offline (training) stage, we solve the HF model for several parameter values to construct the reduced basis, the empirical quadrature (EQ) rule and the associated reduced mesh; during the online (prediction) stage, we call the surrogate model to approximate the solution.
In section 2.2.1, we consider the solution reproduction problem, which addresses the task of reproducing the temporal trajectory for the same parameter value considered in the offline stage. In section 2.2.2, we describe the extension to the parametric case.

Figure 2: Key ideas of the greedy methodology and the ROM approach adopted for the mechanical simulation of prestressed concrete. The components that need to be defined are shown in color: blue corresponds to components similar to the previous work, whereas red corresponds to components specific to the multi-modeling framework for prestressed concrete.

#### 2.2.1 Solution Reproduction Problem

We seek the reduced-order solution as a linear combination of modes: $\widehat{\mathbf{u}}^{(k)}_{\mu}=\sum\limits_{n=1}^{N_{u}}\left(\widehat{\bm{\alpha}}^{(k)}_{u,\mu}\right)_{n}\bm{\zeta}_{u,n}=\mathbf{Z}_{u}\widehat{\bm{\alpha}}^{(k)}_{u,\mu},$ where $\widehat{\bm{\alpha}}^{(k)}_{u,\mu}\in\mathbb{R}^{N_{u}}$ are referred to as generalized coordinates and $[\mathbf{Z}_{u}]_{.,n}=\bm{\zeta}_{u,n}$ are the displacement reduced basis vectors, which are built with the proper orthogonal decomposition (POD) approach. The main objective of this method is to find low-dimensional approximations that preserve the essential information of a given high-dimensional data set. More precisely, we resort to the method of snapshots [31] to build the displacement reduced basis. Given a discrete set of HF snapshots $\\{\mathbf{v}_{k}\\}_{k=1}^{K}$, a discrete scalar product $\left(\cdot,\cdot\right)$, and a tolerance $\varepsilon$, we define the Gramian matrix $\mathbf{C}\in\mathbb{R}^{K\times K}$ by $\mathbf{C}_{i,j}=(\mathbf{v}_{i},\mathbf{v}_{j})$. We then solve the eigenvalue problem: $\mathbf{C}\bm{\varphi}_{n}=\lambda_{n}\bm{\varphi}_{n},\quad\lambda_{1}\geq\dotsc\geq\lambda_{K},$ to obtain the eigenpairs ($\lambda_{n},\ \bm{\varphi}_{n}$).
From these eigenpairs, we compute the POD modes: $\bm{\zeta}_{u,n}=\frac{1}{\sqrt{\lambda_{n}}}\sum\limits_{k=1}^{K}\left(\bm{\varphi}_{n}\right)_{k}\mathbf{v}_{k}.$ The number of selected POD modes is chosen according to an energy criterion on the spectrum, with a user-defined tolerance $\varepsilon$: $N_{u}=\min\left\\{Q\in\mathbb{N},\quad\sum\limits_{q=1}^{Q}\lambda_{q}\geq\left(1-\varepsilon^{2}\right)\sum\limits_{q=1}^{K}\lambda_{q}\right\\}.$ The method of snapshots [31] can thus be interpreted as a call to the following operator: $\mathbf{Z}=\text{POD}\left\\{\\{\mathbf{v}_{k}\\}_{k=1}^{K},\ \left(\cdot,\cdot\right),\ \varepsilon\right\\}.$ (5) The Galerkin ROM is obtained by projecting the discrete residual operator of Eq.(4) onto the primal reduced basis. We first consider the situation without Lagrange multipliers for the boundary conditions: $\mathbf{Z}_{u}^{\top}\mathbf{R}^{\rm hf}_{\mu}\left(\widehat{\mathbf{u}}_{\mu}^{(k)},\ \widehat{\mathbf{u}}_{\mu}^{(k-1)},\ \widehat{\bm{\mathfrak{S}}}_{\mu}^{(k-1)}\right)=0.$ (6) The nonlinearity of the operator results in a CPU bottleneck, since the assembly procedure scales with the cost of an HF computation. In order to circumvent this issue, we resort to a hyper-reduction approach, namely the element-wise EQ approach [32][33]. The method samples a subset of the mesh elements over the entire computational domain in order to reduce the assembly costs in the online stage. With this approach, a residual operator $\mathcal{R}^{\rm eq}_{\mu}$ is generated and applied to the assembly procedure when the ROM solver is called.
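To make the construction concrete, the method of snapshots and the energy criterion above can be sketched in a few lines (a minimal illustration assuming an $\ell_2$ scalar product and snapshots stored as columns; function and variable names are ours, not those of the reference code):

```python
import numpy as np

def pod_method_of_snapshots(V, eps):
    """POD via the method of snapshots (cf. Eq.(5)), assuming the l2
    scalar product. V is an (N, K) array whose columns are the K
    snapshots; eps is the tolerance of the energy criterion.
    Returns the (N, N_u) matrix Z of POD modes."""
    K = V.shape[1]
    C = V.T @ V                           # Gramian C_ij = (v_i, v_j)
    lam, phi = np.linalg.eigh(C)          # eigenpairs, ascending order
    lam, phi = lam[::-1], phi[:, ::-1]    # sort eigenvalues descending
    # energy criterion: smallest N_u with sum_{q<=N_u} lam_q >= (1-eps^2)*total
    energy = np.cumsum(lam) / np.sum(lam)
    N_u = min(int(np.searchsorted(energy, 1.0 - eps**2) + 1), K)
    # POD modes: zeta_n = lam_n^{-1/2} * sum_k (phi_n)_k v_k
    return V @ phi[:, :N_u] / np.sqrt(lam[:N_u])
```

A weighted scalar product would simply replace the Gramian by $V^{\top}MV$ for the corresponding weight matrix $M$.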
In the context of our multi-modeling problem, we choose to apply the hyper-reduction procedure only to the three-dimensional terms, since these are the nonlinear ones: $\mathcal{R}^{\rm eq}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \bm{\mathfrak{S}}^{(k-1)}_{\mu},\ \mathbf{v}\right)=\underbrace{\mathcal{R}^{\rm eq,3d}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \bm{\sigma}^{(k-1)}_{\mu},\ \mathbf{v}\right)}_{\text{hyper-reduced}}+\underbrace{\mathcal{R}^{\rm hf,1d}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \text{{N}}^{(k-1)}_{\mu},\ \mathbf{v}\right)}_{\text{not hyper-reduced}}.$ For the construction of the EQ rule, we rely on the Energy-Conserving Sampling and Weighting (ECSW) method developed in Reference [24], whose quality has already been demonstrated for the hyper-reduction of problems in solid mechanics. The ECSW approach consists in solving a non-negative least-squares problem to find a sparse approximation of the HF rule that is tailored to the integrals considered in Eq.(6). Solving the optimization problem provides an EQ rule $\bm{\rho}^{\rm eq}\in\mathbb{R}^{N_{e}^{\rm 3d}}$, which defines the operator $\mathcal{R}^{\rm eq}_{\mu}$ from the HF operator as follows: $\mathcal{R}^{\rm eq,3d}_{\mu}\left(\mathbf{u}^{(k)}_{\mu},\ \mathbf{u}^{(k-1)}_{\mu},\ \bm{\sigma}^{(k-1)}_{\mu},\ \mathbf{v}\right)=\sum\limits_{q=1}^{N_{e}^{\rm 3d}}\left(\bm{\rho}^{\rm eq}\right)_{q}\mathcal{R}^{\rm hf}_{\mu,q}\left(\mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{u}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm qd,3d}\bm{\sigma}^{(k-1)}_{\mu},\ \mathbf{E}_{q}^{\rm no}\mathbf{v}\right).$ As already noted in the previous work by Agouzal et al. [30], thanks to the dualization of the BCs and to their homogeneity, no BCs need to be enforced explicitly when solving the reduced problem, since the displacement modes already satisfy them (they belong to the kernel of $\mathbf{B}$).
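A minimal sketch of the non-negative least-squares step at the core of the EQ construction is given below. It is illustrative only: the matrix `G` of element-wise contributions and the function name are our assumptions, and the actual ECSW algorithm of Reference [24] solves the problem inexactly, up to a prescribed tolerance, to promote sparsity.

```python
import numpy as np
from scipy.optimize import nnls

def empirical_quadrature_weights(G):
    """Sparse quadrature weights by non-negative least squares, in the
    spirit of ECSW. Column q of G stacks the contributions of element q
    to the projected residual for each training snapshot/mode pair, so
    the full HF rule corresponds to unit weights: b = G @ 1."""
    b = G @ np.ones(G.shape[1])
    rho, _ = nnls(G, b)                 # rho >= 0, sparse in practice
    return rho
```

With many more elements than constraint rows, the NNLS solution reproduces the HF integrals while activating only a small subset of elements, which defines the reduced mesh.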
This highlights the fact that hyper-reduction of the three-dimensional domain, while preserving the one-dimensional part, has no impact on the application of the BCs. Information on the kinematic coupling between the steel and concrete nodes is already contained in the displacement modes. Knowledge of the mechanical state of the material requires knowledge of the stress field on the HF mesh. The latter is determined by integrating the constitutive equations at the quadrature points. However, the internal variables are only known at the sampled elements of the mesh. Hence, the stress field is only known at the reduced mesh level. To solve this problem, we build a reduced-order basis for the generalized force $\bm{\mathfrak{S}}=[\bm{\sigma},\text{{N}}]^{\top}$. Reconstruction of the generalized force field over the entire HF mesh is then performed using a Gappy-POD procedure [34]. Unlike displacement vectors, the components of generalized force vectors at one-dimensional and three-dimensional discrete points do not have the same physical dimension [35][36]. Therefore, we define the following scalar product on generalized force vectors: $\left(\bm{\mathfrak{S}}_{1},\bm{\mathfrak{S}}_{2}\right)=\left(\begin{bmatrix}\bm{\sigma}_{1}\\\ \mathbf{N}_{1}\end{bmatrix},\ \begin{bmatrix}\bm{\sigma}_{2}\\\ \mathbf{N}_{2}\end{bmatrix}\right)_{[\sigma,N]}=\frac{1}{\lambda_{1}^{\sigma}}\left(\bm{\sigma}_{1},\ \bm{\sigma}_{2}\right)_{2}+\frac{1}{\lambda_{1}^{\text{N}}}\left(\mathbf{N}_{1},\ \mathbf{N}_{2}\right)_{2},$ where $\lambda_{1}^{\sigma}$ (resp. $\lambda_{1}^{\text{N}}$) is the largest eigenvalue, in the sense of the $\ell_{2}$ scalar product, for the stress vectors (resp. normal-force vectors).
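The block-weighted scalar product above can be sketched as follows (a toy illustration; the vector partitioning and the function name are ours):

```python
import numpy as np

def gen_force_product(S1, S2, n_sigma, lam1_sigma, lam1_N):
    """Scalar product on generalized force vectors S = [sigma, N]: each
    block is weighted by the inverse of the largest l2 eigenvalue of the
    corresponding snapshot set, so that stresses and normal forces
    contribute on comparable scales despite different physical units."""
    s1, N1 = S1[:n_sigma], S1[n_sigma:]
    s2, N2 = S2[:n_sigma], S2[n_sigma:]
    return np.dot(s1, s2) / lam1_sigma + np.dot(N1, N2) / lam1_N
```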
In summary, in addition to the EQ rule $\bm{\rho}^{\rm eq}$ (and the associated reduced mesh), the ROM is made up of two reduced bases, defined thanks to the POD operator detailed in Eq.(5) as follows: $\mathbf{Z}_{u}=\text{POD}\left\\{\\{\mathbf{u}^{{\rm hf},(k)}_{\mu}\\}_{k=1}^{K},\ \left(\cdot,\cdot\right)_{2},\ \varepsilon_{u}\right\\},\quad\text{and}\quad\mathbf{Z}_{\bm{\mathfrak{S}}}=\text{POD}\left\\{\\{\bm{\mathfrak{S}}_{\mu}^{{\rm hf},(k)}\\}_{k=1}^{K},\ \left(\cdot,\cdot\right)_{[\sigma,N]},\ \varepsilon_{\mathfrak{S}}\right\\}.$ Both for the displacements and for the generalized forces, we opted for an $\ell_{2}$ scalar product on the discrete snapshots. From a variational perspective, it would have been more suitable to work with an $H^{1}$ scalar product. However, from an implementation point of view, extracting such matrices in this context can be fairly challenging. Our choice is typical of numerical studies on real-world applications. Furthermore, $\ell_{2}$ compression delivers high-quality numerical results.

#### 2.2.2 Parametric problem

In order to provide a reliable ROM on a set of parameters, we build the surrogate model using a POD-Greedy approach. This iterative procedure is designed to enrich the reduced model (i.e. the reduced bases and the reduced mesh) by computing at each iteration the HF solution that is least well approximated by the ROM. The worst-approximated solution is estimated by exploring a training set $\Theta_{\rm train}$, defined as a discrete approximation of $\mathcal{P}$. In our case, we chose to rely on a strong-greedy approach: we compare the approximation errors (error between the HF solution and the reduced solution) over the whole training set, to identify the parameter for which this error is maximal. This parameter is then used to further enhance the ROM. The strong-greedy approach is not optimal from the standpoint of the computational cost of building the reduced model.
Indeed, the estimation of the worst-approximated solution requires knowledge of the HF solutions on a given discrete training set. For a more efficient greedy approach in terms of computational cost, weak-greedy methods would be more appropriate, along with the introduction of an appropriate error indicator. This remains a limitation to be borne in mind, particularly in the context of increasing the dimensionality of the parameter space. Nevertheless, this work constitutes a proof of concept of the feasibility of a greedy approach for three-dimensional THM calculations on prestressed concrete. The numerical optimization of the process, with the development of error indicators adapted to these problems and to industrial-grade HF codes, is a focus for forthcoming research. Extending the methodology to the parametric case requires adapting two parts of the algorithm: the construction of the reduced bases and the computation of the EQ rule. Two constructions of the reduced bases are explored in this paper. A first approach consists in performing a new POD on the set of computed HF snapshots. A second approach is incremental, and known in the literature as H-POD [16]. The latter has the advantage of providing a hierarchical basis, obtained by concatenating the previous basis with one computed from the new snapshots: $\mathbf{Z}=\left[\mathbf{Z},\ \mathbf{Z}_{\rm proj}\right],\quad\mathbf{Z}_{\rm proj}=\text{POD}\left\\{\left\\{\Pi_{\mathbf{Z}^{\bot}}\mathbf{v}_{k}\right\\}_{k=1}^{K},\ \left(\cdot,\cdot\right),\ \varepsilon\right\\},$ where $\Pi_{\mathbf{Z}^{\bot}}\ :\ \mathcal{X}^{\rm hf}\rightarrow\mathcal{Z}^{\bot}$ is the orthogonal projection onto the orthogonal complement of $\mathcal{Z}\subset\mathcal{X}^{\rm hf}$ with respect to the $(\cdot,\cdot)$ scalar product. We rely on the regularization approaches given in Reference [16] to compute the new number of modes.
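A minimal sketch of the H-POD enrichment step is given below, assuming an $\ell_2$ product and using a thin SVD in place of the method of snapshots (equivalent for this product); names are ours, and the reference method additionally filters out vectors that do not reduce the projection error.

```python
import numpy as np

def hpod_enrich(Z, V_new, eps):
    """One H-POD enrichment step: project the new snapshots onto the
    orthogonal complement of span(Z), compress the projections with POD
    (thin SVD here), and concatenate. Z is (N, n) with orthonormal
    columns (n may be 0); V_new is (N, K); eps is the POD tolerance."""
    W = V_new - Z @ (Z.T @ V_new) if Z.shape[1] > 0 else V_new
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    lam = s**2                            # POD eigenvalues of the residuals
    if lam.sum() == 0.0:                  # nothing new in the snapshots
        return Z
    energy = np.cumsum(lam) / lam.sum()
    n_add = min(int(np.searchsorted(energy, 1.0 - eps**2) + 1), len(lam))
    return np.hstack([Z, U[:, :n_add]])
```

Because the added modes live in the orthogonal complement of the current basis, the enriched matrix keeps orthonormal columns, which is the hierarchical property exploited by the greedy loop.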
Before concatenating the two bases, a criterion is added such that only the basis vectors that effectively reduce the projection error are added to the reduced-order basis. The different steps of the adaptive algorithm are summarized in Algorithm 1.

Algorithm 1 strong POD-Greedy algorithm
1:$\Theta_{\rm train}=\\{\mu_{i}\\}_{i=1}^{n_{\rm train}}$, $\varepsilon_{u}$, $\varepsilon_{\mathfrak{S}}$
2:$\mathcal{Z}_{N_{u}}=\mathcal{Z}_{N_{\sigma}}=\emptyset$, $\mu^{*}=\overline{\mu}$, $\Theta_{*}=\\{\mu^{*}\\}$.
3:while the stopping criterion is not met do
4: Compute $\\{\mathbf{u}^{{\rm hf},(k)}_{\mu^{*}}\\}_{k=1}^{K}$, $\\{\bm{\mathfrak{S}}^{{\rm hf},(k)}_{\mu^{*}}\\}_{k=1}^{K}$$\triangleright$ Call of code$\\_$aster
5: Compute primal reduced basis $\mathbf{Z}_{u}$
6: Compute $\bm{\rho}^{\rm eq}$ knowing $\\{\zeta_{u,n}\\}_{n=1}^{N_{u}}$ and $\\{\bm{\mathfrak{S}}^{{\rm hf},(k)}_{\mu}\\}_{k\in\\{1,..,K\\},\mu\in\Theta_{\rm*}}$
7: Compute the reduced mesh $\mathcal{T}^{\rm red}$
8: Compute dual reduced basis $\mathbf{Z}_{\mathfrak{S}}$
9: for $\mu\in\Theta_{\rm train}$ do
10: Solve the ROM for $\mu$ and compute $E_{\mu}^{\rm app,avg}$$\triangleright$ See definition of $E_{\mu}^{\rm app,avg}$ in Eq.(14)
11: end for
12: $\mu^{*}=\arg\max\limits_{\mu\in\Theta_{\rm train}}E_{\mu}^{\rm app,avg}$
13: $\Theta_{\rm*}=\Theta_{\rm*}\cup\\{\mu^{*}\\}$
14:end while

## 3 Thermo-Hydro-Mechanical (THM) modeling of large concrete structures

### 3.1 Weak-coupling strategy for the THM numerical model

In this section, we introduce the mathematical model designed to simulate the behavior of prestressed concrete. We consider models that account for the evolution of large concrete structures over their lifetime, which consists mainly of two stages: young age and long-term evolution. The young age refers to a stage during which the chemical and physical properties of concrete change rapidly, as it sets and hardens.
The long-term phase represents the evolution of hardened concrete under operating conditions, taking into account thermo-hydric and mechanical loadings. Within the framework of the FE models employed in practice, we only consider the long-term evolution. The behavior of heterogeneous and porous concrete is governed by numerous and complex physicochemical phenomena. Such material behavior requires a THM modeling strategy: the material behavior depends on the temperature ($T$), the water content of the concrete ($C_{w}$) and the mechanical fields, in a framework where all these phenomena are coupled. Since we are interested in modeling the whole ageing of the concrete structure, our THM model should encompass the various physical processes which induce deformations within the concrete: shrinkage, desiccation and creep.

Notation | Physical quantity | Unit
---|---|---
$T$ | Temperature | $\mathrm{K}$
$\xi$ | Hydration degree | $\mathrm{-}$
$h$ | Ambient relative humidity (${\rm RH}$) | $\mathrm{-}$
$C_{w}$ | Water content of concrete | $\mathrm{-}$
$\sigma$ | Stress in the concrete | $\mathrm{P}\mathrm{a}$
$\varepsilon^{\rm c}=\nabla_{s}u^{\rm c}$ | Deformations in the concrete | $\mathrm{-}$
$N$ | Normal forces in the prestressing cables | $\mathrm{N}$
$\varepsilon^{\rm s}=\partial_{s}u^{\rm s}$ | Deformations in the prestressing cables | $\mathrm{-}$

Table 2: Fields of interest in the overall THM model for large prestressed concrete structures

In our framework, we adopt a weakly-coupled approach. This assumption implies that the computation is carried out in a chained manner. First, a thermal calculation is performed, followed by a hydric calculation (water diffusion in the concrete). Once all the thermal and hydric fields are known, a mechanical calculation is conducted. Each calculation step yields fields of interest which both describe the state of the material and can be reused in subsequent calculation steps.
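The chained computation can be summarized by a small driver sketch, with the three stage solvers passed as user-supplied callables (purely illustrative; the function signatures are our assumptions, not those of the reference code):

```python
def run_thm_chain(thermal_solve, hydric_solve, mechanical_solve, mu, times):
    """Weakly-coupled chained THM driver. Each stage is completed in full
    before the next one starts, and its outputs feed the downstream
    stages: the thermal fields enter the hydric solve, and both enter
    the mechanical solve."""
    T, xi = thermal_solve(mu, times)          # temperature, hydration degree
    C_w = hydric_solve(mu, times, T)          # water content (depends on T)
    return mechanical_solve(mu, times, T, xi, C_w)
```

The one-way data flow is exactly the weak-coupling assumption: the mechanical solution never feeds back into the thermal or hydric stages.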
Table 2 details the outputs of the entire THM calculation. The different steps in the process are summarized in Figure 3. Such a formulation of the problem is founded on several assumptions. To begin with, the influence of the mechanical response on the thermal and water fields is neglected [5][4]. Furthermore, it is assumed that the hydric response has no influence on the thermal fields. Weak-coupling approaches have demonstrated their effectiveness in modeling prestressed concrete structures, both for an RSV [5][4] and for a full-scale model [3].

Figure 3: Weakly-coupled chained THM approach for large prestressed concrete structures. Each step provides different fields of interest: at the end of the thermal calculation, we get the temperature field ($T$) and the degree of hydration of the concrete ($\xi$, which will always be given analytically in our simulations); at the end of the hydric calculation, we get the water content of the concrete ($C_{w}$); at the end of the mechanical calculation, we get the displacement fields in the steel cables and in the concrete, the associated deformations ($\varepsilon=[\varepsilon^{\rm c},\varepsilon^{\rm s}]$), the stresses in the concrete ($\sigma$) and the normal forces in the cables (N).

### 3.2 THM constitutive equations

As stated above, we describe in the following section the set of equations that make up the THM problem under study. Prestressed concrete behavior modeling requires a multi-modeling approach: a three-dimensional nonlinear rheological model is used for the concrete, while the prestressing cables are described by a one-dimensional linear thermo-elastic law. As mentioned above, the rheological behavior of concrete is coupled with hydric and thermal phenomena. The thermal and hydric problems are thus solved on the concrete domain ($\Omega^{\rm c}$), while the mechanical calculations are solved on both domains ($\Omega^{\rm c}$ and $\Omega^{\rm s}$).
#### 3.2.1 Modeling of the thermal and the hydric behavior of the concrete

First, we introduce the set of equations employed for the first two stages of the chained calculation: the thermal calculation and the hydric calculation. These calculations are the starting point for the mechanical calculation, to which we apply our model reduction methodology (section 2). The temperature evolution is modeled by the heat equation [37]: $\rho_{c}c_{c}^{p}\frac{\partial T}{\partial t}=\nabla\cdot\left(\lambda_{c}\nabla T\right),\quad\text{on}\ \Omega_{\rm c},$ (7) where $\rho_{c}$ is the density of the concrete, $c_{c}^{p}$ the heat capacity of hardened concrete and $\lambda_{c}$ the thermal conductivity of hardened concrete. Dirichlet conditions are applied in our context (see details for the numerical test case in section 3.3). Since we only consider liquid water diffusion [38], moisture transfer is modeled by a single nonlinear diffusion equation on $C_{w}$ (see Eq.(8a)), which denotes the water content of the concrete. The diffusion equation depends on $D_{w}$, a phenomenological diffusion coefficient, which is assumed to follow Arrhenius' law [39]. In summary, the nonlinear diffusion equation of the water content can be summed up as follows: $\displaystyle\frac{\partial C_{w}}{\partial t}$ $\displaystyle=\nabla\cdot\left[D_{w}\left(C_{w},\ T\right)\nabla C_{w}\right],\quad\text{on}\ \Omega_{\rm c},$ (8a) $\displaystyle D_{w}\left(C_{w},\ T\right)$ $\displaystyle=D_{w,0}\left(C_{w}\right)\frac{T}{T_{w}^{0}}\exp\left[-\frac{U_{w}}{R}\left(\frac{1}{T}-\frac{1}{T_{w}^{0}}\right)\right],$ (8b) $\displaystyle D_{w,0}\left(C_{w}\right)$ $\displaystyle=A\exp\left(BC_{w}\right),$ (8c) where $U_{w}$ is the activation energy of drying, $R$ the ideal gas constant and $D_{w,0}\left(C_{w}\right)$ the diffusion coefficient at a reference temperature $T_{w}^{0}$. The latter is assumed to follow a model defined by Mensi et al.
[40], which depends on two model parameters $A$ and $B$. At the scale of large concrete structures, measurements of ambient conditions cannot be made in terms of the water content of the concrete, and are thus conducted in relative humidity [41]. Relative humidity (RH) is defined as the ratio of vapor pressure to saturation vapor pressure at a given temperature. For the sake of consistency and use of the collected data, the boundary conditions are formulated in terms of RH. From an experimental point of view, the drying or wetting cycles are assumed to affect only the concrete skin. This assumption allows us to relate the water content of the concrete to the relative humidity. For a given temperature, these two quantities are related by a bijective function called the sorption-desorption function: $C_{w}=f_{d}\left(h\right).$ (9) Within the framework of these constitutive laws, the sorption-desorption function may be defined either analytically with hyper-parameters [4][42], or empirically as a tabulated function. In our case, we define the sorption-desorption function as shown in Figure 4. This curve is drawn from experimentally acquired points (without any curve fitting).

(a) Sorption-desorption function used for the THM problem

$C_{w}$ [$\mathrm{L}\,\mathrm{m}^{-3}$] | 0 | 39.0 | 57.9 | 76.5 | 90.1 | 112.9 | 128.8
---|---|---|---|---|---|---|---
$h$ [$\mathrm{-}$] | 0 | 43 | 58 | 75 | 84 | 92 | 100

(b) Summary

Figure 4: Definition of the sorption-desorption function $f_{d}$ (defined in Eq.(9)). The table shows the point values given to define the function. The function is computed by linear interpolation between those points. The reference configuration corresponds to $h=100$, which is the initial RH value in the wall.

As previously mentioned, the BCs of the water diffusion problem are stated in terms of RH when using real-life data. All the parameters related to the thermal and hydric aspects of the model are summarized and detailed in Table 3.
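Eqs.(8b)-(8c) and the tabulated sorption-desorption function of Figure 4 translate directly into a short sketch. The point values are those of the table; the parameter values passed to `d_w` are placeholders to be supplied by the user, and the function names are ours.

```python
import math
import numpy as np

def d_w(c_w, T, A, B, U_w, R, T_w0):
    """Moisture diffusion coefficient D_w(C_w, T) of Eqs.(8b)-(8c):
    Mensi's law at the reference temperature T_w0, thermo-activated by
    an Arrhenius factor."""
    d_w0 = A * math.exp(B * c_w)                      # Eq.(8c), Mensi's law
    return d_w0 * (T / T_w0) * math.exp(-(U_w / R) * (1.0 / T - 1.0 / T_w0))

# Tabulated sorption-desorption function f_d of Figure 4:
# relative humidity h -> water content C_w [L/m^3], linearly interpolated.
H_PTS = np.array([0.0, 43.0, 58.0, 75.0, 84.0, 92.0, 100.0])
CW_PTS = np.array([0.0, 39.0, 57.9, 76.5, 90.1, 112.9, 128.8])

def f_d(h):
    """Water content for a given relative humidity (Eq.(9))."""
    return np.interp(h, H_PTS, CW_PTS)
```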
Calculation step | Notation | Physical quantity or parameter | Unit
---|---|---|---
Thermal ($\mu_{\text{T}}$) | $\rho_{c}$ | Density | $\mathrm{k}\mathrm{g}\,\mathrm{m}^{-3}$
 | $c_{c}^{p}$ | Heat capacity of hardened concrete | $\mathrm{k}\mathrm{J}\,\mathrm{k}\mathrm{g}^{-1}\,\mathrm{K}^{-1}$
 | $\lambda_{c}$ | Thermal conductivity of hardened concrete | $\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}$
Hydric ($\mu_{\text{H}}$) | $D_{w}$ | Phenomenological diffusion coefficient |
 | $f_{d}$ | Sorption-desorption function |
 | $T_{w}^{0}$ | Reference temperature | $\mathrm{K}$
 | $D_{w,0}$ | Diffusion coefficient at the reference temperature $T_{w}^{0}$ |
 | $U_{w}$ | Activation energy of drying | $\mathrm{k}\mathrm{J}\,\mathrm{m}\mathrm{o}\mathrm{l}^{-1}$
 | $R$ | Ideal gas constant | $\mathrm{k}\mathrm{J}\,\mathrm{m}\mathrm{o}\mathrm{l}^{-1}\,\mathrm{K}^{-1}$
 | $A$ | Model parameter for Mensi's law | $\mathrm{1}\mathrm{0}^{-15}\mathrm{m}^{2}\,\mathrm{s}^{-1}$
 | $B$ | Model parameter for Mensi's law | $\mathrm{-}$

Table 3: Summary of parameters and physical quantities at stake in the modeling of the thermal (see Eq.(7)) and the hydric (see Eqs.(8a)-(8c)) behavior

#### 3.2.2 Modeling of the mechanical behavior of the concrete

(a) Burger rheological model for the basic creep

Notation | Physical quantity or parameter | Unit
---|---|---
$E_{\rm c}$ | Young's modulus (concrete) | $\mathrm{P}\mathrm{a}$
$\nu_{\rm c}$ | Poisson's ratio (concrete) | $\mathrm{-}$
$\alpha_{\rm th,c}$ | Thermal dilation coefficient (concrete) | $\mathrm{K}^{-1}$
$\alpha_{\rm dc}$ | Desiccation shrinkage coefficient | $\mathrm{-}$
$\beta_{\rm endo}$ | Autogenous shrinkage coefficient | $\mathrm{-}$
$\nu_{\rm bc}$ | Basic creep Poisson's ratio | $\mathrm{-}$
$k_{\rm rd}$ | Reversible deviatoric basic stiffness | $\mathrm{P}\mathrm{a}$
$\eta_{\rm rd}$ | Reversible deviatoric basic viscosity | $\mathrm{P}\mathrm{a}\,\mathrm{s}$
$\eta_{\rm id}$ | Irreversible deviatoric basic viscosity | $\mathrm{P}\mathrm{a}\,\mathrm{s}$
$U_{\rm bc}$ | Basic creep activation energy | $\mathrm{k}\mathrm{J}\,\mathrm{m}\mathrm{o}\mathrm{l}^{-1}$
$T_{\rm bc}^{0}$ | Basic creep reference temperature | ${}^{\circ}\mathrm{C}$
$\kappa$ | Basic creep consolidation parameter | $\mathrm{-}$
$\eta_{\rm dc}$ | Desiccation creep parameter | $\mathrm{P}\mathrm{a}\,\mathrm{s}$

(b) Summary of the parameters for the mechanical model

Figure 5: Parameters for the three-dimensional mechanical model (concrete)

In this section, we detail the governing equations for the mechanical behavior of concrete. Since we consider small-displacement, small-strain mechanical problems, the total strain is decomposed as the sum of several contributions: $\varepsilon=\varepsilon^{\rm el}+\varepsilon^{\rm th}+\varepsilon^{\rm ds}+\varepsilon^{\rm en}+\varepsilon^{\rm bc}+\varepsilon^{\rm dc},$ where $\varepsilon^{\rm el}$ is the elastic strain tensor, $\varepsilon^{\rm th}$ the thermal strain tensor, $\varepsilon^{\rm ds}$ the desiccation shrinkage strain tensor, $\varepsilon^{\rm bc}$ the basic creep strain tensor, $\varepsilon^{\rm dc}$ the desiccation creep strain tensor and $\varepsilon^{\rm en}$ the autogenous shrinkage strain tensor. We present in the following the evolution and constitutive equations that define the different strain tensors. According to experimental observations, the variation of the thermal strain $\varepsilon^{\rm th}$ is proportional to temperature variations (see Eq.(10a)). The proportionality coefficient $\alpha_{\rm th,c}$ is referred to as the thermal dilation coefficient of concrete and is assumed to be constant when focusing on the long-term phase. Similar experimental observations suggest a linear dependency between the variations of the desiccation shrinkage strain $\varepsilon^{\rm ds}$ and the water content of the concrete $C_{w}$ (see Eq.(10b)), expressed through the desiccation shrinkage coefficient $\alpha_{\rm dc}$. We assume the same kind of relationship between the autogenous shrinkage $\varepsilon^{\rm en}$ and the hydration degree $\xi$, expressed through the coefficient $\beta_{\rm endo}$.
$\displaystyle\dot{\varepsilon}^{\rm th}$ $\displaystyle=\alpha_{\rm th,c}\frac{\partial T}{\partial t}\text{I},$ (10a) $\displaystyle\dot{\varepsilon}^{\rm ds}$ $\displaystyle=\alpha_{\rm dc}\frac{\partial C_{w}}{\partial t}\text{I},$ (10b) $\displaystyle\dot{\varepsilon}^{\rm en}$ $\displaystyle=\beta_{\rm endo}\frac{\partial\xi}{\partial t}\text{I}.$ (10c) The model selected for the creep deformations is the Burger model developed by Foucault et al. [43]. This choice is motivated by several experimental validations, and the model is well-suited for creep investigations on the considered structures, as confirmed by the work of Bouhjiti et al. [4]. We assume that creep decouples into a spherical part and a deviatoric part. We decompose the Cauchy stress tensor ($\sigma$) as the sum of a spherical part ($\sigma_{\rm s}$) and a deviatoric part ($\sigma_{\rm d}$): $\sigma=\sigma_{\rm s}\text{I}+\sigma_{\rm d},$ where $\sigma_{\rm s}=\text{Tr}(\sigma)/3$, and I is the identity tensor. The Burger creep model is built on a decomposition into a reversible and an irreversible part, where we split each tensor into its spherical and deviatoric parts: $\left\\{\begin{array}[]{rcl}\varepsilon^{\rm bc}&=&\varepsilon^{\rm bc}_{\rm r}+\varepsilon^{\rm bc}_{\rm i},\\\ \varepsilon^{\rm bc}_{\rm r}&=&\varepsilon^{\rm bc}_{\rm rs}\text{I}+\varepsilon^{\rm bc}_{\rm rd},\\\ \varepsilon^{\rm bc}_{\rm i}&=&\varepsilon^{\rm bc}_{\rm is}\text{I}+\varepsilon^{\rm bc}_{\rm id}.\\\ \end{array}\right.$ Each part (deviatoric and spherical) is described by a Burger-type model. For each chain, the reversible basic creep strains are modeled by Kelvin-Voigt rheological elements, whereas the irreversible basic creep strains are modeled by Maxwell elements. The Kelvin-Voigt model (see Eq.(11a)) used for the reversible spherical basic creep is expressed through the stiffness (resp. viscosity) $k_{\rm rs}$ (resp.
$\eta_{\rm rs}$), while the irreversible spherical basic creep viscosity $\eta_{\rm is}$ is given by a nonlinear relationship, expressed through a consolidation parameter $\kappa$ (see Eq. (11b)). $\displaystyle h\sigma_{\rm s}$ $\displaystyle=k_{\rm rs}\varepsilon^{\rm bc}_{\rm rs}+\eta_{\rm rs}\dot{\varepsilon}^{\rm bc}_{\rm rs},$ (11a) $\displaystyle h\sigma_{\rm s}$ $\displaystyle=\underbrace{\eta_{\rm is}^{0}\exp\left(\frac{\left\lVert\varepsilon_{i}^{\rm bc}\right\rVert_{m}}{\kappa}\right)}_{\coloneqq\eta_{\rm is}}\dot{\varepsilon}^{\rm bc}_{\rm is},\ \text{where}\ \left\lVert\varepsilon_{i}^{\rm bc}\right\rVert_{m}=\max\limits_{\tau\in\left[0,t\right]}\sqrt{\varepsilon_{i}^{\rm bc}\left(\tau\right):\varepsilon_{i}^{\rm bc}\left(\tau\right)},\quad\forall t\geq 0.$ (11b) The deviatoric part is expressed using a similar set of tensor equations (the spherical part being a set of scalar equations). The aforementioned model accounts for the thermo-activation of basic creep. To this end, the stiffness and viscosity parameters follow an Arrhenius law, e.g., for the reversible spherical stiffness: $k_{\rm rs}\left(T\right)=k_{\rm rs}^{0}\exp\left[-\frac{U_{\rm bc}}{R}\left(\frac{1}{T}-\frac{1}{T^{0}_{\rm bc}}\right)\right],$ where $k_{\rm rs}^{0}$ is the reversible spherical creep stiffness at a reference temperature $T_{\rm bc}^{0}$ and $U_{\rm bc}$ the activation energy of basic creep.
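To make the rheology concrete, the spherical Burger chain of Eqs. (11a)-(11b) can be integrated with a simple backward-Euler scheme. The following sketch is purely illustrative (a scalar toy version under constant spherical stress, with hypothetical parameter values); it is not the solver used in this work.

```python
import numpy as np

def burger_spherical_creep(sigma_s, h, k_rs, eta_rs, eta_is0, kappa, dt, n_steps):
    """Backward-Euler integration of the spherical Burger chain, Eqs. (11a)-(11b).

    Kelvin-Voigt (reversible): h*sigma_s = k_rs*eps_r + eta_rs*d(eps_r)/dt
    Maxwell (irreversible):    h*sigma_s = eta_is * d(eps_i)/dt, with the
    consolidation law eta_is = eta_is0 * exp(max_strain / kappa).
    Scalar toy version under constant spherical stress; parameter names and
    values are illustrative, not those of the production solver.
    """
    eps_r, eps_i, eps_i_max = 0.0, 0.0, 0.0
    history = []
    for _ in range(n_steps):
        # implicit update of the Kelvin-Voigt (reversible) strain
        eps_r = (eta_rs * eps_r + dt * h * sigma_s) / (eta_rs + dt * k_rs)
        # consolidation: viscosity grows with the largest strain seen so far
        eta_is = eta_is0 * np.exp(eps_i_max / kappa)
        eps_i += dt * h * sigma_s / eta_is
        eps_i_max = max(eps_i_max, abs(eps_i))
        history.append(eps_r + eps_i)
    return np.array(history)
```

The consolidation term reproduces the expected qualitative behavior: the creep strain grows monotonically while its rate decays as the accumulated strain increases.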
Finally, the equivalence of the spherical and deviatoric chains makes it possible to restrict the number of model parameters, by assuming a constant creep Poisson ratio $\nu_{\rm bc}$, given by the following relation: $\frac{k_{\rm rs}}{k_{\rm rd}}=\frac{\eta_{\rm rs}}{\eta_{\rm rd}}=\frac{\eta_{\rm is}^{0}}{\eta_{\rm id}^{0}}=\frac{1+\nu_{\rm bc}}{1-2\nu_{\rm bc}}.$ In order to model the desiccation creep strain, we consider the following equation, founded on the work of Bazant and Chern [44]: $\dot{\varepsilon}^{\rm dc}=\frac{1}{\eta_{\rm dc}}\left|\dot{h}\right|\sigma,$ where $\eta_{\rm dc}$ is a material parameter ($\mathrm{P}\mathrm{a}\,\mathrm{s}$). #### 3.2.3 Modeling of the coupling between concrete and prestressing cables Within large prestressed concrete structures, the steel cables embedded in the concrete apply permanent compressive stresses to the concrete, so as to compensate for the tensile forces that will be applied to the structure. This technique generates favorable internal forces in the concrete. The construction of a prestressed concrete structure requires the tensioning of the cables in the concrete. The tension profile along a cable is designed, in our case, to comply with an official standard (the BPEL 91 regulation). Given physical parameters (initial tension), a tension profile is computed along the length of the cable as a function of the curvilinear abscissa. To be more precise, the coupling between cables and concrete can be decomposed into three main stages: before, during, and after prestressing. In the case of the above structures, the concrete is first poured around sheaths and begins to dry. Cables are then inserted into these ducts and prestressed in order to comply with civil engineering standards. Finally, cement grout is poured into the ducts, and the life of the structure continues with a kinematic coupling between the concrete and the steel cables.
In the numerical model studied here, the one-dimensional mesh (modeling the steel cables) is immersed within the three-dimensional mesh. This means that the cables "cross" the concrete cells. A kinematic linkage is performed in order to connect the concrete nodes and the steel nodes. Since the coupling is assumed to be perfect (no slip between the tendons and the cement), coincident points in each material are assumed to have the same displacement. Instantaneous prestressing losses due to anchor recoil and friction are not taken into account at the scale of the considered RSV. The cables are modeled by bars, which means that we resort to a one-dimensional approach where only tension-compression forces are considered. In this framework, the structure is described at each instant by a curve representing its mean line. Consequently, only the normal forces (defined along the tangent vector to the beam section) appear in the variational formulation of the problem. Two sets of equilibrium equations appear in the studied case: during the prestressing step (namely between the times $t^{\rm init,p}$ and $t^{\rm end,p}$) and after the prestressing step (namely until the end of the study $t_{\rm f}$): $\left\\{\begin{array}[]{rcl}\partial_{s}N\left(s,\ t\right)&=&f_{{\rm s}},\quad\forall t\in\left[t^{\rm init,p}\ ,t^{\rm end,p}\right],\quad\text{and}\quad\llbracket N\rrbracket\left(x_{i}^{\rm no,1d}\right)=-\frac{t^{(k)}-t^{\text{init,p}}}{t^{\text{end,p}}-t^{\text{init,p}}}F_{i},\\\ \partial_{s}N\left(s,\ t\right)&=&f_{{\rm s}},\quad\forall t\in\left[t^{\rm end,p},t_{\rm f}\right],\\\ \end{array}\right.$ (12) where $\\{x_{i}^{\rm no,1d}\\}_{i=1}^{\mathcal{N}^{s}}$ are the nodes of the one-dimensional mesh and $F_{i}$ are the nodal forces prescribed in order to comply with the BPEL regulation. We consider a linear thermo-elastic constitutive equation for the steel cables.
Thus, the normal forces in the cables ($N$) are linked to the uniaxial strains ($\varepsilon^{\rm s}$) in the cables: $\text{N}=E_{s}S_{s}\left(\varepsilon^{\rm s}-\alpha_{\rm th,s}\Delta T\right),$ where $E_{s}$ is the Young's modulus, $\alpha_{\rm th,s}$ the thermal dilation coefficient, $S_{s}$ the cross-section of the prestressing cables, and $\Delta T$ the temperature rise in the beam.
Notation | Physical quantity or parameter | Unit
---|---|---
$E_{\rm s}$ | Young's modulus (steel) | $\mathrm{P}\mathrm{a}$
$\nu_{\rm s}$ | Poisson's ratio (steel) | $\mathrm{-}$
$\rho_{\rm s}$ | Density (steel) | $\mathrm{k}\mathrm{g}\,\mathrm{m}^{-3}$
$\alpha_{\rm th,s}$ | Thermal dilation coefficient (steel) | $\mathrm{K}^{-1}$
$S_{\rm s}$ | Cable cross-sections | $\mathrm{m}^{2}$
Figure 6: Parameters for the one-dimensional mechanical model (steel) Details of the physical parameters for the three-dimensional model are provided in Figure 5, whereas those for the one-dimensional model are given in Figure 6. ### 3.3 Representative Structural Volume: standard section of a nuclear containment building The physical model is designed to capture the behavior of the so-called standard zone, which corresponds to a portion of the mesh at mid-height, in the cylindrical part of the NCB. Thus, the region covered by the RSV comprises a three-dimensional portion containing three tangential prestressing cables and two vertical cables. For the section studied here, the internal radius of the wall is 21.9 m, while the external radius is 23.4 m. The width of the standard section corresponds to an angular sector of 4.2°. For the scope of this work, the effect of passive steel reinforcement is neglected.
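As a quick order-of-magnitude check of the cable law $N=E_{s}S_{s}\left(\varepsilon^{\rm s}-\alpha_{\rm th,s}\Delta T\right)$, the following sketch restates the steel parameters (values taken from Figure 6 and Table 4); the helper name is ours, not part of the HF code.

```python
# Steel parameters restated from Figure 6 / Table 4 for this example
E_s = 1.9e11         # Pa, Young's modulus (steel)
S_s = 5400e-6        # m^2, cable cross-section
alpha_th_s = 1e-5    # 1/K, thermal dilation coefficient (steel)

def cable_normal_force(eps_s, dT):
    """Normal force N = E_s * S_s * (eps_s - alpha_th_s * dT) in a tendon."""
    return E_s * S_s * (eps_s - alpha_th_s * dT)

# initial tension of a vertical cable: applied stress times cross-section
sigma_v_s = 990.7e6              # Pa, stress applied to vertical cables
N0 = sigma_v_s * S_s             # about 5.35e6 N
```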
(a) Temperature and water content BCs (b) Temperature and water content evolutions Figure 7: Boundary conditions for the thermal and hydric problems visualized on the HF thermal mesh Two mesh designs are used in practice: one for the thermo-hydric calculations and another for the mechanical calculations. The thermal mesh is refined close to the intrados and extrados to enable a better reconstruction of the thermo-hydric gradients. The fields resulting from this procedure are then projected onto the mechanical mesh. The meshes employed in these studies are fairly coarse. In fact, these meshes have been built in order to carry out uncertainty quantification or data assimilation studies. Therefore, engineers had to strike a balance between affordable computational cost and approximation quality. Numerical solutions of thermal problems may exhibit spurious oscillations (depending on the temporal and spatial discretizations), which may violate the discrete maximum principle. To avoid this phenomenon, linear finite elements and a lumping of the mass matrix are used in this study. As previously mentioned, the thermal mesh does not contain the prestressing cables: it is composed of linear hexahedral cells (HEXA8). For the mechanical mesh, quadratic hexahedral elements (HEXA20) are employed for the concrete, and the prestressing tendons are represented using SEG2 linear finite elements (two-node bars). (a) Visualization of the mechanical mesh
$N_{\rm e}$ | $N_{\rm e}^{1d}$ | $N_{\rm e}^{2d}$ | $N_{\rm e}^{3d}$ | $\mathcal{N}$ | $\mathcal{N}_{\rm c}$ | $\mathcal{N}_{\rm s}$
---|---|---|---|---|---|---
1532 | 784 | 693 | 55 | 4076 | 3911 | 165
(b) Summary of the mechanical mesh information Figure 8: Visualization of the mechanical mesh (Figure 8(a)) and information on the mechanical mesh (number of elements and number of nodes for the one- and three-dimensional meshes, Figure 8(b)) The BCs and loads applied to the RSV zone are detailed below (Eq. (13)).
Figure 7 shows the temperature and water content histories adopted for the thermo-hydraulic calculations. As mentioned above, the applied BCs are Dirichlet conditions for the temperature and the water content. These are imposed on the inner wall (intrados) and the outer wall (extrados), as follows: $\left\\{\begin{array}[]{rclcl}T&=&T_{\rm int},&\text{on}&\Gamma_{\rm int},\\\ T&=&T_{\rm ext},&\text{on}&\Gamma_{\rm ext},\\\ \end{array}\right.\quad\text{and}\quad\left\\{\begin{array}[]{rclcl}C&=&C_{\rm int},&\text{on}&\Gamma_{\rm int},\\\ C&=&C_{\rm ext},&\text{on}&\Gamma_{\rm ext}.\\\ \end{array}\right.$ (13) Figure 9: Boundary conditions for the mechanical problem visualized on the HF mechanical mesh With regard to the mechanical BCs, axisymmetric conditions are specified at the lateral boundaries of the RSV: this implies that the normal displacements are assumed to be zero on each lateral face. Furthermore, the vertical displacement is assumed to be blocked on the lower face of the RSV, while a uniform vertical displacement is prescribed on the upper face. The set of boundary conditions, together with a visualization of the mechanical mesh, is illustrated in Figure 9. ## 4 Numerical results: application to a standard section of a nuclear containment building ### 4.1 Solution Reproduction Problem We first perform a validation of the methodology on a non-parametric case. We aim to mimic the HF simulation with our ROM for the same set of parameters. To assess the quality of the reduced model, we introduce several metrics.
First of all, since our ROM is founded on a projection onto displacement modes, we introduce displacement approximation errors, at a given time step ($E^{{\rm app},(k)}_{u,\mu}$), and averaged over time ($E^{{\rm app},{\rm avg}}_{u,\mu}$): $E^{{\rm app},(k)}_{u,\mu}=\frac{\left\lVert\mathbf{u}^{{\rm hf},(k)}_{\mu}-\widehat{\mathbf{u}}^{(k)}_{\mu}\right\rVert^{2}_{2}}{\left\lVert\mathbf{u}^{{\rm hf},(k)}_{\mu}\right\rVert^{2}_{2}},\quad\text{and}\quad E^{{\rm app},{\rm avg}}_{u,\mu}=\frac{\sqrt{\sum\limits_{k=1}^{K}\frac{t^{(k)}-t^{(k-1)}}{t_{\rm f}}\left\lVert\mathbf{u}^{{\rm hf},(k)}_{\mu}-\widehat{\mathbf{u}}^{(k)}_{\mu}\right\rVert^{2}_{2}}}{\sqrt{\sum\limits_{k=1}^{K}\frac{t^{(k)}-t^{(k-1)}}{t_{\rm f}}\left\lVert\mathbf{u}^{{\rm hf},(k)}_{\mu}\right\rVert^{2}_{2}}},$ (14) where $t_{\rm f}$ is the final physical time used in the simulation and where $\mathbf{u}^{{\rm hf},(k)}_{\mu}$ and $\widehat{\mathbf{u}}^{(k)}_{\mu}$ are respectively the solution at the k-th timestep obtained when using the HF model or the ROM for the parameter $\mu$. For the simulations reported below, we simulate a physical time of around 18 years. #### 4.1.1 HF problem In this section, we present the HF problem we wish to reproduce. As previously stated, we are only seeking to reduce the mechanical calculation in our THM coupling. To this end, we rely on a thermo-hydraulic calculation, which can be viewed as an initial state common to all parametric calculations. These two simulations are carried out in compliance with the BCs described previously. On the figures provided afterwards, the time is given in seconds, as this is the time used in the numerical code (1 day = 86400 seconds). The time scheme for our creep simulations features an adaptive time step algorithm. In practice, in all the simulations carried out as part of this study, the entire simulation is performed over around 50 time steps. 
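The error metrics of Eq. (14) are straightforward to evaluate from stored snapshots; the following sketch (illustrative, assuming snapshots are stored row-wise) mirrors the two definitions.

```python
import numpy as np

def relative_errors(t, U_hf, U_rom):
    """Displacement approximation errors of Eq. (14).

    t     : (K+1,) physical times t^(0) < ... < t^(K), with t[-1] = t_f
    U_hf  : (K, n) HF displacement snapshots, one row per time step
    U_rom : (K, n) ROM reconstructions u_hat^(k)
    Returns the per-step errors E^{app,(k)} (squared-norm ratios, as in
    Eq. (14)) and the time-averaged error E^{app,avg}. Illustrative helper,
    not the authors' code.
    """
    dt = np.diff(t) / t[-1]                      # (t^(k) - t^(k-1)) / t_f
    num_k = np.sum((U_hf - U_rom) ** 2, axis=1)  # squared 2-norms per step
    den_k = np.sum(U_hf ** 2, axis=1)
    E_k = num_k / den_k
    E_avg = np.sqrt(np.sum(dt * num_k)) / np.sqrt(np.sum(dt * den_k))
    return E_k, E_avg
```

Note that the time-averaged error weights each step by its share of the physical time, which matters here because the adaptive time stepping makes the steps strongly non-uniform.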
(a) View of the drying field $C_{w}$ at the last time step of the HF simulation (top view) (b) Evolution of the drying field $C_{w}$ along $x$ in the plane ($y=0$, $z=0$) Figure 10: Water content snapshots (output of the hydric calculation step) at the end of the HF simulation (a) View of the temperature field $T$ at the last time step of the HF simulation (top view) (b) Evolution of the temperature field $T$ along $x$ in the plane ($y=0$, $z=0$) Figure 11: Temperature snapshots (output of the thermal calculation step) at the end of the HF simulation Figure 10 displays the water content in the standard section at the end of the HF calculation. This figure depicts the evolution of the $C_{w}$ field in the thickness of the containment building (in the standard section). Likewise, Figure 11 shows the evolution of the temperature field in the thickness of the standard section. The physical parameters used for these calculations are summarized in Table 4 where undefined parameters are chosen as follows: $\overline{\eta}_{\rm dc}=5\cdot 10^{9},\quad\overline{\kappa}=4.2\cdot 10^{-4},\quad\overline{\alpha}_{\rm dc}=7.56\cdot 10^{-6},\quad\overline{\eta}_{\rm is}=2.76\cdot 10^{18},\quad\overline{\eta}_{\rm id}=1.38\cdot 10^{18}.$ From these auxiliary fields ($H$ field in the methodology formulation in section 2) we can determine all the mechanical fields using the HF code. Figure 12 represents the displacement fields and the components of the Cauchy stress tensor obtained for the HF calculation we are seeking to reproduce in this section. 
(a) $u_{r}$ [$\mathrm{m}$] (b) $u_{\theta}$ [$\mathrm{m}$] (c) $u_{z}$ [$\mathrm{m}$] (d) $\sigma_{\theta\theta}$ [$\mathrm{P}\mathrm{a}$] Figure 12: Snapshots of the mechanical fields (displacements, see Figures 12(a), 12(b), 12(c), and stresses within the concrete, see Figure 12(d)) at the end of the HF calculation on the standard section Our first goal is to ensure that the mechanical fields (displacements, stresses in the concrete, and normal forces in the cables) are accurate approximations of the values obtained from the HF calculations. Moreover, using a ROM of a standard section should provide a good-quality approximation of the fields used in practical applications by engineers. In our case, this RSV has two main purposes: first, to compute leakage estimates from the prestress loss in the cables, and second, to perform recalibration tests from deformation data (tangential and vertical deformations) on the intrados and extrados. Figure 13: Evolution of the normal forces in the two vertical (CABV1, CABV2) and three horizontal (CABH1, CABH2, CABH3) cables of the standard section Figure 13 depicts the evolution of the mean value of the normal forces in each of the five cables within the standard section. For the record, the mesh studied contains two vertical cables and three horizontal cables. Within the framework of the investigated model, the two vertical cables have a similar behavior (as do the three horizontal cables). In the following, we report only the results for one horizontal and one vertical cable (CABH2 and CABV1), to ease the readability of the results. Figure 14 displays the evolution of the mechanical strains and the total strains in the concrete. In our notation, (I) stands for intrados whereas (E) stands for extrados. In our cases of interest, the total strains of the material are not purely mechanical. In general, data assimilation problems only focus on mechanical deformations.
This is of key interest when reconstructing the strain field from the displacement modes, since the strain includes components due to temperature gradients and/or water pressure. Indeed, in our ROM resolution procedure, we have generalized coordinates at our disposal, which enable us to reconstruct the displacement field in the material. By computing the symmetric gradient of this displacement field, we can determine the total strains. In order to reconstruct the mechanical strain field, we must subtract the terms related to the thermal and hydric fields. Both these fields may be derived independently of the reduction process, since we only reduce the mechanical part of the calculation chain. We are thus able to pre-calculate the TH strain fields and subtract them from the total strain field so as to obtain the reconstructed mechanical strain field. (a) $\varepsilon_{tt}$ (b) $\varepsilon_{zz}$ Figure 14: Comparison of the pointwise values of some components (tangential and vertical) of the mechanical strains and the total strains in the sensor zones (extrados (E) and intrados (I)) In order to assess the accuracy of our reduced model, we introduce approximation errors for these different fields: for the average of the normal forces at the nodes in the vertical cable CABV1 ($E_{\overline{\mu}}^{{\text{app}},(t)}[\text{N}_{\rm V_{1}}]$) and in the horizontal cable CABH2 ($E_{\overline{\mu}}^{{\text{app}},(t)}[\text{N}_{\rm H_{2}}]$), for the average of the tangential and vertical strains on the extrados ($E_{\overline{\mu}}^{{\text{app}},(t)}[\varepsilon^{\rm m}_{tt}\text{ (avg - E) }]$ and $E_{\overline{\mu}}^{{\text{app}},(t)}[\varepsilon^{\rm m}_{zz}\text{ (avg - E) }]$), and finally for the average of the tangential and vertical strains on the intrados ($E_{\overline{\mu}}^{{\text{app}},(t)}[\varepsilon^{\rm m}_{tt}\text{ (avg - I) }]$ and $E_{\overline{\mu}}^{{\text{app}},(t)}[\varepsilon^{\rm m}_{zz}\text{ (avg - I) }]$).
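The mechanical-strain reconstruction described above (total strain from the displacement modes, minus the pre-computed thermo-hydric strains) can be sketched as follows; all array names are illustrative.

```python
import numpy as np

def mechanical_strain(B_modes, q, eps_th, eps_ds):
    """Reconstruct the mechanical strain field from the ROM output.

    B_modes : (N_u, n_eps) symmetric gradients of the displacement modes,
              already available from the hyper-reduction step
    q       : (N_u,) generalized coordinates at the current time step
    eps_th, eps_ds : (n_eps,) pre-computed thermal and desiccation strain
              fields from the (unreduced) thermo-hydric chain
    The total strain is B_modes^T q; the mechanical part is what remains
    after subtracting the imposed TH strains. Names are illustrative.
    """
    eps_total = B_modes.T @ q
    return eps_total - eps_th - eps_ds
```

In practice, the observation operators (sensor averages on the intrados and extrados) would then be applied to the returned field.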
To average the components of the strain tensor, the values at the Gauss points are extrapolated to the nodes, and the nodal values are then averaged. These relative errors on the strain fields relate exclusively to the mechanical deformations. Indeed, this is the only part of the tensor that is actually modified by our reduction process, as explained above. #### 4.1.2 Speedups and approximation errors In order to validate the ROM, we verify that the displacement field is properly reconstructed. Furthermore, since we are interested in the use of the ROM for engineering applications, it is necessary to confirm the quality of the approximation of the various quantities of interest, more precisely the tangential and vertical deformations and the normal forces in the cables (which enable us to calculate the prestressing loss). Ultimately, it is crucial to provide a model that reduces the computation time required whenever a call is made. To this end, we focus on the speedups ($\text{speedup}=\frac{\text{HF CPU cost}}{\text{ROM CPU cost}}$) obtained after construction of the reduced model (online phase). Figure 15: POD eigenvalues for the displacement ($\mathbf{u}$) and the generalized forces ($\bm{\mathfrak{S}}$) using a $\ell_{2}$ compression for a solution reproduction problem (50 initial snapshots) Figure 15 depicts the POD eigenvalues generated on snapshots of the displacements ($\mathbf{u}$) and of the generalized forces ($\bm{\mathfrak{S}}$). The decay profiles are quite distinct between the two physical quantities: the decay of the eigenvalues for the displacements is fast, unlike in the case of the generalized forces. This implies that the sizes of the two bases generated for POD tolerances of the same order of magnitude are significantly different. The displacement basis will always be much smaller than the generalized force basis.
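The $\ell_{2}$ compression behind Figure 15 is a standard POD of the snapshot matrix; a minimal sketch (illustrative, not the production implementation) via the SVD reads:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-4):
    """l2 POD compression sketch: SVD of the (column-wise) snapshot matrix,
    keeping the smallest basis whose discarded relative energy is below
    tol**2. Illustrative, not the production implementation."""
    Phi, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    N = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return Phi[:, :N], s**2  # retained modes and all POD eigenvalues
```

A fast eigenvalue decay (as observed for the displacements) means the retained basis stays small even for tight tolerances, while a slow decay (as for the generalized forces) forces a much larger basis.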
(a) Approximation errors on $\mathbf{u}$ (b) Speedups Figure 16: Evolution of the time-averaged approximation errors on the displacements and of the speedups as a function of the number of modes used ($N_{u}$, see Figure 16(a)) and for several hyper-reduction tolerances ($\delta$, see Figure 16(b)) As a way of assessing the robustness of the reduction approach proposed here, we have built several ROMs for different numbers of displacement modes and different hyper-reduction tolerances. An increase in the number of modes and a decrease in the hyperparameter $\delta$ both improve the quality of the ROM and increase the computation time (i.e., lower the speedup). Thus, a tradeoff needs to be found for engineering applications in order to provide a fast and accurate ROM. Figure 16 displays the evolution of the speedups and of the time-averaged displacement approximation errors as a function of the number of modes (for several tolerances). We observe that from 5 modes upwards, the reduced-order model exhibits a good approximation quality, with approximation errors below 0.2$\%$ (for all tolerances studied). In this case, the speedups achieved are substantial: around 10 for the most severe tolerance (equal to the Newton-Raphson tolerance), around 15 for the intermediate tolerance studied, and over 30 for the coarsest tolerance. These accelerations in CPU computation time are all the more appealing as the mesh studied in this paper is very coarse, with only a few hundred elements (see Figure 17 for further details). This opens the door to future work on the use of finer meshes in NCB cross-section studies. (a) $\delta=10^{-2}$ (b) $\delta=10^{-4}$ (c) $\delta=10^{-6}$ Figure 17: Reduced meshes of the standard section obtained for a solution reproduction problem using $N_{u}=5$ displacement modes and several hyper-reduction parameters We have further investigated the quality of the ROM along the time trajectory of the problem.
Figure 18 represents the relative errors at each time step for different ROMs. Since the construction of the ROM is determined by a pair of hyperparameters $\left(N_{u},\delta\right)$, we focus on the influence of each parameter while fixing the second. The fixed parameters in the two test cases are chosen to be the most restrictive values among the parameter sets we explore here. We find that, for our problem, the number of modes has a much greater influence on the time-evolution profiles than the hyper-reduction tolerance. Since the latter parameter leads to an increase in the size of the reduced mesh as it decreases, this prompts the following statement: in this non-parametric case, it is advisable to fix the number of modes to control the approximation error, and it suffices to take a low or intermediate tolerance to obtain good speedups. We notice that for low approximation qualities, there are jumps in the relative error profiles of the displacement fields. This is due to the fact that the ROM is built over the entire life of the standard section, which comprises three distinct physical regimes: the life of the concrete without cables, the prestressing, and the life of the concrete with cables. For small numbers of modes, the ROM is unable to generate modes designed to approximate these three phases. Since we chose to use no weighting, it tends to approximate the final phase much more accurately, which is explained by the fact that the number of time steps associated with this phase is much greater. This higher approximation quality on the last phase is of interest for our applications, as we seek not only a reliable approximation in terms of time trajectories, but also, and above all, a solution that is truly representative of the system's final state. If we needed to control the time-averaged approximation errors in a different manner, it would be natural to use a weighted POD in order to take the non-constant timestepping into account.
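Such a weighted POD can be sketched by scaling each snapshot by the square root of its relative time-step weight before the SVD, so that the retained energy reflects the time-averaged norm rather than giving equal weight to every snapshot; the sketch below is illustrative and assumes column-wise snapshots.

```python
import numpy as np

def weighted_pod(snapshots, t, tol=1e-4):
    """Weighted POD sketch for non-constant time stepping: snapshot column k
    is scaled by sqrt((t^(k) - t^(k-1)) / t_f) before the SVD. Illustrative,
    not the implementation used in this work."""
    w = np.sqrt(np.diff(t) / t[-1])          # one weight per snapshot column
    Phi, s, _ = np.linalg.svd(snapshots * w, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    N = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return Phi[:, :N]
```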
(a) Time evolution of the relative errors for a hyper-reduction tolerance $\delta=10^{-6}$ and a varying number of modes $N_{u}$ (b) Time evolution of the relative errors for $N_{u}=10$ vectors in the reduced basis and varying tolerances $\delta$ Figure 18: Evolution of the approximation errors on the displacements at each time step for several numbers of modes used or for several hyper-reduction tolerances #### 4.1.3 Errors on the quantities of interest The scope of the research we have undertaken requires us to be confident in our ability to provide accurate QoIs. We thus wish to verify that the ROM obtained, in addition to being a good approximation of the HF calculation in terms of displacements while being significantly less computationally expensive, can be used in real applications. This is achieved by investigating the profiles of the normal forces in the cables and of the deformations at the sensor level (average measure of a component of the strain tensor over the internal or external face). We would like to point out that the data post-processing differs according to the QoI studied. The reduced mesh contains all the prestressing cables, while the quadrature rules are unchanged in the one-dimensional mesh. As a result, we can compute the relative error on the normal forces directly after calling the reduced model. For the strains, however, we must reconstruct the strain fields on the HF mesh, and then apply the observation operators (physical sensors) used in the HF framework. This step is computationally inexpensive compared to the overall procedure, as the symmetric gradients of the modes are already known, because they are required for the hyper-reduction process. All that needs to be done is to multiply these modes by the generalized coordinates and apply the observation operator. Figure 19 provides the time evolution of the relative errors on the QoIs.
In Figure 19, we delimit the three phases of a mechanical calculation for a power plant containment building: a first phase in which the cables are not involved in the mechanical calculation, i.e. the concrete evolves on its own; a second phase in which the concrete is prestressed (see Eq. (12) for the specific loads in this case); and, finally, the life of the prestressed concrete, in which the concrete and cables are fully coupled. The three periods are delimited by dotted black vertical lines. The HF solver's adaptive time-stepping process explains the temporal distribution of the various snapshots. The initial time for plotting corresponds to the first time step output by the reference calculation code. Figure 19: Evolution of the approximation errors on the QoIs at each time step for several numbers of modes used or for several hyper-reduction tolerances (the two vertical black lines delimit the prestressing of the cables) The profiles of the strain errors are similar to those of the displacement approximation errors. Furthermore, the better approximation of the deformations during the life of the NCB after prestressing is also confirmed. This confirms the usefulness of the ROM for data assimilation problems: in practice, data is only available once the cables have been prestressed. For the sake of clarity, we would like to point out that the time scale for the profile of the relative errors on the normal forces is not the same as that for the deformations. In fact, only the life of the containment after prestressing is depicted, since the normal forces are always zero beforehand, or known analytically. ### 4.2 Parametric problem In a second step, we study a parametric case. As mentioned above, we consider here a strong-greedy approach. Thus, in order to drive the greedy search, we consider the maximum approximation error on a given training set ($\Theta_{\rm train}$), over the parameters we have not yet examined.
As a reminder, $\Theta_{*}$ corresponds to the set of parameters used in building the ROM. We introduce a notation for the maximal error obtained when testing the ROM: $\Delta_{N}^{\rm stg}=\max\limits_{i\in\Theta_{\rm train}\setminus\Theta_{*}}E^{{\rm app},{\rm avg}}_{u,\mu_{i}}.$ In the physical case under study, the uncertainty is mainly limited to five physical parameters $\mu=[\eta_{\rm dc},\kappa,\alpha_{\rm dc},\eta_{\rm is},\eta_{\rm id}]^{\top}\in\mathbb{R}^{5}$, and in particular to the first two. As a validation of our model reduction approach, we fix all the other parameters of the problem (see the values in Table 4), and restrict the parametric problem to the remaining (varying) parameters.
Input parameter | Notation | Value | Unit
---|---|---|---
Young's modulus (steel) | $E_{\rm s}$ | $1.9\cdot 10^{11}$ | $\mathrm{P}\mathrm{a}$
Poisson's ratio (steel) | $\nu_{\rm s}$ | $0.3$ | $\mathrm{-}$
Density (steel) | $\rho_{\rm s}$ | $7850$ | $\mathrm{k}\mathrm{g}\,\mathrm{m}^{-3}$
Thermal dilation coefficient (steel) | $\alpha_{\rm th,s}$ | $1\cdot 10^{-5}$ | $\mathrm{K}^{-1}$
Guaranteed maximum load stress at break | $f_{\rm prg}$ | $1.86\cdot 10^{9}$ | $\mathrm{P}\mathrm{a}$
Cable cross-section | $S_{\rm s}$ | $5400\cdot 10^{-6}$ | $\mathrm{m}^{2}$
Young's modulus (concrete) | $E_{\rm c}$ | $4.2\cdot 10^{10}$ | $\mathrm{P}\mathrm{a}$
Poisson's ratio (concrete) | $\nu_{\rm c}$ | $0.2$ | $\mathrm{-}$
Density (concrete) | $\rho_{\rm c}$ | $2350$ | $\mathrm{k}\mathrm{g}\,\mathrm{m}^{-3}$
Thermal dilation coefficient (concrete) | $\alpha_{\rm th,c}$ | $5.2\cdot 10^{-6}$ | $\mathrm{K}^{-1}$
Autogenous shrinkage coefficient | $\beta_{\rm endo}$ | $66.1\cdot 10^{-6}$ | $\mathrm{-}$
Desiccation shrinkage coefficient | $\alpha_{\rm dc}$ | X | $\mathrm{-}$
Reversible deviatoric basic stiffness | $k_{\rm rd}$ | $5.98\cdot 10^{18}$ | $\mathrm{P}\mathrm{a}$
Reversible deviatoric basic viscosity | $\eta_{\rm rd}$ | $8.12\cdot 10^{16}$ | $\mathrm{P}\mathrm{a}\,\mathrm{s}$
Irreversible deviatoric
basic viscosity | $\eta_{\rm id}$ | X | $\mathrm{P}\mathrm{a}\,\mathrm{s}$
Basic creep activation energy | $U_{\rm bc}/R$ | $4700$ | $\mathrm{K}$
Basic creep reference temperature | $T_{\rm bc}^{0}$ | $20$ | ${}^{\circ}\mathrm{C}$
Basic creep consolidation parameter | $\kappa$ | X | $\mathrm{-}$
Desiccation creep viscosity | $\eta_{\rm dc}$ | X | $\mathrm{P}\mathrm{a}\,\mathrm{s}$
Dead weight of upper concrete lifts | $\sigma_{z,c}$ | $1.375\cdot 10^{6}$ | $\mathrm{P}\mathrm{a}$
Stress applied to vertical cables | $\sigma_{v,s}$ | $990.7\cdot 10^{6}$ | $\mathrm{P}\mathrm{a}$
Stress applied to horizontal cables | $\sigma_{h,s}$ | $1264.7\cdot 10^{6}$ | $\mathrm{P}\mathrm{a}$
Table 4: Coefficients of the mechanical model fixed for the parametric problem. The notation X corresponds to the parameters that can vary; therefore, we do not give a priori numerical values for them. #### 4.2.1 In-sample test for $\mathcal{P}\subset\mathbb{R}^{2}$ We confine the study to a parametric case with two parameters. The vector of parameters considered is as follows: $\mu=\begin{bmatrix}\eta_{\rm dc}\\\ \kappa\end{bmatrix}\in\left[5\cdot 10^{8},\ 5\cdot 10^{10}\right]\times\left[10^{-5},\ 10^{-3}\right]\subset\mathbb{R}^{2}.$ This is tantamount to fixing the following parameters (in addition to those given in Table 4): $\overline{\alpha}_{\rm dc}=7.56\cdot 10^{-6},\quad\overline{\eta}_{\rm is}=2.76\cdot 10^{18},\quad\overline{\eta}_{\rm id}=1.38\cdot 10^{18}.$ We rely on a training space of size $|\Theta_{\rm train}|=25$, designed as the tensor product of two log-evenly spaced one-dimensional grids (a $5\times 5$ grid). This choice results from a tradeoff between the need for a sufficiently fine discretization of the parameter domain and the offline CPU cost of building the ROM (an HF calculation takes around fifteen minutes). The choice of an optimal discretization is beyond the scope of this work and is a field of research of its own.
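The log-evenly spaced $5\times 5$ tensor-product training grid described above can be generated as follows (illustrative construction; variable names are ours):

```python
import numpy as np

# 5 x 5 tensor-product training grid of Section 4.2.1, log-evenly spaced
# over [5e8, 5e10] x [1e-5, 1e-3] (illustrative)
eta_dc_grid = np.logspace(np.log10(5e8), np.log10(5e10), 5)  # Pa s
kappa_grid = np.logspace(-5, -3, 5)                          # -
Theta_train = [(eta, kap) for eta in eta_dc_grid for kap in kappa_grid]
```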
To help understand the physical problem under study, Figure 20 depicts the evolution of the normal forces over time for different parameter sets. We clearly observe that the loss of prestress in the cables (a key feature in the study of leakage rates) strongly differs according to the pair of parameters studied. The observation of these quantities supports the choice of a logarithmic discretization for the construction of the parametric grid. (a) $\kappa=1\cdot 10^{-5}$ (horizontal cables) (b) $\kappa=1\cdot 10^{-5}$ (vertical cables) (c) $\eta_{\rm dc}=5\cdot 10^{7}$ (horizontal cables) (d) $\eta_{\rm dc}=5\cdot 10^{7}$ (vertical cables) Figure 20: Evolution of the normal forces over time for pairs of parameters belonging to the parametric set of size $|\Theta_{\rm train}|=25$. Figures 20(a)-20(b) (resp. Figures 20(c)-20(d)) feature cases where the parameter $\kappa$ (resp. $\eta_{\rm dc}$) is fixed. For each pair, we plot the time evolution of the normal forces averaged over all the nodes of the vertical and horizontal cables. Figure 21 shows the decay of the POD eigenvalues when using the 25 HF snapshots. The decay is similar to that shown in Figure 15. We notice that for the parametric case, the decay is fast and the gain from compression will be significant. Figure 21: POD eigenvalues for the displacement and the generalized forces ($\bm{\mathfrak{S}}$) using a $\ell_{2}$ compression for a parametric problem As a first test, we report a quick evaluation of the construction of a ROM on a smaller training set, consisting of 4 points. In other words, we take only the extrema (the four corners) of the 2D square to which all the parameters belong. The aim of this simpler case is to compare the two methodologies for building POD reduced bases (in the parametric case) before presenting the 25-point parametric case.
Figure 22 depicts the speedups and approximation errors obtained after 4 iterations (the maximum number of iterations possible for this case) for different pairs of hyper-parameters used for ROM construction: the number of modes and the hyper-reduction tolerance. We observe that the hierarchical (incremental) basis strategy leads, in our case, to an increase in basis size, which reduces the speedup but improves the approximation quality (to below one percent). On the other hand, a POD on all snapshots maintains much better speedups while still reducing the approximation error, although to a lesser extent. The same tradeoff applies to ROM construction as described above. In the case studied here, the regularity of the problem (at least for this set of parameters) prompts us to favor a POD on all snapshots (the basis is therefore not hierarchical across iterations), in order to obtain the most efficient ROM in terms of computational gain while keeping reasonable approximation errors.

(a) Speedups (POD on all HF snapshots) (b) Speedups (Incremental POD) (c) Average error (POD on all HF snapshots) (d) Average error (Incremental POD)

Figure 22: Speedups and average approximation errors on displacement fields for $\mu\in\Theta_{\rm train}$ using a training set of size $|\Theta_{\rm train}|=4$ for different compression tolerances ($\varepsilon$) and hyper-reduction parameters ($\delta$), and comparison between non-incremental and incremental POD

Figure 23: Maximum approximation error on unexplored parameters decreases during greedy iterations with a hyper-reduction parameter $\delta=10^{-5}$

Figure 24: Statistical errors on the training set $\Theta_{\rm train}$, defined as a $5\times 5$ grid, along the greedy iterations. Two strategies are compared: POD on all HF snapshots (red), and incremental POD (orange)

Then, we apply this strategy to the larger training set ($|\Theta_{\rm train}|=25$ parameters).
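The two basis-construction strategies compared here can be sketched as follows (our simplified NumPy implementation; the tolerances `eps` and `tol` are illustrative). The "full" POD recompresses all accumulated snapshots at each iteration, while the incremental variant keeps the current basis and only appends orthonormalized new information, which is why the resulting basis is hierarchical and typically larger:

```python
import numpy as np

def full_pod(all_snaps, eps=1e-8):
    """Non-incremental strategy: recompress all snapshots at once."""
    U, s, _ = np.linalg.svd(all_snaps, full_matrices=False)
    n = int(np.sum(np.cumsum(s**2) / np.sum(s**2) < 1.0 - eps)) + 1
    return U[:, :n]

def incremental_pod(basis, new_snaps, tol=1e-10):
    """Hierarchical strategy: keep the old basis, append new directions."""
    # Project out what the current basis already represents.
    resid = new_snaps - basis @ (basis.T @ new_snaps)
    Q, s, _ = np.linalg.svd(resid, full_matrices=False)
    return np.hstack([basis, Q[:, s > tol]])

# Toy demonstration on random snapshots.
rng = np.random.default_rng(1)
snaps = rng.standard_normal((50, 4))
B = full_pod(snaps[:, :2])          # initial basis from 2 snapshots
B = incremental_pod(B, snaps[:, 2:])  # enrich with 2 new snapshots
```

Since `incremental_pod` never discards previously retained modes, earlier ROMs remain reproducible, at the cost of the larger basis (hence lower speedup) observed above.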
Figure 23 represents the decay of the maximum approximation error on unexplored parameters (the quantity used to drive the greedy procedure). The successive greedy choices clearly lead to a decrease in the maximum error (Figure 25(a)) and in the average error (Figure 25(b)) over the entire training set (explored and unexplored parameters). At the level of individual parameters, Figure 26 shows the time-averaged approximation errors for each parameter over the first iterations of the algorithm. As confirmed by the other figures, we observe that, for the case studied, the errors are of the order of a few percent for all parameters (no more than ten percent) after just a few iterations. This is due to the relative regularity of the problem studied. Figure 24 displays error statistics (median, quartiles) over the course of the greedy iterations on the $5\times 5$ grid. We compare the two approaches, incremental POD and POD on all snapshots, and observe a decrease in the medians over the iterations.

(a) Maximum error (b) Average error

Figure 25: Maximum and average approximation errors on displacement fields for $\mu\in\Theta_{\rm train}$ using a training set of size $|\Theta_{\rm train}|=25$ and a non-incremental POD, for different compression tolerances ($\varepsilon$) with a hyper-reduction parameter $\delta=10^{-5}$

(a) Indexes of parameters (b) Approximation errors

Figure 26: Time-averaged approximation errors on displacement on the training set ($|\Theta_{\rm train}|=25$) for the first greedy iterations with a hyper-reduction parameter $\delta=10^{-5}$

#### 4.2.2 Out-of-sample test for $\mathcal{P}\subset\mathbb{R}^{2}$

All the above numerical results highlight the good approximation quality of the ROM on the training set. Nevertheless, it is crucial to further assess the methodology's suitability for out-of-sample parameters. To this end, we consider a $7\times 7$ grid, which ensures that the points do not coincide with the training grid. We then test the approximation quality of the ROM on this set, called the test set.
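The strong-greedy enrichment that drives these iterations can be sketched as follows (a toy stand-in: we use the projection errors of precomputed snapshots as the error measure, consistent with the strong-greedy setting in which all HF snapshots are known; the actual THM solver and ROM evaluation are not reproduced here):

```python
import numpy as np

def strong_greedy(snapshots, n_iter):
    """Pick, at each iteration, the worst-approximated unexplored
    parameter and enrich the basis with its snapshot."""
    explored, max_errs = [], []
    basis = np.zeros((snapshots.shape[0], 0))
    for _ in range(n_iter):
        proj = basis @ (basis.T @ snapshots)
        errs = np.linalg.norm(snapshots - proj, axis=0)
        errs[explored] = -np.inf          # never re-pick explored parameters
        worst = int(np.argmax(errs))
        explored.append(worst)
        max_errs.append(float(errs[worst]))
        # Enrich the basis with the residual direction of the worst snapshot.
        new = snapshots[:, worst] - proj[:, worst]
        nrm = np.linalg.norm(new)
        if nrm > 1e-12:
            basis = np.hstack([basis, (new / nrm)[:, None]])
    return explored, max_errs

# Toy rank-3 snapshot set: the max error collapses after 3 iterations.
rng = np.random.default_rng(2)
S = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 10))
picked, max_errs = strong_greedy(S, 5)
```

The monotone decay of `max_errs` is the discrete analogue of the error decay shown in Figure 23.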
(a) POD on all HF snapshots (b) Incremental POD

Figure 27: Boxplots for a training set on a $5\times 5$ grid ($|\Theta_{\rm train}|=25$), verified on a test set on a $7\times 7$ grid ($|\Theta_{\rm test}|=49$). The quantities measured are the time-averaged errors on each set, for a ROM resulting from a greedy procedure stopped after 5 iterations.

Figure 27 depicts boxplots of the time-averaged approximation errors on the test set, for the same training set, for the two greedy strategies: one based on a POD on all snapshots (Figure 27(a)) and the other on an incremental POD (Figure 27(b)). From a statistical point of view, most of the test set features good approximation quality, and the distribution of the statistics is consistent across the two cases. For the POD on all snapshots, the error on the training set is slightly smaller than on the test set, while excellent approximation quality is maintained. Despite the simplicity of the case, it remains difficult to capture the worst-case parameters as well as the rest; nevertheless, the worst-case error remains of the order of a few percent on the test set. For the incremental POD, the error levels on the training and test sets are very similar, which is consistent with the fact that more modes are used than with the POD on the snapshot set. The remaining difference between training and test sets comes from the smaller quartile spread on the training set (lower statistical dispersion), which is also coherent.

(a) Boxplots (b) Approximation errors

Figure 28: Statistical distribution of the time-averaged errors generated by several ROMs on the same test set defined on a $7\times 7$ grid ($|\Theta_{\rm test}|=49$). Three ROMs are compared (all obtained by a greedy process): built on a $2\times 2$ training grid with POD on all HF snapshots (blue), on a $5\times 5$ training grid with POD on all HF snapshots (red), and on a $5\times 5$ training grid with an incremental POD (orange).
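The time-averaged errors and boxplot statistics reported here can be sketched as follows (our notation; `u_hf` and `u_rom` are hypothetical trajectory arrays, one per parameter of the test set):

```python
import numpy as np

def time_averaged_error(u_hf, u_rom):
    """Relative error averaged over time.

    u_hf, u_rom: arrays of shape (n_times, n_dofs) for one parameter.
    """
    num = np.linalg.norm(u_hf - u_rom, axis=1)
    den = np.linalg.norm(u_hf, axis=1)
    return float(np.mean(num / den))

def error_stats(errors):
    """Summary statistics of per-parameter errors, as in the boxplots."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return {"median": float(med), "q1": float(q1),
            "q3": float(q3), "worst": float(np.max(errors))}

# Example: a ROM trajectory that is uniformly 2% off the HF one.
u_hf = np.ones((10, 6))
u_rom = 1.02 * u_hf
err = time_averaged_error(u_hf, u_rom)
```

Collecting `time_averaged_error` over all 49 test parameters and passing the list to `error_stats` yields the medians, quartile spreads, and worst-case values discussed above.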
Figure 28(a) is a boxplot of the time-averaged errors on $\Theta_{\rm test}$, and Figure 28(b) shows the time-averaged errors as a function of the parameter index in $\Theta_{\rm test}$ (the numbering follows Figure 26(a), but on a $7\times 7$ grid). In a second step, we can also compare the greedy approaches with each other in terms of their behavior on the test set (Figure 28). As can be expected, the poorest approximation corresponds to the smallest training set, followed by the case with 25 points and POD on all snapshots, and then the case with 25 points and incremental POD. This ranking is reflected in the boxplots (see Figure 28(a)), as well as in the plot of errors as a function of the parameter indices (the indices are distributed analogously to the $5\times 5$ training discretization).

## 5 Conclusion

We proposed and validated a methodology for the construction of ROMs for multi-modeling problems, with an application to a standard section of a prestressed concrete nuclear containment building (NCB). This involves several aspects. First, we devised a robust numerical method, suitable for use with industrially-constrained codes, providing ROMs designed to replicate the behavior of prestressed concrete with high speedups and good approximation errors. Furthermore, we proposed an adaptive approach to iteratively enrich the reduced model over a set of parameters. These two points are presented theoretically and validated numerically. Second, we have also succeeded in producing a ROM that can be used for real engineering applications, in that it provides a good representation of the variables of interest used in practice by engineers, whether for structural state analysis (leakage rate studies) or for in-depth data analysis (data-assimilation problems, Bayesian approaches). Much work is currently underway to make further progress in several directions. First, these promising results are validated on fairly coarse meshes (although such meshes are used in practice) and on small parametric spaces.
Efforts are currently underway to evaluate these approaches while increasing the dimension of the parametric vector and the size of the snapshot vectors considered (mesh refinement). Second, the approach adopted is a strong-greedy process and relies on comparison with known HF snapshots. This leads to significant offline computation costs, since it requires _a priori_ knowledge of these solutions, which is a particular limitation when scaling up. Previous efforts have focused on the construction of low-cost a posteriori error indicators. The efficiency of these indicators in steering the greedy search (within a weak-greedy context) has been demonstrated for problems featuring internal variables. The problems presented here are somewhat more intricate from a theoretical standpoint in mechanics (THM coupling and multi-modeling) and, consequently, pose challenges for a robust implementation in an industrial-grade FE code. Ongoing efforts aim to broaden the application of such indicators to tackle these challenges. Research is also underway to keep this methodology applicable when the parameter space becomes larger. Finally, the coupling of the ROM methodology with the optimization problems mentioned above, and in particular data assimilation, in order to reduce the resolution time, is being studied.

## Acknowledgements

This work was supported by ANRT (French National Association for Research and Technology) and EDF. We extend our gratitude to the code$\_$aster development team and all contributors to the code. Our focus has been on utilizing and advancing the Python library Mordicus, supported by a French 'Fonds Unique Interministériel' (FUI) project and designed as a tool for the advancement of model reduction methods tailored for industrial applications.
# HyperPrompt: Prompt-based Task-Conditioning of Transformers

Yun He, Huaixiu Steven Zheng∗, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen†, Donald Metzler, Heng-Tze Cheng, Ed H. Chi

Google Research, † Waymo LLC

{stevenzheng<EMAIL_ADDRESS>

∗ Equal contribution. Yun returned to TAMU; work done as an intern at Google.

###### Abstract

Prompt-Tuning is a new paradigm for finetuning pre-trained language models in a parameter-efficient way. Here, we explore the use of HyperNetworks to generate hyper-prompts: we propose HyperPrompt, a novel architecture for prompt-based task-conditioning of self-attention in Transformers. The hyper-prompts are end-to-end learnable via generation by a HyperNetwork. HyperPrompt allows the network to learn task-specific feature maps where the hyper-prompts serve as task global memories for the queries to attend to, at the same time enabling flexible information sharing among tasks. We show that HyperPrompt is competitive against strong multi-task learning baselines with as few as $0.14\%$ of additional task-conditioning parameters, achieving great parameter and computational efficiency. Through extensive empirical experiments, we demonstrate that HyperPrompt can achieve superior performance over strong T5 multi-task learning baselines and parameter-efficient adapter variants including Prompt-Tuning and HyperFormer++ on the Natural Language Understanding benchmarks GLUE and SuperGLUE across many model sizes.

Figure 1: HyperPrompt achieves state-of-the-art performance on SuperGLUE for T5 models up to XXL. Prompt-Tuning Lester et al. (2021), which tunes prompt parameters only, achieves competitive performance against the multi-task learning (MTL) baseline for the 11B-parameter model, with a big performance gap for smaller models. HyperPrompt-Global outperforms the strong parameter-efficient adapter variant HyperFormer++ Karimi Mahabadi et al.
(2021), the MTL baseline, and the full fine-tuning of Prompt-Tuning (our implementation) across model sizes by a large margin [e.g., 91.3 vs. 90.2 (MTL) for T5 XXL].

## 1 Introduction

Prompt-Tuning (Lester et al., 2021), learning to condition large language models with soft learnable memory tokens, has recently garnered attention owing to its ability for parameter-efficient finetuning. Prompts are lightly tuned, allowing the model to be trained quickly since the main body of the pretrained model is kept frozen. To this end, this paradigm is strongly reminiscent of adapter layers (Houlsby et al., 2019a; Karimi Mahabadi et al., 2021; Zaken et al., 2021; He et al., 2021), which are also efficiently finetuned. We introduce HyperPrompt, a natural but novel extension of Prompt-Tuning to multi-task learning (MTL) for language. HyperPrompt introduces task-conditioned hyper-prompts that condition the model on task-specific information. Hyper-prompts are injected into the keys and values in the self-attention module, reminiscent of memory-augmented Transformers (Sukhbaatar et al., 2019). This mitigates the cost of having prompts pass through the standard FFN layers in Transformers and provides additional task-specific memory tokens for queries to attend to. We further improve upon this by introducing task-aware and layer-aware HyperNetworks (Ha et al., 2017) that parameterize and generate weights for the prompt generation process. The use of HyperNetworks imbues our model with the necessary flexibility and expressiveness, especially when it comes to incorporating task-specific and layer-specific information into the network. Meanwhile, HyperPrompt remains highly parameter- and compute-efficient and friendly to multi-task scaling: the additional parameters scale sub-linearly with, and are in practice nearly independent of, the number of tasks.
While HyperNetworks have enjoyed some success in learning adapters (Karimi Mahabadi et al., 2021; Tay et al., 2020) and/or continual learning (von Oswald et al., 2019), we note that this is the first exploration of HyperNetworks as a prompt generator. Contrary to prior work, we additionally propose to finetune the entire network instead of only the hyper-prompts. We make several compelling arguments for this. Firstly, Lester et al. (2021) show that parameter-efficient Prompt-Tuning only shines for large (e.g., 11B) models and substantially pales in comparison to fine-tuning when the model is moderately parameterized (e.g., 220M). Secondly, finetuning only adaptive parameters (e.g., prompts/adapters) presents an illusion of efficiency (Dehghani et al., 2021). In reality, the FLOPs incurred by the model are still identical on the forward pass, which saves no compute during inference. Parameter counts, especially when counting only prompts and adapters, are not the only measure of computational efficiency. Instead, FLOPs and training time should be considered together to provide a holistic view.

#### Our Contributions

Overall, the main contributions include:

* • We propose a novel HyperPrompt Transformer architecture with learnable hyper-prompts for multi-task fine-tuning with great parameter and computational efficiency.
* • We demonstrate that for difficult tasks, it is crucial to fine-tune the task-specific parameters together with the backbone model to achieve Pareto efficiency on all tasks.
* • We explore HyperNetworks as a prompt generator, and inject hyper-prompts into the self-attention module as global task memory tokens.
* • HyperPrompt outperforms state-of-the-art parameter-efficient T5 models Raffel et al. (2019) using Prompt-Tuning or adapters on well-established benchmarks such as SuperGLUE and GLUE, across all explored model sizes (see Figure 1).
## 2 Problem Statement

We consider the general setting of multi-task learning for a set of tasks $\\{\mathcal{D_{\tau}}\\}_{\tau=1}^{T}$, where $T$ is the total number of tasks and $\\{\mathcal{D_{\tau}}\\}=\\{x_{\tau}^{(n)},y_{\tau}^{(n)}\\}_{n=1}^{N_{\tau}}$ is the training set of the $\tau$-th task with $N_{\tau}$ samples. We assume that a pre-trained Transformer model $f_{\theta}(\cdot)$ (e.g., T5) is given, where the model is parameterized by $\theta$. To tackle such a multi-task learning problem with $f_{\theta}(\cdot)$, we minimize the objective function $\mathcal{L}(\theta)=\sum_{\tau=1}^{T}\sum_{n=1}^{N_{\tau}}C(f_{\theta}(x_{\tau}^{(n)}),y_{\tau}^{(n)})$, where $C(\cdot,\cdot)$ is typically the cross-entropy loss and $f_{\theta}(x_{\tau}^{(n)})$ is the output for training sample $x_{\tau}^{(n)}$. Transformer-based pre-trained language models such as T5 Raffel et al. (2019) and BART Lewis et al. (2020) are unified text-to-text frameworks where all tasks share the same encoder-decoder architecture: $\\{\\{x_{\tau}^{(n)}\\}_{n=1}^{N_{\tau}}\\}_{\tau=1}^{T}$ are fed into the same encoder and $\\{\\{\hat{y}_{\tau}^{(n)}\\}_{n=1}^{N_{\tau}}\\}_{\tau=1}^{T}$ are generated by the same decoder. For such universal modules, multi-task learning simply corresponds to mixing task data sets together, and there are no task-specific classification or regression networks for each task as in encoder-only models Devlin et al. (2019); Liu et al. (2019b). Previous work Raffel et al. (2019) shows that co-learning all tasks together on a pre-trained Transformer model is inferior to fine-tuning on each task separately. A possible reason is that $\theta$ is task-agnostic (i.e., all parameters are shared), and hence task-specific information is not well captured, which can be especially true for low-resource tasks.
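As a minimal, framework-agnostic sketch of this shared objective, the double sum over tasks and samples can be written out directly (a toy callable stands in for $f_{\theta}$; all names are illustrative):

```python
import math

def multitask_loss(model, task_datasets):
    """L(theta) = sum over tasks tau and samples n of C(f_theta(x), y).

    `model` maps an input to a list of class probabilities; every task
    shares the same parameters theta (no task-specific parts yet)."""
    total = 0.0
    for examples in task_datasets:        # {D_tau}, tau = 1..T
        for x, y in examples:             # N_tau samples for task tau
            probs = model(x)              # f_theta(x)
            total += -math.log(probs[y])  # cross-entropy term C(., .)
    return total

# Toy check: a "model" that always outputs a uniform binary distribution.
uniform = lambda x: [0.5, 0.5]
data = [[(0, 0), (1, 1)], [(2, 0)]]       # T = 2 tasks, N_1 = 2, N_2 = 1
loss = multitask_loss(uniform, data)      # 3 terms of -log(0.5)
```

In a real setup `model` would be the shared text-to-text Transformer and the inner loop would be replaced by batched gradient steps over the mixed task data.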
Therefore, a natural way to improve the performance of Transformers on multi-task learning is to introduce a set of task-conditioned parameters $\\{\delta_{\tau}\\}_{\tau=1}^{T}$ into $f_{\theta}(\cdot)$. The objective function can be updated as $\mathcal{L}(\theta,\\{\delta_{\tau}\\}_{\tau=1}^{T})=\sum_{\tau=1}^{T}\sum_{n=1}^{N_{\tau}}C(f_{\theta,\delta_{\tau}}(x_{\tau}^{(n)}),y_{\tau}^{(n)})$, where $\delta_{\tau}$ is the task-specific parameterization for the $\tau$-th task. During training, both $\theta$ and $\\{\delta_{\tau}\\}_{\tau=1}^{T}$ are updated via back-propagation, because we observe a large performance drop on SuperGLUE when the backbone model $\theta$ is frozen and only the task-conditioned parameters are tuned, as done in Karimi Mahabadi et al. (2021); this is detailed in Section 4.3. To this end, our goal is to design task-conditioned parameterizations of Transformer models that achieve greater parameter and computational efficiency as well as Pareto efficiency for multi-task learning. More explicitly, we have two goals: (1) improving the finetuning performance on most tasks in $\\{\mathcal{D_{\tau}}\\}_{\tau=1}^{T}$ by introducing task-conditioned parameters $\\{\delta_{\tau}\\}_{\tau=1}^{T}$ into $f_{\theta}(\cdot)$, and (2) doing so under the constraint that $\sum_{\tau}\|\\{\delta_{\tau}\\}_{\tau=1}^{T}\|_{0}\ll\|\theta\|_{0}$, so that neither the model capacity nor the computational cost increases substantially.

## 3 Methods

In this section, we introduce HyperPrompt, which has three variants: HyperPrompt-Share, HyperPrompt-Sep and HyperPrompt-Global (Figure 2).
We follow two key design principles to formulate HyperPrompt: (1) injecting task-conditioning into the self-attention module for better computational efficiency and more expressive power via token-level interactions, and (2) using HyperNetworks to simultaneously improve parameter efficiency and allow a flexible degree of task sharing for better generalization.

Figure 2: HyperPrompt framework: (a) in each Transformer block, task-specific hyper-prompts $P_{K,V}$ are prepended to the original key $K$ and value $V$ for the query $Q$ to attend to; (b) in HyperPrompt-Share/Sep, global prompts $P$ are used to generate the hyper-prompts $P_{K,V}$ through local HyperNetworks $h_{k,v}$ at each Transformer layer, each consisting of a down-projection matrix $D_{K,V}$, a ReLU layer, and an up-projection matrix $U_{K,V}$; (c) in HyperPrompt-Global, all the local HyperNetworks ($D_{K,V}$, $U_{K,V}$) are generated by global HyperNetworks $H_{k,v}$ using layer-aware task embeddings $I$ as task-specific inputs (see Section 3.3 for details).

### 3.1 Prompt-Based Task-Conditioned Transformer

Previous adapter-based methods Karimi Mahabadi et al. (2021); Tay et al. (2020) for multi-task learning typically add an adapter (i.e., a dense-ReLU-dense network) for each task after the feed-forward layers in every Transformer block. Instead, the key idea of our approach is to prepend $l$ task-conditioned trainable vectors to the keys and values of the multihead self-attention layer at every Transformer block, so that task-specific attention feature maps are jointly learned with the task-agnostic representation. The idea of prepending learnable prompts to the network has been explored before by Li & Liang (2021); Lester et al. (2021); Liu et al. (2021) for single-task fine-tuning. We introduce and extend this idea for multi-task learning in this subsection.
Specifically, we design a novel method called HyperPrompt following the design principle $\\#1$ of injecting hyper-prompts into self- attention and $\\#2$ using HyperNetworks as generators for hyper-prompts. At a multihead self-attention layer, the original key, value and query are calculated as $\bm{K}_{\tau}=\bm{X}_{\tau}\bm{W}_{k}$, $\bm{V}_{\tau}=\bm{X}_{\tau}\bm{W}_{v}$, $\bm{Q}_{\tau}=\bm{X}_{\tau}\bm{W}_{q}$, where $\bm{X}_{\tau}\in\mathbb{R}^{L\times d}$ is the input sequence of a training sample from the $\tau$-th task, $L$ is the sequence length, $d$ is the model dimension. $\bm{W}_{k}\in\mathbb{R}^{d\times h\times d_{h}}$, $\bm{W}_{v}\in\mathbb{R}^{d\times h\times d_{h}}$ and $\bm{W}_{q}\in\mathbb{R}^{d\times h\times d_{h}}$ project the input into original key $\bm{K}_{\tau}\in\mathbb{R}^{L\times h\times d_{h}}$, value $\bm{V}_{\tau}\in\mathbb{R}^{L\times h\times d_{h}}$ and query $\bm{Q}_{\tau}\in\mathbb{R}^{L\times h\times d_{h}}$, $h$ is the number of heads, $d_{h}$ is the dimension of each head and typically set to $d/h$ to save parameters. To learn the task-specific information for the $\tau$-th task, we have $l$ trainable $d$-dimensional vectors as the hyper-prompts for the key and the value respectively, denoted as $\bm{P}_{\tau,k}\in\mathbb{R}^{l\times h\times d_{h}}$ and $\bm{P}_{\tau,v}\in\mathbb{R}^{l\times h\times d_{h}}$, as shown in Figure 2(a). Then, the hyper-prompts are concatenated with the original key and value: $\bm{K^{\prime}}_{\tau}=\text{concat}(\bm{P}_{\tau,k},\>\bm{K}_{\tau})$ (1) $\bm{V^{\prime}}_{\tau}=\text{concat}(\bm{P}_{\tau,v},\>\bm{V}_{\tau})$ (2) where the new key (value) $\bm{K^{\prime}}_{\tau}$ ($\bm{V^{\prime}}_{\tau})\in\mathbb{R}^{(l+L)\times h\times d_{h}}$ are used to compute the multihead self-attention. 
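Concretely, Eqs. (1) and (2) are concatenations along the sequence axis; a NumPy sketch with toy shapes and random tensors (all names are illustrative):

```python
import numpy as np

# Toy shapes following the text: L tokens, h heads, d_h dims per head, prompt length l.
L, h, d_h, l = 4, 2, 3, 2
rng = np.random.default_rng(0)

K = rng.normal(size=(L, h, d_h))    # original key   K_tau
V = rng.normal(size=(L, h, d_h))    # original value V_tau
P_k = rng.normal(size=(l, h, d_h))  # hyper-prompt for the key,   P_{tau,k}
P_v = rng.normal(size=(l, h, d_h))  # hyper-prompt for the value, P_{tau,v}

# Eqs. (1)-(2): prepend the hyper-prompts along the sequence axis.
K_new = np.concatenate([P_k, K], axis=0)  # K'_tau, shape (l + L, h, d_h)
V_new = np.concatenate([P_v, V], axis=0)  # V'_tau, shape (l + L, h, d_h)
```

Note that the query $\bm{Q}_{\tau}$ is left untouched, so the attention map simply gains $l$ extra key/value positions per task.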
After that, multihead self-attention can be computed: $\bm{O}_{\tau}=\text{Attention}(\bm{Q}_{\tau},\bm{K^{\prime}}_{\tau},\bm{V^{\prime}}_{\tau})=\text{softmax}(\bm{Q}_{\tau}\bm{K^{\prime T}}_{\tau})\bm{V^{\prime}}_{\tau}$ where $\bm{O}_{\tau}\in\mathbb{R}^{L\times d}$ is the output of multihead attention. The hyper-prompts benefit Transformers for multi-task learning in two ways: (1) The prompt for the key, $\bm{P}_{\tau,k}$, is prepended to the original key and participates in the calculation of the attention feature map $\text{softmax}(\bm{Q}_{\tau}\bm{K^{\prime T}}_{\tau})$. $\bm{P}_{\tau,k}$ directly interacts (via matrix multiplication) with the original query $\bm{Q}_{\tau}$, allowing tokens to acquire task-specific semantics. (2) The prompt for the value, $\bm{P}_{\tau,v}$, is prepended to the original value and is absorbed into the self-attention output $\bm{O}_{\tau}$, where each position in $\bm{O}_{\tau}$ is the weighted sum of vectors in $\bm{V^{\prime}}_{\tau}$ with weights given by the attention scores. This way, $\bm{P}_{\tau,v}$ can serve as task-specific memory for multihead attention to retrieve information from.

### 3.2 HyperPrompt

How should the prompts for the $m$-th Transformer block be obtained? A straightforward way is to directly initialize $\bm{P}^{m}_{\tau,k}$ and $\bm{P}^{m}_{\tau,v}$. However, this is parameter-inefficient, as it scales linearly with both the number of tasks $T$ and the number of layers $M$, i.e., $\mathcal{O}(T\times M)$. Instead, we initialize a global prompt $\bm{P}_{\tau}$ for each task (we term it global because it is independent of the layer index, as opposed to the layer-dependent prompt $\bm{P}^{m}_{\tau}$) and apply local HyperNetworks at every Transformer block to project this prompt into $\\{\bm{P}^{m}_{\tau,k}\\}_{m=1}^{M}$ and $\\{\bm{P}^{m}_{\tau,v}\\}_{m=1}^{M}$.

Global Prompts.
Specifically, we initialize a set of global prompts $\\{\bm{P}_{\tau}\\}_{\tau=1}^{T}$, where $\bm{P}_{\tau}\in\mathbb{R}^{l\times d}$ is a trainable matrix that learns the task-specific information of the $\tau$-th task, $d$ is the model dimension and $l$ is the length of the prompt.

Local HyperNetworks. At the $m$-th Transformer block, we apply two local HyperNetworks $h_{k}^{m}$ and $h_{v}^{m}$ to transform the global prompt $\bm{P}_{\tau}$ into layer-specific and task-specific prompts, as shown in Figure 2(b): $\bm{P}^{m}_{\tau,k}=h_{k}^{m}(\bm{P}_{\tau})=\bm{U}_{k}^{m}(\text{Relu}(\bm{D}_{k}^{m}(\bm{P}_{\tau}))),$ (3) $\bm{P}^{m}_{\tau,v}=h_{v}^{m}(\bm{P}_{\tau})=\bm{U}_{v}^{m}(\text{Relu}(\bm{D}_{v}^{m}(\bm{P}_{\tau}))),$ (4) where $\bm{P}^{m}_{\tau,k/v}\in\mathbb{R}^{l\times h\times d_{h}}$. We call these generated prompts hyper-prompts to distinguish them from the global prompts. In particular, to limit the number of parameters, the local HyperNetworks are designed with a bottleneck architecture: $\bm{D}_{k/v}^{m}\in\mathbb{R}^{d\times b}$ and $\bm{U}_{k/v}^{m}\in\mathbb{R}^{b\times h\times d_{h}}$ are down-projection and up-projection matrices, respectively, and $b$ is the bottleneck dimension, satisfying $b\ll d$.

HyperPrompt-Share. We first have all tasks share the same two local HyperNetworks, defined by the down-projection matrices $\bm{D}_{k}^{m}$ and $\bm{D}_{v}^{m}$ and the up-projection matrices $\bm{U}_{k}^{m}$ and $\bm{U}_{v}^{m}$. We refer to this design choice as HyperPrompt-Share. Despite the saving in parameters, one drawback of HyperPrompt-Share is that task conflicts could arise given the limited capacity of the shared local HyperNetworks Wu et al. (2020); Wang et al. (2020).

HyperPrompt-Sep.
In the opposite extreme of HyperPrompt-Share, each task can have its own local HyperNetworks $h_{\tau,k}^{m}(\bm{P}_{\tau})$ and $h_{\tau,v}^{m}(\bm{P}_{\tau})$, as follows: $\bm{P}^{m}_{\tau,k}=h_{\tau,k}^{m}(\bm{P}_{\tau})=\bm{U}_{\tau,k}^{m}(\text{Relu}(\bm{D}_{\tau,k}^{m}(\bm{P}_{\tau}))),$ (5) $\bm{P}^{m}_{\tau,v}=h_{\tau,v}^{m}(\bm{P}_{\tau})=\bm{U}_{\tau,v}^{m}(\text{Relu}(\bm{D}_{\tau,v}^{m}(\bm{P}_{\tau}))),$ (6) where $\bm{D}_{\tau,k/v}^{m}$ and $\bm{U}_{\tau,k/v}^{m}$ are the down-projection and up-projection matrices for the $\tau$-th task, respectively. In this case, each task's hyper-prompts are trained independently and hence there is no information sharing.

### 3.3 HyperPrompt-Global

We further propose HyperPrompt-Global, a novel design that flexibly shares information and knowledge among tasks and blocks while maintaining a low parameter cost. As shown in Figure 2(c), the key idea of HyperPrompt-Global is to generate the local HyperNetworks using the same global HyperNetworks shared by all tasks and all Transformer blocks.

Layer-Aware Task Embedding. Following the same recipe as in Karimi Mahabadi et al. (2021), we define a layer-aware task embedding for better generalization. Let $k_{\tau}\in\mathbb{R}^{t^{\prime}}$ denote the task embedding for the $\tau$-th task, where $t^{\prime}$ is its dimension. To capture layer-specific information, a layer embedding $z_{m}\in\mathbb{R}^{t^{\prime}}$ is introduced. A task projection network $h_{t}(\cdot,\cdot)$ is then applied to fuse the task embedding and the layer embedding into the final layer-aware task embedding $\bm{I}_{\tau}^{m}=h_{t}(k_{\tau},z_{m})$, where $\bm{I}_{\tau}^{m}$ is the input to the shared global HyperNetworks as shown in Figure 2(c). $h_{t}$ is an MLP consisting of two feed-forward layers and a ReLU non-linearity, which takes the concatenation of $k_{\tau}$ and $z_{m}$ as input.

Global HyperNetworks.
A global HyperNetwork $H_{k}(\cdot)$ generates the weight matrices $(\bm{U}_{\tau,k}^{m},\bm{D}_{\tau,k}^{m})$ of the local HyperNetworks for key hyper-prompts, and another global HyperNetwork $H_{v}(\cdot)$ generates the weight matrices $(\bm{U}_{\tau,v}^{m},\bm{D}_{\tau,v}^{m})$ of the local HyperNetworks for value hyper-prompts: $(\bm{U}_{\tau,k}^{m},\bm{D}_{\tau,k}^{m})=H_{k}(\bm{I}_{\tau}^{m})=(\bm{W}^{U_{k}},\bm{W}^{D_{k}})\bm{I}_{\tau}^{m},$ (7) $(\bm{U}_{\tau,v}^{m},\bm{D}_{\tau,v}^{m})=H_{v}(\bm{I}_{\tau}^{m})=(\bm{W}^{U_{v}},\bm{W}^{D_{v}})\bm{I}_{\tau}^{m},$ (8) where $\bm{I}_{\tau}^{m}\in\mathbb{R}^{t}$ is the layer-aware task embedding for the $\tau$-th task at the $m$-th block. $\bm{W}^{D_{k}}\in\mathbb{R}^{(d\times b)\times t}$, $\bm{W}^{D_{v}}\in\mathbb{R}^{(d\times b)\times t}$, $\bm{W}^{U_{k}}\in\mathbb{R}^{(b\times h\times d_{h})\times t}$ and $\bm{W}^{U_{v}}\in\mathbb{R}^{(b\times h\times d_{h})\times t}$ are the weight matrices of $H_{k}(\cdot)$ and $H_{v}(\cdot)$. Given that $\bm{U}_{\tau,k/v}^{m}$ and $\bm{D}_{\tau,k/v}^{m}$ are generated by the global HyperNetworks, we project the global prompts $\bm{P}_{\tau}$ into hyper-prompts $\bm{P}^{m}_{\tau,k/v}$ following Eqs. 5 and 6. Finally, the hyper-prompts $\bm{P}^{m}_{\tau,k/v}$ are prepended to the original key and value at every self-attention layer, as shown in Figure 2(a), to calculate the task-conditioned attention scores. Using global HyperNetworks to generate the projection networks has two benefits:

1. It enables a more flexible way to share information across tasks and layers: the transformation matrices are decomposed into $H_{k/v}(\cdot)$, which are shared by all tasks and all layers. Therefore, the model can adjust the degree of information sharing across tasks and layers by learning the appropriate parameter values in $H_{k/v}(\cdot)$ during end-to-end training.

2. It enables a parameter-efficient task-conditioned parameterization.
The number of extra task-conditioned parameters does not depend on the number of layers $M$, and scales sub-linearly with the total number of tasks $T$. In practice, since the task embeddings and task prompts have far fewer parameters than the global HyperNetworks, the additional task-conditioned parameters are almost independent of $T$.

### 3.4 Parameter Efficiency of HyperPrompt

As shown in A.1, the total number of additional parameters from HyperPrompt-Global is $dlT+4(bdt)+Tt^{\prime}+Mt^{\prime}+(2t^{\prime}+t)e$, where $d$ is the model dimension, $l$ is the length of the prompts, $T$ is the total number of tasks, $b$ is the bottleneck dimension of the weight matrices of the local HyperNetworks, $t^{\prime}/t$ is the dimension of the raw/final layer-aware task embedding, and $e$ is the hidden dimension of $h_{k/v}$. Therefore, the space complexity is $\mathcal{O}(d(lT+4bt))$, given that in practice $M\sim T$, $t^{\prime}\ll dl$, and $e\ll bd$. This leads to sub-linear scaling with respect to $T$. Furthermore, $T$ is typically $\sim\mathcal{O}(10)$ for multi-task learning, and a reasonable $l\sim\mathcal{O}(10)$ suffices to achieve optimal performance, as detailed in Section 4.7. On the other hand, typical values are $b\sim 24$ and $t\geq 32$, so $4bt\gg lT$ is satisfied in most cases. Hence, the space complexity can be further simplified to $\mathcal{O}(bdt)$. In conclusion, the space complexity of HyperPrompt-Global mainly comes from the global HyperNetworks and is practically independent of the prompt length $l$, the number of Transformer layers $M$, and the number of tasks $T$.

## 4 Experiments

### 4.1 Experimental Setup

Datasets. We evaluate the performance of the models on GLUE Wang et al. (2018) and SuperGLUE Wang et al. (2019). Each is a collection of text classification tasks that test general language understanding ability.
Specifically, the tasks include: sentence acceptability (CoLA), sentiment analysis (SST-2), paraphrasing/sentence similarity (MRPC, STS-B and QQP), natural language inference (MNLI, QNLI, RTE and CB), coreference resolution (WSC), sentence completion (COPA), word sense disambiguation (WIC) and question answering (MultiRC, ReCoRD and BoolQ).

Transformers. Following previous work Karimi Mahabadi et al. (2021) and Tay et al. (2020), our models are built on top of the state-of-the-art Transformer model T5 Raffel et al. (2019), which uses the encoder-decoder architecture from Vaswani et al. (2017). We use pre-trained T5 models with sizes from Base (220M parameters) to XXL (11B).

Evaluation. We save a checkpoint every 2000 steps for all models and follow the same convention as Raffel et al. (2019) in selecting the best checkpoint for each task. The emphasis of our evaluation is not to find the best single checkpoint for all tasks but to test the model's transfer-learning ability among the co-trained tasks. We first calculate the average of all metrics for each task and then report the average over all tasks for GLUE and SuperGLUE.

Baselines. We compare our proposed MTL-Prompt and HyperPrompt-Share/Sep/Global with vanilla T5 models Raffel et al. (2019) for multi-task learning, which we refer to as MTL. Another baseline is Vanilla Adapter, proposed in Houlsby et al. (2019b), which adds adapter modules for each task after each of the two feed-forward modules in each Transformer block of the T5 model. The state-of-the-art adapter-based method for multi-task learning is HyperFormer++, proposed in Karimi Mahabadi et al. (2021), which uses HyperNetworks to generate adapters for each task and adds them after the feed-forward modules following Houlsby et al. (2019b). In addition, Prompt-Tuning Lester et al. (2021) was originally designed for parameter-efficient single-task fine-tuning and only prepends prompts to the input word embeddings in the first layer.
We slightly modify it by initializing and prepending prompts for each task so that Prompt-Tuning can be applied to multi-task learning. We defer additional details of the experiments to A.2.

### 4.2 Key Results

Figure 1 provides an overall summary of the results of HyperPrompt. Previous prompt-tuning methods Lester et al. (2021); Li & Liang (2021) focus on parameter-efficient single-task fine-tuning and hence freeze the backbone and only fine-tune the prompts. Their experiments show that tuning only the prompts can match full model training for a very large 11B model (Figure 1), but substantially pales for moderate model sizes. Our HyperPrompt-Global architecture, when fully fine-tuned, achieves state-of-the-art performance on SuperGLUE across four different model sizes. Competitive adapter-tuning variants including Prompt-Tuning and HyperFormer++ can either match or slightly improve upon the multi-task learning (MTL) baseline on the SuperGLUE dataset. In contrast, HyperPrompt-Global outperforms the strong MTL baseline by a large margin on SuperGLUE score ($78.9$ vs $77.2$ for T5 Base). Interestingly, such a performance gain continues all the way to model sizes as large as XXL (e.g., $91.3$ vs $90.2$) with only $0.14\%$ additional parameters.

### 4.3 Tuning All vs. Task-Conditioned Parameters

Tunable | Model | GLUE | SuperGLUE
---|---|---|---
All | MTL | 88.3 | 85.9
All | HyperFormer++ | 88.8 | 86.4
All | HyperPrompt-Global | 89.4 | 87.0
Task | HyperFormer++ | 87.3 | 80.5
Task | HyperPrompt-Global | 87.5 | 81.5

Table 1: Comparison of fine-tuning all vs. task-specific parameters; average GLUE and SuperGLUE scores on T5 Large.

Recently, Karimi Mahabadi et al. (2021) show that tuning only adapters can be competitive against full fine-tuning. However, their evaluation is conducted only on GLUE with smaller models, including T5 Small and Base.
In the experiments, we first compare tuning the full model vs. only the task-conditioned parameters. Table 1 shows the comparison of GLUE and SuperGLUE average scores on T5 Large (for per-task performance, please refer to A.4). For GLUE, the observation is consistent with Karimi Mahabadi et al. (2021): task-specific-only fine-tuning of HyperFormer++ and HyperPrompt-Global is comparable to the MTL baseline. However, on SuperGLUE, we observe a large gap: the average score drops by 5.5 and 5.9 for HyperPrompt-Global and HyperFormer++, respectively. These experiments therefore show that tuning only the task-conditioned parameters is not enough to achieve results competitive with full model training for multi-task learning on high-difficulty benchmarks such as SuperGLUE. This is consistent with the results of Prompt-Tuning Lester et al. (2021). Hence, the rest of the experiments are conducted with all model parameters tuned.

### 4.4 Computational Efficiency

Table 2 presents the computational efficiency of the adapter/prompt models. HyperPrompt-Global (together with HyperPrompt-Share) has the lowest # Ops, since hyper-prompts are injected into self-attention and skip the standard FFN layers. In contrast, HyperFormer++ has $\sim 3\text{x}$ the # Ops of the other variants. Regarding training time, HyperPrompt-Share is fastest, given that the local HyperNetworks are shared across tasks. Vanilla Adapter and HyperPrompt-Global are comparable, while HyperFormer++ and Prompt-Tuning take significantly longer for full fine-tuning. This shows the computational efficiency of HyperPrompt for both training and inference.
Model | # Ops | Training Time
---|---|---
Vanilla Adapter | $1.01\times 10^{13}$ | 8.4h
HyperFormer++ | $3.14\times 10^{13}$ | 10.3h
Prompt-Tuning | $1.16\times 10^{13}$ | 11.1h
HyperPrompt-Sep | $1.01\times 10^{13}$ | 8.9h
HyperPrompt-Share | $9.8\times 10^{12}$ | 8.0h
HyperPrompt-Global | $9.8\times 10^{12}$ | 8.7h

Table 2: The number of operations for a single forward pass and training time on T5 Base.

### 4.5 Ablation Study

Table 3 presents the results on T5 Base and Table 4 presents the results on T5 Large (see more detailed results in A.4). HyperPrompt-Global outperforms all baselines in terms of the average scores on GLUE and SuperGLUE.

HyperPrompt-Global vs. Prompt-Tuning. The original Prompt-Tuning Lester et al. (2021) is for single-task fine-tuning. To be parameter-efficient, it only trains the prompts with the backbone frozen. To make a fair comparison, we modify Prompt-Tuning by (1) training both the prompts and the backbone, and (2) adding a prompt for each task and co-training all tasks together. As shown in Tables 3 and 4, HyperPrompt-Global outperforms Prompt-Tuning by 2.0 (0.6) and 1.6 (1.4) on GLUE and SuperGLUE using T5 Base (Large), respectively. HyperPrompt-Global improves upon Prompt-Tuning in two ways: (1) Prompt-Tuning only adds prompts to the word embedding layer, while HyperPrompt-Global adds hyper-prompts at every Transformer layer and hence is more expressive; and (2) prompts for different tasks are trained independently in Prompt-Tuning, while HyperPrompt-Global enables flexible information sharing via HyperNetworks.
Model | #Params | GLUE | SuperGLUE
---|---|---|---
MTL | 1.0x | 85.5 (0.9) | 77.2 (0.2)
Vanilla Adapter | 1.06x | 86.7 (0.3) | 77.5 (0.1)
HyperFormer++ | 1.04x | 86.5 (0.0) | 78.2 (0.7)
Prompt-Tuning | 1.0003x | 84.8 (0.6) | 77.3 (0.2)
HyperPrompt-Share | 1.008x | 86.4 (0.6) | 78.2 (0.7)
HyperPrompt-Sep | 1.06x | 86.8 (0.1) | 77.5 (0.1)
HyperPrompt-Global | 1.04x | 86.8 (0.4) | 78.9 (0.5)

Table 3: GLUE and SuperGLUE average scores (standard deviations) over 3 runs of HyperPrompt against baselines on T5 Base.

Model | #Params | GLUE | SuperGLUE
---|---|---|---
MTL | 1.0x | 88.3 (0.6) | 85.9 (0.3)
Vanilla Adapter | 1.06x | 88.8 (0.2) | 86.1 (0.5)
HyperFormer++ | 1.02x | 88.8 (0.0) | 86.4 (0.5)
Prompt-Tuning | 1.0001x | 88.8 (0.3) | 85.6 (0.1)
HyperPrompt-Share | 1.008x | 89.3 (0.1) | 86.8 (0.2)
HyperPrompt-Sep | 1.06x | 89.4 (0.2) | 86.1 (0.3)
HyperPrompt-Global | 1.02x | 89.4 (0.1) | 87.0 (0.5)

Table 4: GLUE and SuperGLUE average scores (standard deviations) over 3 runs of HyperPrompt against baselines on T5 Large.

HyperPrompt-Global vs. HyperFormer++. Our method is superior to the state-of-the-art baseline HyperFormer++ in average GLUE and SuperGLUE scores for both Base and Large T5 models. For example, HyperPrompt-Global on T5 Large achieves 87.0 on SuperGLUE, compared to 86.4 for HyperFormer++ (Table 4). Note that the main difference between the two methods is that HyperPrompt-Global inserts the task-conditioned parameters as prompts into the self-attention layers, while HyperFormer++ inserts adapters after each block. We believe task-conditioning in self-attention gives more expressive power than in the feed-forward network as done in adapters. Hyper-prompts that are prepended to the key and value participate in the attention interactions between different token positions, which helps the model better capture task-dependent semantics.

HyperPrompt-Global vs. MTL.
Next, we observe that HyperPrompt-Global greatly improves performance over the vanilla Transformer model (referred to as MTL): a 1.7 (1.1) gain in SuperGLUE score for T5 Base (Large) with $4\%$ ($2\%$) additional parameters. In conclusion, the experiments show that HyperPrompt-Global is a parameter-efficient and effective task-conditioned parameterization of Transformers for multi-task learning.

HyperPrompt-Global vs. HyperPrompt-Share/Sep. Interestingly, HyperPrompt-Share is better than HyperPrompt-Sep on SuperGLUE for both Base and Large models, while the opposite is true for GLUE. Notice that all tasks share the same two projection networks in HyperPrompt-Share, while each task has its own projection networks in HyperPrompt-Sep. More importantly, we observe that HyperPrompt-Global, where the projection networks are generated by the global HyperNetworks, always achieves the best performance on both GLUE and SuperGLUE. Hence, the experiments show that HyperPrompt-Global can adjust the degree of information sharing for better multi-task generalization, compared to HyperPrompt-Share/Sep.

### 4.6 Peeking into Hyper-Prompts

To shed light on how hyper-prompts help improve multi-task generalization via task-conditioning, we peek into HyperPrompt-Global models by looking at the distribution of attention scores. We choose the GLUE task MRPC as an example. To avoid biasing on individual examples, we aggregate over 100 validation examples to compute the quantity of interest (see A.3 for details). First, we compute the attention mass on hyper-prompts for each encoder layer. Figure 3 (top) shows that the network places lower attention mass on hyper-prompts in the lower layers and gradually increases attention mass in higher layers. This phenomenon indicates that higher levels of the Transformer become more task-specialized, while it is beneficial for the lower levels to learn task-agnostic representations Yosinski et al.
(2014) by placing lower attention mass on hyper-prompts. Furthermore, we calculate the entropy of the attention scores on the tokens. For HyperPrompt-Global, we remove the hyper-prompts from the calculation and re-normalize the attention scores on the tokens to make a fair comparison with the MTL baseline. Figure 3 (bottom) shows a shift of the entropy distribution towards higher values for HyperPrompt-Global. This signifies that injecting hyper-prompts encourages a more diverse attention distribution, which appears to be beneficial to model generalization.

Figure 3: Visualization of attention mass and entropy distribution.

Figure 4: Impact of hyper-prompt length in HyperPrompt-Global (GLUE score on T5 Base). (a) Decoder. (b) Encoder.

### 4.7 Impact of Hyper-Prompt Length

HyperPrompt prepends $l$ trainable hyper-prompts to the keys and values of the self-attention layer at every Transformer layer. In Figure 4, we present the results of tuning the prompt length $l$ on GLUE using T5 Base for HyperPrompt-Global (similar patterns are observed on T5 Large and SuperGLUE). We first add hyper-prompts on the decoder and search for the best $l$, and then search for the best encoder $l$ with the best decoder hyper-prompt length fixed. As shown in Figure 4(a), $l=6$ is best for the decoder. As shown in Figure 4(b), HyperPrompt-Global achieves its best result of 86.8 with $l=16$ on the encoder and $l=6$ fixed for the decoder. The experiments show that hyper-prompts of length $l\sim\mathcal{O}(10)$ are sufficient to achieve superior performance. Note that the original sequence length is 512 on the encoder and 32 on the decoder. Therefore, HyperPrompt does not substantially increase the time complexity of the self-attention layers in practice.
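A back-of-the-envelope check of that last claim: queries of length $L$ now attend to $l+L$ keys/values, so the per-layer attention cost grows by a factor of $(l+L)/L$ (this assumes attention cost dominates and scales linearly in key/value length):

```python
# Sequence and prompt lengths quoted above.
L_enc, l_enc = 512, 16   # encoder: 512 tokens, 16 hyper-prompts
L_dec, l_dec = 32, 6     # decoder: 32 tokens, 6 hyper-prompts

# Queries of length L attend to l + L keys/values, so the
# attention-cost overhead factor per layer is (l + L) / L.
enc_overhead = (L_enc + l_enc) / L_enc   # 1.03125 -> ~3% extra
dec_overhead = (L_dec + l_dec) / L_dec   # 1.1875  -> larger ratio, short sequence
```

The encoder, where most tokens live, pays only about 3% extra attention compute; the decoder's relative overhead is larger, but its absolute cost at sequence length 32 is small.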
### 4.8 Encoder vs Decoder To understand the effect of adding task-conditioned parameters to different parts of the network, we present the results of HyperPrompt-Global and HyperFormer++ when adding hyper-prompts/adapters to: (1) encoder-only, (2) decoder-only, and (3) both encoder and decoder. As shown in Table 5, we observe that adding task-conditioned parameters to the encoder (encoder-only) performs better than decoder-only on GLUE. However, the opposite is true for SuperGLUE, where encoder-only is substantially worse than decoder-only. This could potentially be a trainability issue when prompts are inserted into encoders, i.e. a different learning rate might be required to learn the prompt parameters from scratch. We leave this investigation as future work. Based on this experiment, we add task-conditioned parameters to the decoder for SuperGLUE in our experiments.

Model | #Params | GLUE | SuperGLUE
---|---|---|---
MTL | 1.0x | 85.5 | 77.2
HyperFormer++-Encoder | 1.02x | 85.9 | 74.4
HyperFormer++-Decoder | 1.02x | 85.7 | 78.2
HyperFormer++-Enc-Dec | 1.04x | 86.5 | 74.8
HyperPrompt-Encoder | 1.02x | 86.6 | 76.5
HyperPrompt-Decoder | 1.02x | 86.3 | 78.9
HyperPrompt-Enc-Dec | 1.04x | 86.8 | 78.7

Table 5: Ablation of inserting hyper-prompts or adapters into Encoder/Decoder/Enc-Dec (Base model). ## 5 Related Work Prompt-Tuning. Prompt tuning is becoming a new paradigm for adapting pre-trained general-purpose language models to downstream tasks, as a lightweight alternative to the popular fine-tuning approach. Here, we use the term Prompt-Tuning to cover a general family of methods following the prompting idea in GPT-3 Brown et al. (2020). To avoid manually designing the prompts, recent efforts have focused on automatically searching for discrete prompting words Shin et al. (2020). On the other hand, soft prompts Li & Liang (2021); Hambardzumyan et al. (2021); Lester et al. (2021); Liu et al.
(2021) in the form of continuous vectors are introduced to simplify the process and have shown competitive results in both natural language understanding Lester et al. (2021); Liu et al. (2021) and generation tasks Li & Liang (2021). In particular, Lester et al. (2021) show that soft prompts can become competitive against full fine-tuning for an 11B-parameter model, but with a big performance gap when the model size is moderate. In our work, we close this gap in the full fine-tuning setting and demonstrate that HyperPrompt can outperform strong multi-task baselines across all model sizes studied. Adapter-Tuning. Adapter tuning Houlsby et al. (2019a, b); Karimi Mahabadi et al. (2021) is an alternative approach for parameter-efficient lightweight tuning of pre-trained language models for downstream tasks. Task-specific adapter layers Houlsby et al. (2019a) are inserted into the Transformer block for fine-tuning while the rest of the backbone model is frozen. By adding only a few percent of additional parameters, Karimi Mahabadi et al. (2021) show that competitive performance can be obtained on NLU benchmarks such as GLUE Wang et al. (2018). However, one limitation of the existing work is that NLU is evaluated on the GLUE dataset, which is known to be no longer suitable for measuring progress in language understanding Wang et al. (2019). In our work, we evaluate HyperPrompt on SuperGLUE in addition to GLUE, and show that higher-difficulty tasks such as SuperGLUE indeed require full fine-tuning of the model, beyond adapter tuning, to be competitive against state-of-the-art multi-task baselines. We also demonstrate that it is more advantageous to inject prompts into self-attention than to add adapters. Multi-task Natural Language Understanding.
Multi-task learning is an important and challenging research direction in both the full fine-tuning and prompt-tuning paradigms because of the competing needs of training and serving a single model while achieving Pareto efficiency on all tasks. The T5 model Raffel et al. (2019) renders all NLP tasks as a text-to-text problem. However, its best results are obtained by task-specific fine-tuning. MT-DNN (multi-task deep neural network) Liu et al. (2019a) shares parameters between several NLP tasks and achieves strong performance on the GLUE benchmark. Aghajanyan et al. (2021) use around 50 tasks to boost multi-task learning performance. Aribandi et al. (2021) build an extremely diverse set of 107 NLP tasks for extreme multi-task scaling and demonstrate superior performance on a wide range of benchmarks. Recently, Wei et al. (2021); Sanh et al. (2021) illustrated how a multi-task learning stage can greatly improve the zero-shot prompting performance of large language models. ## 6 Conclusion We propose a novel architecture for prompt-based task-conditioning of self-attention in Transformers. The hyper-prompts are generated by a HyperNetwork to enable flexible information sharing among tasks while remaining efficient in parameters and computation. HyperPrompt allows the network to learn task-specific feature maps, where the hyper-prompts serve as task-global memories, encouraging a more diverse distribution of attention. Extensive experiments show that HyperPrompt achieves superior performance over strong T5 multi-task learning baselines and parameter-efficient models including Prompt-Tuning and HyperFormer++ on the GLUE and SuperGLUE benchmarks. ## References * Aghajanyan et al. (2021) Aghajanyan, A., Gupta, A., Shrivastava, A., Chen, X., Zettlemoyer, L., and Gupta, S. Muppet: Massive multi-task representations with pre-finetuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pp.
5799–5811, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.468. URL https://aclanthology.org/2021.emnlp-main.468. * Aribandi et al. (2021) Aribandi, V., Tay, Y., Schuster, T., Rao, J., Zheng, H. S., Mehta, S. V., Zhuang, H., Tran, V. Q., Bahri, D., Ni, J., Gupta, J., Hui, K., Ruder, S., and Metzler, D. Ext5: Towards extreme multi-task scaling for transfer learning, 2021. * Brown et al. (2020) Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), _Advances in Neural Information Processing Systems_ , volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. * Dehghani et al. (2021) Dehghani, M., Arnab, A., Beyer, L., Vaswani, A., and Tay, Y. The efficiency misnomer. _arXiv preprint arXiv:2110.12894_ , 2021. * Devlin et al. (2019) Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423. * Ha et al. (2017) Ha, D., Dai, A. M., and Le, Q. V. Hypernetworks. 
In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017. URL https://openreview.net/forum?id=rkpACe1lx. * Hambardzumyan et al. (2021) Hambardzumyan, K., Khachatrian, H., and May, J. WARP: Word-level Adversarial ReProgramming. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pp. 4921–4933, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.381. URL https://aclanthology.org/2021.acl-long.381. * He et al. (2021) He, J., Zhou, C., Ma, X., Berg-Kirkpatrick, T., and Neubig, G. Towards a unified view of parameter-efficient transfer learning, 2021\. * Houlsby et al. (2019a) Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for NLP. In Chaudhuri, K. and Salakhutdinov, R. (eds.), _Proceedings of the 36th International Conference on Machine Learning_ , volume 97 of _Proceedings of Machine Learning Research_ , pp. 2790–2799. PMLR, 09–15 Jun 2019a. URL https://proceedings.mlr.press/v97/houlsby19a.html. * Houlsby et al. (2019b) Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_ , pp. 2790–2799. PMLR, 2019b. * Karimi Mahabadi et al. (2021) Karimi Mahabadi, R., Ruder, S., Dehghani, M., and Henderson, J. Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , August 2021. * Langley (2000) Langley, P. 
Crafting papers on machine learning. In Langley, P. (ed.), _Proceedings of the 17th International Conference on Machine Learning (ICML 2000)_ , pp. 1207–1216, Stanford, CA, 2000\. Morgan Kaufmann. * Lester et al. (2021) Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. _arXiv preprint arXiv:2104.08691_ , 2021. * Lewis et al. (2020) Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pp. 7871–7880, 2020. * Li & Liang (2021) Li, X. L. and Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. _arXiv preprint arXiv:2101.00190_ , 2021. * Liu et al. (2019a) Liu, X., He, P., Chen, W., and Gao, J. Multi-task deep neural networks for natural language understanding. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pp. 4487–4496, Florence, Italy, July 2019a. Association for Computational Linguistics. doi: 10.18653/v1/P19-1441. URL https://aclanthology.org/P19-1441. * Liu et al. (2019b) Liu, X., He, P., Chen, W., and Gao, J. Multi-task deep neural networks for natural language understanding. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pp. 4487–4496, 2019b. * Liu et al. (2021) Liu, X., Zheng, Y., Du, Z., Ding, M., Qian, Y., Yang, Z., and Tang, J. Gpt understands, too, 2021. * Raffel et al. (2019) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. _arXiv preprint arXiv:1910.10683_ , 2019. * Sanh et al. (2021) Sanh, V., Webson, A., Raffel, C., Bach, S. H., Sutawika, L., Alyafeai, Z., Chaffin, A., Stiegler, A., Scao, T. 
L., Raja, A., Dey, M., Bari, M. S., Xu, C., Thakker, U., Sharma, S. S., Szczechla, E., Kim, T., Chhablani, G., Nayak, N., Datta, D., Chang, J., Jiang, M. T.-J., Wang, H., Manica, M., Shen, S., Yong, Z. X., Pandey, H., Bawden, R., Wang, T., Neeraj, T., Rozen, J., Sharma, A., Santilli, A., Fevry, T., Fries, J. A., Teehan, R., Biderman, S., Gao, L., Bers, T., Wolf, T., and Rush, A. M. Multitask prompted training enables zero-shot task generalization, 2021\. * Shazeer & Stern (2018) Shazeer, N. and Stern, M. Adafactor: Adaptive learning rates with sublinear memory cost. In _International Conference on Machine Learning_ , pp. 4596–4604. PMLR, 2018. * Shazeer et al. (2018) Shazeer, N., Cheng, Y., Parmar, N., Tran, D., Vaswani, A., Koanantakool, P., Hawkins, P., Lee, H., Hong, M., Young, C., et al. Mesh-tensorflow: Deep learning for supercomputers. _arXiv preprint arXiv:1811.02084_ , 2018. * Shin et al. (2020) Shin, T., Razeghi, Y., Logan IV, R. L., Wallace, E., and Singh, S. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 4222–4235, Online, November 2020\. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.346. URL https://aclanthology.org/2020.emnlp-main.346. * Sukhbaatar et al. (2019) Sukhbaatar, S., Grave, E., Lample, G., Jegou, H., and Joulin, A. Augmenting self-attention with persistent memory. _arXiv preprint arXiv:1907.01470_ , 2019. * Tay et al. (2020) Tay, Y., Zhao, Z., Bahri, D., Metzler, D., and Juan, D.-C. Hypergrid: Efficient multi-task transformers with grid-wise decomposable hyper projections. _arXiv preprint arXiv:2007.05891_ , 2020. * Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In _Advances in neural information processing systems_ , pp. 5998–6008, 2017. 
* von Oswald et al. (2019) von Oswald, J., Henning, C., Sacramento, J., and Grewe, B. F. Continual learning with hypernetworks. _arXiv preprint arXiv:1906.00695_ , 2019. * Wang et al. (2018) Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Glue: A multi-task benchmark and analysis platform for natural language understanding. _arXiv preprint arXiv:1804.07461_ , 2018. * Wang et al. (2019) Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Superglue: A stickier benchmark for general-purpose language understanding systems. _arXiv preprint arXiv:1905.00537_ , 2019. * Wang et al. (2020) Wang, Y., Zhao, Z., Dai, B., Fifty, C., Lin, D., Hong, L., and Chi, E. H. Small towers make big differences, 2020. * Wei et al. (2021) Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. Finetuned language models are zero-shot learners, 2021. * Wu et al. (2020) Wu, S., Zhang, H. R., and Ré, C. Understanding and improving information transfer in multi-task learning. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=SylzhkBtDB. * Yosinski et al. (2014) Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. How transferable are features in deep neural networks? In _Advances in neural information processing systems_ , pp. 3320–3328, 2014. * Zaken et al. (2021) Zaken, E. B., Ravfogel, S., and Goldberg, Y. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. _arXiv preprint arXiv:2106.10199_ , 2021. ## Appendix A Appendix This section covers the parameter count of HyperPrompt, the experimental details, the calculation of attention mass and entropy, and per-task performance of GLUE and SuperGLUE. 
### A.1 Parameter Count of HyperPrompt (§3.4) Since the encoder and the decoder of Transformers have approximately the same capacity, the calculation considers only the decoder side for simplicity. First, we have global task prompts $\bm{P}_{\tau}\in\mathbb{R}^{l\times d}$ for the $\tau$-th task, which contain $dlT$ parameters for $T$ tasks. The global HyperNetworks contain four weight matrices $\bm{W}^{D_{k}}\in\mathbb{R}^{(d\times b)\times t}$, $\bm{W}^{D_{v}}\in\mathbb{R}^{(d\times b)\times t}$, $\bm{W}^{U_{k}}\in\mathbb{R}^{(b\times h\times d_{h})\times t}$ and $\bm{W}^{U_{v}}\in\mathbb{R}^{(b\times h\times d_{h})\times t}$, which result in $4(bdt)$ parameters (we let $d=h\times d_{h}$). To obtain layer-aware task embeddings, HyperPrompt learns a task embedding $k_{\tau}\in\mathbb{R}^{t^{\prime}}$ for the $\tau$-th task and a layer embedding $z_{m}\in\mathbb{R}^{t^{\prime}}$ for the $m$-th Transformer block, which in total results in $Tt^{\prime}+Mt^{\prime}$ parameters. Besides, a task projection network $h_{t}$ is applied to fuse the task embedding and the layer embedding into the final layer-aware task embedding $\bm{I}_{\tau}^{m}\in\mathbb{R}^{t}$. $h_{t}$ is a two-layer feed-forward network and contains $(2t^{\prime}+t)e$ parameters, where $e$ is the hidden dimension of $h_{t}$. ### A.2 Experimental Details (§4.1) Our models were implemented using Mesh Tensorflow (https://github.com/tensorflow/mesh) Shazeer et al. (2018) with the T5 library (https://github.com/google-research/text-to-text-transfer-transformer) Raffel et al. (2019). Following Raffel et al. (2019), all data are preprocessed into a “sequence-to-sequence” format. The length of the sequence is 512 at the encoder and 32 at the decoder. For all experiments, we train models for 300K steps with a batch size of 128, and each batch is a mixture which samples each task proportionately to the number of examples in its dataset.
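The A.1 count above can be tallied in a few lines. This is an illustrative sketch only: the concrete hyper-parameter values below are made up (not the paper's settings), with $d$, $h$, $d_{h}$, $b$, $t$, $t^{\prime}$, $e$, $T$, $M$, and $l$ as defined in A.1.

```python
# Decoder-side additional parameter count of HyperPrompt (formulas from A.1).
# The numeric values are illustrative placeholders, not the paper's settings.
d, h, d_h = 768, 12, 64            # model dim d = h * d_h (T5 Base-like)
T, M = 8, 12                       # number of tasks, number of Transformer blocks
l, b, t, t_prime, e = 16, 24, 64, 64, 64

prompts   = d * l * T              # global task prompts P_tau: dlT
hypernets = 4 * b * d * t          # W^{D_k}, W^{D_v}, W^{U_k}, W^{U_v}: 4(bdt)
embeds    = (T + M) * t_prime      # task and layer embeddings: Tt' + Mt'
proj      = (2 * t_prime + t) * e  # two-layer task projection network h_t: (2t' + t)e

total_extra = prompts + hypernets + embeds + proj
```

Note that the hypernetwork matrices dominate the count, which is why $b$ is expressed as $d/r$ with a reduction factor $r$ in A.2.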
The learning rate is a constant $1\text{e-}3$ with the Adafactor optimizer (Shazeer & Stern, 2018). For hyper-parameter tuning, the prompt length $l$ is selected from $\\{12,16,20,24\\}$ at the encoder and $\\{2,4,6,8,10,12,14,16\\}$ at the decoder. The bottleneck dimension $b$ in the transform matrices is set to $d/r$, where $d$ is the model dimension of the T5 models and $r$ is a reduction factor selected from $\\{16,32,64\\}$. The dimension $t$ of the layer-aware task embedding is selected from $\\{32,64,128\\}$. For a fair comparison, the hyper-parameters of the baseline methods are set so that they have approximately the same number of parameters as HyperPrompt, with the exception that Prompt-Tuning and MTL-Prompt-Share are extremely parameter-efficient with significantly fewer parameters. ### A.3 Attention Mass and Entropy Calculation (§4.6) To calculate the attention mass over hyper-prompts per layer, we averaged the hyper-prompt attention softmax scores across 100 validation examples and each attention head in a layer, and summed across each query attending to the hyper-prompts. In other words, we aggregated the amount of attention given to hyper-prompts by queries. To calculate the attention entropy over tokens (other than hyper-prompts), we calculated the entropy of the attention distributions (averaged across attention heads) for 100 validation examples. This results in $\sum_{n=1}^{100}\sum_{L=1}^{12}{|X_{n}|}$ entropies, calculated and visualized in Figure 3 (bottom). For the HyperPrompt model, this involved re-normalizing the softmax distribution after removing hyper-prompts, as we wanted to understand how the original tokens are attended to. ### A.4 Per-Task Performance of GLUE and SuperGLUE Tables 6 and 7 below show the comparison of fine-tuning the entire model against task-specific parameters only on the GLUE and SuperGLUE datasets. Tables 8 and 9 show the detailed results of full fine-tuning of HyperPrompt against baselines on T5 Base.
Tables 10 and 11 show the detailed results of full fine-tuning of HyperPrompt against baselines on T5 Large.

Tunable Parameters | Model | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | AVG
---|---|---|---|---|---|---|---|---|---|---
All | MTL | 59.4 | 96.6 | 93.3/90.7 | 90.6/90.4 | 89.8/92.3 | 90.8/90.8 | 95.2 | 90.8 | 88.3
All | HyperFormer++-T5.1.1${}_{\textsc{large}}$ | 63.3 | 96.6 | 93.2/90.7 | 92.1/91.9 | 89.7/92.3 | 90.5/90.7 | 95.1 | 89.9 | 88.8
All | HyperPrompt-T5.1.1${}_{\textsc{large}}$ | 64.6 | 96.7 | 94.0/91.8 | 91.3/91.4 | 90.0/92.4 | 90.8/91.0 | 95.4 | 91.9 | 89.4
Task-Specific | HyperFormer++-T5.1.1${}_{\textsc{large}}$ | 58.9 | 95.7 | 92.7/90.0 | 91.6/91.5 | 87.7/90.7 | 89.8/90.0 | 94.5 | 87.0 | 87.3
Task-Specific | HyperPrompt-T5.1.1${}_{\textsc{large}}$ | 57.5 | 96.7 | 93.6/91.2 | 91.9/92.0 | 87.0/90.1 | 90.3/90.6 | 95.0 | 87.7 | 87.5

Table 6: Comparison of fine-tuning all vs task-specific parameters on GLUE.

Tunable Parameters | Model | BoolQ | CB | COPA | MultiRC | ReCoRD | RTE | WiC | WSC | AVG
---|---|---|---|---|---|---|---|---|---|---
All | MTL | 88.5 | 95.8/98.2 | 87.0 | 85.5/56.3 | 89.2/88.6 | 91.7 | 74.0 | 89.4 | 85.9
All | HyperFormer++-T5.1.1${}_{\textsc{large}}$ | 88.9 | 98.7/98.2 | 86.7 | 85.4/56.7 | 89.4/88.8 | 92.1 | 74.5 | 90.7 | 86.4
All | HyperPrompt-T5.1.1${}_{\textsc{large}}$ | 88.7 | 99.1/98.8 | 91.0 | 85.0/55.6 | 89.8/89.1 | 91.3 | 74.2 | 92.0 | 87.0
Task-Specific | HyperFormer++-T5.1.1${}_{\textsc{large}}$ | 85.2 | 90.9/94.6 | 76.7 | 81.5/48.8 | 87.2/86.4 | 87.7 | 67.8 | 82.1 | 80.5
Task-Specific | HyperPrompt-T5.1.1${}_{\textsc{large}}$ | 85.2 | 95.2/95.5 | 75.5 | 82.9/52.9 | 89.1/88.3 | 85.7 | 71.1 | 82.2 | 81.5

Table 7: Comparison of fine-tuning all vs task-specific parameters on SuperGLUE.
Model | #Params | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | AVG
---|---|---|---|---|---|---|---|---|---|---
MTL | 1.0x | 49.8 | 94.6 | 92.5/89.8 | 90.7/90.5 | 89.2/91.9 | 88.8/88.5 | 93.3 | 85.0 | 85.5
Vanilla Adapter | 1.06x | 60.0 | 95.4 | 92.7/89.8 | 90.2/90.2 | 89.3/91.9 | 88.5/88.1 | 93.5 | 84.4 | 86.7
HyperFormer++ | 1.04x | 56.9 | 94.8 | 92.9/90.1 | 91.1/90.9 | 88.9/91.7 | 88.7/88.3 | 93.4 | 85.6 | 86.5
Prompt-Tuning | 1.0003x | 48.0 | 95.0 | 92.2/89.0 | 90.3/90.2 | 89.0/91.7 | 88.8/88.5 | 93.2 | 82.9 | 84.8
MTL-Prompt-Share (ours) | 1.008x | 56.2 | 94.7 | 93.0/90.4 | 90.6/90.4 | 89.2/91.9 | 88.7/88.4 | 93.4 | 85.2 | 86.4
MTL-Prompt-Sep (ours) | 1.06x | 57.2 | 94.6 | 93.8/91.4 | 91.0/90.8 | 89.2/91.9 | 88.5/88.4 | 93.4 | 86.6 | 86.8
HyperPrompt (ours) | 1.04x | 57.0 | 95.2 | 93.4/90.9 | 90.4/90.2 | 89.2/92.0 | 88.7/88.5 | 93.4 | 87.1 | 86.8

Table 8: Comparison of HyperPrompt with baselines on GLUE using T5 Base.

Model | #Params | BoolQ | CB | COPA | MultiRC | ReCoRD | RTE | WiC | WSC | AVG
---|---|---|---|---|---|---|---|---|---|---
MTL | 1.0x | 82.6 | 93.4/93.5 | 65.7 | 76.7/39.7 | 80.9/80.2 | 85.6 | 70.5 | 81.4 | 77.2
Vanilla Adapter | 1.03x | 83.5 | 93.4/94.6 | 65.3 | 77.6/42.7 | 81.0/80.2 | 88.2 | 71.0 | 76.9 | 77.5
HyperFormer++ | 1.02x | 83.5 | 96.2/97.0 | 66.3 | 77.8/41.9 | 81.2/80.4 | 87.4 | 71.0 | 80.1 | 78.2
Prompt-Tuning | 1.0003x | 82.5 | 94.0/95.8 | 68.0 | 76.9/40.2 | 80.9/80.2 | 84.1 | 69.3 | 80.8 | 77.3
MTL-Prompt-Share (ours) | 1.004x | 83.1 | 95.7/95.2 | 67.7 | 77.3/41.3 | 81.9/81.0 | 87.4 | 70.4 | 80.8 | 78.2
MTL-Prompt-Sep (ours) | 1.03x | 83.3 | 97.8/97.0 | 61.7 | 77.6/42.3 | 81.5/80.6 | 86.8 | 71.4 | 78.2 | 77.5
HyperPrompt (ours) | 1.02x | 83.3 | 96.6/96.4 | 69.7 | 77.5/41.0 | 81.7/80.9 | 86.8 | 70.5 | 83.7 | 78.9

Table 9: Comparison of HyperPrompt with baselines on SuperGLUE using T5 Base.
Model | #Params | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | AVG
---|---|---|---|---|---|---|---|---|---|---
MTL | 1.0x | 59.4 | 96.6 | 93.3/90.7 | 90.6/90.4 | 89.8/92.3 | 90.8/90.8 | 95.2 | 90.8 | 88.3
Vanilla Adapter | 1.06x | 63.8 | 96.5 | 93.7/91.3 | 92.0/91.9 | 90.0/92.5 | 90.6/90.5 | 94.9 | 88.7 | 88.8
HyperFormer++ | 1.02x | 63.3 | 96.6 | 93.2/90.7 | 92.1/91.9 | 89.7/92.3 | 90.5/90.7 | 95.1 | 89.9 | 88.8
Prompt-Tuning | 1.0001x | 62.5 | 96.7 | 93.4/91.0 | 91.3/91.0 | 90.0/92.4 | 90.9/91.0 | 95.4 | 89.9 | 88.8
MTL-Prompt-Share (ours) | 1.008x | 65.0 | 96.7 | 93.8/91.6 | 91.1/90.8 | 90.0/92.4 | 90.8/91.1 | 95.3 | 91.3 | 89.3
MTL-Prompt-Sep (ours) | 1.06x | 63.9 | 96.6 | 94.6/92.6 | 92.0/91.7 | 90.0/92.4 | 90.9/91.0 | 95.2 | 91.6 | 89.4
HyperPrompt (ours) | 1.02x | 64.6 | 96.7 | 94.0/91.8 | 91.3/91.4 | 90.0/92.4 | 90.8/91.0 | 95.4 | 91.9 | 89.4

Table 10: Comparison of HyperPrompt with baselines on GLUE using T5 Large.

Model | #Params | BoolQ | CB | COPA | MultiRC | ReCoRD | RTE | WiC | WSC | AVG
---|---|---|---|---|---|---|---|---|---|---
MTL | 1.0x | 88.5 | 95.8/98.2 | 87.0 | 85.5/56.3 | 89.2/88.6 | 91.7 | 74.0 | 89.4 | 85.9
Vanilla Adapter | 1.03x | 88.8 | 98.3/98.8 | 86.0 | 85.3/56.0 | 89.3/88.7 | 91.2 | 73.6 | 91.3 | 86.1
HyperFormer++ | 1.01x | 88.9 | 98.7/98.2 | 86.7 | 85.4/56.7 | 89.4/88.8 | 92.1 | 74.5 | 90.7 | 86.4
Prompt-Tuning | 1.0001x | 88.5 | 97.6/98.8 | 85.0 | 84.9/55.2 | 89.0/88.4 | 91.5 | 72.8 | 90.1 | 85.6
MTL-Prompt-Share (ours) | 1.004x | 88.5 | 98.7/98.2 | 88.0 | 85.2/55.8 | 89.7/89.1 | 91.8 | 74.1 | 93.9 | 86.8
MTL-Prompt-Sep (ours) | 1.03x | 88.6 | 97.6/98.8 | 87.7 | 85.2/56.4 | 89.7/89.1 | 91.6 | 73.5 | 89.4 | 86.1
HyperPrompt (ours) | 1.01x | 88.7 | 99.1/98.8 | 91.0 | 85.0/55.6 | 89.8/89.1 | 91.3 | 74.2 | 92.0 | 87.0

Table 11: Comparison of HyperPrompt with baselines on SuperGLUE using T5 Large.
Gaëtan Gilbert, ENS, <EMAIL_ADDRESS> # Formalising Real Numbers in Homotopy Type Theory ###### Abstract Cauchy reals can be defined as a quotient of Cauchy sequences of rationals. The limit of a Cauchy sequence of Cauchy reals is defined by lifting it to a sequence of Cauchy sequences of rationals. This lifting requires the axiom of countable choice or excluded middle, neither of which is available in homotopy type theory. To address this, the Univalent Foundations Program uses a higher inductive-inductive type to define the Cauchy reals as the free Cauchy complete metric space generated by the rationals. We generalize this construction to define the free Cauchy complete metric space generated by an arbitrary metric space. This forms a monad in the category of metric spaces with Lipschitz functions. When applied to the rationals it defines the Cauchy reals. Finally, we can use the partiality monad of Altenkirch and Danielsson (2016) to define a semi-decision procedure comparing a real number and a rational number. The entire construction has been formalized in the Coq proof assistant. It is available at https://github.com/SkySkimmer/HoTTClasses/tree/CPP2017. ## 1 Introduction The usual process of defining the set of Cauchy real numbers proceeds in three stages: first define Cauchy sequences of rationals, then define an equivalence between Cauchy sequences, and finally quotient the Cauchy sequences by the equivalence. However, proving that the Cauchy reals so defined are Cauchy complete, i.e. that Cauchy sequences of Cauchy reals have Cauchy real limits, requires the axiom of countable choice. Alternatively, the quotient step can be replaced by working with Cauchy sequences as a setoid: this approach is used e.g. in O'Connor [2007], which defines the completion of arbitrary metric spaces. This comes at the cost of having to make all of abstract algebra be about setoids in order to use its results for real numbers.
Moreover, in the context of homotopy type theory we would like to be able to use the large body of results about the homotopic identity, but we can only do so for results about identities between bare Cauchy sequences. For instance, suppose we wish to use the principle of unique choice (which is true in homotopy type theory) to construct the unique $x:\mathbb{R}$ such that $P(x)$. Since there are multiple different Cauchy sequences representing the same real number, this will in fact only be possible if $P$ does not respect the setoid equivalence, i.e. it should be considered a property of Cauchy sequences rather than a property of real numbers. The Higher Inductive-Inductive types (HIIT) from Homotopy Type Theory [HoTT, 2013] provide another construction, in only one step and without the need for an axiom of choice to prove completeness. The construction and the proof that it produces an Archimedean ordered field were outlined in the HoTT book; however, formalization in the Coq proof assistant would have required workarounds for the lack of inductive-inductive types until an experimental branch by M. Sozeau started in 2015. In section 2 we define a notion of _premetric space_ , which on the meta level is a generalization of a metric space. From this we can define basic notions such as Lipschitz functions and limits of Cauchy sequences (or rather of the equivalent, but easier to work with, Cauchy approximations). Section 3 generalizes the construction of the Cauchy completion of the rationals from the HoTT book to arbitrary premetric spaces. This generalization shows that Cauchy completion is a monadic operator on premetric spaces (where the arrows are Lipschitz functions). Lemmas relating to the specific structure of the Cauchy reals (such as lemmas about the order on reals) are retained, as shown in section 4. The monadic structure also provides a more natural way to define multiplication than that used in HoTT [2013].
In section 5 we investigate how partial functions as in Partiality, Revisited [2016] can be defined on our definition of the Cauchy reals, through the example of a semi-decision procedure for the property $0<x$. ## 2 Premetric spaces We follow O'Connor [2007] in defining distance as a relation expressing when two elements are sufficiently close. For O'Connor a metric space is a space with a relation $B_{\varepsilon}(x,y)$, where $x$ and $y$ are elements of the space and $\varepsilon:\mathbb{Q}^{+}$, which is interpreted as $d(x,y)\leq\varepsilon$. In contrast, HoTT [2013] defines a relation $x\approx_{\varepsilon}y$ for Cauchy reals $x$ and $y$, which is interpreted as $d(x,y)<\varepsilon$. We follow HoTT in using the strict order $<$. ###### Definition 2.1 (Premetric space). A _premetric space_ is a type $A$ together with a parametric mere relation $\\_\approx_{\\_}\\_:\mathbb{Q}^{+}\rightarrow A\rightarrow A\rightarrow Prop$ verifying the following properties:

* • reflexivity: $\forall(\varepsilon:\mathbb{Q}^{+})(x:A),x\approx_{\varepsilon}x$
* • symmetry: $\forall(\varepsilon:\mathbb{Q}^{+})(x\;y:A),x\approx_{\varepsilon}y\rightarrow y\approx_{\varepsilon}x$
* • separatedness: $\forall x\;y:A,(\forall\varepsilon:\mathbb{Q}^{+},x\approx_{\varepsilon}y)\rightarrow x=_{A}y$
* • triangularity: $\forall(x\;y\;z:A)(\varepsilon\;\delta:\mathbb{Q}^{+}),x\approx_{\varepsilon}y\rightarrow y\approx_{\delta}z\rightarrow x\approx_{\varepsilon+\delta}z$
* • roundedness: $\forall(\varepsilon:\mathbb{Q}^{+})(x\;y:A),x\approx_{\varepsilon}y\leftrightarrow\exists\delta:\mathbb{Q}^{+},\delta<\varepsilon\wedge x\approx_{\delta}y$

$\approx$ is called the closeness relation of $A$, with $x\approx_{\varepsilon}y$ read as "$x$ and $y$ are $\varepsilon$-close" or "the distance between $x$ and $y$ is less than $\varepsilon$". ###### Remark 2.2.
Classically, we can take $d(x,y)=\sup\\{\varepsilon:\mathbb{Q}^{+},x\approx_{\varepsilon}y\\}$ with values in $\mathbb{R}+\\{\infty\\}$ to turn a premetric space into a metric space. If we remain constructive, we expect a need for a locatedness property such as $\forall(x\;y:A)\;(q\;r:\mathbb{Q}^{+}),q<r\rightarrow x\approx_{r}y\vee x\not\approx_{q}y$. We have not carried out the constructions due to lack of time, so these may not be the exact properties required. For instance without countable choice the position of the truncation may need to be different: this can be seen in HoTT [2013] lemma 11.4.1. We now work in an arbitrary premetric space $A$. ###### Definition 2.3 (Cauchy approximation). $\operatorname{Approximation}A\coloneqq\Sigma_{x:\mathbb{Q}^{+}\rightarrow A}\forall\varepsilon\;\delta:\mathbb{Q}^{+},x_{\varepsilon}\approx_{\varepsilon+\delta}x_{\delta}$ A Cauchy approximation $x:\operatorname{Approximation}A$ can be seen as a function which given $\varepsilon$ produces a value at distance up to $\varepsilon$ of an hypothetical limit. By abuse of notation we confuse $x:\operatorname{Approximation}A$ with its first projection. ###### Definition 2.4 (Limit). $l:A$ is a limit of the approximation $x$ when $\forall\varepsilon,\delta:\mathbb{Q}^{+},x_{\varepsilon}\approx_{\varepsilon+\delta}l$ Since we want to express $d(x_{\varepsilon},l)\leq\varepsilon$ but closeness is interpreted as $<$ we introduce an additional $\delta$. ###### Lemma 2.5. Limits are unique: if $l_{1}$ and $l_{2}$ are limits of $x:\operatorname{Approximation}A$ then $l_{1}=l_{2}$. We may then talk about _the_ limit of an approximation. ###### Proof. By separatedness and triangularity. ∎ ###### Definition 2.6 (Cauchy completeness). $A$ is Cauchy complete when every Cauchy approximation has a limit. Since the limit is unique, this is equivalent to having a function $lim:\operatorname{Approximation}A\rightarrow A$ producing the limit for every approximation. ###### Theorem 2.7. 
The rationals form a premetric space with $q\approx_{\varepsilon}r\coloneqq|q-r|<\varepsilon$ as its closeness relation.

The following lemmas make working with limits easier.

###### Lemma 2.8.

Let $y:\operatorname{Approximation}A$, $l_{y}:A$, $x:A$, and $\varepsilon,\delta:\mathbb{Q}^{+}$ be such that $l_{y}$ is the limit of $y$ and $x\approx_{\varepsilon}y_{\delta}$. Then $x\approx_{\varepsilon+\delta}l_{y}$.

###### Proof.

First strengthen the hypothesis $x\approx_{\varepsilon}y_{\delta}$ by roundedness, then finish with triangularity. ∎

###### Lemma 2.9.

Let $x$ and $y:\operatorname{Approximation}A$, and $\varepsilon\;\delta\;\kappa:\mathbb{Q}^{+}$ such that $x_{\delta}\approx_{\varepsilon}y_{\kappa}$. Then if $l_{x}$ is the limit of $x$ and $l_{y}$ is the limit of $y$, $l_{x}\approx_{\varepsilon+\delta+\kappa}l_{y}$.

###### Proof.

By two applications of lemma 2.8. ∎

###### Lemma 2.10.

If $x\;y:\operatorname{Approximation}A$ and $\varepsilon:\mathbb{Q}^{+}$ are such that $\forall\delta\;\kappa:\mathbb{Q}^{+},x_{\kappa}\approx_{\varepsilon+\delta}y_{\kappa}$, then for $l_{x}$ the limit of $x$ and $l_{y}$ the limit of $y$, $\forall\delta:\mathbb{Q}^{+},l_{x}\approx_{\varepsilon+\delta}l_{y}$.

###### Proof.

Using lemma 2.9, since $\varepsilon+\delta=(\varepsilon+\frac{\delta}{3})+\frac{\delta}{3}+\frac{\delta}{3}$. ∎

### 2.1 Continuity notions

We will be interested in certain properties of functions between premetric spaces $A$ and $B$.

###### Definition 2.11 (Lipschitz function).

A function $f:A\rightarrow B$ is Lipschitz with constant $L:\mathbb{Q}^{+}$ when $\forall(\varepsilon:\mathbb{Q}^{+})(x,y:A),x\approx_{\varepsilon}y\rightarrow f\;x\approx_{L*\varepsilon}f\;y$

If $L$ is $1$ we say that $f$ is non-expanding.

###### Definition 2.12 (Continuous function).

A function $f:A\rightarrow B$ is continuous when $\forall(\varepsilon:\mathbb{Q}^{+})(x:A),\exists\delta:\mathbb{Q}^{+},\forall y:A,x\approx_{\delta}y\rightarrow f\;x\approx_{\varepsilon}f\;y$

###### Lemma 2.13.
Lipschitz functions are continuous.

###### Proof.

Using $\delta\coloneqq\frac{\varepsilon}{L}$. ∎

Premetric spaces with continuous functions form a category. Premetric spaces with Lipschitz functions also form a category. Notably, the identity is non-expanding.

### 2.2 The premetric space of functions

Let $A$ be a type and $B$ a premetric space.

###### Definition 2.14 (Closeness of functions).

$f\approx_{\varepsilon}g\coloneqq\exists\delta:\mathbb{Q}^{+},\delta<\varepsilon\wedge\forall x:A,f\;x\approx_{\delta}g\;x$

This expresses that $d(f,g)=\sup\\{d(f\;x,g\;x)|x:A\\}$.

###### Lemma 2.15.

For $\varepsilon:\mathbb{Q}^{+}$ and $f\;g:A\rightarrow B$, if $f\approx_{\varepsilon}g$ then $\forall x:A,f\;x\approx_{\varepsilon}g\;x$.

###### Proof.

By roundedness. ∎

###### Theorem 2.16.

$A\rightarrow B$ forms a premetric space. If $B$ is Cauchy complete then so is $A\rightarrow B$, and the limit of $s:\operatorname{Approximation}(A\rightarrow B)$ is $\lambda y,lim\;(\lambda\varepsilon,s\;\varepsilon\;y)$.

###### Lemma 2.17 (Limit of Lipschitz functions).

Suppose $A$ is a premetric space and $B$ is Cauchy complete. If $s:\operatorname{Approximation}(A\rightarrow B)$ is such that for all $\varepsilon:\mathbb{Q}^{+}$, $s\;\varepsilon$ is Lipschitz with constant $L$, then $lim\;s$ is Lipschitz with constant $L$.

###### Proof.

Let $\varepsilon:\mathbb{Q}^{+}$ and $x\;y:A$ such that $x\approx_{\varepsilon}y$. By roundedness there merely are $\delta\;\kappa:\mathbb{Q}^{+}$ such that $\varepsilon=\delta+\kappa$ and $x\approx_{\delta}y$. By hypothesis $\forall\eta:\mathbb{Q}^{+},s_{\eta}\;x\approx_{L*\delta}s_{\eta}\;y$, then by roundedness $\forall\eta\;\eta^{\prime}:\mathbb{Q}^{+},s_{\eta}\;x\approx_{L*\delta+\eta^{\prime}}s_{\eta}\;y$. By lemma 2.10 and unfolding the definition of $\lim s$ we have $\forall\eta:\mathbb{Q}^{+},\lim s\;x\approx_{L*\delta+\eta}\lim s\;y$, then since $L*\varepsilon=L*\delta+L*\kappa$ we have $\lim s\;x\approx_{L*\varepsilon}\lim s\;y$.
∎

## 3 Cauchy completion

### 3.1 Definition and eliminators

In classical logic, we define the completion of a metric space $T$ as the quotient of the Cauchy sequences (or equivalently of Cauchy approximations) in $T$ by the equivalence $\lim f=\lim g$ (or rather an equivalent statement which doesn’t assume the limit is defined). The axiom of countable choice is then used to prove that Cauchy approximations in the quotient have limits in the quotient.

Using higher inductive types, we can instead define $\operatorname{\mathcal{C}}T$, the free complete premetric space generated by $T$. By unfolding this statement we can see what constructors it needs:

* • generated by $T$: so there is a constructor of type $T\rightarrow\operatorname{\mathcal{C}}T$.
* • premetric space: so we need to construct the closeness relation, and truncate $\operatorname{\mathcal{C}}T$ to make it separated.
* • Cauchy complete: there is a constructor of type $\operatorname{Approximation}(\operatorname{\mathcal{C}}T)\rightarrow\operatorname{\mathcal{C}}T$.

###### Definition 3.1.

$\operatorname{\mathcal{C}}T$ has the following constructors

$\operatorname{\eta}:T\rightarrow\operatorname{\mathcal{C}}T$

$\lim:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)\rightarrow\operatorname{\mathcal{C}}T$

The constructors of the closeness relation and the path constructors for $\operatorname{\mathcal{C}}T$ and its closeness construct proof-irrelevant values. As such, we do not name them but instead give them as inference rules in fig. 1.
* • separatedness: if $\forall\varepsilon:\mathbb{Q}^{+},x\approx_{\varepsilon}y$ then $x=y$
* • proof irrelevance: if $p,q:x\approx_{\varepsilon}y$ then $p=q$
* • if $q\approx_{\varepsilon}r$ then $\operatorname{\eta}q\approx_{\varepsilon}\operatorname{\eta}r$
* • if $x_{\delta}\approx_{\varepsilon-\delta-\kappa}y_{\kappa}$ then $\lim x\approx_{\varepsilon}\lim y$
* • if $\operatorname{\eta}q\approx_{\varepsilon-\delta}y_{\delta}$ then $\operatorname{\eta}q\approx_{\varepsilon}\lim y$
* • if $x_{\delta}\approx_{\varepsilon-\delta}\operatorname{\eta}r$ then $\lim x\approx_{\varepsilon}\operatorname{\eta}r$

Figure 1: Proof irrelevant constructors of $\operatorname{\mathcal{C}}$

We can use an explicit $fix$ expression in Coq to define the fully general induction principle with the type given in HoTT [2013]; however, it is only used through the following functions.

###### Definition 3.2 (Simple $\operatorname{\mathcal{C}}-$induction).

Given a mere predicate $A:\operatorname{\mathcal{C}}T\rightarrow Prop$ such that the hypotheses in fig. 2 are verified, $\forall x:\operatorname{\mathcal{C}}T,A\;x$

* • $A\;(\operatorname{\eta}q)$
* • if $\forall\varepsilon:\mathbb{Q}^{+},A\;x_{\varepsilon}$ then $A\;(\lim x)$

Figure 2: Hypotheses for simple $\operatorname{\mathcal{C}}-$induction.

###### Definition 3.3 (Simple $\approx-$induction).

Given a mere predicate $P:\mathbb{Q}^{+}\rightarrow\operatorname{\mathcal{C}}T\rightarrow\operatorname{\mathcal{C}}T\rightarrow Prop$ such that the hypotheses in fig.
3 are verified, $\forall\varepsilon\;x\;y,x\approx_{\varepsilon}y\rightarrow P\;\varepsilon\;x\;y$

* • if $q\approx_{\varepsilon}r$ then $P\;\varepsilon\;(\operatorname{\eta}q)\;(\operatorname{\eta}r)$
* • if $x_{\delta}\approx_{\varepsilon-\delta-\kappa}y_{\kappa}$ and $P\;(\varepsilon-\delta-\kappa)\;x_{\delta}\;y_{\kappa}$ then $P\;\varepsilon\;(\lim x)\;(\lim y)$
* • if $\operatorname{\eta}q\approx_{\varepsilon-\delta}y_{\delta}$ and $P\;(\varepsilon-\delta)\;(\operatorname{\eta}q)\;y_{\delta}$ then $P\;\varepsilon\;(\operatorname{\eta}q)\;(\lim y)$
* • if $x_{\delta}\approx_{\varepsilon-\delta}\operatorname{\eta}r$ and $P\;(\varepsilon-\delta)\;x_{\delta}\;(\operatorname{\eta}r)$ then $P\;\varepsilon\;(\lim x)\;(\operatorname{\eta}r)$

Figure 3: Hypotheses for simple $\approx-$induction.

###### Definition 3.4 (Mutual $\operatorname{\mathcal{C}}-$recursion).

Let $A:Type$, a mere relation $\sim:\mathbb{Q}^{+}\rightarrow A\rightarrow A\rightarrow Prop$ and functions

$f_{\eta}:T\rightarrow A$

$f_{lim}:\forall(x:\operatorname{Approximation}(\operatorname{\mathcal{C}}T))(f_{x}:\mathbb{Q}^{+}\rightarrow A),(\forall(\varepsilon,\delta:\mathbb{Q}^{+}),f_{x}\;\varepsilon\sim_{\varepsilon+\delta}f_{x}\;\delta)\rightarrow A$

If the hypotheses in fig.
4 are verified, then we have

$f:\operatorname{\mathcal{C}}T\rightarrow A$

$f_{\approx}:\forall(x,y:\operatorname{\mathcal{C}}T)(\varepsilon:\mathbb{Q}^{+}),x\approx_{\varepsilon}y\rightarrow f\;x\sim_{\varepsilon}f\;y$

such that

$f\;(\operatorname{\eta}q)\coloneqq f_{\eta}\;q$

$f\;(\lim x)\coloneqq f_{lim}\;x\;(f\circ x)\;\left(\lambda\varepsilon\;\delta,f_{\approx}\;(\varepsilon+\delta)\;x_{\varepsilon}\;x_{\delta}\right)$

* • if $\forall\varepsilon:\mathbb{Q}^{+},x\sim_{\varepsilon}y$ then $x=y$
* • if $q\approx_{\varepsilon}r$ then $f_{\eta}\;q\sim_{\varepsilon}f_{\eta}\;r$
* • if $f_{x}\;\delta\sim_{\varepsilon-\delta-\kappa}f_{y}\;\kappa$ then $f_{lim}\;x\;f_{x}\;H_{x}\sim_{\varepsilon}f_{lim}\;y\;f_{y}\;H_{y}$
* • if $f_{\eta}\;q\sim_{\varepsilon-\delta}f_{y}\;\delta$ then $f_{\eta}\;q\sim_{\varepsilon}f_{lim}\;y\;f_{y}\;H_{y}$
* • if $f_{x}\;\delta\sim_{\varepsilon-\delta}f_{\eta}\;r$ then $f_{lim}\;x\;f_{x}\;H_{x}\sim_{\varepsilon}f_{\eta}\;r$

Figure 4: Hypotheses for mutual $\operatorname{\mathcal{C}}-$recursion.

### 3.2 Properties of the completion

We now seek to

* • show that $\operatorname{\mathcal{C}}T$ is indeed a premetric space, and that $\lim$ constructs limits.
* • characterize the closeness relation: for instance $\operatorname{\eta}q\approx_{\varepsilon}\operatorname{\eta}r$ should be equivalent to $q\approx_{\varepsilon}r$.

Constructors of $\approx$ give us separatedness and proof irrelevance.

###### Lemma 3.5 (Reflexivity).

$\forall(u:\operatorname{\mathcal{C}}T)(\varepsilon:\mathbb{Q}^{+}),u\approx_{\varepsilon}u$

###### Proof.

By simple induction on $u$:

* • Let $u:T$ and $\varepsilon:\mathbb{Q}^{+}$. $T$ is a premetric space so $u\approx_{\varepsilon}u$, then $\operatorname{\eta}u\approx_{\varepsilon}\operatorname{\eta}u$.
* • Let $x:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)$ such that $\forall(\varepsilon,\delta:\mathbb{Q}^{+}),x_{\varepsilon}\approx_{\delta}x_{\varepsilon}$. Let $\varepsilon:\mathbb{Q}^{+}$.
Then $x_{\varepsilon/3}\approx_{\varepsilon/3}x_{\varepsilon/3}$, so $\lim x\approx_{\varepsilon}\lim x$. ∎

###### Lemma 3.6.

$\operatorname{\mathcal{C}}T$ is a set.

###### Proof.

By HoTT [2013] theorem 7.2.2 and separatedness. ∎

###### Lemma 3.7 (Symmetry).

$\forall(\varepsilon:\mathbb{Q}^{+})(x\;y:\operatorname{\mathcal{C}}T),x\approx_{\varepsilon}y\rightarrow y\approx_{\varepsilon}x$

###### Proof.

By simple $\approx-$induction, since $T$ has a symmetric closeness relation. ∎

To go further we need a way to deconstruct proofs of closeness. This is done by defining a function $B_{\\_}(\\_,\\_):\mathbb{Q}^{+}\rightarrow\operatorname{\mathcal{C}}T\rightarrow\operatorname{\mathcal{C}}T\rightarrow Prop$, recursive on the two $\operatorname{\mathcal{C}}T$ arguments, which is equivalent to $\approx$. $B$ will be defined by mutual $\operatorname{\mathcal{C}}-$recursion as it is proof-relevant. In order to be able to prove the side conditions we will first inhabit a subtype, then obtain $B$ by projection.

###### Definition 3.8 (Concentric balls).

A set of concentric balls is a value of type $Balls\coloneqq\Sigma_{B:\operatorname{\mathcal{C}}T\rightarrow\mathbb{Q}^{+}\rightarrow Prop}\\\ \left(\forall y\;\varepsilon,B_{\varepsilon}\;y\leftrightarrow\exists\delta<\varepsilon,B_{\delta}\;y\right)\\\ \wedge\left(\forall\varepsilon\;\delta\;y\;z,y\approx_{\varepsilon}z\rightarrow B_{\delta}\;y\rightarrow B_{\delta+\varepsilon}\;z\right)$

We call the first property _ball roundedness_, and the second _ball triangularity_. For $\varepsilon:\mathbb{Q}^{+}$ and $B_{1},B_{2}:Balls$, let $B_{1}\approx_{\varepsilon}B_{2}$ when for $\\{i,j\\}=\\{1,2\\}$ $\forall y\;\delta,{B_{i}}_{\delta}\;y\rightarrow{B_{j}}_{\delta+\varepsilon}\;y$

###### Definition 3.9 (Upper cut).

An upper cut is a predicate on $\mathbb{Q}^{+}$ which is upward rounded, i.e.
$Upper\coloneqq\Sigma_{U:\mathbb{Q}^{+}\rightarrow Prop}\left(\forall\varepsilon,U_{\varepsilon}\leftrightarrow\exists\delta<\varepsilon,U_{\delta}\right)$

For $\varepsilon:\mathbb{Q}^{+}$ and $U_{1},U_{2}:Upper$, let $U_{1}\approx_{\varepsilon}U_{2}$ when for $\\{i,j\\}=\\{1,2\\}$ $\forall\delta,{U_{i}}_{\delta}\rightarrow{U_{j}}_{\delta+\varepsilon}$

###### Lemma 3.10.

The closeness on $Balls$ is separated.

###### Proof.

Let $B^{(1)},B^{(2)}:Balls$ such that $B^{(1)}$ and $B^{(2)}$ are $\varepsilon-$close for all $\varepsilon$. Let $\varepsilon$ and $y$; we need $B^{(1)}_{\varepsilon}\;y=B^{(2)}_{\varepsilon}\;y$. By univalence this is $B^{(1)}_{\varepsilon}\;y\leftrightarrow B^{(2)}_{\varepsilon}\;y$. Suppose $B^{(1)}_{\varepsilon}\;y$; by ball roundedness there merely is $\delta<\varepsilon$ such that $B^{(1)}_{\delta}\;y$. $B^{(1)}$ and $B^{(2)}$ are $(\varepsilon-\delta)-$close, so we have $B^{(2)}_{\varepsilon}\;y$. The second direction is the same by symmetry. ∎

###### Lemma 3.11.

The closeness on $Upper$ is separated.

###### Proof.

As with lemma 3.10, we first use roundedness, then the definition of upper cut closeness at the appropriate $\varepsilon-\delta$. ∎

###### Lemma 3.12 (Concentric balls from upper cuts).

Suppose $B:\operatorname{\mathcal{C}}T\rightarrow Upper$ is non-expanding. Then the underlying $\operatorname{\mathcal{C}}T\rightarrow\mathbb{Q}^{+}\rightarrow Prop$ is a set of concentric balls.

###### Proof.

The ball roundedness property is exactly upper cut roundedness. $B$ verifies ball triangularity because it is non-expanding. ∎

###### Definition 3.13 (Balls around a base element).

Let $q:T$. The set of concentric balls around $q$ is $B_{\\_}(\operatorname{\eta}q,\\_)$, defined by mutual $\operatorname{\mathcal{C}}-$recursion as a non-expanding function of type $\operatorname{\mathcal{C}}T\rightarrow Upper$ suitable for lemma 3.12.
The proof-relevant values are as follows:

* • base case: $B_{\varepsilon}(\operatorname{\eta}q,\operatorname{\eta}r)\coloneqq q\approx_{\varepsilon}r$. This produces an upper cut by roundedness of $T$.
* • limit case: $B_{\varepsilon}(\operatorname{\eta}q,\lim x)\coloneqq\exists\delta<\varepsilon,B_{\varepsilon-\delta}(\operatorname{\eta}q,x_{\delta})$. This produces an upper cut by the induction hypothesis and roundedness at the recursive call.

The remaining hypotheses, expressing that the construction is non-expanding, are hard to see through on paper. In Coq however reduction makes how to proceed obvious. Let us consider the $\eta-lim$ case. Let $q\;r:T$, $\varepsilon\;\delta:\mathbb{Q}^{+}$ such that $\delta<\varepsilon$, and $y:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)$ such that we have $\lambda\kappa\;\xi,B_{\xi}(\operatorname{\eta}q,y_{\kappa})$. This latter function is an approximation on upper cuts. Finally the induction hypothesis is that $\left(\lambda\kappa,q\approx_{\kappa}r\right)\approx_{\varepsilon-\delta}\left(\lambda\kappa,B_{\kappa}(\operatorname{\eta}q,y_{\delta})\right)$ as upper cuts. In that context, we need to prove that $\left(\lambda\kappa,q\approx_{\kappa}r\right)\approx_{\varepsilon}\left(\lambda\kappa,B_{\kappa}(\operatorname{\eta}q,\lim y)\right)$ as upper cuts. Let $\kappa:\mathbb{Q}^{+}$; we have two goals:

* • If $q\approx_{\kappa}r$ then $B_{\kappa+\varepsilon}(\operatorname{\eta}q,\lim y)$, i.e. $\exists\delta<\kappa+\varepsilon,B_{\kappa+\varepsilon-\delta}(\operatorname{\eta}q,y_{\delta})$. By the induction hypothesis and $q\approx_{\kappa}r$ we have $B_{\varepsilon-\delta+\kappa}(\operatorname{\eta}q,y_{\delta})$ with $\delta<\varepsilon<\varepsilon+\kappa$.
* • If $\exists\xi<\kappa,B_{\kappa-\xi}(\operatorname{\eta}q,y_{\xi})$ then $q\approx_{\kappa+\varepsilon}r$.
Because $\lambda\kappa\;\xi,B_{\xi}(\operatorname{\eta}q,y_{\kappa})$ is a cut approximation we have $B_{\kappa-\xi+\delta+\xi}(\operatorname{\eta}q,y_{\delta})=B_{\kappa+\delta}(\operatorname{\eta}q,y_{\delta})$. Then by the induction hypothesis $q\approx_{\kappa+\varepsilon}r$.

We then similarly define the concentric balls around a limit point, and show that this definition and definition 3.13 respect $\approx$ using simple $\operatorname{\mathcal{C}}-$induction. In order to have space for more interesting proofs we shall simply recap what results we obtain from this process.

###### Theorem 3.14.

We have for all $(\varepsilon:\mathbb{Q}^{+})$ and $x\;y:\operatorname{\mathcal{C}}T$, $B_{\varepsilon}(x,y):Prop$ such that $\lambda x\;y\;\varepsilon,B_{\varepsilon}(x,y)$ is a non-expanding function from $\operatorname{\mathcal{C}}T$ to $Balls$. Additionally we have the following computation rules:

$B_{\varepsilon}(\operatorname{\eta}q,\operatorname{\eta}r)\coloneqq q\approx_{\varepsilon}r$

$B_{\varepsilon}(\operatorname{\eta}q,\lim y)\coloneqq\exists\delta<\varepsilon,B_{\varepsilon-\delta}(\operatorname{\eta}q,y_{\delta})$

$B_{\varepsilon}(\lim x,\operatorname{\eta}r)\coloneqq\exists\delta<\varepsilon,B_{\varepsilon-\delta}(x_{\delta},\operatorname{\eta}r)$

$B_{\varepsilon}(\lim x,\lim y)\coloneqq\exists\delta+\kappa<\varepsilon,B_{\varepsilon-\delta-\kappa}(x_{\delta},y_{\kappa})$

###### Theorem 3.15.

$B_{\varepsilon}(x,y)$ and $x\approx_{\varepsilon}y$ are equivalent.

###### Proof.

We prove both sides of the equivalence separately:

* • $\forall(u,v:\operatorname{\mathcal{C}}T)(\varepsilon:\mathbb{Q}^{+}),B_{\varepsilon}(u,v)\rightarrow u\approx_{\varepsilon}v$ By simple induction on $u$ then $v$, then using the computation rules of $B$ and the constructors of $\approx$.
* • $\forall(\varepsilon:\mathbb{Q}^{+})(u,v:\operatorname{\mathcal{C}}T),u\approx_{\varepsilon}v\rightarrow B_{\varepsilon}(u,v)$ By simple $\approx-$induction, with each case being trivial. ∎

We can now use the computation rules in theorem 3.14 as computation rules for $\approx$.

###### Theorem 3.16.

$\operatorname{\mathcal{C}}T$ forms a premetric space.

###### Proof.

Roundedness of $B$ as a closeness relation is obtained from roundedness as a function into $Balls$, then we use that $B$ equals $\approx$ to have roundedness of $\approx$. The triangularity property of $B$ as a function into balls together with theorem 3.15 shows that $\approx$ is triangular. Separatedness comes by definition of $\operatorname{\mathcal{C}}T$, and the other properties of a premetric space are already proven in lemmas 3.7 and 3.5. ∎

###### Corollary 3.17.

$\operatorname{\eta}$ is injective.

###### Proof.

By separatedness. ∎

###### Theorem 3.18.

$\operatorname{\mathcal{C}}T$ is Cauchy complete, i.e. for all $x:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)$, $\lim x$ is the limit of $x$.

###### Proof.

Lemma 2.8 also holds for $\operatorname{\mathcal{C}}T$: $\forall(u:\operatorname{\mathcal{C}}T)(y:\operatorname{Approximation}(\operatorname{\mathcal{C}}T))(\varepsilon,\delta:\mathbb{Q}^{+}),\\\ u\approx_{\varepsilon}y_{\delta}\rightarrow u\approx_{\varepsilon+\delta}\lim y$

By simple induction on $u$:

* • Let $v:T,y:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)$ and $\varepsilon,\delta:\mathbb{Q}^{+}$ such that $\operatorname{\eta}v\approx_{\varepsilon}y_{\delta}$. Then by constructor $\operatorname{\eta}v\approx_{\varepsilon+\delta}\lim y$.
* • Let $x:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)$ such that (induction hypothesis) $\forall(\varepsilon_{0},\varepsilon,\delta:\mathbb{Q}^{+})(y:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)),\\\ x_{\varepsilon_{0}}\approx_{\varepsilon}y_{\delta}\rightarrow x_{\varepsilon_{0}}\approx_{\varepsilon+\delta}\lim y$ and let $y,\varepsilon,\delta$ such that $\lim x\approx_{\varepsilon}y_{\delta}$. By roundedness, there merely exist $\kappa,\theta:\mathbb{Q}^{+}$ such that $\varepsilon=\kappa+\theta$ and $\lim x\approx_{\kappa}y_{\delta}$. The induction hypothesis used with $y\coloneqq x$ and reflexivity of $\approx$ gives that $\forall(\varepsilon,\delta:\mathbb{Q}^{+}),x_{\varepsilon}\approx_{\varepsilon+\delta}\lim x$ (i.e. $\lim x$ is the limit of $x$). Specifically, $x_{\theta/4}\approx_{3\theta/4}\lim x$. By triangularity, $x_{\theta/4}\approx_{3\theta/4+\kappa}y_{\delta}$. By constructor $\lim x\approx_{\theta+\kappa+\delta}\lim y$. Then $\lim x\approx_{\varepsilon+\delta}\lim y$.

Using this result and lemma 3.7 then shows that $\lim x$ is the limit of $x$. ∎

### 3.3 Monadic structure of the completion

Continuity lets us characterize functions on $\operatorname{\mathcal{C}}T$ based on their behaviour on the base elements $\operatorname{\eta}x$. If a function is sufficiently continuous, i.e. Lipschitz, we can even define its value on $\operatorname{\mathcal{C}}T$ from its value on $T$: this turns the completion into a monad.

###### Theorem 3.19.

Let $A$ be a premetric space and $f,g:\operatorname{\mathcal{C}}T\rightarrow A$ continuous functions such that

$\forall u:T,f\;(\operatorname{\eta}u)=g\;(\operatorname{\eta}u)$

Then $\forall x:\operatorname{\mathcal{C}}T,f\;x=g\;x$

###### Proof.

By induction on $x$ (the desired property is a mere proposition because premetric spaces are sets). The base case is trivial.
Let $x:\operatorname{Approximation}(\operatorname{\mathcal{C}}T)$ with the induction hypothesis $\forall\varepsilon:\mathbb{Q}^{+},f\;x_{\varepsilon}=g\;x_{\varepsilon}$

By separatedness it suffices to prove that $\forall\varepsilon:\mathbb{Q}^{+},f(\lim x)\approx_{\varepsilon}g(\lim x)$

Let $\varepsilon:\mathbb{Q}^{+}$. Continuity of $f$ and $g$ at $\lim x$ and $\varepsilon/2$ shows that there merely exist $\delta_{f}$ and $\delta_{g}:\mathbb{Q}^{+}$ such that

$\forall y:\operatorname{\mathcal{C}}T,\lim x\approx_{\delta_{f}}y\rightarrow f(\lim x)\approx_{\varepsilon/2}f\;y$

$\forall y:\operatorname{\mathcal{C}}T,\lim x\approx_{\delta_{g}}y\rightarrow g(\lim x)\approx_{\varepsilon/2}g\;y$

Let $\delta:\mathbb{Q}^{+}$ such that $\delta<\delta_{f}$ and $\delta<\delta_{g}$. By roundedness and because $\lim x$ is the limit of $x$, $\lim x\approx_{\delta_{f}}x_{\delta}$ and $\lim x\approx_{\delta_{g}}x_{\delta}$. Then $f\;(\lim x)\approx_{\varepsilon/2}f\;x_{\delta}=g\;x_{\delta}$ and $g\;(\lim x)\approx_{\varepsilon/2}g\;x_{\delta}$. By triangularity $f\;(\lim x)\approx_{\varepsilon}g\;(\lim x)$. ∎

Repeated application of theorem 3.19 lets us deal with multiple variables. For instance, if $f$ and $g:\operatorname{\mathcal{C}}T_{1}\rightarrow\operatorname{\mathcal{C}}T_{2}\rightarrow A$ are continuous in both arguments (i.e. for all $x$, $f\;x$ and $g\;x$ are continuous, and for all $y$, $\lambda x,f\;x\;y$ and $\lambda x,g\;x\;y$ are continuous) and they coincide on $T_{1}$ and $T_{2}$ then they are equal.

###### Theorem 3.20.

Let $A$ be a Cauchy complete premetric space and $f:T\rightarrow A$ Lipschitz with constant $L$. There exists $\overline{f}:\operatorname{\mathcal{C}}T\rightarrow A$ Lipschitz with constant $L$ such that $\forall x:T,\overline{f}(\operatorname{\eta}x)=f\;x$

###### Proof.

We define $\overline{f}:\operatorname{\mathcal{C}}T\rightarrow A$ by mutual recursion, guaranteeing that the images of $\varepsilon$-close values are $L*\varepsilon$-close.
This condition is exactly that $\overline{f}$ is Lipschitz with constant $L$. In the base case we simply use $f$. In the limit case, the induction hypothesis is $\overline{f_{x}}:\mathbb{Q}^{+}\rightarrow A$ such that $\forall\varepsilon,\delta:\mathbb{Q}^{+},\overline{f_{x}}\;\varepsilon\approx_{L*(\varepsilon+\delta)}\overline{f_{x}}\;\delta$

Then $\lambda\varepsilon,\overline{f_{x}}\;(\varepsilon/L)$ is a Cauchy approximation and we take its limit. The coherence properties necessary for mutual recursion are easy given lemmas 2.8 and 2.9. ∎

###### Theorem 3.21.

If $T$ is Cauchy complete then $\operatorname{\mathcal{C}}T=T$.

###### Proof.

The identity of $T$ is non-expanding, so it can be extended into $\overline{id_{T}}:\operatorname{\mathcal{C}}T\rightarrow T$. $\overline{id_{T}}\circ\operatorname{\eta}_{T}$ is convertible to $id_{T}$. $\operatorname{\eta}_{T}\circ\overline{id_{T}}=id_{\operatorname{\mathcal{C}}T}$ by continuity. Then $\overline{id_{T}}$ is an equivalence from $\operatorname{\mathcal{C}}T$ to $T$, and by univalence they are equal. ∎

The above result uses univalence to get a strong identity result as opposed to the more common isomorphism. Note however that we do not get an isomorphism without univalence: the identity $\operatorname{\eta}_{T}\circ\overline{id_{T}}=id_{\operatorname{\mathcal{C}}T}$ is proven by theorem 3.19 at $A\coloneqq\operatorname{\mathcal{C}}T$. To know that $\operatorname{\mathcal{C}}T$ is a premetric space we need theorem 3.14, which needs univalence to show that it preserves equality (in lemma 3.10).

###### Theorem 3.22.

The Cauchy completion is an idempotent monad on the category of premetric spaces with Lipschitz functions.

###### Proof.

Given $f:A\rightarrow B$ a Lipschitz function with constant $L$, $\operatorname{\eta}\circ f:A\rightarrow\operatorname{\mathcal{C}}B$ and $\overline{\operatorname{\eta}\circ f}:\operatorname{\mathcal{C}}A\rightarrow\operatorname{\mathcal{C}}B$ are Lipschitz functions with constant $L$.
The identities about extension of identity and extension of composition are verified by continuity. Then completion is a functor, and the previous theorem shows it is an idempotent monad. ∎

###### Remark 3.23.

O'Connor [2007] defines Cauchy completion as a monad on the category of metric spaces with uniformly continuous functions (with setoid identities). However the map operation requires the domain to have the additional _prelength space_ property (reversed triangularity: $\forall\varepsilon\;\delta\;a\;c,a\approx_{\varepsilon+\delta}c\rightarrow\exists b,a\approx_{\varepsilon}b\wedge b\approx_{\delta}c$) to be well-defined. It therefore seems that restricting the arrows to Lipschitz functions spared us from having to define and work with this property.

Repeated Lipschitz extension can be applied to functions taking multiple arguments: if $f:A\rightarrow B\rightarrow T$ is Lipschitz in both arguments, the function $f_{1}:A\rightarrow\operatorname{\mathcal{C}}B\rightarrow T$ obtained by pointwise Lipschitz extension is itself a Lipschitz function into the Cauchy complete space $\operatorname{\mathcal{C}}B\rightarrow T$.

###### Lemma 3.24.

If $A$ is Cauchy complete and $f,g:T\rightarrow A$ are Lipschitz functions with constant $L$ and $\varepsilon:\mathbb{Q}^{+}$ is such that

$\forall(u:T)(\delta:\mathbb{Q}^{+}),f\;u\approx_{\varepsilon+\delta}g\;u$

Then $\forall(u:\operatorname{\mathcal{C}}T)(\delta:\mathbb{Q}^{+}),\overline{f}\;u\approx_{\varepsilon+\delta}\overline{g}\;u$

###### Proof.

By simple induction on $u$, using lemma 2.10 in the limit case. ∎

###### Theorem 3.25 (Binary Lipschitz extension).
If $T$ is Cauchy complete and $f:A\rightarrow B\rightarrow T$ is such that for all $x:A$, $f\;x\;\\_$ is Lipschitz with constant $L_{1}$ and for all $y:B$, $f\;\\_\;y$ is Lipschitz with constant $L_{2}$, then $f$ can be extended into $\overline{\overline{f}}:\operatorname{\mathcal{C}}A\rightarrow\operatorname{\mathcal{C}}B\rightarrow T$ with the same Lipschitz properties and coinciding with $f$ on $\operatorname{\eta}$ values.

###### Proof.

Unary Lipschitz extension gives us $f_{1}\coloneqq\lambda x,\overline{f\;x}:A\rightarrow\operatorname{\mathcal{C}}B\rightarrow T$ such that for all $x:A$, $f_{1}\;x\;\\_$ is Lipschitz with constant $L_{1}$.

$f_{1}$ is Lipschitz with constant $L_{2}$: let $\varepsilon:\mathbb{Q}^{+}$ and $x,y:A$ such that $x\approx_{\varepsilon}y$. We need to show that $f_{1}\;x\approx_{L_{2}*\varepsilon}f_{1}\;y$, i.e. there merely exist $\delta_{1},\kappa_{1}:\mathbb{Q}^{+}$ such that $L_{2}*\varepsilon=\delta_{1}+\kappa_{1}$ and $\forall z:B,\overline{f\;x}\;z\approx_{\delta_{1}}\overline{f\;y}\;z$. By roundedness there merely exist $\delta,\kappa:\mathbb{Q}^{+}$ such that $\varepsilon=\delta+\kappa$ and $x\approx_{\delta}y$. Use $\delta_{1}\coloneqq L_{2}*\delta$ and $\kappa_{1}\coloneqq L_{2}*\kappa$. By roundedness there merely exist $\delta^{\prime},\kappa^{\prime}:\mathbb{Q}^{+}$ such that $\delta=\delta^{\prime}+\kappa^{\prime}$ and $x\approx_{\delta^{\prime}}y$. By lemma 3.24 it suffices to prove $\forall(z:B)(\theta:\mathbb{Q}^{+}),f\;x\;z\approx_{L_{2}*\delta^{\prime}+\theta}f\;y\;z$. Since $f\;\\_\;z$ is Lipschitz with constant $L_{2}$ we have $f\;x\;z\approx_{L_{2}*\delta^{\prime}}f\;y\;z$, then by roundedness the desired property.

$\operatorname{\mathcal{C}}B\rightarrow T$ is Cauchy complete, so we have $\overline{\overline{f}}\coloneqq\overline{f_{1}}:\operatorname{\mathcal{C}}A\rightarrow\operatorname{\mathcal{C}}B\rightarrow T$ Lipschitz with constant $L_{2}$.
By lemma 2.15 we have that for all $y:\operatorname{\mathcal{C}}B$, $\overline{\overline{f}}\;\\_\;y$ is Lipschitz with constant $L_{2}$. By $\operatorname{\mathcal{C}}$-induction and lemma 2.17 we have that for all $x:\operatorname{\mathcal{C}}A$, $\overline{\overline{f}}\;x\;\\_$ is Lipschitz with constant $L_{1}$. ∎

## 4 Cauchy reals

We now have enough to define concepts specific to the Cauchy completion of the rationals, i.e. the Cauchy reals. Our goal is to show that they form an Archimedean ordered field, a lattice, and that the closeness relation has the intended meaning $x\approx_{\varepsilon}y\leftrightarrow|x-y|<\varepsilon$ (with the absolute value of $x$ being the join of $x$ and $-x$). Note that we use the constructive sense of ordered field, such that we have an apartness relation $x\operatorname{\\#}y$ expressing $0<|x-y|$, and the multiplicative inverse can only be applied to values apart from $0$.

### 4.1 Addition and order relations

The Cauchy reals are the Cauchy completion of the rationals: $\mathbb{R}_{c}\coloneqq\operatorname{\mathcal{C}}\mathbb{Q}$. Let $\operatorname{rat}:\mathbb{Q}\rightarrow\mathbb{R}_{c}$ be an alias for $\operatorname{\eta}$. We follow HoTT [2013] for the additive and order structure of $\mathbb{R}_{c}$: $0$ is $\operatorname{rat}0_{\mathbb{Q}}$, $1$ is $\operatorname{rat}1_{\mathbb{Q}}$, and $+$, $-$, $\cup$, $\cap$ and $|\\_|$ are defined by Lipschitz extension. The HoTT book states:

> Furthermore, the extension is unique as long as we require it to be non-expanding in each variable, and just as in the univariate case, identities on rationals extend to identities on reals. Since composition of non-expanding maps is again non-expanding, we may conclude that addition satisfies the usual properties, such as commutativity and associativity.

This is a simple application of theorem 3.19.
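To make this concrete, here is a hedged Coq sketch of how commutativity of addition on $\mathbb{R}_{c}$ reduces to commutativity on $\mathbb{Q}$. All identifiers (`Rc`, `unique_continuous_extension_2`, `plus_rat_rat`, `Qplus_comm`) are hypothetical stand-ins for the corresponding results described in the text (theorem 3.19 applied in two variables, and the computation rule of the extended addition on base elements), not the names used in the actual development.

```coq
(* Sketch only, under the assumptions stated above. *)
Lemma Rplus_comm : forall x y : Rc, x + y = y + x.
Proof.
  (* Both sides are continuous in each argument, so by theorem 3.19
     (applied once per variable) it suffices to prove the identity
     on base elements [rat q] and [rat r]. *)
  apply unique_continuous_extension_2.
  intros q r.
  (* On base elements the extended addition computes to rational
     addition: rat q + rat r = rat (q + r). *)
  rewrite !plus_rat_rat.
  (* Transport commutativity of rational addition along [rat]. *)
  exact (ap rat (Qplus_comm q r)).
Qed.
```

The same shape of proof handles associativity and the other identities mentioned in the quote: compute on `rat` values, then conclude by uniqueness of continuous extensions.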
More complex uses need us to pay a little more attention to two issues:

* • Consider transitivity of $\leq$ (with $x\leq y$ defined as $x\cup y=y$): $\forall x\;y\;z:\mathbb{R}_{c},x\cup y=y\rightarrow y\cup z=z\rightarrow x\cup z=z$ This cannot be directly proven by continuity, as the statement of theorem 3.19 does not allow for hypotheses which depend on the universally quantified variables. We can however strengthen this specific statement into one that can be solved by theorem 3.19: $\forall x\;y\;z:\mathbb{R}_{c},x\cup((x\cup y)\cup z)=(x\cup y)\cup z$. Doing this strengthening when we wish to use theorem 3.19 has not been an issue, but it is unclear whether it could become one, and so it should be kept in mind.
* • When showing that $\mathbb{R}_{c}$ is a group we need to prove $\forall x:\mathbb{R}_{c},x+(-x)=0$. The issue is that for a binary function $f:A\rightarrow B\rightarrow C$, knowing that for all $x$ and $y$, $\lambda y,f\;x\;y$ and $\lambda x,f\;x\;y$ are continuous is not sufficient to show that $\lambda x,f\;x\;x$ is continuous. The hypothesis we really want is that $f$, as the uncurried function from $A\times B$ to $C$, is continuous. If $\lambda y,f\;x\;y$ and $\lambda x,f\;x\;y$ are both Lipschitz with respective constants $L$ and $K$ then $f$ is Lipschitz with constant $L+K$, so this is not a problem when dealing with functions defined through Lipschitz extension like addition. However, showing that multiplication is continuous as an uncurried function deserves an explicit proof.

Except for those which have to do with multiplication, the proofs from HoTT [2013] can be adapted with at most minor adjustments aside from the above remarks. Then $\mathbb{R}_{c}$ is a group, a lattice, $x\approx_{\varepsilon}y$ is equivalent to $|x-y|<\varepsilon$, etc.

The book lacks the proof that $\lambda y,x+y$ preserves $<$. We show this by proving that $x<y$ if and only if there merely is $\varepsilon:\mathbb{Q}^{+}$ such that $x+\operatorname{rat}\varepsilon\leq y$, which then allows us to use properties proven by continuity.

###### Lemma 4.1.
Let $x,y:\real$ such that $x<y$. Then $\exists\varepsilon:\mathbb{Q}^{+},x+\operatorname{rat}\varepsilon\leq y$. ###### Proof. By definition of $<$ there merely are $q,r:\mathbb{Q}$ such that $x\leq\operatorname{rat}q<\operatorname{rat}r\leq y$. We take $\varepsilon\coloneqq r-q$. $x\leq\operatorname{rat}q$ so $x+\operatorname{rat}\varepsilon=\operatorname{rat}(r-q)+x\leq\operatorname{rat}(r-q)+\operatorname{rat}q=\operatorname{rat}r\leq y$ ∎ For the second direction, it is enough to show that $x<x+\operatorname{rat}\varepsilon$. We need a helper lemma first. ###### Lemma 4.2. Let $\varepsilon:\mathbb{Q}^{+}$ and $x,y:\real$ such that $x\approx_{\varepsilon}y$. Then $y\leq x+\operatorname{rat}\varepsilon$. ###### Proof. $y-x\leq|x-y|<\operatorname{rat}\varepsilon$ so $y\leq x+\operatorname{rat}\varepsilon$. ∎ We can generalize HoTT [2013] lemma 11.3.43: ###### Lemma 4.3. Let $x\;y\;z:\real$ and $\varepsilon:\mathbb{Q}^{+}$ such that $x<y$ and $x\approx_{\varepsilon}z$. Then $z<y+\operatorname{rat}\varepsilon$. ###### Proof. There merely is $q:\mathbb{Q}$ between $x$ and $y$. By HoTT [2013] lemma 11.3.43, $z<\operatorname{rat}(q+\varepsilon)\leq y+\operatorname{rat}\varepsilon$. Note here that we cannot prove $\operatorname{rat}(q+\varepsilon)<y+\operatorname{rat}\varepsilon$ since $\lambda u,u+\operatorname{rat}\varepsilon$ preserving $<$ is a future lemma. ∎ ###### Lemma 4.4. $<$ is cotransitive: $\forall x,y,z:\real,x<y\rightarrow x<z\vee z<y$ Note that $\vee$ is the truncated disjunction, i.e. the case distinction can only be made when proving a mere proposition. ###### Proof. By definition of $<$ we can reduce to the case where $x\coloneqq\operatorname{rat}q$ and $y\coloneqq\operatorname{rat}r$ for some $q,r:\mathbb{Q}$. Then we use simple $\operatorname{\mathcal{C}}-$induction on $z$. In the base case, we inherit the property from $\mathbb{Q}$. 
In the limit case, we have $x:\operatorname{Approximation}\real$ such that (induction hypothesis) $\forall(\varepsilon:\mathbb{Q}^{+})(q,r:\mathbb{Q}),q<r\rightarrow\operatorname{rat}q<x_{\varepsilon}\vee x_{\varepsilon}<\operatorname{rat}r$ Let $q,r:\mathbb{Q}$ such that $q<r$. There are $q_{1},r_{1}:\mathbb{Q}$ such that $q<q_{1}<r_{1}<r$, and $\delta:\mathbb{Q}^{+}$ such that $\delta<q_{1}-q$ and $\delta<r-r_{1}$. Using the induction hypothesis with $\delta$ and $q_{1}<r_{1}$ we can do a case distinction: * • if $\operatorname{rat}q_{1}<x_{\delta}$, we have $-x_{\delta}<\operatorname{rat}(-q_{1})$ and since $x_{\delta}\approx_{q_{1}-q}\lim x$ and $-$ is non-expanding we have using lemma 4.3 that $-\lim x<\operatorname{rat}(-q_{1}+(q_{1}-q))=\operatorname{rat}(-q)$. * • if $x_{\delta}<\operatorname{rat}r_{1}$ using lemma 4.3 we have $\lim x<\operatorname{rat}(r_{1}+(r-r_{1}))=\operatorname{rat}r$. ∎ ###### Lemma 4.5. For all $x:\real$ and $\varepsilon:\mathbb{Q}^{+}$, $x<x+\operatorname{rat}\varepsilon$. ###### Proof. By simple $\operatorname{\mathcal{C}}-$induction on $x$. In the base case we inherit the result from $\mathbb{Q}$. In the limit case, let $x:\operatorname{Approximation}\real$ such that (induction hypothesis) $\forall\varepsilon,\delta:\mathbb{Q}^{+},x_{\varepsilon}<x_{\varepsilon}+\operatorname{rat}\delta$ Let $\varepsilon:\mathbb{Q}^{+}$. By lemma 4.3 and the induction hypothesis we have $\forall\delta,\kappa:\mathbb{Q}^{+},\lim x<x_{\delta}+\operatorname{rat}(\delta+\kappa)$. 
Using $\delta\coloneqq\varepsilon/3$ and $\kappa\coloneqq 2\varepsilon/9$, by cotransitivity of $<$ (lemma 4.4) applied to $\lim x+\operatorname{rat}\varepsilon$ we have either

* • $\lim x<\lim x+\operatorname{rat}\varepsilon$ as desired, or
* • $\lim x+\operatorname{rat}\varepsilon<x_{\delta}+\operatorname{rat}(\delta+\kappa)$, but this is absurd: by lemma 4.2 $x_{\delta}\leq\lim x+\operatorname{rat}(\delta+\varepsilon/9)$, then by adding $\delta+\kappa=5\varepsilon/9$ to both sides $x_{\delta}+\operatorname{rat}(\delta+\kappa)\leq\lim x+\operatorname{rat}\varepsilon<x_{\delta}+\operatorname{rat}(\delta+\kappa)$.

∎

We also need to prove $x\leq y$ from $\neg(y<x)$.

###### Lemma 4.6.

Real numbers can be approximated from below: let $x:\real$, then $\lambda\varepsilon:\mathbb{Q}^{+},x-\operatorname{rat}\varepsilon$ is an approximation with limit $x$.

###### Proof.

HoTT [2013] theorem 11.3.44 (expressing $x\approx_{\varepsilon}y$ as $|x-y|<\operatorname{rat}\varepsilon$) lets us reduce this to bureaucratic work. ∎

###### Lemma 4.7.

Let $f:\real\rightarrow\real$ be Lipschitz with constant $L$ and $x:\operatorname{Approximation}\real$. Then $\lambda\varepsilon,f\;x_{\varepsilon/L}$ is an approximation with limit $f\;(\lim x)$.

###### Proof.

Easy. ∎

###### Lemma 4.8.

Given $x,y:\real$, if $x<y$ is false then $y\leq x$.

###### Proof.

Let $x,y:\real$ such that $x<y$ is false. Let $z\coloneqq x-y$. First note that $\forall\varepsilon:\mathbb{Q}^{+},-\operatorname{rat}\varepsilon<z$: let $\varepsilon:\mathbb{Q}^{+}$. Since $y-\operatorname{rat}\varepsilon<y$, by cotransitivity either $y-\operatorname{rat}\varepsilon<x$ as desired, or $x<y$ which is absurd. $y\leq x$ is equivalent to $0\leq z$, i.e. $0\cup z=z$. By lemma 4.6 $0=\lim(\lambda\varepsilon,-\operatorname{rat}\varepsilon)$ so by lemma 4.7 $0\cup z=\lim(\lambda\varepsilon,-\operatorname{rat}\varepsilon\cup z)=\lim(\lambda\varepsilon,z)=z$
∎

We still need to define multiplication, prove that it is continuous and behaves well with regard to $<$, and show that reals apart from $0$ are invertible.

### 4.2 Multiplication

Multiplication is not Lipschitz over all of $\mathbb{Q}$, so we cannot simply use Lipschitz extension. The definition in HoTT [2013] first defines squaring and uses the identity $u*v=\frac{(u+v)^{2}-u^{2}-v^{2}}{2}$ to define multiplication from it. We stay closer to simple Lipschitz extension by defining multiplication on bounded intervals, then joining these to cover $\mathbb{R}_{c}$.

###### Definition 4.9 (Definition by surjection).

Let $A$, $B$ and $C$ be sets, and $f:A\rightarrow C$ and $g:A\rightarrow B$ functions such that $g$ is a surjection and $f$ respects $\sim_{g}$, the equivalence relation on $A$ induced by $g$. Then $B$ is equivalent to $A/\sim_{g}$, the quotient of $A$ by $\sim_{g}$, and there is a function $f_{\sim_{g}}:A/\sim_{g}\rightarrow C$ acting like $f$. Composing $f_{\sim_{g}}$ with the equivalence defines the function $\overline{f}_{\sim_{g}}:B\rightarrow C$ such that $\forall x:A,\overline{f}_{\sim_{g}}\;(g\;x)=f\;x$.

###### Definition 4.10 (Intervals).

For $a,b:\mathbb{Q}$ (resp. $a,b:\real$), the interval space $[a,b]\coloneqq\Sigma_{x}a\leq x\leq b$, inheriting the closeness relation from the first projection, forms a premetric space. For $x:\mathbb{Q}$ (resp. $x:\real$), $a\leq a\cup(x\cap b)\leq b$ so we can define $[x]_{a,b}:[a,b]$. If $a\leq x\leq b$ then $[x]_{a,b}$ has its first projection equal to $x$.

###### Definition 4.11 (Left multiplication by a rational).

For any $q:\mathbb{Q}$, $\lambda r:\mathbb{Q},q*r$ is Lipschitz with constant $|q|+1$, so we define $\lambda(q:\mathbb{Q})(y:\real),q*y$ by Lipschitz extension with constant $|q|+1$.

###### Definition 4.12 (Bounded multiplication).

For $a:\mathbb{Q}^{+}$ and $y:[-\operatorname{rat}a,\operatorname{rat}a]$ we define $\lambda x:\real,x*_{a}y$ by Lipschitz extension.

###### Proof.
We need to check that $\lambda q:\mathbb{Q},q*y$ is Lipschitz with constant $a$. Using HoTT [2013] theorem 11.3.44 it suffices to show that for $x:\real$ such that $|x|\leq\operatorname{rat}a$ we have $\forall q\;r:\mathbb{Q},|q*x-r*x|\leq\operatorname{rat}(|q-r|*a)$. This is obtained by continuity. ∎

###### Lemma 4.13.

Cauchy reals are bounded by rationals, i.e. for all $x:\real$ there merely is $q:\mathbb{Q}^{+}$ such that $|x|<\operatorname{rat}q$.

###### Proof.

By simple $\operatorname{\mathcal{C}}$-induction on $x$. In the base case we take $q\coloneqq|x|+1$. In the limit case, where $x$ is $\lim f$, by the induction hypothesis there merely is $q:\mathbb{Q}^{+}$ such that $|f\;1|<\operatorname{rat}q$. $|f\;1|\approx_{2}|x|$ so $|x|<\operatorname{rat}(q+2)$. ∎

###### Lemma 4.14.

Let the following function be defined by the obvious projections: $\\{\\_\\}:\Sigma_{a:\mathbb{Q}^{+}}[-\operatorname{rat}a,\operatorname{rat}a]\rightarrow\real$ It is surjective and respects bounded multiplication, i.e. $\forall x,y,z,\\{x\\}=\\{y\\}\rightarrow z*_{x_{1}}x_{2}=z*_{y_{1}}y_{2}$

###### Proof.

The function is surjective because reals are bounded by rationals. It respects bounded multiplication by continuity. ∎

###### Definition 4.15 (Multiplication).

For $x:\real$ we define $\lambda y:\real,x*y$ from $\lambda y:\Sigma_{a:\mathbb{Q}^{+}}[-\operatorname{rat}a,\operatorname{rat}a],x*_{y_{1}}y_{2}$ and surjectivity of $\\{\\_\\}$.

Multiplication is now defined, with the following properties by definition:

###### Lemma 4.16.

For $x:\real$ and $a:\mathbb{Q}^{+}$ and $y:[-\operatorname{rat}a,\operatorname{rat}a]$, $x*y_{1}=x*_{a}y$.

###### Proof.

By unfolding definition 4.9. ∎

###### Lemma 4.17.

Multiplication computes on rationals: $\forall q,r:\mathbb{Q},\operatorname{rat}q*\operatorname{rat}r\equiv\operatorname{rat}(q*r)$

###### Proof.

Checking a conversion is decidable so this proof is left as an exercise to the reader.
∎ We now need to show that multiplication is continuous as an uncurried function. ###### Lemma 4.18. For $a:\mathbb{Q}^{+}$ and $y:\real$ such that $|y|\leq\operatorname{rat}a$, $\lambda x:\real,x*y$ is Lipschitz with constant $a$. ###### Proof. Using lemmas 4.16 and 4.12. ∎ ###### Lemma 4.19. For all $y:\real$, $\lambda x:\real,x*y$ is continuous. ###### Proof. Let $y:\real$, there merely is $a:\mathbb{Q}^{+}$ such that $|y|\leq\operatorname{rat}a$. By lemma 4.18 $\lambda x:\real,x*y$ is Lipschitz with constant $a$ and therefore continuous. ∎ ###### Lemma 4.20. For $q:\mathbb{Q}$ and $x:\real$, $\operatorname{rat}q*x=q*x$. ###### Proof. Using lemma 4.16 for some $a$ bounding $x$. ∎ ###### Lemma 4.21. Multiplication and negation distribute inside the absolute value: $\forall a,b,c:\real,|a*b-a*c|=|a|*|b-c|$ ###### Proof. We can reduce to the case where $a$ is rational by continuity, then use lemma 4.20 to replace real to real multiplication with rational to real multiplication and finish by continuity. ∎ ###### Lemma 4.22. Multiplication is compatible with $\leq$ under absolute value: for $a,b,c,d:\real$, if $|a|\leq|c|$ and $|b|\leq|d|$ then $|a|*|b|\leq|c|*|d|$. ###### Proof. Again we use continuity to reduce $a$ and $c$ (the variables appearing to the left of the multiplications) to their rational case, then rewrite the desired property to use multiplication of a rational and a real and finish with continuity. ∎ ###### Theorem 4.23. Multiplication is continuous as a function of 2 variables, i.e. given $u_{1}$ and $v_{1}:\real$ and $\varepsilon:\mathbb{Q}^{+}$ there merely exists $\delta:\mathbb{Q}^{+}$ such that for all $u_{2}$ and $v_{2}:\real$, if $u_{1}\approx_{\delta}u_{2}$ and $v_{1}\approx_{\delta}v_{2}$ then $u_{1}*v_{1}\approx_{\varepsilon}u_{2}*v_{2}$. ###### Proof. Let $u_{1},v_{1}:\real$ and $\varepsilon:\mathbb{Q}^{+}$. There merely is $\delta:\mathbb{Q}^{+}$ such that $|u_{1}|<\operatorname{rat}\delta$ and $|v_{1}|<\operatorname{rat}\delta$. 
Let $\kappa\coloneqq\delta+1$; then in the theorem's statement we take $\delta\coloneqq 1\cap\frac{\varepsilon}{2(\kappa+1)}$. Let $u_{2},v_{2}:\real$ such that

* • $u_{1}\approx_{1\cap\frac{\varepsilon}{2(\kappa+1)}}u_{2}$
* • $v_{1}\approx_{1\cap\frac{\varepsilon}{2(\kappa+1)}}v_{2}$

Then:

* • $u_{1}*v_{1}\approx_{\varepsilon/2}u_{2}*v_{1}$: $|v_{1}|\leq\operatorname{rat}\delta$ so $\lambda y:\real,y*v_{1}$ is Lipschitz with constant $\delta$ and it suffices to prove $u_{1}\approx_{\frac{\varepsilon}{2\delta}}u_{2}$. This is true from roundedness and the first $\approx$ hypothesis since $1\cap\frac{\varepsilon}{2(\kappa+1)}\leq\frac{\varepsilon}{2\delta}$.

* • $u_{2}*v_{1}\approx_{\varepsilon/2}u_{2}*v_{2}$: By HoTT [2013] theorem 11.3.44 we look to prove $|u_{2}*v_{1}-u_{2}*v_{2}|=|u_{2}|*|v_{1}-v_{2}|<\varepsilon/2$. In fact we have

  * – $|u_{2}|\leq|\kappa|=\kappa$ since $|u_{1}|\leq\kappa$ and $u_{1}\approx_{1}u_{2}$.
  * – $|v_{1}-v_{2}|\leq|\frac{\varepsilon}{2(\kappa+1)}|=\frac{\varepsilon}{2(\kappa+1)}$ since $|v_{1}-v_{2}|<1\cap\frac{\varepsilon}{2(\kappa+1)}$.

  Then by lemma 4.22 we have $|u_{2}|*|v_{1}-v_{2}|\leq|\kappa|*|\frac{\varepsilon}{2(\kappa+1)}|=\varepsilon/2*\frac{\kappa}{\kappa+1}<\varepsilon/2$.

By triangularity $u_{1}*v_{1}\approx_{\varepsilon}u_{2}*v_{2}$. ∎

This is enough to show that $\mathbb{R}_{c}$ forms a partially ordered ring, but we still need to link multiplication and $<$.

###### Lemma 4.24.

Multiplication of positive values produces a positive value: let $x,y:\real$ such that $0<x$ and $0<y$, then $0<x*y$.

###### Proof.

Let $x,y:\real$ such that $0<x$ and $0<y$, then there merely are $\varepsilon,\delta:\mathbb{Q}^{+}$ such that $\operatorname{rat}\varepsilon<x$ and $\operatorname{rat}\delta<y$. By continuity multiplication preserves $\leq$ for nonnegative values, so $0<\operatorname{rat}(\varepsilon*\delta)\leq x*y$. ∎

###### Lemma 4.25.

For $x,y:\real$, if $0\leq x$ and $0<x*y$ then $0<y$.

###### Proof.

There merely is $\varepsilon:\mathbb{Q}^{+}$ such that $\operatorname{rat}\varepsilon<x*y$.
By lemma 4.13 there merely is $\delta:\mathbb{Q}^{+}$ such that $|x|<\operatorname{rat}\delta$. Then it suffices to prove $0<\varepsilon/\delta\leq y$. We do this using lemma 4.8: suppose $y<\varepsilon/\delta$. Since $0\leq y$ (if $y<0$ then $x*y\leq 0$ which is absurd), $x*y\leq|x|*y\leq\varepsilon<x*y$ which is absurd. ∎ ### 4.3 Multiplicative inverse The multiplicative inverse for $\mathbb{Q}$ is Lipschitz on intervals $[\varepsilon,+\infty]$ for $\varepsilon:\mathbb{Q}^{+}$. We use this to extend it to positive reals, then to reals apart from $0$ using negation. ###### Definition 4.26. For $\varepsilon:\mathbb{Q}^{+}$ the function $\lambda q:\mathbb{Q},\frac{1}{\varepsilon\cup q}$ is defined and Lipschitz with constant $\varepsilon^{-2}$. Then for $x:\Sigma_{\varepsilon:\mathbb{Q}^{+},x:\real}\operatorname{rat}\varepsilon\leq x$ we define $/_{\Sigma}x\coloneqq\left(\overline{\lambda q:\mathbb{Q},\frac{1}{x_{\varepsilon}\cup q}}\right)x_{x}$ ###### Definition 4.27. We define the inverse of positive reals by surjection (definition 4.9) using $/_{\Sigma}$ and the obvious surjection from $x:\Sigma_{\varepsilon:\mathbb{Q}^{+},x:\real}\operatorname{rat}\varepsilon\leq x$ to $\Sigma_{x:\real}0<x$. For negative values we use the identity $\frac{1}{x}\coloneqq-\frac{1}{-x}$. This gives $\frac{1}{x}$ for any $x$ such that $x\operatorname{\\#}0$. ###### Lemma 4.28. $\forall q:\mathbb{Q},\operatorname{rat}q\operatorname{\\#}0\rightarrow\frac{1}{\operatorname{rat}q}=\operatorname{rat}(\frac{1}{q})$ ###### Proof. The negative case is easily reduced to the positive case. In the positive case there merely are $r,s:\mathbb{Q}$ such that $0\leq r<s\leq q$, then $\frac{1}{\operatorname{rat}q}$ reduces to $\operatorname{rat}\frac{1}{q\cup s}$ which is equal to $\operatorname{rat}\frac{1}{q}$ since $s\leq q$. ∎ ###### Lemma 4.29. 
For $x:\real$ and $\varepsilon:\mathbb{Q}^{+}$ such that $\operatorname{rat}\varepsilon\leq x$, $\frac{1}{x}=\left(\overline{\lambda q:\mathbb{Q},\frac{1}{\varepsilon\cup q}}\right)x$.

###### Proof.

Easy. ∎

###### Lemma 4.30.

$\forall x:\real$, if $x\operatorname{\\#}0$ then $x*\frac{1}{x}=1$.

###### Proof.

We can reduce to the case where $0<x$. Then there merely is $\varepsilon:\mathbb{Q}^{+}$ such that $\operatorname{rat}\varepsilon\leq x$. By continuity $x*\left(\overline{\lambda q:\mathbb{Q},\frac{1}{\varepsilon\cup q}}\right)x=1$ for all $x$ such that $\operatorname{rat}\varepsilon\leq x$, and by definition, $\frac{1}{x}=\left(\overline{\lambda q:\mathbb{Q},\frac{1}{\varepsilon\cup q}}\right)x$. ∎

Together with the results from HoTT [2013] section 11.3.3 we now have all results needed for $\mathbb{R}_{c}$ to form an Archimedean ordered field as desired.

## 5 A partial function on Cauchy reals

Without additional axioms, we cannot define any non-constant function from $\mathbb{R}_{c}$ to the booleans $\mathbb{B}$. In other words, no non-trivial property on $\mathbb{R}_{c}$ is decidable. However we can encode non-termination as an effect in the _partiality monad_, where the type of computations producing values of type $A$ is denoted $A_{\bot}$. Then we can define a function $isPositive:\real\rightarrow\boldsymbol{2}_{\bot}$ which produces $true$ on positive reals, $false$ on negative reals and does not terminate on $0$.

### 5.1 The partiality monad

In Partiality, Revisited [2016], Altenkirch and Danielsson define the type $A_{\bot}$ of computations producing values of type $A$ as a HIIT. They implemented it in Agda and proved certain properties such as the existence of fixpoints and that it forms the free $\omega$-CPO on $A$.

###### Definition 5.1 (Increasing sequences).

$\operatorname{IncreasingSequence}A\coloneqq\Sigma_{f:\mathbb{N}\rightarrow A}\forall n,f_{n}\leq f_{Sn}$ As with Cauchy approximations we confuse $f:\operatorname{IncreasingSequence}A$ with the underlying function in our notations.

###### Definition 5.2.
Given $A$ a type, the type $A_{\bot}$ is defined simultaneously with its order. It has the following constructors: * • $\operatorname{\eta}:A\rightarrow A_{\bot}$ * • $\bot:A_{\bot}$ * • $sup:\operatorname{IncreasingSequence}A_{\bot}\rightarrow A_{\bot}$ with a path constructor of type $\forall x,y:A_{\bot},x\leq y\rightarrow y\leq x\rightarrow x=y$. The order has constructors of types * • $\forall x:A_{\bot},x\leq x$ * • $\forall x:A_{\bot},\bot\leq x$ * • $\forall f,x,sup\;f\leq x\rightarrow\forall n,f_{n}\leq x$ * • $\forall f,x,(\forall n,f_{n}\leq x)\rightarrow sup\;f\leq x$ and is truncated to be propositional. As with the Cauchy completion we have simple induction on values and simple induction on the auxiliary relation $\leq$ to prove inhabitedness of propositional types depending on computations, and non-dependent mutual recursion to define values from computations. Altenkirch and Danielsson suggest a way of defining $isPositive:\mathbb{R}^{q}\rightarrow\boldsymbol{2}_{\bot}$, where $\mathbb{R}^{q}$ is the quotient of Cauchy sequences of $\mathbb{Q}$ by the appropriate equivalence. They first define it on Cauchy sequences of $\mathbb{Q}$ using the fixpoint operator provided by the partiality functor, then show that it respects the equivalence and extend it to the quotient $\mathbb{R}^{q}$. We could not adapt that definition for the HIIT Cauchy real numbers. However, an alternate definition is possible: * • For $P:Prop$, $\left(\Sigma_{p:\boldsymbol{1}_{\bot}}p=\operatorname{\eta}\star\leftrightarrow P\right)$ is propositional. We can use simple $\operatorname{\mathcal{C}}-$induction to define $p$ for all $P\coloneqq x<\operatorname{rat}q$. * • From $p$ and $q:\boldsymbol{1}_{\bot}$ such that $p$ and $q$ are not both $\operatorname{\eta}\star$, we define $interleave\;p\;q:\boldsymbol{2}_{\bot}$ indicating which if any is $\operatorname{\eta}\star$. * • We interleave the values defined from $-x<0$ and $x<0$ to define $isPositive\;x$. 
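The outline above can be mimicked numerically. The Python sketch below is a toy model under our own conventions (a step bound plays the role of genuine non-termination, and reals are given as Cauchy approximations): it interleaves the searches for witnesses of $0<x$ and of $x<0$, answers on any real apart from $0$, and runs out of fuel at exactly $0$.

```python
from fractions import Fraction

def is_positive(x, fuel=1000):
    """Interleave the semi-decisions of 0 < x and x < 0 for a real
    given as a Cauchy approximation x : eps -> rational within eps.
    Returns True / False, or None if no witness is found in `fuel`
    steps (at exactly 0 no witness ever exists)."""
    for n in range(1, fuel + 1):
        eps = Fraction(1, n)
        q = x(eps)
        if q > 2 * eps:      # then x >= q - eps > eps > 0
            return True
        if q < -2 * eps:     # symmetrically, x <= q + eps < -eps < 0
            return False
    return None

const = lambda c: (lambda eps: Fraction(c))
print(is_positive(const(1)), is_positive(const(-3)), is_positive(const(0)))
# True False None
```

The margin `2 * eps` is an illustration choice: it guarantees that a found witness really separates the real from $0$, at the cost of searching a little longer near $0$.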
We assume the properties of $A_{\bot}$ for arbitrary $A$ from Partiality, Revisited [2016]. Let us then focus on the properties of $\boldsymbol{1}_{\bot}$.

### 5.2 The Sierpinski space

If $A_{\bot}$ is the type of possibly non-terminating computations returning a value of type $A$, then $\boldsymbol{1}_{\bot}$ is the type of semi-decision procedures: $p:\boldsymbol{1}_{\bot}$ semi-decides all propositions equivalent to $p=\operatorname{\eta}\star$.

###### Definition 5.3.

$\boldsymbol{1}_{\bot}$ has a greatest element $\top\coloneqq\operatorname{\eta}\star$.

###### Proof.

$\forall x:\boldsymbol{1}_{\bot},x\leq\top$ by simple induction on $x$. ∎

We can interpret $p:\boldsymbol{1}_{\bot}$ as the proposition $p=\operatorname{\eta}\star$ (equivalently, $\operatorname{\eta}\star\leq p$). Then trivially $\top\leftrightarrow\boldsymbol{1}$, $\bot\leftrightarrow\boldsymbol{0}$.

###### Lemma 5.4.

For all $a\;b:\boldsymbol{1}_{\bot}$, $a\leq b$ if and only if $a\rightarrow b$.

###### Proof.

If $a\leq b$ then $a\rightarrow b$: suppose $a$, i.e. $\top\leq a$. Then $\top\leq a\leq b$, i.e. $b$. If $a\rightarrow b$ then $a\leq b$: by simple induction on $a$, each case being trivial. ∎

We can also interpret $\vee$ into $\boldsymbol{1}_{\bot}$ (and $\wedge$, but we do not need it for $isPositive$).

###### Definition 5.5 (Join on $\boldsymbol{1}_{\bot}$).

###### Proof.

We define an auxiliary function by mutual recursion: for all $y:\boldsymbol{1}_{\bot}$ there is $\cup_{y}:\boldsymbol{1}_{\bot}\rightarrow\Sigma_{z:\boldsymbol{1}_{\bot}}y\leq z$, then $x\cup y\coloneqq(\cup_{y}\;x)_{1}$, the first projection of $\cup_{y}\;x$. It computes as follows:

* • $\top\cup\;y\coloneqq\top$
* • $\bot\cup\;y\coloneqq y$
* • $(\sup f)\cup\;y\coloneqq\sup(\lambda n,f_{n}\cup\;y)$

The proofs of the required properties are trivial. Note that we need the auxiliary function as we need a proof that $\forall x:\Sigma_{z:\boldsymbol{1}_{\bot}}y\leq z,\cup_{y}\;\bot=y\leq x_{1}$.
∎

###### Lemma 5.6.

$x\cup y$ is the least upper bound of $x$ and $y$. Then $\cup$ is a monoid operation with identity element $\bot$.

###### Proof.

By definition and simple inductions. ∎

###### Lemma 5.7.

For all $a\;b:\boldsymbol{1}_{\bot}$, $a\cup b$ if and only if $a\vee b$.

###### Proof.

If $a\vee b$ then trivially $a\cup b$, $a\cup b$ being an upper bound of $a$ and $b$. The other direction is obtained by simple induction on $a$. ∎

$\boldsymbol{1}_{\bot}$ has a countable join operator, but it is limited to increasing sequences. Thanks to the binary join we can remove this limitation and interpret properties $\exists n:\mathbb{N},P_{n}$, and even $\exists x:A,P\;x$ when $A$ is enumerable.

###### Definition 5.8.

For all $f:\mathbb{N}\rightarrow\boldsymbol{1}_{\bot}$ there is a least upper bound $\sup f$ of all the $f_{n}$.

###### Proof.

We have $\sup f$ for monotone sequences, so for an arbitrary $f:\mathbb{N}\rightarrow\boldsymbol{1}_{\bot}$ we define $f^{\leq}:\mathbb{N}\rightarrow\boldsymbol{1}_{\bot}$ by $f^{\leq}\;n\coloneqq\bigcup_{m\leq n}f\;m$. Then $f^{\leq}$ is monotone and $\sup f\coloneqq\sup f^{\leq}$ is the least upper bound of all the $f_{n}$. ∎

That $\sup f$ semi-decides $\exists x,f\;x$ is trivial.

### 5.3 Interleaving

###### Definition 5.9 (Disjoint).

$a$ and $b:\boldsymbol{1}_{\bot}$ are disjoint when they do not both hold, i.e. $a\rightarrow b\rightarrow\boldsymbol{0}$.

Interleaving lets us define a value in $\boldsymbol{2}_{\bot}$ from two values in $\boldsymbol{1}_{\bot}$ which are not both $\top$. If we see $x\;y:\boldsymbol{1}_{\bot}$ as semi-decision procedures then the interleaving of $x$ and $y$ is $\operatorname{\eta}true$ if $x$ terminates (i.e. $x=\top$), $\operatorname{\eta}false$ if $y$ terminates, and does not terminate if neither does. If computing on a Turing machine, it would be obtained by interleaving simulated steps of $x$ and $y$ until one terminates, then returning a value depending on which one terminated.
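The Turing-machine picture admits a tiny concrete model (our own toy encoding, not the HIT itself): represent an element of $\boldsymbol{1}_{\bot}$ by the number of simulated steps after which it terminates, with None standing for $\bot$. Join and interleaving then compute as one would expect from definitions 5.5 and 5.10.

```python
# 1_bot modeled as: None for bottom (never terminates), or an int
# giving the number of steps after which the procedure terminates.

def join(a, b):
    """x ∪ y terminates as soon as either argument does, so its
    termination time is the minimum of the two (lemma 5.6)."""
    if a is None:
        return b
    if b is None:
        return a
    return min(a, b)

def interleave(a, b):
    """Run a and b in lockstep; requires disjointness (not both
    terminate). 'true' if a terminates, 'false' if b does, None
    if neither (definition 5.10)."""
    assert a is None or b is None, "interleave needs disjoint arguments"
    if a is not None:
        return 'true'
    if b is not None:
        return 'false'
    return None

print(join(None, 3), interleave(7, None), interleave(None, 2))
# 3 true false
```

As in the text, the disjointness assumption hides the question of which of two terminating procedures finishes first, which the quotiented type cannot observe.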
We can only interleave disjoint values: a Turing machine could pick whichever one terminates first, but we have hidden those distinctions away using higher inductive types.

###### Definition 5.10.

We define by mutual induction a function $\begin{split}interleave_{\star}:\forall a\;b:\boldsymbol{1}_{\bot},disjoint\;a\;b\rightarrow\\\ \Sigma_{c:\boldsymbol{2}_{\bot}}(map\;(\lambda\\_,false)\;b)\leq c\end{split}$ where $map:\forall A\;B:Type,(A\rightarrow B)\rightarrow A_{\bot}\rightarrow B_{\bot}$ is the map of the partiality monad, and in parallel a proof that for all $a\;a^{\prime}:\boldsymbol{1}_{\bot}$, if $a\leq a^{\prime}$ then for all $b:\boldsymbol{1}_{\bot}$ disjoint with $a$ and with $a^{\prime}$, $interleave_{\star}\;a\;b\leq interleave_{\star}\;a^{\prime}\;b$. Then the interleaving function $interleave$ is the first projection of $interleave_{\star}$. It computes as follows:

* • $interleave\;\top\;b\coloneqq\operatorname{\eta}true$
* • $interleave\;\bot\;b\coloneqq map\;(\lambda\\_,false)\;b$
* • $interleave\;(\sup f)\;b\coloneqq\sup(\lambda n,interleave\;f_{n}\;b)$

Some attention must be taken to keep track of the disjointness proofs, which are left implicit on paper.

###### Lemma 5.11.

If $a:\boldsymbol{1}_{\bot}$ is disjoint from $\top$ then $interleave\;a\;\top=\operatorname{\eta}false$.

###### Proof.

$a$ is disjoint from $\top$ so $a=\bot$ and $interleave\;a\;\top=map\;(\lambda\\_,false)\;\top=\operatorname{\eta}false$. ∎

###### Lemma 5.12.

For $a\;b:\boldsymbol{1}_{\bot}$ disjoint, $interleave\;a\;b=\operatorname{\eta}true$ (resp. $\operatorname{\eta}false$) if and only if $a$ holds (resp. $b$ holds).

###### Proof.

By simple induction on $a$ in the first direction, by computation in the second (note that if $b$ then $a=\bot$, as they are disjoint). ∎

### 5.4 Partial comparison of real numbers with rational numbers

###### Lemma 5.13.

For all $x:\real$ and $q:\mathbb{Q}$, $x<\operatorname{rat}q$ is semi-decidable, i.e.
$\exists s:\boldsymbol{1}_{\bot},s\leftrightarrow x<\operatorname{rat}q$.

###### Proof.

By simple induction on $x$. In the base case, for all $q\;r:\mathbb{Q}$, $\operatorname{rat}q<\operatorname{rat}r$ is decidable so we pick $s\coloneqq\top$ or $\bot$ as appropriate. In the limit case, if $x:\operatorname{Approximation}\real$ such that for all $\varepsilon$ and $q$, $x_{\varepsilon}<\operatorname{rat}q$ is semi-decidable, let $q:\mathbb{Q}$; we take $s\coloneqq\exists\varepsilon\;\delta:\mathbb{Q}^{+},x_{\varepsilon}<\operatorname{rat}(q-\varepsilon-\delta)$ (interpreted as a value in $\boldsymbol{1}_{\bot}$). Then to show correctness:

* • if $\exists\varepsilon\;\delta:\mathbb{Q}^{+},x_{\varepsilon}<\operatorname{rat}(q-\varepsilon-\delta)$ then $\lim x<\operatorname{rat}q=\operatorname{rat}(q-\varepsilon-\delta+\varepsilon+\delta)$ by lemma 4.3.

* • if $\lim x<\operatorname{rat}q$, there merely is $r:\mathbb{Q}$ such that $\lim x<\operatorname{rat}r$ and $r<q$. Let $\varepsilon\coloneqq q-r$. Then $x_{\frac{\varepsilon}{4}}<\operatorname{rat}(q-\frac{\varepsilon}{4}-\frac{\varepsilon}{4})=\operatorname{rat}(r+\frac{\varepsilon}{4}+\frac{\varepsilon}{4})$ by lemma 4.3.

∎

###### Definition 5.14.

For $x:\real$ let $isPositive\;x$ be the interleaving of the semi-decisions for $-x<0$ and $x<0$.

###### Theorem 5.15.

Let $x:\real$.

* • $0<x$ if and only if $isPositive\;x=\operatorname{\eta}true$
* • $x<0$ if and only if $isPositive\;x=\operatorname{\eta}false$
* • $isPositive\;0=\bot$

###### Proof.

By lemmas 5.11 and 5.12 and computation. ∎

## 6 Conclusion

We have defined a Cauchy completion operation which is a monad on the category of spaces with an appropriate closeness relation and Lipschitz functions. When applied to the space of rational numbers it produces a Cauchy complete archimedean ordered field generated by rationals and limits of Cauchy approximations, i.e. the Cauchy reals.
Finally we have defined and proven correct a semi-decision procedure (in the sense of Partiality, Revisited [2016]) for comparing a Cauchy real and a rational number.

## Acknowledgements

This paper is the result of two internships under the direction of Bas Spitters, at Radboud University and at the University of Aarhus.

## References

* HoTT [2013] The Univalent Foundations Program, Institute for Advanced Study, _Homotopy Type Theory: Univalent Foundations of Mathematics_, http://homotopytypetheory.org/book/
* MathClasses [2011] Bas Spitters and Eelis van der Weegen, _Type Classes for Mathematics in Type Theory_, https://math-classes.github.io/
* Partiality, Revisited [2016] Thorsten Altenkirch and Nils Anders Danielsson, _Partiality, Revisited_, TYPES 2016
* OConnor07 [2007] Russell O'Connor, _A Monadic, Functional Implementation of Real Numbers_, Mathematical Structures in Computer Science 17(1): 129–159 (2007)
# Matter in Discrete Space-Times

P. P. Divakaran (Email: <EMAIL_ADDRESS>)

###### Abstract

In the Einsteinian model of space-time as a 4-dimensional pseudo-Riemannian manifold, special relativity holds exactly in the tangent space of every point. A quantised matter field of a given mass and spin, corresponding to an elementary particle of matter, is then to be regarded as being defined by a unitary representation (UR) of the Poincaré group at each point. This Wignerian viewpoint leads to a more general reformulation of the equivalence principle as the unitary equivalence of these URs as the point is varied. Against this background, the main question addressed in these notes is whether, as a necessary first step in a discretisation of gravity, the Wigner construction can be carried over to a model space-time which is a 4-dimensional lattice embeddable in real Minkowski space with a distance function inherited from it (but physically not so embedded). Working with a hypercubic lattice, it is shown in full mathematical detail that the Wigner paradigm continues to be valid but with some exotic new features. The description of spin is essentially the same. In contrast, the momentum space is the 4-torus, identified as the Brillouin zone of space-time where all physical phenomena occur: 4-momentum is defined and conserved only modulo a reciprocal lattice vector, implying that there is no notion of an invariant mass except when it vanishes. Nevertheless, massless particles continue to have a constant invariant speed (the ‘speed of light’), a result of crucial importance for the viability of discrete relativity. A massive particle in contrast is characterised by a rest mass and, under large boosts, it will pass through phases of superluminal propagation.
If the lattice spacing is taken as a fixed fundamental length of the order of the Planck length, such effects can be observed only in the evolution of the very early regime of the conventional big bang universe, of which the two most dramatic manifestations are i) cosmic Umklapp processes, leading to a degradation of the energies of individual particles, as a possible source of ultra-high energy cosmic rays and ii) primordial superluminal expansion as a contribution to, or even as the root cause of, cosmic inflation. A fundamentally discrete space-time is not in conflict with known physics; it may in fact be of help in explaining some otherwise mysterious aspects of early cosmology.

## 1 A fundamental length?

Explorations of the possibility that space or space-time may have a discrete structure on a scale much smaller than currently accessible have a long history. Though the motivations for considering such a fine structure have changed somewhat over time, they have a common origin in the old observation of Planck that the fundamental constants $c$ and $\hbar$, which are independent of dynamics, can be combined with the coupling strength $G_{N}$ of the gravitational interaction of matter in the Newtonian limit to produce a unit of length $L=(\hbar G_{N}/c^{3})^{1/2}$, of a far smaller scale than any length we can conceivably measure. The expectation, supported by ingenious thought experiments and some theoretical considerations, is that in a quantum theory of gravity (if and when we succeed in constructing one), or even in a semi-classical approximation to it, distances of the order of $L$ or smaller cannot have an operational significance. (There is an extensive review of work in this area, particularly valuable for the historical and motivational background, in [1]; it is also very useful for its comprehensive bibliography as of the time of its writing.)
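For orientation, Planck's combination is easy to evaluate; the short check below uses standard SI values (note also that $L\to 0$ as $G_{N}\to 0$ with $c$ and $\hbar$ held fixed, so a world without gravitation has no such length).

```python
import math

# Standard SI values (CODATA); small uncertainties are irrelevant here.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G_N  = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m / s

L = math.sqrt(hbar * G_N / c**3)   # Planck length
print(f"{L:.3e} m")                # ~1.616e-35 m
```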
New work continues to be produced in profusion but it seems fair to say that there has been no significant recent breakthrough. We may then consider two physically distinct types of models of space-time as possible ways of accommodating a fundamental length. The more popular is one in which space-time is still a pseudo-Riemannian manifold but does not admit physical measurements of lengths below a finite length much smaller than any that is accessible to present day methods (which we may take, at least tentatively, to be the Planck length $L$). Models of this type have problems of reconciliation with principles we hold to be inviolable, such as the observer-dependence of the magnitude of $L$, making its interpretation as a fundamental characteristic of space-time itself somewhat questionable. There are many variants of such models, not always mutually compatible, but all of them have to invoke some sort of deformation of special relativity, whose consequences are not fully understood. A more ambitious approach is to take space-time to be fundamentally discrete, not embedded physically in a manifold (but of course, mathematically, so embeddable). The fundamental dynamical variables – the counterparts of local fields on manifolds – are then to be defined on the points of a discrete set (and only on them, because that is all there is) endowed with a notion of ordering, a lattice, derived perhaps from the metric in an embedding manifold. It will obviously be a formidable undertaking, already at the classical level, to transcribe the geometry of general relativity to lattice space-times. Different types of lattices have been studied in the literature and it is fair to say that it is too early for a prognosis of where these studies might lead; see the review [1] and the many references therein. To then extend it to the quantum domain will be an even greater challenge.
An idea of the conceptual and technical difficulties that have to be overcome may be had from, for example, the reviews [2] and [3] (written some years ago; the situation is much the same today). There are very good reasons, nevertheless, for hoping that the successful incorporation of a fundamental length in physics at very short distance (reciprocally, very large momentum) scales can be a major step forward in an eventual quantisation of gravity. First, it is the gravitational constant $G_{N}$ itself that allows the introduction of a natural lattice structure for space-time through $L$. Indeed, in a world without gravitation, the Planck length vanishes; conversely and more speculatively, if $L$ is taken to 0 keeping $c$ and $\hbar$ finite, the space-time manifold will tend to flatness. Secondly, in a quantum or semiclassical framework, $L^{-1}$ provides a momentum and energy cutoff that makes possible the calculation of finite gravitational effects, conventionally unrenormalisable. (Footnote 2: The first attempts at using a discrete space-time to make perturbatively finite (special) relativistic quantum field theory date from the 1930s ([1], section 2), well ahead of the time standard perturbative renormalisation theory came to be developed. Subsequently, much effort has gone into exploring the lack of renormalisability of gravitational interactions and its possible cures.) And, finally, current attempts at quantising the geometry of space-time, even without a fundamental length introduced ab initio, seem to point to the need for imposing a discrete structure (as in loop gravity for instance).

The Einstein equations have not only a left side originating in the geometry of the space-time manifold ${\cal M}$ but also a right side concerned with matter. At the fundamental level – the level at which ideas such as quantisation can be meaningfully addressed – this takes the form of contributions of various matter fields to the energy-momentum density.
The description of matter fields involves only flat space-time, the tangent spaces to ${\cal M}$, essentially because of the equivalence principle (as briefly recollected in the next section). Therefore any discretisation of ${\cal M}$ will entail a discretisation of flat Minkowski space $M$ in its guise as a tangent space and it becomes necessary then to study lattices in $M$ and to ask whether matter fields can be defined on them in physically satisfactory and mathematically unambiguous terms. To take a first serious look at this particular question – a necessary step and one which is much less daunting than the discretisation of gravity – is the purpose of this article. Although there are some mathematical issues which are yet to be fully resolved, the first answers are affirmative: ‘local’ matter fields can be defined on regular lattices, without contradicting physics as we know it at length scales that are accessible today, but are still enormously large in comparison to $L$. As is to be expected, intriguing new physics, with possible (and perhaps desirable) cosmological consequences, does emerge at the scale of $L$. Even though the discrete space-time envisaged in this paper is not to be thought of as a subset of a true physical continuum space-time, the working out of the discrete physics will often use some of the very detailed knowledge we have of the continuum physics as a model. For that reason, there will be occasional recapitulations of well-known facts from standard theory, sometimes in a more general form than we are used to. This is especially the case in the next section on the relevance of the equivalence principle in defining matter fields and in the summary (section 5) of their construction as representations of the group of special relativity. 
Also, for the benefit of readers who may not be interested in the unavoidable but often tedious arguments in the main part of the paper, I have given in the next few sections a schematic outline of some of the novel points that arise. Throughout this paper, ${\cal M}$ and $M$ have dimension (1,3) with the metric having the signature $(+,-,-,-)$.

## 2 Matter fields: tangent spaces and a general equivalence principle

In general relativity, the bridge between the two sides of the Einstein equation is the classical equivalence principle, usually stated simply as the equality of the inertial mass and the gravitational mass. A natural starting point for the description of matter within general relativity is therefore the equivalence principle itself, suitably reinterpreted where necessary. (Footnote 3: Rewritings of the standard field equations, e.g., the Dirac equation, so as to make them consistent with GR have a long history; most of them do not explicitly invoke the equivalence principle.) In this section, I recall in a qualitative way the relevant issues. At a fundamental ('elementary particle') level, there are two steps involved. Firstly, matter fields contributing to the energy-momentum are defined not globally on ${\cal M}$ but locally at each point $x\in{\cal M}$ – specifically, on the tangent space $T_{x}{\cal M}=:M_{x}$, which is by definition flat and whose spatial projection is the local inertial space at $x$. The reason of course is that an invariant mass is an attribute associated with the Poincaré group $P$ operating on flat Minkowski space.
Thus a complete theoretical understanding of the principle as first formulated by Einstein had to wait till Wigner's landmark paper ([4], see also [5]), according to which an invariant mass is one of the parameters labelling irreducible unitary representations (URs for short) of the Poincaré group $P$. (Footnote 4: Several variants of this foundational principle, ranging from the descriptive to the abstractly mathematical, are still current in the literature, more than a century after Einstein first proposed his version of it.) The special theory is not primarily just a weak field approximation to the general theory but a constituent part of it that holds exactly in every tangent space, thereby defining precisely the intuitive notion of an inertial frame; that is the content of the equivalence principle. The formulation given here is a modern, physically and mathematically sharp version of the original Einstein formulation that takes account of some key insights that came later, such as the significance of symmetries in quantum theory. (Precise characterisations of this and other relevant groups will be given below as and when they are needed.) It follows that a logical precondition for the formulation of the equivalence principle is a precise notion of an equivalence of the tangent spaces as $x$ is varied, and a consequent criterion for the independence of the mass of a test particle of the point $x$ in space-time where it is measured. It is to be noted that this condition requires only the special theory for its formulation. Only after its validity is accepted can the second step, the assertion of the equality of this common inertial mass with the gravitational mass, which is otherwise a property deriving from the global geometry of ${\cal M}$, be taken.
As everyone knows, Wigner's main result is that an irreducible UR of the universal covering group $\bar{P}$ of $P$, satisfying certain physically reasonable conditions (absence of tachyons and boundedness of helicities), is characterised by a fixed non-negative real number (the square of the mass) and a fixed set of integral or half-integral helicities ('spin'). In other words, if we ignore 'internal quantum numbers' ('charges'), such an irreducible UR of $\bar{P}$ (a Wigner UR in short) can be identified with an elementary particle with a given mass (including vanishing mass) and spin. The Wigner construction has been much studied in the literature (and will be very briefly reviewed below as it is the basis on which the description of matter in discrete space-times rests). Here I make two qualitative observations which can be expected to have a bearing on the general philosophy of matter fields in a theory of gravity already in the 'continuum limit'.

1. Wigner's construction is the foundation of a quantum description of matter. The Hilbert space ${\cal H}^{1}$ on which an irreducible UR of $\bar{P}$ is realised is the 1-particle state space. An $n$-particle state then belongs to the $n$th tensor power $\otimes^{n}{\cal H}^{1}=:{\cal H}^{n}$ (symmetrised or antisymmetrised according to whether the particle is a boson or a fermion) and a general state in a quantum field description of the system is a linear combination of tensor product states (with decay conditions on the coefficients to ensure that they form a Hilbert space ${\cal H}$) – creation and annihilation operators are essentially operators which map ${\cal H}^{n}$ to ${\cal H}^{n+1}$ and ${\cal H}^{n-1}$ respectively. This is standard and the point of bringing it up is to highlight the fact that in a putative quantum theory of gravity, the description of matter à la Wigner can be expected to remain valid.
Indeed, we have no other way of characterising matter at the quantum level, a recognition that is implicit in all current work.

2. To give meaning to the notion of the identity of a matter particle independently of its gravitational environment – to be able to say, as is commonly done, that an electron is an electron whether in a vanishingly weak gravitational field or near a black hole – it is necessary to formulate the equivalence principle somewhat more generally. A reasonable and obvious generalisation would be to postulate that the Wigner UR $U$ corresponding to a given particle (not just its partial attribute, the mass) is abstractly the same at all points $x\in{\cal M}$. The qualification 'abstractly' is necessary since, even though the group $\bar{P}_{x}$ is, abstractly, the same at all $x$, as a transformation group on the tangent space $M_{x}$ it depends on $x$, and so does the UR $U_{x}$ as the unitary group of ${\cal H}^{1}_{x}$ (where I have distinguished structures localised at $x$ by a subscript). This $x$-dependence of the representation as a whole (unlike its Casimir, the mass, which is supposed constant over ${\cal M}$) leads to an immediate natural linkage between matter fields and gravity: specifying a connection (or some other equivalent object such as the covariant derivative) on the tangent bundle of ${\cal M}$ allows the parallel transport of a frame in $M_{x}$ to $M_{y}$ (together with their induced Minkowski metric), and hence of their group of isometries, $\bar{P}_{x}$ to $\bar{P}_{y}$, and hence of a given UR, $U_{x}$ to $U_{y}$. Since these representations are abstractly identical, they will correspond to the same mass and spin. The equivalence can in fact be stated in a form respecting the quantum nature of the Wigner description of matter: there is a unitary map $W(x,y):{\cal H}_{x}^{1}\rightarrow{\cal H}_{y}^{1}$ such that $W(x,y)U_{x}=U_{y}W(x,y)$.
The setting best suited to the exploration of the interrelationship between gravity and matter – equivalently, of the geometry of ${\cal M}$ and the representation theory of the isometries of $M_{x}$ – is thus that of the tangent bundle over the space-time manifold. Its full working out will be a major undertaking. What is already clear from the above very preliminary remarks is that the matter side of the equation is not only an essential part of such a project, but may even be a good starting point in the search for a quantum theory of gravity, much as the formulation of quantum electrodynamics is most profitably approached by starting with the invariance properties of charged fields. In any case, independently of how such a programme will work out, one cannot avoid dealing with the representation theory underpinning the description of matter. In the continuum case, the representations are very well understood from Wigner’s work. Their significance as the foundation of a quantum definition of matter is also well known. It is less well appreciated that in that role it is central to the description of matter in the general theory as well, by enabling the original formulation of the equivalence principle to be generalised and completed as indicated above. The purpose of the present work is however more limited: to show how that description can be adapted to the case of space-time being discrete and to bring out the ways the results deviate from standard wisdom. The most crucial of these deviations will turn out to result in a departure from causal propagation of massive matter at extremely high (Planck scale?) speeds, perfectly acceptable in our present state of knowledge, perhaps even desirable in current models of very early cosmology. 
## 3 Discrete Minkowski space and its symmetries

The core of this paper is concerned with the question of whether and how elementary matter fields can be associated with URs of the restriction of the Poincaré group to a suitably chosen discrete subgroup. Answering the question will involve the following steps:

1. The choice of a discretisation of $M$, i.e., ${\bf R}^{4}$ with the Minkowski metric. The simplest choice is the hypercubic lattice ${\bf Z}^{4}$ of points in ${\bf R}^{4}$ with integral coordinates with respect to a fixed set of axes. This means in particular that space and time coordinates are related by the speed of light, assumed to be a fundamental constant and put equal to unity. (Or, equivalently, the speed of light is defined as the ratio of the spatial and temporal lattice spacings – but measured in what units?) Thus there is a unique lattice spacing which is the unit of length and time, also put equal to unity in most of what follows. (Where necessary, the Planck length will be brought in to play that role.) From the point of view of symmetries, this is the simplest choice; other regular lattices will not pose any conceptual problems but will add to the technical and computational burden. Random lattices are excluded from consideration since they will entail randomly distributed lattice spacings and cannot naturally accommodate a unique fundamental length. A distance function is defined on ${\bf Z}^{4}$ by restricting the Minkowski metric in ${\bf R}^{4}$ to its integral points: if $X=\{X_{\mu}\in{\bf Z};\mu=0,1,2,3\}$ is a point of ${\bf Z}^{4}$, its (length)$^{2}$ is given by $X_{\mu}X_{\mu}:=X_{0}^{2}-X_{1}^{2}-X_{2}^{2}-X_{3}^{2}=:X^{2}$ (and similarly for the (distance)$^{2}$ between two points $X$ and $Y$). The discrete set ${\bf Z}^{4}$ together with this distance function is our discrete Minkowski space and will be denoted by $M({\bf Z})$ in what follows.

2. The identification of the lattice Poincaré group.
In the continuum, the relativity group is $P({\bf R})=L({\bf R})\vec{\times}T({\bf R})$, where $L({\bf R})=SO(3,1,{\bf R})/\{\pm 1\}$ is the connected (proper, orthochronous) Lorentz group, $T({\bf R})$ $(\sim{\bf R}^{4})$ is the translation group (in a more explicit notation than in the introductory remarks) and $\vec{\times}$ denotes the semidirect product, the arrow indicating that (the quotient group) $L$ operates on (the normal subgroup) $T$. The connectedness requirement on $SO(3,1)$ (the quotienting by $\{\pm 1\}$) keeps out reflections, which are not, observationally, symmetries of matter field interactions among themselves even though they leave the metric invariant. The discrete Lorentz group is then the subgroup of $L({\bf R})$ obtained by restricting every $4\times 4$ matrix $\lambda\in L({\bf R})$ to have integral entries: $L({\bf Z}):=SO(3,1,{\bf Z})/\{\pm 1\in SO(3,1,{\bf Z})\}$. Our discrete Poincaré group $P({\bf Z})\subset P({\bf R})$ is therefore the semidirect product of this group with the discrete translation group $T({\bf Z})$ $(\sim{\bf Z}^{4}\subset{\bf R}^{4})$. It is an interesting fact that, while $L({\bf Z})$ (generalised in the obvious manner) is the 2-element group in 1+1 dimensions, it is an infinite group in all higher dimensions.

3. Determination of the appropriate representations of $P({\bf Z})$. The guiding spirit in identifying and constructing the representations will be the work of Wigner on the corresponding problem for $P({\bf R})$. Technically, this will be the major concern of the present paper. Here I limit myself to describing the physics and mathematics background to the identification of the relevant representations. The starting point is the recognition (due, also, to Wigner [6]) that the group of symmetries of a quantum system is represented on its state space by projective URs.
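Before turning to representations, steps 1 and 2 above can be made concrete in a short numerical check. The boost-like matrix below is an illustrative choice constructed for this sketch, not an example taken from the text:

```python
# Sketch (illustrative matrix, not from the text): the lattice Minkowski
# norm on M(Z) and an explicit boost-like element of the discrete Lorentz
# group L(Z), checked against its defining conditions.
import numpy as np

eta = np.diag([1, -1, -1, -1])             # metric, signature (+,-,-,-)

def minkowski_sq(X):
    """(Length)^2 of a lattice point X in M(Z); always an integer."""
    return int(X @ eta @ X)

Lam = np.array([[ 3,  2,  2, 0],
                [ 2,  1,  2, 0],
                [-2, -2, -1, 0],
                [ 0,  0,  0, 1]])           # integer entries

# Defining conditions for L(Z): metric preservation, det = 1, orthochronous.
assert np.array_equal(Lam.T @ eta @ Lam, eta)
assert round(np.linalg.det(Lam)) == 1 and Lam[0, 0] > 0

# Lam maps Z^4 to Z^4 preserving the (length)^2; its powers have ever larger
# entries, consistent with L(Z) being an infinite group in 3+1 dimensions.
X = np.array([2, 1, 0, 0])                  # time-like: (length)^2 = 3
assert minkowski_sq(Lam @ X) == minkowski_sq(X) == 3
```

The same checks apply verbatim to any candidate integral matrix, which makes them a convenient membership test for $L({\bf Z})$.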
Wigner ([4]) first establishes the result that every continuous projective UR of $P({\bf R})$ lifts to a continuous UR of its universal covering group $\bar{P}({\bf R})=\bar{L}({\bf R})\vec{\times}T({\bf R})$, with $\bar{L}({\bf R})=SL(2,{\bf C})$; i.e., given a projective UR of $P({\bf R})$, we can find a UR of $\bar{P}({\bf R})$ whose projection onto the quotient group $P({\bf R})$ is the given projective UR. This key result has the following ingredients: i) though $T({\bf R})$ has nontrivial projective URs (i.e., projective URs which are not equivalent to URs of itself), they do not extend to the whole of $P({\bf R})$ as nontrivial projective URs and can be ignored; ii) though semidirect product Lie groups $G\vec{\times}A$ with $A$ abelian can in general have nontrivial projective URs which restrict to $A$ as URs, this does not happen for $P({\bf R})$ on account of the semisimplicity of $L({\bf R})$; and iii) every projective representation (not necessarily unitary) of $L({\bf R})$ lifts to a (linear) representation of its universal cover $SL(2,{\bf C})$, again because of semisimplicity. The realisation of an irreducible PUR for a particle of a given mass and a given spin, integral or half-odd-integral, then involves the choice of a mass shell (an orbit of $L({\bf R})$ in momentum space) and a finite dimensional (necessarily non-unitary) representation of $SL(2,{\bf C})$ which determines the spin. These results, in particular the assertion iii), are obviously specific to Lie groups and are therefore not directly applicable to the projective URs of their discrete subgroups. There are however theorems relating projective representations (not limited to projective URs) of any group to linear representations of related ‘universal’ groups other than the universal cover. 
Specifically, given a group $G$, we can construct a group $\hat{G}$, called a universal central extension of $G$, of which $G$ is a quotient group, and having the property that every projective representation of $G$ is the projection of a linear representation of $\hat{G}$. (Footnote 5: Even for Lie groups, $\hat{G}$ is not necessarily $\bar{G}$.) Questions regarding projective representations of $G$ are most efficiently addressed in the language of the theory of central extensions of $G$ by appropriate abelian groups and the associated group cohomology theory. For a clear and thorough treatment of the topic, including the construction of universal central extensions, see Raghunathan ([7]) and, for a physicists' version with applications to many examples, see Divakaran ([8]). The theory can in fact be used to reformulate the process of quantisation of a system entirely in terms of its symmetries in a manner free from commonly encountered ambiguities ([9]); in particular, the superselection structure of the state space is seen to be of cohomological origin, a fact which may play a role in the specification of particle states in discrete relativity (see section 7 below). Wigner's work ([4]) was of course the first to determine the state space of an elementary particle explicitly as a projective UR of $P({\bf R})$, but he did not connect it to the consequent superselection rule, that of univalence. The statements i) to iii) above are in fact specialisations of properties of universal central extensions to Lie groups having different structural properties. In particular, iii) follows from a theorem which says that a connected semisimple Lie group has a unique universal central extension and that it is the same as its universal cover; so equivalence classes of its projective representations are classified by the Pontryagin dual (the group of 1-dimensional representations or characters) of its fundamental group.
This is the reason why it is legitimate to work with $\hat{L}({\bf R})=\bar{L}({\bf R})=SL(2,{\bf C})$. (Footnote 6: Many commonly met groups in physics serve as examples of the distinction between $\hat{G}$ and $\bar{G}$. Thus, the universal cover of the 2-dimensional rotation group $SO(2)$ is the real line but its universal extension is itself (it has no nontrivial projective UR – hence no non-integral spin), while the vector (translation) group ${\bf R}^{n},\,n>1$, has nontrivial projective URs but is its own universal cover. The indiscriminate substitution of $G$ by $\bar{G}$, rather than by the always correct $\hat{G}$, has led to much misunderstanding in the physics literature; see [8].) To deal with the discrete groups of interest to us with anything like this degree of completeness is not a feasible option, primarily because of the lack of a physically satisfactory criterion (such as continuity) for acceptable representations. And it is, to a great extent, unnecessary for our purpose; physically, it is sufficient to note first that the discrete groups of our interest are subgroups of the corresponding Lie groups by construction and then to find those representations which, in the limit, approach in a well-defined sense the physically acceptable representations of the embedding Lie groups. That is possible thanks to the fact that $L({\bf Z})$ and $\hat{L}({\bf Z})$, as subgroups of the embedding Lie groups, have a property known as Borel density which enables them to inherit several useful results regarding their representations from the Lie groups (see below for details). This is in fact one of the mathematical inputs that make our project at all feasible.

4. Interpreting representations as particles. The implementation of the programme outlined above presents some (though surprisingly few) serious mathematical obstacles. The resulting physical picture too, naturally, differs in some significant respects from the continuum theory.
Firstly, since the momentum space (the space of characters of the translation group) is now the 4-torus ${\bf T}^{4}$ rather than ${\bf R}^{4}$, momentum itself is defined and conserved only modulo a reciprocal lattice vector (which is the same as the momentum cut-off, the Planck momentum by choice). This has consequences somewhat like the familiar momentum space properties of an electron moving in a crystal; in particular, the mass shell is the Minkowski metric analogue of the Fermi surface of an empty lattice. Secondly, the spin of a representation can no longer be defined generally as an attribute of rotation invariance, as the discrete 'rotation' group, being a discrete subgroup of the compact Lie group $SO(3,{\bf R})$, is a finite group with a finite set of inequivalent irreducible URs. But this is not a serious handicap since it turns out (essentially because of the Borel-density property of the discrete $SL(2)$) that spin, both integral and half-integral, can be defined by reference to the Lorentz group alone (in the continuum, the two ways of defining spin are of course equivalent). (Footnote 7: Another respect in which the identification of representations with particle states in the discrete world differs from continuum relativity is that they may apparently be chosen to be (highly) reducible. Whether this freedom is physically significant is at present unclear; see the discussion in section 7 below.) If we accept these deviations from the received wisdom of continuum relativity – which, we shall see, are not in contradiction with our current state of knowledge – projective URs of $P({\bf Z})$, of a certain general type, have a perfectly reasonable interpretation as elementary particles. Of the deviations from standard lore, the more dramatic are those having their origin in the compactness of the momentum space.
These include in particular possible apparent violations of energy-momentum conservation in elementary processes – the analogue of the Umklapp processes of crystal physics – involving energies of the order of the Planck mass. More intriguingly, the distinction between time-like and space-like momenta is no longer an invariant concept. 'Massive' orbits of the discrete Lorentz group in momentum space do not have an invariant mass associated to them and have tachyonic branches that begin to sprout around the Planck scale: the (energy-momentum)$^{2}$ can be negative even when the (rest-mass)$^{2}$ (which, being a zero-momentum attribute, is a valid concept) is positive. The light cone itself is well-defined as a closed hypersurface in the momentum space ${\bf T}^{4}$; zero mass orbits lie within it and have no tachyonic branches. (This is a result of independent and fundamental importance, as described in a separate added note.) The general scheme for the construction of URs following from these considerations will be described later on with a degree of mathematical detail, as well as, qualitatively, some of the unfamiliar physical consequences of discrete relativity. What is certain is that the exotic features that emerge have no impact on elementary particle phenomena at energies presently accessible to experiments, or many orders of magnitude higher; they will, however, have cosmological implications, which also will be touched upon at the end. The main results of this work, then, hold no bad surprises: the description of elementary matter as founded on special relativity survives discretisation, subject to some reinterpretations which are capable of being tested. But these are only the first steps and much still needs to be done.
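The loss of an invariant time-like/space-like distinction can be illustrated numerically. The sketch below is an illustration constructed for this purpose (the reciprocal lattice period is set to $2\pi$ and the integral boost matrix is an assumed sample, none of it taken from the text): a time-like momentum, boosted and then reduced to the fundamental domain of the torus, acquires a space-like representative.

```python
# Sketch (illustration, not from the text): with momentum defined only
# modulo a reciprocal lattice vector (period 2*pi here), an integral
# Lorentz boost can carry a time-like momentum representative to a
# space-like one after reduction -- a "tachyonic branch" in miniature.
import numpy as np

eta = np.diag([1, -1, -1, -1])
Lam = np.array([[ 3,  2,  2, 0],
                [ 2,  1,  2, 0],
                [-2, -2, -1, 0],
                [ 0,  0,  0, 1]])            # a sample integral Lorentz matrix

def reduce_torus(p):
    """Reduce each component to the fundamental domain [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def p_squared(p):
    return p @ eta @ p

p = np.array([2.5, 0.0, 0.0, 0.0])           # time-like: p^2 = 6.25 > 0
q = reduce_torus(Lam @ p)                    # boost, then wrap (Umklapp-like)

print(p_squared(p) > 0, p_squared(q) < 0)    # True True
```

At small momenta (far inside the fundamental domain) no wrapping occurs and $p^{2}$ is preserved, consistent with the claim that these effects only appear near the cutoff scale.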
Moreover, beyond the continued validity of the Wigner definition of elementary particles, a lattice structure for space-time as put forward here will have other macroscopic manifestations, 'macroscopic' in the present context meaning (here as elsewhere in this article) length scales characterising the structure and interactions of the currently known particles and greater: issues such as possible deviations from isotropy, the accommodation of truly macroscopic (in dimension and/or mass) classical systems, etc. They have been much written about in the literature (see [1] and the many references therein) and this paper will not have much to add. These introductory sections are meant to bring out, more or less qualitatively, the following points: a) the critical importance of a good description of matter fields – including a suitable formulation of the equivalence principle – before we can think of bringing together quantum mechanics and general relativity; b) the feasibility of such a programme in a discrete space-time; and c) a brief foretaste of the necessary (but not widely known in the physics literature) mathematical material, described in the next few sections, that it entails.

## 4 The discrete Lorentz and Poincaré groups and their central extensions

In the rest of this paper, the connected real Lorentz group $L({\bf R})$ will be called simply the Lorentz group without any qualifiers and its discrete subgroup $L({\bf Z})$ the discrete Lorentz group (and correspondingly for the Poincaré groups). As noted in section 3, the semisimplicity of the Lorentz group has the consequence that the universal central extensions of $L({\bf R})$ and $P({\bf R})$ are in fact their universal covering groups and hence that all their projective representations can be obtained as projections of true (linear) representations of the universal covers.
To repeat, this is the reason why $L({\bf R})$ is replaced by $\bar{L}({\bf R})=SL(2,{\bf C})$ in the determination of physical (i.e., projective) URs of the full symmetry group of special relativity and, hence, for their interpretation as the state spaces of elementary particles. Lacking the completeness of such Lie-theoretic concepts and results, the method followed here has the limited aim of finding certain finite-dimensional projective representations of $L({\bf Z})$ (and, eventually, projective URs of $P({\bf Z})$) which are inherited naturally from those of $L({\bf R})$. In other words, we shall look for a certain subgroup $\hat{L}({\bf Z})$ of $SL(2,{\bf C})$ having the property that every projective representation of $L({\bf Z})$ that is the restriction of a projective representation of $L({\bf R})$ lifts to a linear representation of $\hat{L}({\bf Z})$. This requirement is met if $\hat{L}({\bf Z})$ has a central ${\bf Z}_{2}$ subgroup such that $\hat{L}({\bf Z})/{\bf Z}_{2}=L({\bf Z})$, exactly as in the corresponding Lie group case where it is a standard construction found in textbooks (see for example [10]). In fact, the result in the discrete case is a direct transcription of this standard construction, which I therefore recall. Denote by $H(2,{\bf C})$ the real vector space of $2\times 2$ complex hermitian matrices and by $\tau_{i},i=1,2,3$, the Pauli spin matrices. The association of $x\in M$ to the matrix $\mathrm{x}:=x_{\mu}\tau_{\mu}$ ($\tau_{0}=$ unit matrix), with $x_{\mu}=(1/2)\mathrm{tr}(\tau_{\mu}\mathrm{x})$, is a bijection of $M$ and $H(2,{\bf C})$ such that $x^{2}=\mathrm{det}(\mathrm{x})$. $SL(2,{\bf C})$ has an action on $H(2,{\bf C})$ preserving the determinant: $\mathrm{x}\rightarrow\alpha\mathrm{x}\alpha^{*},\alpha\in SL(2,{\bf C})$.
Correspondingly, for any $\alpha$ there is a $\lambda\in L({\bf R})$ such that $(\lambda x)_{\mu}\tau_{\mu}=\alpha\mathrm{x}\alpha^{*}$ and hence a homomorphism $SL(2,{\bf C})\rightarrow L({\bf R})$ whose kernel is easily seen to be the central subgroup ${\bf Z}_{2}=\{\pm 1\in SL(2,{\bf C})\}$. Given this explicit identification of $SL(2,{\bf C})$ as the (unique) nontrivial central extension of $L({\bf R})$ by ${\bf Z}_{2}$, its adaptation to the discrete case is straightforward, thanks to the fact that the basis matrices $\{\tau_{\mu}\}$ of $H(2,{\bf C})$ actually have entries in the ring ${\bf Z}[i]$ of Gaussian integers, i.e., complex numbers whose real and imaginary parts are integers. Replacing $M$ by its discrete counterpart $M({\bf Z})$ therefore gives a bijection of $M({\bf Z})$ and $H(2,{\bf Z}[i])$ exactly as in the real case (even though they are no longer vector spaces), $X\rightarrow X_{\mu}\tau_{\mu}=:\mathrm{X}$, such that $X^{2}=\mathrm{det}(\mathrm{X})$. And, as before, i) the group $SL(2)$ over Gaussian integers, $SL(2,{\bf Z}[i])\subset SL(2,{\bf C})$, acts on $H(2,{\bf Z}[i])$ by $\mathrm{X}\rightarrow A\mathrm{X}A^{*}$; ii) given any $A\in SL(2,{\bf Z}[i])$, there is a discrete Lorentz transformation $\Lambda$ such that $(\Lambda X)_{\mu}\tau_{\mu}=A\mathrm{X}A^{*}$, and hence a homomorphism $SL(2,{\bf Z}[i])\rightarrow L({\bf Z})$; and iii) the kernel of this homomorphism is ${\bf Z}_{2}=\{\pm 1\in SL(2,{\bf Z}[i])\}$. In other words, $SL(2,{\bf Z}[i])$ is a nontrivial central extension of $L({\bf Z})$ by ${\bf Z}_{2}$: $SL(2,{\bf Z}[i])/{\bf Z}_{2}=L({\bf Z})$; every finite dimensional representation of $SL(2,{\bf Z}[i])$ will project to $L({\bf Z})$ as a projective representation, either as a true representation or as a 'representation up to sign'. This is the reason why this group is denoted by $\hat{L}({\bf Z})$; it plays the same role in the theory of projective representations in discrete relativity as $\hat{L}({\bf R})=SL(2,{\bf C})$ does traditionally.
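The covering homomorphism can be checked explicitly for a sample matrix. In the sketch below (the particular $A$ is an illustrative choice of mine, not taken from the text), the Lorentz matrix is read off from $\Lambda_{\mu\nu}=\frac{1}{2}\mathrm{tr}(\tau_{\mu}A\tau_{\nu}A^{*})$, which follows from the inversion formula $x_{\mu}=(1/2)\mathrm{tr}(\tau_{\mu}\mathrm{x})$:

```python
# Sketch (illustrative A, not from the text): the covering homomorphism
# SL(2, Z[i]) -> L(Z) via X -> A X A^*, with X = X_mu tau_mu hermitian.
import numpy as np

tau = [np.eye(2, dtype=complex),                     # tau_0
       np.array([[0, 1], [1, 0]], dtype=complex),    # tau_1
       np.array([[0, -1j], [1j, 0]]),                # tau_2
       np.array([[1, 0], [0, -1]], dtype=complex)]   # tau_3

A = np.array([[1 + 1j, 1],
              [1, 1 - 1j]])                          # entries in Z[i]
assert np.isclose(np.linalg.det(A), 1)               # A in SL(2, Z[i])

# Lambda_{mu nu} = (1/2) tr(tau_mu A tau_nu A^*)
Lam = np.array([[0.5 * np.trace(tau[m] @ A @ tau[n] @ A.conj().T)
                 for n in range(4)] for m in range(4)]).real

eta = np.diag([1, -1, -1, -1])
assert np.allclose(Lam.T @ eta @ Lam, eta)           # Lorentz condition
assert np.allclose(Lam, np.round(Lam))               # integral entries
print(np.round(Lam).astype(int))
```

For this $A$ the resulting $\Lambda$ is a boost-like integral matrix; the same two assertions serve as a direct numerical test that $A$ covers an element of $L({\bf Z})$.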
In particular, it will allow for the presence of states of half-integral helicities in discrete quantum relativity, as we shall see below. As noted in section 3 above, the full discrete relativity group – the discrete Poincaré group – is the semidirect product group $P({\bf Z})=L({\bf Z})\vec{\times}T({\bf Z})$, $T({\bf Z})\sim{\bf Z}^{4}$ being the discrete translation group; it is this group whose projective URs we would like to interpret as the state spaces of elementary matter. By the general theory of central extensions, they can be obtained as the (linear) URs of the corresponding central extension $\hat{P}({\bf Z})=SL(2,{\bf Z}[i])\vec{\times}{\bf Z}^{4}$. (Footnote 8: In general, inequivalent central extensions of any group $G$ are classified by the second cohomology group $H^{2}(G)$ with appropriate coefficients. $H^{2}$ of groups having a semidirect product structure $G\vec{\times}A$, $A$ abelian, can have contributions other than just $H^{2}(G)$. They are absent in the continuum case (see section 3) and, for that reason, will be ignored (if they are present; the answer is not known to me) in the discrete case as well. See, however, the discussion in section 7 on the role of irreducibility in the particle interpretation of URs of $\hat{P}({\bf Z})$. For completeness I add that, in the semidirect product, the action of $\hat{L}({\bf Z})$ on $T({\bf Z})$ is the same as that of $L({\bf Z})$, extended by letting the central ${\bf Z}_{2}$ act trivially, as in the real case.)

## 5 Unitary representations of $\hat{P}({\bf R})$ – an overview

For the construction of physically acceptable URs of $\hat{P}({\bf Z})$ the model will be Wigner's method of 'inducing from little groups' for the corresponding Lie group [4]. (Footnote 9: Wigner's pioneering work was given a general treatment, in particular as it applies to semidirect product groups, by Mackey ([11]). There are many subsequent accounts of the method in the literature.)
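As a small sketch of how $\hat{P}({\bf Z})$ composes (constructed for illustration; the pair notation, the sample elements and the helper names are mine, not the paper's), elements $(A,x)$ with $A\in SL(2,{\bf Z}[i])$, $x\in{\bf Z}^{4}$ multiply by the usual semidirect-product rule $(A,x)(B,y)=(AB,\,x+\Lambda_{A}y)$, with $\Lambda_{A}$ the Lorentz matrix covered by $A$:

```python
# Sketch (illustration, not from the paper): the semidirect-product law in
# P^(Z) = SL(2, Z[i]) x| Z^4, with (A, x)(B, y) = (A B, x + Lambda_A y).
import numpy as np

tau = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def lorentz_of(A):
    """Lambda_{mu nu} = (1/2) tr(tau_mu A tau_nu A^*), rounded to integers."""
    L = np.array([[0.5 * np.trace(tau[m] @ A @ tau[n] @ A.conj().T)
                   for n in range(4)] for m in range(4)]).real
    return np.round(L).astype(int)

def compose(g, h):
    (A, x), (B, y) = g, h
    return (A @ B, x + lorentz_of(A) @ y)

A = np.array([[1 + 1j, 1], [1, 1 - 1j]])    # sample element of SL(2, Z[i])
B = np.array([[0, 1j], [1j, 0]])            # another sample (det = 1)
x = np.array([1, 0, 0, 0]); y = np.array([0, 1, 1, 0])

g, h, k = (A, x), (B, y), (A, y)
lhs = compose(compose(g, h), k)             # associativity check
rhs = compose(g, compose(h, k))
assert np.allclose(lhs[0], rhs[0]) and np.array_equal(lhs[1], rhs[1])
```

Associativity holds because the covering map is a homomorphism ($\Lambda_{AB}=\Lambda_{A}\Lambda_{B}$), which is what the final assertion exercises numerically.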
For the positive mass URs the summary given here is based on the description in [8] and, for the massless URs, the treatment below of the group-theoretic origin of the subsidiary condition appears to be new. The generality of the method allows room for its adaptation, with suitable adjustments, to the discrete case. More importantly, the method naturally highlights the physical attributes, mass and (Lorentz) helicity, that allow a direct association of elementary quantum fields with URs, thereby serving as a model for deciding which URs of $\hat{P}({\bf Z})$ can be considered ‘physical’. The following is a compressed description of the essential elements of the Wigner construction in the continuum case. The momentum space is the dual group of the translation group $T(\bf R)$, isomorphic also to ${\bf R}^{4}$ and denoted by $M^{*}$ with coordinates $\\{p_{\mu}\\}$. Let $O$ be an orbit of $\hat{L}(\bf R)$ in $M^{*}$ for the natural action of $L(\bf R)$, lifted to $\hat{L}(\bf R)$ by letting the central ${\bf Z}_{2}$ act trivially (footnote 10: The action of $SL(2,\bf C)$ on $M^{*}$ will often be denoted as $p\rightarrow\alpha p$ (as though $\alpha\in L(\bf R)$) and likewise in the corresponding discrete case. No misunderstanding is likely to arise.), and let $S$ be the little group (stabiliser) of any point in $O$. $O$ can then be identified with $\hat{L}({\bf R})/S$. Suppose given a finite dimensional representation $\rho$ of $\hat{L}(\bf R)$ on a Hilbert space $V$ with the property that the restriction of $\rho$ to $S$ is unitary. Denote by $\pi$ the projection of $\hat{L}(\bf R)$ onto $O$ and by $\sigma$ a section of $\pi$, i.e., a map $O\rightarrow\hat{L}(\bf R)$ such that $\pi(\sigma(p))=p$ for all $p\in O$. Finally, let ${\cal H}_{O,V}$ be the space of vector valued functions $\phi,\psi:O\rightarrow V$, square-integrable with respect to the (positive) $\hat{L}(\bf R)$-invariant measure $\omega$ on $O$.
On ${\cal H}_{O,V}$, define a bracket $\langle\;,\;\rangle$ by $\langle\phi,\psi\rangle=\int_{O}d\omega(p)\langle\rho(\sigma(p)^{-1})\phi(p),\rho(\sigma(p)^{-1})\psi(p)\rangle_{V},$ $\langle\;,\;\rangle_{V}$ being the scalar product on $V$. If $\sigma$ and $\sigma^{\prime}$ are two sections of $\pi$, it follows from $\pi(\sigma(p))=\pi(\sigma^{\prime}(p))\;(=p)$ that $\sigma(p)^{-1}\sigma^{\prime}(p)$ is in $S$. And, since $\rho$ restricts to $S$ unitarily, the bracket $\langle\;,\;\rangle$ is independent of the section used to define it, making it a scalar product. Moreover, it follows from the positivity of the measure that the norm of $\phi$ vanishes if and only if $\phi=0$ identically, making (the completion of) ${\cal H}_{O,V}$ a Hilbert space. With these notions in place, one verifies first that the action of $\hat{P}(\bf R)$ on ${\cal H}_{O,V}$ given by $(U(\alpha,x)\phi)(p):=\chi_{p}(x)\rho(\alpha)\phi(\alpha^{-1}p),\hskip 14.22636pt\alpha\in\hat{L}({\bf R}),x\in T({\bf R}),$ where $\chi_{p}(x)=\exp(ip_{\mu}x_{\mu})$ is the character of $T(\bf R)$ corresponding to $p$, is a representation. It is in fact unitary because: i) we have $\|U(\alpha,x)\phi\|^{2}=\int d\omega(p)\|\rho(\sigma(\alpha p)^{-1})\rho(\alpha)\phi(p)\|^{2}_{V}$ using the invariance of the measure under $p\rightarrow\alpha p$; ii) $\sigma(\alpha p)$ and $\alpha\sigma(p)$ have the same projection onto $O$ and hence differ by an element of $S$, implying $\rho(\sigma(\alpha p)^{-1})\rho(\alpha)=\rho(s)\rho(\sigma(p)^{-1})$ for some $s\in S$; and iii) $\rho$ is unitary on $S$. It is irreducible whenever $V$ is irreducible under $\hat{L}(\bf R)$. In the language of induced representations, it is the UR induced by the (unitary) restriction of $\rho$ to $S\subset\hat{L}(\bf R)$. Physically, it is helpful to refer to it as the UR supported on a given mass shell (the orbit $O_{m}$) and ranging over a given spectrum of helicities (the representation $V$).
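For reference, steps i)–iii) can be chained into a single display (same notation as above; the element $s\in S$ depends on $p$ and $\alpha$):

```latex
\|U(\alpha,x)\phi\|^{2}
  = \int_{O} d\omega(p)\,\big\|\rho(\sigma(\alpha p)^{-1})\rho(\alpha)\phi(p)\big\|^{2}_{V}
  = \int_{O} d\omega(p)\,\big\|\rho(s)\,\rho(\sigma(p)^{-1})\phi(p)\big\|^{2}_{V}
  = \|\phi\|^{2}.
```

The first equality is the change of variables $p\rightarrow\alpha p$, the second uses $\rho(\sigma(\alpha p)^{-1})\rho(\alpha)=\rho(s)\rho(\sigma(p)^{-1})$, and the third the unitarity of $\rho$ on $S$.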
When $O$ is a positive (mass)${}^{2}$, positive (or negative) energy mass shell of mass $m$ – i.e., the orbit $O_{m}$ through any point $p$ with $p^{2}=m^{2}$, in particular through $(m,0,0,0)$ – the stabiliser $S_{m}$ is isomorphic to $SU(2)$, and every irreducible representation $\rho$ of $\hat{L}(\bf R)$ restricts to $S_{m}$ as an irreducible UR; denote by $U_{m}$ the UR of $\hat{P}({\bf R})$ constructed from it as above on the Hilbert space ${\cal H}_{m}$ (dropping the subscript $V$). Hence the helicity spectrum of $U_{m}$ is determined equivalently and alternatively by the $\hat{L}(\bf R)$ or $S_{m}(=SU(2))$ transformation properties of the functions $\phi$. The spin determines the helicity; in particular the number of distinct helicity states in $U_{m}$ is $\dim V$. Moreover, the condition that $S_{m}$ fixes $p$ translates as the condition $(U_{m}(s,x)\phi)(p)=\chi_{p}(x)\rho(s)\phi(p)$ for all $s\in S_{m}$. This, or rather its Lie algebra version, is the ‘invariant wave equation’ or the free field equation corresponding to the UR $U_{m}$ of $\hat{P}({\bf R})$ ([5]). In the discrete case, it is (the discrete form of) this condition which will replace the wave equation. Since all known elementary particles have non-negative (mass)${}^{2}$ and finite sets of helicities, it is customary to reject URs of $\hat{P}(\bf R)$ not having these two properties as unphysical. As will be seen below, in the discrete context there is no ‘invariant mass’ – a fact that has to be physically interpreted with care – though the notion of a rest mass $m$ still makes sense; a physical UR will then have to be characterised as one supported on an orbit passing through $(m,0,0,0)$ with $m^{2}\geq 0$ and ranging over a finite dimensional representation of $\hat{L}(\bf Z)$. The condition on the helicity spectrum is a powerful one already at the continuum level: in the cases where $S$ is a non-compact (Lie) group, it puts strong restrictions on its admissible URs from which the induction process may be initiated (as also will be seen below).
Apart from the one-point orbit that is the origin of the momentum space $M^{*}$, there are two nontrivial $m=0$ orbits, the open upper and lower half light cones in $M^{*}$. To construct physical URs of $\hat{P}(\bf R)$ supported on the upper half light cone $C_{+}$ for example, consider the stabiliser $S_{0}$ of the representative point $(p_{0},0,0,p_{0}),\;p_{0}>0$, consisting of the upper triangular matrices of $SL(2,\bf C)$ which we may parametrise as $s(\theta,z)=\left(\begin{array}[]{cc}\exp(i\theta)&z\exp(-i\theta)\\\ 0&\exp(-i\theta)\end{array}\right),\;0\leq\theta<2\pi,\;z\in{\bf C}.$ The group law in $S_{0}$: $s(\theta_{1},z_{1})s(\theta_{2},z_{2})=s(\theta_{1}+\theta_{2}\;\mathrm{mod}\,2\pi,z_{1}+z_{2}\exp(2i\theta_{1}))$, identifies it as the Euclidean group in 2 dimensions $E(2,{\bf R})=SO(2,{\bf R})\vec{\times}{\bf R}^{2}$, with ${\bf R}^{2}=\\{(\mathrm{Re}\,z,\mathrm{Im}\,z)\\}$ on which the $SO(2)$ subgroup acts as the 2-fold cover of the circle group; physically this $SO(2)$ is in fact the group of rotations about the 3rd axis (more generally the direction of the momentum vector) (footnote 11: That it covers the circle twice is the reason why massless particles can have half-integral helicities. The Lie algebra of our $E(2)$ has $J_{3},J_{1}+K_{2},J_{2}-K_{1}$ as a basis, the $J$s and the $K$s being generators of rotations and boosts in the standard physics terminology. Obviously, the ${\bf R}^{2}$ subgroup does not relate to physical translations.). Its characters in a (massless) UR of $\hat{P}({\bf R})$ constitute what is generally called its helicity spectrum. In contrast to the massive case, this (Lorentz) helicity cannot be defined through the full rotation group, which is natural since a massless state cannot be transformed to rest.
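The quoted group law, and the fact that $s(\theta,z)$ fixes the matrix of the representative point $(p_{0},0,0,p_{0})$, can be confirmed by direct matrix multiplication. A floating-point sketch (tolerance-based; the parameter values are arbitrary illustrative choices):

```python
import cmath

def s(theta, z):
    """The upper triangular stabiliser element s(theta, z)."""
    return [[cmath.exp(1j*theta), z*cmath.exp(-1j*theta)],
            [0.0, cmath.exp(-1j*theta)]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

t1, z1, t2, z2 = 0.7, 1.0 + 2.0j, 1.9, -0.5 + 0.25j
lhs = mul(s(t1, z1), s(t2, z2))
rhs = s(t1 + t2, z1 + z2*cmath.exp(2j*t1))
assert close(lhs, rhs)                      # the E(2, R) group law

# s(theta, z) stabilises p = (p0, 0, 0, p0), i.e. the matrix p_mu tau_mu:
p0 = 1.0
P = [[2*p0, 0.0], [0.0, 0.0]]
A = s(t1, z1)
Astar = [[A[0][0].conjugate(), A[1][0].conjugate()],
         [A[0][1].conjugate(), A[1][1].conjugate()]]
assert close(mul(mul(A, P), Astar), P)      # A P A* = P
```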
Now, a finite dimensional UR of $E(2,\bf R)$ is necessarily non-faithful; indeed the only such URs are characters of $SO(2,\bf R)$ and have the normal subgroup ${\bf R}^{2}$ as kernel (footnote 12: A thorough account of the unitary representation theory of $E(2,\bf R)$ is available in [12].). Hence $\hat{L}(\bf R)$, being simple, cannot have any finite dimensional representation restricting to $E(2,\bf R)$ unitarily. This in turn means that the procedure of inducing from the stabiliser no longer works as directly as in the massive case and has to be modified suitably. The well known way to do this ([12], [5]) is to specialise the space ${\cal H}_{0,V}$ of functions $\phi:C_{+}\rightarrow V$ as defined earlier to the subspace ${\cal H}^{\prime}_{0}$ (dropping again the subscript $V$) on which the action of ${\bf{R}}^{2}(\subset E(2,{\bf R})\subset SL(2,\bf C))$ is trivial: $\rho(r)\phi(r^{-1}p)=\phi(p),\>r\in{\bf R}^{2}.$ The Lie algebra form of this condition encodes the familiar subsidiary conditions satisfied by massless fields. A character of $SO(2,{\bf R})=E(2,{\bf R})/{\bf{R}}^{2}:\rho(\theta)\phi(p)=\exp(im\theta/2)\phi(p)$ with $\theta/2\in SO(2,{\bf R}),m\in\bf Z$, then induces a UR of $\hat{P}(\bf R)$ on ${\cal H}^{\prime}_{0}$. The fact to be noted, especially relevant in the context of discrete relativity, is that massless finite helicity URs exist because the appropriate stabiliser has a normal subgroup with compact quotient group. It also follows that a physically acceptable irreducible UR has precisely one (Lorentz) helicity, namely the character of $SO(2,{\bf R})$ to which $\rho$ restricts. As for representations supported on orbits with (mass)${}^{2}<0$, they are doubly unphysical. Not only will they correspond to particles which are tachyonic at all momenta, they will suffer from unphysical helicities as well: the stabiliser, which is $SL(2,\bf R)$, has no finite dimensional nontrivial URs at all.
This summary of Wignerism is meant also to remind us that the general theory of quantum fields having a particle interpretation is no more than the representation theory of the group of relativistic symmetries, subject to certain physical criteria. The significance of this foundational construction as the prelude to any attempt to quantise gravity has been noted in sections 1 and 2 above. The question now is whether this identification of particles and representations can be carried over realistically to space-times which are discrete.

## 6 Masses and helicities in discrete relativity

We turn first to an examination of how the fundamental notions of mass and helicity survive the reduction of the group of symmetries from $P(\bf R)$ to $P(\bf Z)$, both for their intrinsic significance and as preparation for the construction of physically acceptable projective URs of the latter. The momentum space of discrete space-time is the dual group of the discrete translation group $T(\bf Z)$ ($\sim{\bf Z}^{4}$), namely the 4-torus, denoted by $B$ from now on. As in other familiar lattice problems, it is useful to think of $B=M^{*}/{\bf Z}^{4}$ as the fundamental domain for the action of $\bf Z^{4}$ on $M^{*}$ where $M^{*}$ $(\sim{\bf R}^{4})$ as earlier is the momentum space of the continuum translation group and ${\bf Z}^{4}$ is to be identified with the reciprocal lattice. In terms of coordinates $\\{p_{\mu}\\}$ in $M^{*}$, $B$ is thus the cube $\\{-\pi<P_{\mu}\leq\pi,\;P_{\mu}:=p_{\mu}\;\mathrm{mod}\;2\pi\\}$, i.e., the unit cell of the reciprocal lattice, the relativistic analogue of the Brillouin zone or, in its cosmological role, the Brillouin zone of the universe (called simply the Brillouin zone from now on).
Physically, this means of course that the momentum components are defined and conserved modulo $2\pi$, a circumstance that has far more fundamental consequences here than in the crystal physics context (keeping in mind that it is only a minuscule part of the interior of this space-time Brillouin zone, far from its boundary, that terrestrial experiments and observations and the theories that deal with them can explore). Nevertheless, as in crystal physics, we can study the action of the discrete group $\hat{L}(\bf Z)$ on $B$ by starting with its action on $M^{*}$ and then translating the coordinates of the image of a point in $B$ considered as a point of $M^{*}$ back to $B$ by some integral multiples of $2\pi$. Under this projection $M^{*}\rightarrow B$, $P_{\mu}=\pi$ and $P_{\mu}=-\pi$ get identified for each $\mu$. Thus a generic orbit $O_{B}({\bf Z})$ of $\hat{L}({\bf Z})$ in $B$, through a given point $P\neq 0$, can be determined by first finding the orbit $O(\bf Z)$ through $P$ of $\hat{L}({\bf Z})\subset\hat{L}(\bf R)$ in all of $M^{*}$ and then translating the points of $O({\bf Z})$ outside $B$ back to $B$. It is to be expected that, generically, the orbits will be quite ‘wild’ (footnote 13: The study of the action of discrete subgroups of Lie groups on manifolds is an active field of research and is potentially of great use in the physics of discrete systems in general.). $O({\bf Z})$ being a subset of the orbit of $\hat{L}(\bf R)$ through $P$ considered as a point of $M^{*}$, a good starting point then is the projection onto $B$ of the relevant orbits in the standard continuum picture. Consider first the orbit $O_{m}$ of $\hat{L}(\bf R)$ through $(m,0,0,0),\;m\neq 0$, i.e., the set of points $\\{p_{\mu}\\}$ with $p_{\mu}p_{\mu}=m^{2}$ (the positive energy mass shell of rest mass $m$). It projects on to $B$ as the set of points $\\{P_{\mu}=p_{\mu}$ mod $2\pi\\}$, the ‘torus mass shell’ of rest mass $m$.
Figure 1 is a depiction of this (for a value of $m$ chosen to be very large so as to bring out its complicated structure) as its intersection with, say, the (0,1) plane. The corresponding orbit $O_{B,m}(\bf Z)$ of $\hat{L}(\bf Z)$ for any rest mass $0<m<\pi$, our objects of interest, will be discrete subsets of such torus mass shells.

Figure 1: The intersection of a torus mass shell with the (0,1) plane. The arrows indicate that the particle is being boosted from rest in the positive 1 direction. The dotted segments illustrate the first few superluminal phases.

At the classical level, the torus mass shell summarises the kinematics – the relationships among energy, momentum and velocity – of a particle of a given rest mass moving in discrete space-time. The following qualitative remarks are self-evident. Within the Brillouin zone, i.e., where $P_{\mu}=p_{\mu}$, kinematics is entirely as in continuum relativity; in particular, the rest mass $m$ is the invariant mass for transformations in $M^{*}$ which keep the particle within the Brillouin zone. But when a particle initially at rest is subjected to larger and larger boosts, say along the positive direction 1, there occurs a sequence of critical points on the boundary of $B$ at which the energy or the momentum appears to undergo a discontinuous change of magnitude $2\pi$ in our units (or $2\pi/L$ in terms of the Planck length (footnote 14: For numerical guidance we may take it to be of the order of the Planck mass, $10^{20}$ GeV or so.)), induced by the identification of the points $p_{\mu}$ and $p_{\mu}+2\pi n_{\mu}$ in $M^{*}$ for arbitrary integers $n_{\mu}$. The first critical point occurs at $(P_{0}=\pi,\;P_{1}=\sqrt{\pi^{2}-m^{2}})$.
A further boost in the same direction will give rise to the segment of a hyperbola (restricting the torus mass shell to the (0,1) plane as in the figure) connecting the translate $(-\pi,\sqrt{\pi^{2}-m^{2}})$ of this point with the next critical point, namely the translate $(\sqrt{\pi^{2}+m^{2}}-2\pi,\pi)$ of the point $(\sqrt{\pi^{2}+m^{2}},\pi)$ (which is outside $B$). And so on ad infinitum. In all the segments except the initial one, $P_{0}^{2}-P_{1}^{2}\neq m^{2}$. More generally, $P_{\mu}P_{\mu}$ is not invariant under the action of the Lorentz group, continuous or discrete; ‘mass’ in the term ‘torus mass shell’ refers to the rest mass as the parameter labelling distinct mass shells. This is just a reflection of the fact that there are no $\hat{L}(\bf Z)$ (and hence no $\hat{L}(\bf R)$) invariant non-zero periodic functions on $M^{*}$ and stems from the periodicity of the momentum components in $M^{*}$. It is also clear, most simply and graphically from Figure 1, that all segments of the mass hyperbola except the first have parts with $P_{0}^{2}-P_{1}^{2}$ negative, where the particle’s speed is greater than 1: under large boosts the particle behaves intermittently as a tachyon with an imaginary effective mass $m_{\mathrm{eff}}=\sqrt{P_{\mu}P_{\mu}}$. In the limit of an infinitely large boost, the effective mass tends to zero and the mass shell for any non-zero rest mass tends to the torus light cone. The full torus mass shell embedding a massive orbit thus has a complicated structure. To be noted in particular is that the speed of light is not an impassable barrier: a particle in a tachyonic kinematic regime can always be deboosted to rest. For ease of reference, I will refer to the part of a torus mass shell on which the rest mass is also the invariant mass as the ‘conventional part of the mass shell’ or ‘the conventional regime’.
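The kinematics just described is easy to reproduce numerically. In the sketch below (lattice units with $L=1$, Brillouin zone $(-\pi,\pi]$; the rest mass and rapidities are illustrative choices), a boost keeping the particle inside $B$ preserves the invariant mass, while a boost just past the first critical point yields a wrapped momentum with $P_{0}^{2}-P_{1}^{2}<0$:

```python
import math

def wrap(p):
    """Translate a momentum component back into the Brillouin zone (-pi, pi]."""
    return p - 2*math.pi*math.ceil((p - math.pi)/(2*math.pi))

m = 1.0

# Inside the conventional regime the rest mass is the invariant mass:
chi = 1.0                                    # rapidity; here p0 = m*cosh(chi) < pi
p0, p1 = m*math.cosh(chi), m*math.sinh(chi)
assert abs(wrap(p0)**2 - wrap(p1)**2 - m**2) < 1e-12

# Just past the first critical point (p0 = pi, p1 = sqrt(pi^2 - m^2)) the
# energy wraps by 2*pi and the segment becomes tachyonic:
chi = math.acosh((math.pi + 0.1)/m)
P0, P1 = wrap(m*math.cosh(chi)), wrap(m*math.sinh(chi))
assert P0**2 - P1**2 < 0                     # intermittently superluminal
```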
The conventional regime consists of the orbit through $(m,0,0,0)$ before it hits the first critical point $(P_{0}=p_{0}=\pi,|\vec{P}|=|\vec{p}|=\sqrt{\pi^{2}-m^{2}})$; none of the exotic aspects of the new kinematics manifest themselves when the energy and momentum are constrained to be in the conventional part of a mass shell. In contrast, consider the light cone $C$ of $M^{*}$. Under projection onto $B$ (translations by multiples of $2\pi$), its image $C_{B}$ remains in the torus light cone, the part of the light cone lying in $B$ (with the usual boundary conditions). In consequence, any boost of a point in $C_{B}$ will take it first to a point in $C$ and then, on translation back into $B$, to a point in $C_{B}$ itself. Masslessness is an invariant property in discrete relativity – the polynomial $p_{\mu}p_{\mu}$ is periodic and invariant as long as it vanishes. The orbit $O_{B,0}$ of any point in $C_{B}$ for the action of $\hat{L}(\bf Z)$ on $B$ is a set of discrete points in $C_{B}$. As will be seen below, this is a property of capital importance. Next is the problem of whether and how helicities can be defined in discrete space-time. They cannot be defined as arising from the URs of the rotation group by restriction since $SO(3,\bf R)$ or $SU(2)$, being compact, has only finite groups as discrete subgroups and therefore cannot accommodate general spins. The way out is to define helicity relativistically, as arising directly from the Lorentz group, without invoking the rotation group and without transforming to rest. (Recall the discussion in section 5 of how massless helicities are defined in the Wigner construction). We have in fact the key result: Every finite dimensional irreducible representation of $\hat{L}(\bf R)$ restricts to its discrete subgroup $\hat{L}(\bf Z)$ as an irreducible representation. This is a special case of a general theorem, the density theorem of A.
Borel ([13]; see also [14] for an account of the general theoretical framework), on representations of discrete subgroups of non-compact semisimple Lie groups. A general formulation of the theorem is as follows. Let $G$ be a semisimple Lie group none of whose factors is compact and $\Gamma$ a discrete subgroup of $G$ having the property that the quotient $G/\Gamma$ has finite volume (i.e., $\Gamma$ is sufficiently dense in $G$). Then every finite dimensional irreducible representation of $G$ restricts to $\Gamma$ irreducibly (footnote 15: Work subsequent to [13] and [14] has extended this result in several directions. For our purpose, the original density theorem is enough.). The group $SL(2,{\bf Z}[i])$ has finite covolume in $SL(2,\bf C)$ and hence meets the conditions of the theorem, thereby enabling the taking over of the helicity content of any finite dimensional irreducible representation of $SL(2,\bf C)$ as that of the (irreducible) representation of $SL(2,{\bf Z}[i])$ to which it restricts. An easy illustration of the density theorem is provided by the important special case of spin 1/2. The defining (left-chiral) representation $\rho_{L}$ of $SL(2,{\bf C}):\rho_{L}(\alpha)=\alpha\in SL(2,{\bf C})$ restricts to $SL(2,{\bf Z}[i])$ as $\rho_{L}(A)=A\in SL(2,{\bf Z}[i])$. Let $K$ be an operator on ${\bf C}^{2}$, the representation space of $\rho_{L}$, that commutes with $\rho_{L}(A)$ for all $A\in SL(2,{\bf Z}[i])$. Each of the Pauli matrices (multiplied by $i$) is in $SL(2,{\bf Z}[i])$ and so will commute with $K$ by assumption. Hence $K$ is a multiple of the unit operator and, by Schur’s lemma, $\rho_{L}$ remains irreducible when restricted to $SL(2,{\bf Z}[i])$. The same conclusion – and the same argument – holds for the conjugate (right-chiral) representation. We may note incidentally that any two of the Pauli matrices can be picked to belong to the generators of $SL(2,{\bf Z}[i])$ ([15]).
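A small computational companion to this argument (plain Python; exact, since only Gaussian-integer arithmetic is involved): each $i\tau_{k}$ has determinant 1 and Gaussian-integer entries, and among the basis $\\{1,\tau_{1},\tau_{2},\tau_{3}\\}$ of $2\times 2$ matrices only the identity commutes with both $\tau_{1}$ and $\tau_{2}$, which is the heart of the Schur argument:

```python
I2 = [[1, 0], [0, 1]]
t1 = [[0, 1], [1, 0]]
t2 = [[0, -1j], [1j, 0]]
t3 = [[1, 0], [0, -1]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def gaussian_integer(c):
    c = complex(c)
    return c.real == int(c.real) and c.imag == int(c.imag)

for t in (t1, t2, t3):
    it = scal(1j, t)                                 # i * tau_k
    assert det(it) == 1                              # i*tau_k is in SL(2, C)...
    assert all(gaussian_integer(it[i][j]) for i in range(2) for j in range(2))
    # ...with entries in Z[i], hence in SL(2, Z[i]).

def commutes(A, B):
    return mul(A, B) == mul(B, A)

# Only the identity commutes with both tau_1 and tau_2; since the nonzero
# commutators are linearly independent Pauli matrices, the commutant of the
# restricted representation is scalar (Schur).
assert [commutes(B, t1) and commutes(B, t2) for B in (I2, t1, t2, t3)] == [True, False, False, False]
```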
The density theorem is critically important because of our general condition that only those representations are physically relevant whose ‘continuum limits’ exist and are the Lie group representations we are used to. This is ensured if only those irreducible representations of $\hat{L}({\bf Z})$ are considered physical which are restrictions of (continuous) finite dimensional irreducible representations of $\hat{L}({\bf R})$ (footnote 16: It is useful to remember that even in the standard Wigner philosophy, certain URs of the Poincaré group are excluded from physics solely on the ground of lack of experimental support, e.g., those corresponding to imaginary mass or infinite/continuous spin.). These criteria will be assumed to be valid in the discrete case as well. Thus URs of $\hat{P}({\bf Z})$ which cannot be deboosted to rest – corresponding to tachyons in continuum relativity – are excluded.

## 7 Matter fields as unitary representations of $\hat{P}({\bf Z})$

Having seen how the basic notions of momentum, mass and helicity carry over to a discrete space-time, we can turn now to our primary objective, that of identifying the elementary constituents (particles) of matter with the fields that constitute Hilbert spaces of those irreducible URs of the discrete Poincaré group that are subject to the two physical criteria introduced in the last section. These are, to repeat, i) the orbit supporting a UR must be massless (which is an invariant property) or must have a real rest mass; and, ii) the representation of $\hat{L}(\bf Z)$ over which the UR ranges must be the restriction of a representation of $\hat{L}(\bf R)$. To these must be added a third, apparently more technical, criterion which arises as follows.
A point $P$ of the Brillouin zone $B$ will be said to be a rational point if its coordinates $\\{P_{\mu}\\}$ are all rational multiples of $\pi$: $P_{\mu}=(q_{\mu}/d_{\mu})\pi$, with $q_{\mu},d_{\mu}\in\bf Z$, $-|d_{\mu}|\leq q_{\mu}\leq|d_{\mu}|$, and an irrational point otherwise. Every point in the orbit of $\hat{L}(\bf Z)$ through a rational (irrational) point is rational (irrational), since $\hat{L}(\bf Z)$ acts on $B$ by integral linear transformations followed by shifts by integral multiples of $\pi$. Consider rational orbits first. Reexpress the coordinates $\\{q_{\mu}/d_{\mu}\\}$ (dropping the factor $\pi$ for the time being) in terms of the least common multiple $D$ of $\\{d_{\mu}\\}$, i.e., $P_{\mu}=Q_{\mu}/D$ with $-D\leq Q_{\mu}\leq D$ and no positive integer $D^{\prime}<D$ exists such that $P_{\mu}=Q^{\prime}_{\mu}/D^{\prime}$ for any $\\{Q^{\prime}_{\mu}\\}$ with $-D^{\prime}\leq Q^{\prime}_{\mu}\leq D^{\prime}$; $\\{Q_{\mu}/D\\}$ will be referred to as the standard coordinates of the rational point $P$. Suppose now that the transform by $A$ of $P$ in standard form is not in standard form, i.e., $(AQ)_{\mu}$ has a common factor with $D$ for each $\mu$. Then the standard coordinates of $AP$ will have a denominator $D_{A}$ strictly less than $D$; if there is no common factor, the denominator of course remains unchanged. But the same argument applies also to $A^{-1}$ acting on $AP$, implying that $D$ cannot be greater than $D_{A}$. It follows that $\hat{L}(\bf Z)$ acts on every rational point without changing the denominator of its standard coordinates. Since the numerators $\\{Q_{\mu}\\}$ lie between $-D$ and $D$, we can conclude that a rational orbit of $\hat{L}(\bf Z)$ in $B$ is a finite set. Recalling the identification of the orbit with the quotient of $\hat{L}(\bf Z)$ by the stabiliser, we see thus that the stabiliser of any rational point is a subgroup of finite index, in other words almost all of $\hat{L}(\bf Z)$.
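The denominator argument can be watched in action with exact rational arithmetic. In the sketch below, coordinates are in units of $\pi$ (so ‘mod $2\pi$’ becomes ‘mod 2’), and the integral matrix is an illustrative infinite-order element of $O(2,1;{\bf Z})$ extended by the identity — a stand-in for a generic element of $L({\bf Z})$, not taken from the text:

```python
from fractions import Fraction

# An integral Lorentz matrix: an infinite-order element of O(2,1; Z),
# extended by the identity on the remaining spatial axis.
LAMBDA = [[3, 2, 2, 0],
          [2, 1, 2, 0],
          [2, 2, 1, 0],
          [0, 0, 0, 1]]

def wrap(q):
    """Reduce a coordinate (in units of pi) into the Brillouin zone (-1, 1]."""
    q = q % 2
    return q - 2 if q > 1 else q

def act(L, P):
    return tuple(wrap(sum(L[i][j]*P[j] for j in range(4))) for i in range(4))

start = (Fraction(3, 7), Fraction(1, 7), Fraction(2, 7), Fraction(0))
orbit = set()
Q = start
while Q not in orbit:       # only finitely many points share the denominator 7,
    orbit.add(Q)            # so the cyclic orbit must close
    Q = act(LAMBDA, Q)

assert Q == start           # a pure cycle: LAMBDA is invertible over Z
assert all(c.denominator in (1, 7) for point in orbit for c in point)
assert 1 < len(orbit) <= 14**4   # rational orbits are finite
```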
The situation is not very different from that of the one-point orbit consisting of the origin $P=0$ – the stabiliser has no finite dimensional UR from which to induce a UR of $\hat{P}(\bf Z)$ with a finite helicity spectrum (even when subjected to a finite set of subsidiary conditions, see the discussion of the massless URs of $\hat{P}(\bf R)$ in section 5). So our third criterion is: rational orbits must be excluded from consideration in constructing physically reasonable URs of $\hat{P}(\bf Z)$. The next task is to determine the stabiliser of an irrational point $P$ for the action of $\hat{L}(\bf Z)$ on $B$. This group is $\Sigma_{P}:=\\{A\in\hat{L}({\bf Z}):(AP)_{\mu}=P_{\mu}\;\mathrm{mod}\;\bf Z\\}.$ It is most simply characterised in terms of the discrete Lorentz group $L(\bf Z)$ as the subgroup satisfying the conditions $(\Lambda P)_{\mu}=P_{\mu}+N_{\mu},\hskip 14.22636pt\Lambda\in L(\bf Z),$ for arbitrary integers $\\{N_{\mu}\\}$. For an irrational $P$, these conditions put strong restrictions on the admissible values of $N_{\mu}$: writing them as $(\Lambda-I)^{-1}N=P$ ($I$ is the unit matrix), we see that $P$ will be a rational point (since $(\Lambda-I)^{-1}$ has rational entries ($\Lambda-I$ is integral)) unless all $N_{\mu}$ vanish. It follows that the stabiliser $\Sigma_{P}$ of an irrational $P$ for the action of $\hat{L}(\bf Z)$ on $B$ coincides with its stabiliser for the $\hat{L}(\bf Z)$ action on the whole of $M^{*}$; it is determined by the condition $(\Lambda P)_{\mu}=P_{\mu}$ exactly as in the continuum case. Thus, if $P$ is in a massive orbit (of rest mass $m$) of $\hat{L}(\bf Z)$, the corresponding stabiliser $\Sigma_{m}$ consists of all unitary matrices in $SL(2,{\bf Z}[i])$. It is a finite (of course) group of order 8, isomorphic to the quaternion group: $\Sigma_{m}=\\{\pm 1,\pm i\tau_{i}\\}$ where $\\{\tau_{i}\\}$ as earlier are the Pauli matrices. 
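A direct check of this description (plain Python, with $2\times 2$ matrices stored as flat tuples; the rest-frame matrix is taken with $m=2$ for illustration): the eight matrices are unitary, lie in $SL(2,{\bf Z}[i])$, close under multiplication, and fix $m\cdot 1$ under $\mathrm{X}\rightarrow A\mathrm{X}A^{*}$:

```python
one = (1, 0, 0, 1)                     # 2x2 matrices as flat tuples (a, b, c, d)

def mul(A, B):
    a, b, c, d = A
    e, f, g, h = B
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

taus = [(0, 1, 1, 0), (0, -1j, 1j, 0), (1, 0, 0, -1)]   # the Pauli matrices
Sigma_m = {one, (-1, 0, 0, -1)}
for t in taus:
    it = tuple(1j*x for x in t)                         # i * tau_k
    Sigma_m |= {it, tuple(-x for x in it)}

assert len(Sigma_m) == 8               # the quaternion group
for A in Sigma_m:
    a, b, c, d = A
    assert a*d - b*c == 1              # in SL(2, C), with entries in Z[i]
    Astar = (a.conjugate(), c.conjugate(), b.conjugate(), d.conjugate())
    assert mul(A, Astar) == one        # unitary, hence A (m*1) A* = m*1:
    assert mul(A, mul((2, 0, 0, 2), Astar)) == (2, 0, 0, 2)
    assert all(mul(A, B) in Sigma_m for B in Sigma_m)   # closure
```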
Also as in continuum relativity, when $P$ is in a massless orbit, its stabiliser $\Sigma_{0}$ is the subgroup of $SL(2,{\bf Z}[i])$ consisting of upper triangular matrices (for a suitable choice of basis in $M^{*}$) $s(\zeta,Z)=\left(\begin{array}[]{cc}\zeta&\zeta^{-1}Z\\\ 0&\zeta^{-1}\end{array}\right);\;\zeta,\zeta^{-1},Z\in{\bf Z}[i].$ The only elements of ${\bf Z}[i]$ with inverses in ${\bf Z}[i]$, namely the units, being $\zeta=\pm 1,\pm i$, the subgroup of diagonal matrices is the cyclic group ${\bf Z}_{4}$ and the subgroup of elements $s(1,Z)$ is the planar lattice of points $(\mathrm{Re}\;Z,\mathrm{Im}\;Z)$. Composition in $\Sigma_{0}$ is given by $s(\zeta_{1},Z_{1})s(\zeta_{2},Z_{2})=s(\zeta_{1}\zeta_{2},Z_{1}+{\zeta_{1}}^{2}Z_{2})$, confirming that $\Sigma_{0}$ is indeed the discrete Euclidean group $E(2,{\bf Z})=SO(2,{\bf Z})\vec{\times}{\bf Z}^{2}$, with $\zeta\in{\bf Z}_{4}$ acting on ${\bf Z}^{2}$ by multiplication by $\zeta^{2}$ – which, as in the continuum case, is a reminder that $\hat{L}(\bf Z)$ covers $L(\bf Z)$ twice, thereby accommodating representations of half-integral helicities. We are thus confirmed in our expectation that the stabilisers are just the natural discretisations of their Lie group counterparts. The exercise also reinforces the choice of $SL(2,{\bf Z}[i])$ as the correct replacement for $SL(2,\bf C)$. In constructing URs of $\hat{P}({\bf Z})$, whether massive or massless, we can now try to imitate Wigner’s method in the continuum case, keeping in mind that all orbits are now discrete sets. Define momentum space fields as functions $\phi:O_{B}\rightarrow V$, where $O_{B}$ is $O_{B,m}$ or $O_{B,0}$ as the case may be and $V$ as before is the space of an irreducible representation $\rho$ of $\hat{L}(\bf R)$ (and hence, by the density theorem, of $\hat{L}(\bf Z)$).
Given a set-section $\sigma:O_{B}\rightarrow\hat{L}({\bf Z})$, such fields form a Hilbert space ${\cal H}_{O_{B}}$ with scalar product $\langle\phi,\psi\rangle=\sum_{P\in O_{B}}\langle\rho(\sigma(P)^{-1})\phi(P),\rho(\sigma(P)^{-1})\psi(P)\rangle_{V}$ (subject to the condition $\langle\phi,\phi\rangle<\infty$). On this Hilbert space, we have a UR $U_{O_{B}}$ of $\hat{P}(\bf Z)$ given by $(U_{O_{B}}(A,X)\phi)(P)=\exp(iP_{\mu}X_{\mu})\rho(A)\phi(A^{-1}P),\>A\in\hat{L}({\bf Z}),X\in T({\bf Z}).$ To repeat, all this is no more than a direct adaptation of the Wigner method to the discrete situation. It is to be noted in particular that the Fourier transform $\phi^{*}$ of $\phi$ is the field defined on space-time; since $\phi$ is a function on ${\bf T}^{4}$, $\phi^{*}$ is supported on ${\bf Z}^{4}$. But there are also some significant differences. The less serious one conceptually is that the discrete counterparts of the conventional field equations, which are Lie-algebraic statements to the effect that the stabiliser fixes points in an orbit (see section 5), have of course no infinitesimal version. Nor has the subsidiary condition which, in the massless case, enforces the requirement that admissible representations must have the vector subgroup of the Euclidean group as kernel. More seriously, while the orbit of $\hat{L}(\bf R)$ through $p\in M^{*}$ is the whole mass shell, massive or massless, containing $p$ – i.e., $\hat{L}(\bf R)$ operates transitively on the submanifold of $M^{*}$ defined by a constant $p^{2}$ – that is not the case for the action of $\hat{L}(\bf Z)$ on the torus mass shell; momenta which are linearly independent over the rationals cannot be connected by a discrete Lorentz transformation even if they are both massless or have the same rest mass.
We can then take the orbit through a point $P^{\prime}$ that is not in the orbit through $P$ (i.e., $P^{\prime}$ and $P$ are rationally independent) but is in the same torus mass shell, and construct another irreducible UR of $\hat{P}(\bf Z)$, and so on. All of them will have the same stabiliser and the same rest mass and the same set of helicities; the Wignerian association of an elementary particle of a given mass and a given spin with an irreducible UR of $\hat{P}(\bf R)$ does not hold in discrete relativity. There is then a choice to be made. One option is to associate each of the irreducible URs as constructed above with a particle type. Since all of them have the same rest mass and spin, distinguishing among them will require the introduction of new ‘quantum numbers’ which, however, must still have their origin in the kinematics of discrete space-time itself. What might they be? It is unlikely that they can be related to any of the charges (‘internal quantum numbers’) of models currently in favour; among the more pragmatic reasons, we know of no infinite multiplets of particles of different charges all having the same mass and spin. An alternative possibility is that the different irreducible URs are superselection sectors. Recall that superselection rules are vetoes on the unrestricted superposability of states; they decompose the total state space into a collection of sectors whose direct sums do not represent states. An immediate physical consequence is that no observable can connect states from two distinct sectors (which in fact was the formulation they were first given [16]). The vetoes arise ([8,9]) from the non-additivity of inequivalent projective URs of a symmetry group and are determined by its 2nd cohomology group with suitable coefficients. In the present context, the symmetry group is $P(\bf Z)$ and its 2nd cohomology group is determined by that of its Lorentz subgroup $L({\bf Z})$. 
Group cohomology of discrete subgroups of Lie groups is a very active field but I have not been able to find the specific result needed here in the literature. (See also the discussion in section 3). Space-time symmetries can certainly give rise to superselection rules, the prime example being univalence, the rule that forbids the superposition of integral and half-integral spins, both in the continuum and, as we have seen in this paper, discrete situations. The question is whether in the discrete case the relevant cohomology group, which must contain the univalence ${\bf Z}_{2}$, is in fact larger (footnote 17: Very elementary examples of discretisations of Lie groups giving rise to additional nontrivial projective URs with interesting physical applications are provided by the symmetries of 2-dimensional spaces. Thus the cylinder group ${\bf S}^{1}\times{\bf R}$ (the natural group of an infinitely long strip with the two edges identified) has vanishing 2nd cohomology but its subgroup ${\bf S}^{1}\times{\bf Z}$ has nontrivial projective URs [17]. Similarly, the torus group ${\bf T^{2}}$ (acting on a rectangular space with pairs of opposite edges identified) has vanishing 2nd cohomology but its discrete subgroups ${\bf Z}_{n_{1}}\times{\bf Z}_{n_{2}}$ for positive $n_{1}$ and $n_{2}$ have non-zero cohomologies. The corresponding nontrivial projective URs are the underlying causes of phenomena related to quantum Hall effects ([18,19]).). If it turns out that there is no extra superselection structure, a third option will be to give up the Wignerian criterion of associating a particle to an irreducible UR of $\hat{P}({\bf Z})$.
We are then free to assign a particle to a UR constructed, at least formally to start with, as the space of functions from the union of all irrational orbits of a given rest mass – i.e., all of a given torus mass shell omitting rational orbits – to a given irreducible representation of $\hat{L}(\bf R)$ (and hence, by the density theorem, of $\hat{L}({\bf Z})$). Though highly reducible, such a UR of $\hat{P}({\bf Z})$ will still be characterised by a unique rest mass and a unique spin. Transformations belonging to $\hat{L}(\bf R)$ but not to $\hat{L}({\bf Z})$ will connect different irreducible components of this UR which, in the continuum limit in some suitable sense, should tend to an irreducible UR of $\hat{P}(\bf R)$. Which of these options is the right one is a well-posed mathematical problem that is unsolved at present but solvable in principle. Until that is done, it seems prudent not to favour one over the others.

## 8 Physical effects: generalities

The picture that has emerged can be summarised as follows. A discrete Minkowskian space-time incorporating a fundamental length $L$ – to be thought of as one of the primordial constants of nature – is fully capable of supporting the Wignerian correspondence between projective URs of its group of isometries and the quantum fields of elementary particles as defined by their masses and spins, subject to certain criteria of physical admissibility. Unsurprisingly, discretisation of special relativity makes no difference to the physics involving these particles as long as they are massless or are not subjected to large boosts taking them outside the conventional part of their mass shells. At energies and momenta resulting from large boosts there are new effects. This section and the next are devoted to a preliminary survey, mostly qualitative and occasionally speculative, of some such effects. (It is implicit that $L$ is (of the order of) the Planck length.)
It is meant to be no more than an introduction to the issues involved and to convince the reader that the idea of a fundamentally discrete world is not to be rejected out of hand; it is premature to think of detailed quantitative computations. Virtually all these deviations from conventional wisdom have their origin in the compactness of momentum space (this is a very general feature, independent of any particular discretisation: the Pontryagin dual of a discrete abelian group is a compact abelian group). In general terms, its basic consequence is that energy and momentum are defined and conserved only modulo reciprocal lattice vectors, implying in turn that the notion of an invariant mass needs a reformulation. The details are easiest to describe for the hypercubic discretisation employed here, for which the reciprocal lattice is also hypercubic, with the (Minkowskian) Brillouin zone given by (restoring its physical role to the lattice spacing $L$) $B=\{-\pi/L<P_{\mu}\leq\pi/L\}$. Firstly, masslessness of a field/particle as an invariant concept survives discretisation: the torus light cone is mapped into itself under all boosts $P\rightarrow P^{\prime}$. To remind ourselves, this property derives fundamentally from the absolute constancy of the speed of light, which itself is just the ratio of the spatial and temporal lattice spacings in conventional units (see section 3).^19 There is a subtle point of principle involved here. The general result, free from conventions, is that massless URs of $\hat{P}({\bf Z})$ exist and that the speed of propagation of the corresponding particles, in particular the photon, is constant and equal to the ratio of the spatial and temporal lattice spacings. That our initial definition of the lattice using the speed of light as a conversion factor guarantees this outcome is far from obvious at the outset.
A reassessment of the fundamental significance of the speed of light, both logically and historically, will be found in a review now in preparation. Next, though a non-zero mass remains invariant under ‘small’ boosts $p\rightarrow p^{\prime}$ with $p,p^{\prime}\in B$, that is not the case for ‘large’ boosts taking $p$ out of $B$, resulting in discontinuous changes in $P$ at the boundary of $B$. The physically meaningful (and useful) notions are that of a rest mass and of a mass shell of rest mass $m$, the orbit of the Lorentz group in $B$ through the point $(m,0,0,0)$ (the torus mass shell of section 7). Consequently, when a particle of rest mass $m$ undergoes large boosts, it traverses intermittently regions of the mass shell where its speed becomes superluminal. In the limit of an infinitely large boost, the speed will tend to the speed of light, passing through, in the process, an infinite number of superluminal phases (Figure 1). It is also clear that the momentum intervals in which the particles have a transluminal speed grow as its rest mass $m$ increases (reminder: Figure 1 is for $m$ of the order of $L^{-1}$). In looking at the observable effects of these unfamiliar kinematical properties, a reservation will be in order: our inability, as of now, to resolve the question of reducibility/degeneracy of URs of $\hat{P}(\bf Z)$, discussed at the end of section 7. As long as the identity of an elementary particle is primarily decided by its mass and spin – i.e., assuming, as seems justified in our present state of knowledge, that internal charges and the concomitant gauge interactions have an origin outside space-time geometry – irreducibility by itself may not be a critically important criterion. Such a position would require us to accept that matter considered elementary at currently accessible scales, including the invisible quarks and gluons, will continue to be describable as elementary quantum fields of the discrete Poincaré group. 
That would be a big extrapolation; on the natural scale of the Planck mass, the known elementary particles are massless to an extraordinarily good approximation, to an accuracy of some eighteen orders of magnitude. Aside from the cosmological constant problem, there are no other examples in fundamental physics of such an enormous deviation from naturalness. One can wonder whether there actually are – or were – elementary particles of mass comparable to $L^{-1}$ but, as of now, that would be idle speculation. So, in checking the exotic effects of discreteness against facts on the ground – or, rather, in the heavens – no specific assumption will be made here regarding the possible presence of such ultra-heavy matter particles through the history of the universe. (Modulo poorly understood phenomena such as dark matter or conjectural entities like the inflaton, modern cosmological theory, including that of the very early universe, has no place for any exotic particles). Of the direct effects of discreteness, testing for deviations from homogeneity and isotropy of space is no more or less than determining its ‘crystal structure’. Intuitively it is clear that any such procedure will need to measure spatial (and, in the case of homogeneity of time, temporal) resolutions finer than $L$ (the Planck length by default). For illustration, for a beam of particles (emitted somewhere in the universe) interacting with the fields $\phi^{*}$ concentrated at the ‘lattice sites’ of space to produce a measurable (macroscopic) diffraction pattern, the wavelength of the particle has to be of the order of the lattice spacing, just from Bragg’s law. There is no need to add that such tests are out of reach by very many orders of magnitude. An essentially similar conclusion holds for tests of isotropy, even though angles (as opposed to lengths) are not restricted to be discrete. 
For a periodic lattice such as the cubic one considered here, a reasonable statement of isotropy would appear to be that an arbitrary straight line through any chosen origin 0 (‘the line of sight’) should pass through an infinite number of lattice points $lX$, $l=0,1,2,\cdots;\;X\in M(\bf Z)$. (It is enough that it passes through one such point, say $l=1$). This sharp formulation however does not quite work. When the line of sight through 0 is, for example, in the 1-2 plane: $X=(0,X_{1},X_{2},0)$, and makes an angle $\theta$ with the 1 axis, $\tan\theta=X_{2}/X_{1}$ will be rational whereas there is a general result (part of what is known as Niven’s theorem [20]) which says that the only rational values of the tangent function at rational angles (rational multiples of $\pi$) are $0,\pm 1$. In other words, if the line of sight is at a rational angle (except for $0$ and $\pi/4$ in the first quadrant) it will pass through no lattice points at all. The remedy suggests itself: the continuity of the tangent function implies that, given a point $X$, there are angles $\theta$ such that the corresponding lines of sight pass as close as desired to the points $lX$ for all $l$ less than any given finite integer. This slightly weaker formulation of isotropy is perfectly satisfactory both theoretically and observationally.

## 9 Physical effects: some qualitative cosmology

The kinematical effects of the boundedness of momentum and energy seem, likewise, to be far beyond the capabilities of any controllable experiment even in a remote future: how to boost particles to energy/momentum of magnitudes of the order of $10^{20}$ GeV, i.e., to the edge of the Brillouin zone? That leaves cosmic observations as the only realistic test of discreteness, especially those which are sensitive to the physics of processes which we believe happen in the early universe.
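As a numerical aside to the Niven's-theorem point above, the following Python sketch (purely illustrative; the function name, parameter ranges and tolerances are my own, not part of the original discussion) scans rational multiples of $\pi$ and collects those tangent values that are numerically rational with a small denominator. Niven's theorem says only $0$ and $\pm 1$ should survive.

```python
import math
from fractions import Fraction

def rational_tangents(max_q=12, max_den=50, tol=1e-9):
    """Scan the rational angles theta = p*pi/q in [0, pi) and collect
    those tangent values that are (numerically) rational with a small
    denominator. Niven's theorem predicts the set {0, 1, -1}."""
    found = set()
    for q in range(1, max_q + 1):
        for p in range(q):
            theta = math.pi * p / q
            if abs(math.cos(theta)) < 1e-12:  # tan undefined at pi/2
                continue
            t = math.tan(theta)
            approx = Fraction(t).limit_denominator(max_den)
            if abs(t - float(approx)) < tol:
                found.add(approx)
    return found

tans = rational_tangents()
# by Niven's theorem, this set should contain only 0 and +/-1
```

For moderate parameter ranges the outcome is insensitive to the choice of `max_den` and `tol`, since the irrational tangents of rational angles are algebraic numbers that rationals of small denominator cannot approximate to within the tolerance used here.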
One such is the deboosting of matter in the initial hot dense ‘stuff’ that leads to expansion and cooling (in the accepted, ‘standard’, cosmological model).^20 Throughout this article, I have avoided using the terms ‘acceleration’ and ‘deceleration’ in favour of ‘boost’ and ‘deboost’ for the reason that energy and momentum are always within $B$ and change discontinuously when they hit its boundary. Velocity, defined as $\vec{V}=\vec{P}/P_{0}$ so as to be consistent with the continuum limit, also has discontinuities. Boost operations are well-defined in $B$ and, along any of the three spatial axes, can be ordered by the value of the parameter (sometimes called rapidity) they depend on. The essential point is that an infinite boost does not result in infinite energy, momentum or velocity. For the same reason, the evolution of the discrete universe from the big bang to the present is to be thought of as parametrised by a monotonically decreasing (average) boost parameter of matter particles rather than energy or velocity. In the discrete world the process will go through a series of steps in which every particle of matter crosses the light barrier, into and out of the light cone, alternately (Figure 1 with the arrows reversed shows the final stages of the process) until it makes its final transit into the interior (to the conventional part of its mass shell), after which it will behave exactly as in conventional special relativity. The question is how these repeated transluminal episodes fit in with the generally agreed post-big-bang picture of the early cosmos.
Considering that all transits including the final one into the conventional regime, the subluminal world of our experience, took place when particle energies were close to $L^{-1}$ (see Figure 1 and recall again that what decreases monotonically with time as the universe evolves is not energy but the boost parameter) and therefore in the very remote past about which we know little, the answer to the question has to be that cosmology as we know it today is definitely not in conflict with the above picture. Indeed, any possibility of identifying superluminal expansion originating in discreteness is likely to be subsumed in the by-now standard – though still mysterious – cosmic inflation. The two differ in their origins. The superluminal propagation arises here kinematically and in a purely special relativistic context – gravity plays no role – whereas the root cause of inflation remains unsettled despite the enormous amount of work, mixing the usual quantum field theory ideas with general relativity, that has gone into it. The details of the process are also different: episodic with a long series of superluminal phases in our case and one big blowout in inflation. One may then ask: if the world is really discrete, can one do without inflation in its current formulation(s)? In partial answer, let us note the encouraging fact that the energy of a massive particle vanishes – the torus mass shell intersects the cube that is the subspace $P_{0}=0$ of the Brillouin zone (the horizontal axis in Figure 1 is its projection) – in a repeated series of events until it makes its final transit into the conventional regime. 
Such events occur during highly superluminal phases: when $P_{0}=0$, the velocity is infinite.^21 The sequential superluminal expansion implied by discreteness and advocated here has nothing conceptually in common with models, some of them very explicit (an early example is [21]), which postulate that the speed of light itself has changed enormously over cosmic time scales, cause unknown. In the discrete universe the agency of homogenisation is not radiation (massless particles have an absolutely constant, conventional, speed) but massive matter. I add that cosmological issues related to inflation continue to see an enormous amount of activity, as regards both its conceptual moorings and their empirical validation; for a thorough review of the current situation, see [22]. Aside from global consequences like superluminal expansion, the lack of strict energy-momentum conservation – conservation only modulo $2\pi/L$ – also leads directly to a class of Planck-scale Umklapp processes, namely interactions of individual elementary particles in which some of them in the initial and/or final states would have energies or momenta or both outside the Brillouin zone if they were strictly conserved, i.e., before enforcing the periodicity of reciprocal space. Umklapp phenomena in crystal physics (an excellent general reference is the book of Peierls ([23]), who first recognised the phenomenon and its origin in the discrete symmetry groups of crystals) concern only the momenta and velocities of the relevant elementary excitations (mainly phonons and electrons) – energy is always strictly conserved since time is not discrete – and they influence chiefly the transport properties in crystalline media. Moreover, the medium itself is not discrete – only the symmetry group is – and is of finite size.
Consequently, the effect of periodicity is just to modulate the 1-particle wave function (Bloch’s theorem) and the apparent violation of momentum conservation is only an artefact of translating all momenta back to the Brillouin zone. Physically, the momentum of the total system consisting of the elementary excitations and the crystal is conserved; the crystal as a whole recoils to balance the loss or gain of momentum by the crystalline equivalent of matter. In contrast, when the ‘crystal’ is space-time itself, the violation of energy-momentum conservation must be regarded as a real physical phenomenon: there is no sensible physical meaning that can be given to the notion of Minkowskian space-time having kinematical attributes like energy and momentum. Nevertheless, since very few processes in solids are measurably sensitive to crystal recoil (exceptions would be resonance absorption of photons and neutrinos), we can adapt the methodology of crystal physics – extending it to include energy in addition to momentum – for a first orientation on recoilless space-time Umklapp interactions. An interaction in which the initial state has particles each of which is in its conventional regime, with momenta $\{P_{\mu}^{(i)}\}$, can result in two general classes of final states: either i) all of the particles in the final state, with momenta $\{P_{\mu}^{\prime(j)}\}$, are in their conventional regimes – normal processes in the language of crystal physics; or ii) some of the particles will go outside the Brillouin zone $B$ if energy and momentum were strictly conserved, $\sum_{i}P_{\mu}^{(i)}=\sum_{j}P_{\mu}^{\prime(j)}$, which have then to be translated back to $B$ – Umklapp processes. The general conservation law, always valid, is $\sum_{i}P_{\mu}^{(i)}-\sum_{j}P_{\mu}^{\prime(j)}=\frac{2\pi N_{\mu}}{L}$ for some $N_{\mu}\in\bf Z$, depending on the values of $P_{\mu}^{(i)}$ and $P_{\mu}^{\prime(j)}$.
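This book-keeping is easy to mechanise. The following minimal Python sketch, in units where $L=1$ (function names and tolerances are my own, for illustration only), folds a momentum component back into the Brillouin zone and reads off the integers $N_{\mu}$ from the general conservation law:

```python
import math

L = 1.0                    # lattice spacing, illustrative units
G = 2.0 * math.pi / L      # reciprocal-lattice spacing 2*pi/L

def fold(P):
    """Translate one energy-momentum component back into the Brillouin
    zone (-pi/L, pi/L] (boundary conventions handled only approximately)."""
    return P - G * round(P / G)

def classify(initial, final, tol=1e-9):
    """Check sum_i P^(i) - sum_j P'^(j) = (2*pi/L) * N_mu componentwise
    and return the integers N_mu: all zeros is a 'normal' process,
    anything else an Umklapp process."""
    Ns = []
    for mu in range(4):
        diff = sum(p[mu] for p in initial) - sum(p[mu] for p in final)
        N = diff / G
        assert abs(N - round(N)) < tol, "conservation law violated"
        Ns.append(round(N))
    return Ns
```

Here `fold` acts on a single component, and `classify` takes lists of 4-momenta $(P_{0},P_{1},P_{2},P_{3})$ for the initial and final states; a return value of all zeros corresponds to a normal process in the crystal-physics terminology above.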
As the simplest possible example of the new kinematics, consider first the decay at rest of a particle of rest mass $M$ into two particles of equal rest mass $m$. Assuming strict momentum conservation, the daughter particles will have momenta in opposite directions, say along the 1-axis, of equal magnitude $k$. Assuming also strict energy conservation, we have $4(k^{2}+m^{2})=M^{2}$ or $k=\sqrt{M^{2}/4-m^{2}}$. From the reality of $k$ (for the decay to be kinematically possible), $m<M/2$. And since $M<\pi/L$ (from the definition of the rest mass), it follows further that $k<M/2<\pi/2L$. No energy or momentum is outside the Brillouin zone and the decay is a normal process. No surprise here. But this apparently trivial example becomes interesting when we consider its inverse reaction: the collision of two equal rest mass particles to produce a final state whose centre-of-mass is at rest, including the possibility of a single particle at rest. Assuming strict conservation, the total energy of the final state, $E=2\sqrt{k^{2}+m^{2}}$ (which is also its invariant mass), will exceed the Brillouin zone bound $\pi/L$ if either $k>\pi/2L$ or $m>\pi/2L$ (or both). For such values of $k$ and $m$ strict conservation cannot hold: an Umklapp shift down has to be made (one shift is enough since $k$ and $m$ are bounded above by $\pi/L$), resulting in a final state energy of $2\sqrt{k^{2}+m^{2}}-2\pi/L$. The example can be extended to more general kinematic situations in a straightforward manner though the energy-momentum book-keeping can get quite involved. The inescapable fact is that dramatic degradations of energy and momentum in interactions of particles of Planck scale masses and energies are a generic consequence of discrete special relativity. Once again, the only realistic possibilities of testing the effect would seem to involve the physics of the very early cosmos but the fact that the shift can result in relatively moderate final energies gives some hope. 
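The numbers in this example are easy to check with a short sketch (again in units where $L=1$; the function names are illustrative, not from the original):

```python
import math

PI = math.pi  # the zone boundary pi/L in units where L = 1

def decay_momentum(M, m):
    """Decay at rest M -> m + m: the magnitude k of each daughter's
    momentum under strict conservation, k = sqrt(M^2/4 - m^2)."""
    assert m < M / 2, "decay kinematically forbidden"
    return math.sqrt(M**2 / 4 - m**2)

def inverse_collision_energy(k, m):
    """Head-on collision of two particles of rest mass m and momenta
    +/-k: total final-state energy, with a single Umklapp shift of
    2*pi/L applied when strict conservation would give E > pi/L."""
    E = 2.0 * math.sqrt(k**2 + m**2)
    if E > PI:  # one shift suffices since k, m < pi/L
        E -= 2.0 * PI
    return E

# e.g. k = 0.9*pi, m = 0.5*pi: strict conservation would give
# E ~ 2.06*pi, outside the zone; the shift degrades it to ~0.06*pi.
E_shifted = inverse_collision_energy(0.9 * PI, 0.5 * PI)
```

The final line illustrates the degradation mentioned in the text: a collision whose strictly conserved invariant mass would lie far outside the Brillouin zone ends up with a relatively moderate final energy after the Umklapp shift.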
Perhaps ultra high energy cosmic rays (with energies some 8 or 9 orders of magnitude smaller than $1/L$) are the final result of such degradation, tantalising relics from the early history of the universe, rather than the product of some unknown cosmic mechanism involving extreme acceleration. While on the subject of possible future lines of work, we should also note that the breakdown of conventional energy-momentum conservation – but in a precisely quantifiable way – has other consequences which are relevant to the study of the early universe, specifically in its hot dense phase. In general terms, what is required is a reformulation of thermodynamics in extreme conditions in which the kinematics of binary collisions is governed by the new conservation laws. That is a well-posed problem but, obviously, a very challenging one. We can surely expect a reliable theory incorporating them to lead to deviations from currently popular cosmological models of the young hot dense universe. Whether they will turn out to be theoretically and observationally acceptable is for the future to decide. What is generally gratifying in the meantime is that the knowledge we already have accommodates the effects of discreteness in the structure of space-time quite comfortably without any extra assumptions. Theoretically, the deep connection between the existence and properties of elementary particles and the symmetries of space-time survives intact and, observationally, none of our hard-won insights into the mechanisms of the physical world is put at risk. One can then hope that the hypothesis of a fundamental length may, through the novel kinematics it entails, provide an opening into some of the more ad hoc features of current cosmological theory and into fundamentally kinematic phenomena for which there is no convincing basis yet (such as, obviously, dark matter and dark energy).
To conclude, it is worth repeating that these novel effects do not invoke the curvature of space-time in any way; the consequences of the gravitational aspects of general relativity will be above and beyond them. It seems fitting to end with Riemann’s much-cited exhortation on the need to respect both experience and logic in studying the nature of space (in his famous Habilitation lecture “On the Hypotheses which Lie at the Foundation of Geometry”, 1854):

> . . . it is a necessary consequence that . . . those properties which distinguish space from other conceivable triply extended (3-dimensional) quantities can only be deduced from experience. Thus arises the problem of seeking out the simplest data from which the metric relations of space can be determined . . . These data, like all data, are not logically necessary, they are hypotheses; one can therefore investigate their likelihood, . . . and afterwards decide on the legitimacy of extending them beyond the bounds of observation, both in the direction of the immeasurably large and (my italics) in the direction of the immeasurably small . . .

I acknowledge with warm thanks the invaluable advice of M. S. Narasimhan, M. S. Raghunathan, Parameswaran Sankaran and T. N. Venkataramana on the mathematical aspects of this work, first addressed, in a different context, in an unpublished manuscript ([24]). On questions related to the current state of knowledge of early cosmology, especially of the role of inflation, I have benefited greatly from the explanations of Swagat Mishra. For that, and for his help with Figure 1, I express my gratitude. Finally, I acknowledge here the hospitality of the Inter-University Centre for Astronomy and Astrophysics (Pune) over the years that helped bring this work to a (tentative) close.

Bibliography

[1] S. Hossenfelder, Living Rev. Relativity 16, 2 (2013).
[2] R. Loll, arXiv:gr-qc/9805049 (1998).
[3] R. Williams, J. Phys. Conference Series 33, 38 (2006).
[4] E. P. Wigner, Ann. Math. 40, 149 (1939).
[5] V. Bargmann and E. P. Wigner, PNAS 34, 211 (1948).
[6] E. P. Wigner, Group Theory and its Applications to the Quantum Mechanics of Atomic Spectra (Academic Press, New York, 1959).
[7] M. S. Raghunathan, Rev. Math. Phys. 6, 207 (1994).
[8] P. P. Divakaran, Rev. Math. Phys. 6, 167 (1994).
[9] P. P. Divakaran, Phys. Rev. Lett. 79, 2159 (1997).
[10] R. F. Streater and A. S. Wightman, PCT, Spin and Statistics, and All That (Benjamin, New York, 1964).
[11] G. W. Mackey, Ann. Math. 55, 101 (1952); 58, 143 (1953).
[12] M. Sugiura, Unitary Representations and Harmonic Analysis (2nd ed.) (North Holland/Kodansha, Amsterdam/Tokyo, 1990).
[13] A. Borel, Ann. Math. 72, 179 (1960).
[14] M. S. Raghunathan, Discrete Subgroups of Lie Groups (Springer, Berlin, 1972).
[15] R. G. Swan, Adv. Math. 6, 1 (1971).
[16] G.-C. Wick, A. S. Wightman and E. P. Wigner, Phys. Rev. 88, 101 (1954).
[17] G. Date and P. P. Divakaran, Ann. Phys. 309, 429 (2004).
[18] P. P. Divakaran and A. K. Rajagopal, Int. J. Mod. Phys. B 9, 261 (1995).
[19] P. P. Divakaran, unpublished.
[20] I. Niven, Irrational Numbers (Mathematical Association of America, New York, 1956).
[21] J. W. Moffat, Int. J. Mod. Phys. D 2, 351 (1993).
[22] S. S. Mishra, “Cosmic Inflation: Background dynamics, Quantum fluctuations and Reheating”, arXiv:2403.10606 [gr-qc] (2024).
[23] R. E. Peierls, Quantum Theory of Solids (Clarendon Press, Oxford, 1955).
[24] P. P. Divakaran, arXiv:hep-lat/0204027 (2002).
# The Replica Symmetry Broken States of some Glass Models

J. Yeo, Department of Physics, Konkuk University, Seoul 05029, Korea

M. A. Moore, Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, United Kingdom

###### Abstract

We have studied in detail the $M$-$p$ balanced spin glass model, which is a candidate for being a model for structural glasses. Such models possess two kinds of broken replica states: those with one-step replica symmetry breaking (1RSB) and those with full replica symmetry breaking (FRSB). To determine which arises requires studying the Landau expansion to quintic order. There are 9 quintic order coefficients and 5 quartic order coefficients, whose values we determine for this model. We show that it is only for $2\leq M<2.4714\cdots$ that the transition at mean-field level is to a state with FRSB, while for larger $M$ values there is either a continuous transition to a state with 1RSB (when $M\leq 3$) or a discontinuous transition for $M>3$. The Gardner transition from a 1RSB state at low temperatures to a state with FRSB also requires the Landau expansion to be taken to quintic order. Our result for the form of FRSB in the Gardner phase is similar to that found when $2\leq M<2.4714\cdots$, but differs from that given in the early paper of Gross et al. [Phys. Rev. Lett. 55, 304 (1985)]. Finally, we discuss the effects of fluctuations on our mean-field solutions using the scheme of Höller and Read [Phys. Rev. E 101, 042114 (2020)] and argue that such fluctuations will remove the continuous 1RSB transition in dimension $d$ when $8>d\geq 6$ leaving just the FRSB continuous transition (and possibly also the discontinuous 1RSB transition). We suggest values for $M$ and $p$ which might be used in simulations to resolve the outstanding question of whether fluctuation corrections can remove the discontinuous 1RSB transition.
## I Introduction

Spin models of the $p$-spin or Potts glass variety Gross _et al._ (1985); Gardner (1985) played an important role in the development of one of the current theories of structural glasses, the Random First Order Transition (RFOT) picture Kirkpatrick _et al._ (1989); Kirkpatrick and Thirumalai (2015); Lubchenko and Wolynes (2007); Cavagna (2009); Biroli and Bouchaud (2010). These models have been primarily studied in the infinite dimensionality limit, which is equivalent to mean-field theory. Of course what is really wanted is an understanding of what happens in the physical realm of two and three dimensions, and for these dimensions simulations Franz and Parisi (1999); Campellone _et al._ (1998) of models of the type studied in this paper have revealed that they behave completely differently from what is predicted by the mean-field calculations. In particular in the simulations there is no sign of the random first-order transition which is one of the central features of RFOT theory. Below the ideal glass transition there is supposed to exist the ideal glass state, a state of low configurational entropy but with a high stability due to the assumed paucity of glass states. This state in replica language has one-step replica symmetry breaking (1RSB). The transition temperature to this state is identified as the Kauzmann temperature in RFOT theory, which is the temperature at which the entropy of the glass state becomes equal to that of the crystalline state Kauzmann (1948). While a discontinuous transition was not seen in the simulations, evidence was found for the existence of long correlation lengths, which is also the behavior found in real-space renormalization group (RG) calculations Yeo and Moore (2012a, b) of $p$-spin models in three dimensions.
That simulations in three dimensions lead to a picture quite different to that which arises from mean-field calculations has largely been ignored: work has continued apace using the large $d$ limit and mean-field techniques. We have therefore begun a program of trying to understand why the mean-field picture does not extend to three dimensions Yeo and Moore (2020). For one particular $p$-spin model, the $M$-$p$ spin glass model with $p=6$, we were able to give an argument that the 1RSB state of that model was unstable in any finite dimension due to the excitation of droplets of flipped spins whose interface free energies are very small Moore (2006). That argument is specific to glass models with a particular form of time reversal symmetry which gives rise to a field theory in which the cubic term $w_{2}$ is zero (see Eq. (25)). Unfortunately the generic field theories thought relevant to glasses have $w_{2}$ non-zero and it is these which we study in this paper. The 1RSB phase for $p=6$ spin glasses is destroyed by non-perturbative droplet excitations. For generic glass models with $w_{2}$ non-zero, we can only find perturbative arguments. They are strong enough to lead us to the conclusion that the continuous phase transition to a state with 1RSB will not exist for dimensions $d$ less than $8$ and will be replaced by a continuous transition to a state with full replica symmetry breaking (FRSB). We shall suggest that fluctuation corrections to the coupling terms in Eq. (25) might also drive the system away from having a discontinuous transition to a 1RSB state to a continuous transition to a state with FRSB, but we do not know whether the fluctuation corrections are large enough to bring that about. We suspect that this question will only be resolved by simulations and values of $p$ and $M$ which might be appropriate for such simulations are suggested in Sec. III.
Our procedure is based upon the old idea Rudnick and Nelson (1976) of using the renormalization group recursion relations for the coupling constants of the field theory to map the coefficients of the critical field theory into a region where the correlation lengths are small and Landau theory (i.e. mean-field theory) with small fluctuation corrections can be employed. This program has also been used by Höller and Read Höller and Read (2020) on the problem of the de Almeida-Thouless transition of the Ising spin glass in a field de Almeida and Thouless (1978). It has a field theory identical to that of the $M$-$p$-spin glass models discussed in this paper, i.e. that of Eq. (25), but with different numerical values for the coefficients. (To discuss finite dimensions a gradient term of the form $\int d^{d}r\sum_{a,b}(\nabla q_{ab}(r))^{2}$ would need to be included in Eq. (25).) The program therefore requires us to understand in detail the stationary solutions, i.e. mean-field solutions, of Eq. (25), and the bulk of this paper is devoted to this task. Because Höller and Read discussed the RG aspects of the calculations in great detail, we shall treat those briefly, just focussing on the implications of numerical studies which were carried out after their paper was written Aguilar-Janita _et al._ (2023). In Sec. II we introduce the balanced $M$-$p$ models and the replica procedure which was used to average their free energy over disorder. The balanced $M$-$p$ spin models are very convenient to study with simulations as they are readily extended to finite dimensions on a $d$-dimensional lattice. When this is done the resulting field theory acquires the already mentioned gradient squared term. One of the attractions of the balanced version of these models is the absence of “hard modes”, which are just usually cast aside (as in the paper of Caltagirone et al. Caltagirone _et al._ (2011)), but this leaves the subsequent calculations of uncertain accuracy.
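For orientation, the generic replica Landau functional of the kind referred to here as Eq. (25) is conventionally written, through cubic order, in the following form. This is a sketch using the standard spin-glass invariants; the signs and combinatorial factors are illustrative and need not match the conventions of the paper.

```latex
F[q] \;=\; \frac{\tau}{2}\sum_{a\neq b} q_{ab}^{2}
      \;-\; \frac{w_{1}}{6}\,\mathrm{Tr}\, q^{3}
      \;-\; \frac{w_{2}}{6}\sum_{a\neq b} q_{ab}^{3}
      \;+\; O(q^{4})
```

At quartic order the five independent replica invariants are multiplied by $y_{1},\cdots,y_{5}$, and at quintic order the nine invariants by $z_{1},\cdots,z_{9}$; in finite dimension $d$ the gradient term $\int d^{d}r\sum_{a,b}(\nabla q_{ab}(r))^{2}$ mentioned above is added.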
We shall focus on the case $p=4$ and regard the number of types of Ising spins $M$ as a variable which can take non-integer values. The simulations of Campellone et al. Campellone _et al._ (1998) which failed to find a discontinuous 1RSB transition were in fact done for a closely related model with $p=4$ and $M=4$ in three dimensions. At cubic order there are two coupling constants, $w_{1}$ and $w_{2}$; at quartic order, there are five coupling constants, $y_{1},\cdots,y_{5}$; and at quintic order, there are nine coupling constants, $z_{1},\cdots,z_{9}$. The quadratic term $\tau$ vanishes as usual at the mean-field transition temperature $T_{c}$ and is negative when $T<T_{c}$. We calculate the “bare” value of all these coefficients in Appendix A for the case $p=4$. Fluctuation corrections will modify the bare values. In studying the model at non-integer values of $M$ we are anticipating that the fluctuation corrections can modify the bare coefficients. Studying the field theory of Eq. (25) for general values of the coefficients would be a good idea, but there are so many of these coefficients that we have limited our study to those values which can be reached by varying $M$ in the bare values. In Sec. III we discuss what we believe will be the likely consequences of fluctuation effects on the coupling constants. In Sec. II.1 we determine the free energy of the system in the high-temperature or paramagnetic phase where the order parameter $q_{ab}$ is independent of $a$ and $b$, that is, replica symmetric. At mean-field level $q_{ab}=0$ (but fluctuation corrections would leave it replica symmetric but non-zero). If the transition is continuous, so that $q_{ab}$ is small just below the transition, then the expansion of the Landau-Ginzburg free energy functional in powers of $q_{ab}$ should be useful and we give its form in Sec. II.2. Most workers have stopped at the quartic terms, but we have continued up to the quintic terms. This is necessary for two reasons.
First, the difference between the 1RSB free energy and the FRSB free energy is of $O(\tau^{5})$ (see, for example, Ref. Aspelmeier _et al._ (2008)). Thus one needs to worry about the quintic terms when working out whether the state which forms at the continuous transition is of 1RSB type or of FRSB type. Fortunately, we can show that the borderline value of $M$, $M^{**}\approx 2.47140$, between these types does not depend on the quintic terms. (For $2\leq M<M^{**}$ the continuous transition is to a state with FRSB, while for $M^{**}<M<3$ the continuous transition is to a state with 1RSB.) The second reason relates to studies of the Gardner transition Gross _et al._ (1985); Gardner (1985). The Gardner transition is the transition from a state with 1RSB to a state with FRSB as the temperature is lowered. Right from the beginning it was realized that the quintic terms are needed for its study Gross _et al._ (1985). We shall find, though, that our actual FRSB solution is quite different from that of Ref. Gross _et al._ (1985). This is discussed in Sec. II.5. A feature of the FRSB solutions is a singularity first noticed by Goldbart and Elderfield Goldbart and Elderfield (1985). They found that the FRSB solution for $q(x)$ at quartic level could have an unphysical singularity in the interval $0<x<1$, implying a negative probability for two states to have an overlap $q$, which is impossible. This problem was studied in some detail by Janiš and colleagues using a non-standard approach to replica symmetry breaking Janiš _et al._ (2013). We find in Sec. II.5 that the singularity at quartic level in fact determines the value of $M^{**}$ and that one avoids the singularity at $M>M^{**}$ by simply being in the state with 1RSB. At the Gardner transition the quintic terms remove the quartic-level singularities. However, similar singularities are to be found also at quintic level. 
Right at the Gardner transition temperature $T_{G}$, just where the free energies of the FRSB state and the 1RSB state are equal, the Goldbart-Elderfield singularity is at the lower breakpoint $x_{1}$. This causes the derivative of $q(x)$ at $x=x_{1}$ to be infinite. However, for temperatures $T$ less than $T_{G}$, the singularity is below $x_{1}$ and the derivative stays finite. In Sec. II.3 we derive the free energy at mean-field level for the 1RSB state. For $M>3$, when $w_{2}/w_{1}>1$, the transition from the high-temperature normal phase to a state with 1RSB is a discontinuous transition which takes place at a transition temperature above $T_{c}$. We suspect that this behavior would be seen for all values of $M>3$. However, if one truncates the free energy at quartic level, as is commonly done, the 1RSB state only exists in the interval $3<M\lesssim 6.64$. With the inclusion of the quintic terms, the 1RSB state forms at a discontinuous transition for $3.98\lesssim M\lesssim 14.41$ and for $3<M\lesssim 3.27$. Thus with the quintic form the 1RSB state persists up to larger values of $M$. We believe that if all terms were kept then the discontinuous transition to the 1RSB state would exist for all $M>3$. In Sec. II.4 we describe the simplifications which arise in the large-$M$ limit. Truncation leads to spurious features, as the Landau expansion cannot be expected to be accurate when $q_{ab}$ is not small. Another spurious feature of truncation is the apparent phase transition at low temperatures from the 1RSB state to the replica symmetric state with $q_{ab}$ non-zero. In the large-$M$ limit we can solve without truncation and such a transition does not arise (see Sec. II.3). The form of the FRSB solutions at both quartic and quintic level, together with the Gardner transitions, is in Sec. II.5. In Sec. 
III we discuss how fluctuation corrections to the coupling constants used in the mean-field solution will change the continuous 1RSB transition into a continuous FRSB transition, using extensions of the approach of Höller and Read Höller and Read (2020). We suspect that the discontinuous 1RSB transition might also suffer the same fate, based on the results of simulations in low dimensions Franz and Parisi (1999); Campellone _et al._ (1998), but we cannot support this possibility with analytical arguments. We conclude with suggestions for the kinds of model which could be studied numerically to resolve these issues, and also to resolve the question of whether the FRSB state can exist for dimensions $d<6$. ## II The balanced $M$-$p$ model in the fully connected limit In this section, we study the $M$-$p$ spin glass model in the fully connected limit, where one has $M$ different types of Ising spins, $S_{i}(x)$, $i=1,2,\cdots,M$, at each site $x$, coupled with spins on other sites via $p$-body interactions. Here we focus on the so-called balanced model introduced in Ref. Yeo and Moore (2020) for even $p$, where only the coupling between two sets of $p/2$ spins on two different sites is considered. It amounts to keeping only the soft mode of a more general $M$-$p$ model, in which all the couplings between $k$ spins and $p-k$ spins are included for $k=1,2,\cdots,p-1$. In this paper, we focus on the $p=4$ case. For $p=4$, the balanced model is given by four-spin interactions between a pair of two spins on two different sites. Each site has ${M\choose 2}$ different two-spin combinations. Therefore, for a given pair of sites, there are ${M\choose 2}^{2}$ terms in the Hamiltonian. 
The Hamiltonian is given by $\displaystyle H=-\frac{1}{2}\sum_{x\neq y}$ $\displaystyle\Big{[}\sum_{i_{1}<i_{2}}^{M}\sum_{j_{1}<j_{2}}^{M}J^{(i_{1},i_{2}),(j_{1},j_{2})}_{x,y}$ $\displaystyle\times S_{i_{1}}(x)S_{i_{2}}(x)S_{j_{1}}(y)S_{j_{2}}(y)\Big{]},$ (1) where each $J^{(i_{1},i_{2}),(j_{1},j_{2})}_{x,y}$ is drawn from the Gaussian distribution with zero mean and the variance $\frac{J^{2}}{NM^{p-1}}=\frac{J^{2}}{NM^{3}}.$ (2) We will set $J=1$ for convenience. After neglecting the terms of subleading order in $N$, we can write the replicated partition function averaged over the disorder as $\displaystyle\overline{Z^{n}}=$ $\displaystyle\mathrm{Tr}\exp\Big{[}\frac{\beta^{2}}{4NM^{3}}$ (3) $\displaystyle\times\sum^{n}_{a,b}\Big{\\{}\sum^{N}_{x}\sum_{i_{1}<i_{2}}^{M}S^{a}_{i_{1}}(x)S^{a}_{i_{2}}(x)S^{b}_{i_{1}}(x)S^{b}_{i_{2}}(x)\Big{\\}}^{2}\Big{]}.$ The diagonal terms ($a=b$) in the replica indices give a factor $\exp[nN\beta^{2}C]$ where $\displaystyle C=\frac{1}{4M^{3}}{M\choose 2}^{2}=\frac{(M-1)^{2}}{16M}.$ (4) For $a\neq b$, following the convention used in Ref. Caltagirone _et al._ (2011), we introduce the delta functions enforcing $\displaystyle q_{ab}=\frac{1}{NM^{2}}\sum^{N}_{x}\sum_{i_{1}<i_{2}}^{M}S^{a}_{i_{1}}(x)S^{a}_{i_{2}}(x)S^{b}_{i_{1}}(x)S^{b}_{i_{2}}(x)$ (5) in the replicated partition function. 
Using the integral representation of the delta function, we can write $\displaystyle\overline{Z^{n}}=e^{nN\beta^{2}C}\int\prod_{a<b}dq_{ab}d\mu_{ab}\;\exp[-NG(\underline{q},\underline{\mu})],$ (6) where $G(\underline{q},\underline{\mu})=-\frac{M}{4}\beta^{2}\sum_{a\neq b}q^{2}_{ab}+\frac{M}{2}\sum_{a\neq b}\mu_{ab}q_{ab}-\ln L(\underline{\mu})$ (7) and $\displaystyle L(\underline{\mu})=\underset{\\{S_{i}^{a}\\}}{\mathrm{Tr}}\exp\Big{[}\frac{1}{2M}\sum_{a\neq b}\mu_{ab}\sum^{M}_{i<j}S^{a}_{i}S^{a}_{j}S^{b}_{i}S^{b}_{j}\Big{]}.$ (8) In the large-$N$ limit, the integral is dominated by the saddle points which are determined by $\displaystyle\mu_{ab}=\beta^{2}q_{ab}$ (9) and $\displaystyle q_{ab}=\frac{1}{M^{2}}\left\langle\sum^{M}_{i<j}S^{a}_{i}S^{a}_{j}S^{b}_{i}S^{b}_{j}\right\rangle_{L},$ (10) where $\langle\cdots\rangle_{L}$ is evaluated with respect to $L$ in Eq. (8). The free energy $F$ is then given by $\displaystyle\frac{\beta F}{N}$ $\displaystyle=-\frac{1}{N}\lim_{n\to 0}\frac{1}{n}\ln\overline{Z^{n}}=-C\beta^{2}+\lim_{n\to 0}\frac{1}{n}G(\underline{q},\underline{\mu}).$ (11) ### II.1 Replica Symmetric Solution We first look for the saddle point solutions in the replica symmetric (RS) form $q_{ab}=q$ and $\mu_{ab}=\mu$ for all $a\neq b$. We have $\displaystyle\lim_{n\to 0}\frac{1}{n}G(q,\mu)=\frac{M}{4}\beta^{2}q^{2}-\frac{M}{2}\mu q-\lim_{n\to 0}\frac{1}{n}\ln L(\mu).$ (12) Using $\displaystyle\sum_{a\neq b}S^{a}_{i}S^{a}_{j}S^{b}_{i}S^{b}_{j}=\left(\sum_{a}S^{a}_{i}S^{a}_{j}\right)^{2}-n$ (13) in Eq. (8) and the Hubbard-Stratonovich transformation on the first term, we can rewrite Eq. 
(8) as $\displaystyle L(\mu)=$ $\displaystyle e^{-n\mu(K/2M)}\;\underset{\\{S_{i}^{a}\\}}{\mathrm{Tr}}\;\int D^{K}\bm{y}$ $\displaystyle\times\exp\left[\sqrt{\frac{\mu}{M}}\sum_{a}\sum_{i<j}^{M}y_{(i,j)}S^{a}_{i}S^{a}_{j}\right],$ (14) where $K\equiv{M\choose 2}$ (15) and the integral over the $K$-dimensional vector $\bm{y}=(y_{1},y_{2},\cdots,y_{K})\equiv(y_{(1,2)},y_{(1,3)},\cdots,y_{(M-1,M)})$ is defined as $\int D^{K}\bm{y}\equiv\prod_{\alpha=1}^{K}\left(\int_{-\infty}^{\infty}\frac{dy_{\alpha}}{\sqrt{2\pi}}e^{-y^{2}_{\alpha}/2}\right).$ (16) We therefore have $\displaystyle\lim_{n\to 0}\frac{1}{n}\ln L(\mu)=-\frac{K}{2M}\mu+M\ln 2+\int D^{K}\bm{y}\;\ln\zeta(\bm{y},\mu),$ (17) where $\displaystyle\zeta(\bm{y},\mu)\equiv\frac{1}{2^{M}}\underset{\\{S_{i}\\}}{\mathrm{Tr}}\;\exp\left[\sqrt{\frac{\mu}{M}}\bm{y}\cdot\bm{\Psi}\right]$ (18) with the $K$-dimensional vector $\bm{\Psi}=(\Psi_{1},\Psi_{2},\cdots,\Psi_{K})=(S_{1}S_{2},S_{1}S_{3},\cdots,S_{M-1}S_{M})$. The RS free energy is then given by $\displaystyle\frac{\beta F_{\rm RS}}{N}=$ $\displaystyle-C\beta^{2}+\frac{M}{4}\beta^{2}q^{2}-\frac{M}{2}\mu q+\frac{K}{2M}\mu$ $\displaystyle-M\ln 2-\int D^{K}\bm{y}\;\ln\zeta(\bm{y},\mu).$ (19) By varying the free energy with respect to $q$ and $\mu$, respectively, we have saddle point equations, $\displaystyle\mu=\beta^{2}q$ (20) and $\displaystyle q=$ $\displaystyle\frac{1}{M^{2}}\int D^{K}\bm{y}\;\frac{1}{\zeta^{2}(\bm{y},\mu)}$ $\displaystyle\times\sum_{\alpha=1}^{K}\left\\{\frac{1}{2^{M}}\underset{\\{S_{i}\\}}{\mathrm{Tr}}\;\Psi_{\alpha}\exp\left[\sqrt{\frac{\mu}{M}}\bm{y}\cdot\bm{\Psi}\right]\right\\}^{2}.$ (21) At high temperatures, the RS solutions are given by $q=\mu=0$. 
In that case, $\zeta=1$ and the corresponding free energy is $\frac{\beta F_{\rm RS}}{N}=-C\beta^{2}-M\ln 2.$ (22) The entropy $S=-\partial F/\partial T$ for this phase is $\frac{S_{\rm RS}}{N}=-C\beta^{2}+M\ln 2.$ (23) This becomes negative below $T_{*}=\sqrt{\frac{C}{M\ln 2}}=\frac{M-1}{4M\sqrt{\ln 2}}.$ (24) Some values of $T_{*}$ are $T_{*}$=0.20019 for $M=3$, 0.22521 for $M=4$, 0.25023 for $M=6$ and 0.25738 for $M=7$. It keeps increasing with $M$ and approaches 0.30028 in the $M\to\infty$ limit. ### II.2 Landau Expansion of Free Energy In order to study a possible continuous transition, we expand the free energy, Eq. (11) for small values of the order parameter. We first expand Eq. (8) to $O(\mu^{5})$ and take the trace over the spins. The detailed steps are given in Appendix A. Now using Eqs. (7), (9) and (11), we can write the free energy as $\displaystyle\frac{\beta F}{N}$ $\displaystyle=-C\beta^{2}-M\ln 2+\lim_{n\to 0}\frac{1}{n}\Big{[}\tau\sum_{a,b}q^{2}_{ab}$ (25) $\displaystyle- w_{1}\sum_{a,b,c}q_{ab}q_{bc}q_{ca}-w_{2}\sum_{a,b}q^{3}_{ab}-y_{1}\sum_{a,b}q^{4}_{ab}$ $\displaystyle- y_{2}\sum_{a,b,c}q^{2}_{ab}q^{2}_{bc}-y_{3}\sum_{a,b,c}q^{2}_{ab}q_{bc}q_{ca}-y_{5}\sum_{a,b,c,d}q_{ab}q_{bc}q_{cd}q_{da}$ $\displaystyle- z_{1}\sum_{a,b}q^{5}_{ab}-z_{2}\sum_{a,b,c}q^{3}_{ab}q^{2}_{bc}-z_{3}\sum_{a,b,c}q^{3}_{ab}q_{bc}q_{ca}$ $\displaystyle- z_{4}\sum_{a,b,c}q^{2}_{ab}q^{2}_{bc}q_{ca}-z_{5}\sum_{a,b,c,d}q^{2}_{ab}q_{bc}q_{cd}q_{da}$ $\displaystyle- z_{6}\sum_{a,b,c,d}q^{2}_{ab}q_{bc}q_{cd}q_{db}-z_{7}\sum_{a,b,c,d}q^{2}_{ab}q_{bc}q^{2}_{cd}$ $\displaystyle- z_{8}\sum_{a,b,c,d}q_{ab}q_{bc}q_{cd}q_{da}q_{ac}-z_{9}\sum_{a,b,c,d,e}q_{ab}q_{bc}q_{cd}q_{de}q_{ea}\Big{]},$ where $q_{aa}=0$, $q_{ab}=q_{ba}$, and all the sums over replica indices are without any restriction. 
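These values follow directly from Eqs. (4) and (24); a short numerical sketch (our own check, not part of the paper) reproduces them:

```python
import math

def C(M):
    # Eq. (4): C = (1 / (4 M^3)) binom(M, 2)^2 = (M - 1)^2 / (16 M)
    return math.comb(M, 2) ** 2 / (4 * M ** 3)

def T_star(M):
    # Eq. (24): temperature at which the RS entropy -C beta^2 + M ln 2 vanishes
    return math.sqrt(C(M) / (M * math.log(2)))

for M in (3, 4, 6, 7):
    print(M, round(T_star(M), 5))  # 0.20019, 0.22521, 0.25023, 0.25738

# Closed form (M - 1) / (4 M sqrt(ln 2)); the M -> infinity limit is 1 / (4 sqrt(ln 2)).
print(round(1 / (4 * math.sqrt(math.log(2))), 5))  # 0.30028
```

The monotonic increase of $T_{*}$ with $M$ is immediate from the closed form, since $(M-1)/M$ increases toward 1.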
The coefficient of the quadratic term is given by $\displaystyle\tau=\frac{M}{4}\beta^{2}\left(1-\frac{K}{M^{3}}\beta^{2}\right)=\frac{M}{4}\beta^{4}\left(T^{2}-T^{2}_{c}\right),$ (26) where $T_{c}\equiv\sqrt{\frac{K}{M^{3}}}=\frac{1}{M}\sqrt{\frac{M-1}{2}}.$ (27) This expression coincides with Eq. (27) of Ref. Caltagirone _et al._ (2011). Some values of $T_{c}$ are 0.33333 for $M=3$, 0.30619 for $M=4$, 0.26352 for $M=6$ and 0.24744 for $M=7$. Note that $T_{c}$ decreases with $M$ and becomes zero in the $M\to\infty$ limit. Note also that $T_{c}>T_{*}$ for $M=2,3,\cdots,6$ and $T_{*}>T_{c}$ for $M\geq 7$. The coefficients of the cubic terms are given by $\displaystyle w_{1}=\frac{\beta^{6}K}{6M^{3}},~{}~{}w_{2}=\frac{\beta^{6}K}{6M^{3}}(M-2).$ (28) The quartic and quintic coefficients are given in Appendix A as functions of $M$. It is known Gross _et al._ (1985); Caltagirone _et al._ (2011) that if the ratio of the cubic terms $w_{2}/w_{1}$, which in our model is equal to $M-2$, is greater than one, a discontinuous transition to the one-step replica symmetry breaking phase (1RSB) occurs. When $M=2$, our model reduces to the Ising spin glass, and we can check that the cubic and quartic coefficients coincide with those for the Ising spin glass except for the multiplicity factor of $2^{3}$ for $w_{i}$ and $2^{4}$ for $y_{i}$. ### II.3 The 1RSB Solution We now consider the case where $q_{ab}$ and $\mu_{ab}$ take the one-step replica symmetry breaking (1RSB) form, taking values $q_{1}$ and $\mu_{1}$ on the $n/m_{1}$ diagonal blocks (labelled by $B_{k}$, $k=1,2,\cdots,n/m_{1}$, of size $m_{1}$) and $q_{0}$ and $\mu_{0}$ outside the blocks. We then have the terms in Eq. (7) as $\displaystyle\sum_{a\neq b}q^{2}_{ab}=n[(m_{1}-1)q^{2}_{1}+(n-m_{1})q^{2}_{0}],$ (29) $\displaystyle\sum_{a\neq b}\mu_{ab}q_{ab}=n[(m_{1}-1)\mu_{1}q_{1}+(n-m_{1})\mu_{0}q_{0}].$ (30) We will focus on the 1RSB solutions with $q_{0}=\mu_{0}=0$. 
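The mean-field $T_{c}$ of Eq. (27) and its crossing with $T_{*}$ of Eq. (24) can be checked the same way (again a sketch of our own, not the paper's code):

```python
import math

def T_c(M):
    # Eq. (27): T_c = sqrt(K / M^3) with K = binom(M, 2)
    return math.sqrt(math.comb(M, 2) / M ** 3)

def T_star(M):
    # Eq. (24), closed form
    return (M - 1) / (4 * M * math.sqrt(math.log(2)))

for M in (3, 4, 6, 7):
    print(M, round(T_c(M), 5))  # 0.33333, 0.30619, 0.26352, 0.24744

# T_c > T_* for M = 2, ..., 6, while T_* > T_c for M >= 7, as stated in the text.
print([M for M in range(2, 10) if T_c(M) > T_star(M)])  # [2, 3, 4, 5, 6]
```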
By writing $\displaystyle\frac{1}{2M}\sum^{M}_{i<j}\sum_{a\neq b}\mu_{ab}S^{a}_{i}S^{a}_{j}S^{b}_{i}S^{b}_{j}$ $\displaystyle=\frac{\mu_{1}}{2M}\sum_{k=1}^{n/m_{1}}\sum^{M}_{i<j}\left\\{\left[\sum_{a\in B_{k}}S^{a}_{i}S^{a}_{j}\right]^{2}-m_{1}\right\\}$ (31) in Eq. (8) and by using the Hubbard-Stratonovich transformation, we have $\displaystyle\underset{\\{S_{i}^{a}\\}}{\mathrm{Tr}}\,\exp\left[\frac{1}{2M}\sum^{M}_{i<j}\sum_{a\neq b}\mu_{ab}S^{a}_{i}S^{a}_{j}S^{b}_{i}S^{b}_{j}\right]$ (32) $\displaystyle=$ $\displaystyle\exp\left[-n\frac{\mu_{1}K}{2M}\right]$ $\displaystyle\times\Big{[}\int D^{K}\bm{y}\;\Big{\\{}\underset{\\{S_{i}\\}}{\mathrm{Tr}}\,\exp\Big{[}\sqrt{\frac{\mu_{1}}{M}}\sum_{i<j}^{M}y_{(i,j)}S_{i}S_{j}\Big{]}\Big{\\}}^{m_{1}}\Big{]}^{n/m_{1}}.$ Therefore we have $\displaystyle\lim_{n\to 0}\frac{1}{n}\ln L(\underline{\mu})=$ $\displaystyle-\frac{K}{2M}\mu_{1}+M\ln 2$ $\displaystyle+\frac{1}{m_{1}}\ln\int D^{K}\bm{y}\;\zeta^{m_{1}}(\bm{y},\mu_{1}),$ (33) where $\zeta$ is defined in Eq. (18). Using Eqs. (29), (30) and (33) in Eq. 
(11), $\displaystyle\frac{\beta F_{\rm 1RSB}}{N}=$ $\displaystyle-C\beta^{2}-\frac{M}{4}\beta^{2}(m_{1}-1)q^{2}_{1}$ $\displaystyle+\frac{M}{2}(m_{1}-1)\mu_{1}q_{1}+\frac{K}{2M}\mu_{1}-M\ln 2$ $\displaystyle-\frac{1}{m_{1}}\ln\int D^{K}\bm{y}\;\zeta^{m_{1}}(\bm{y},\mu_{1}).$ (34) Varying the free energy with respect to $q_{1}$ and $\mu_{1}$, respectively, we have $\mu_{1}=\beta^{2}q_{1}$ (35) and $\displaystyle q_{1}$ $\displaystyle=\frac{1}{M^{2}}\frac{1}{\int D^{K}\bm{y}\;\zeta^{m_{1}}(\bm{y},\mu_{1})}$ (36) $\displaystyle\times\int D^{K}\bm{y}\;\zeta^{m_{1}-2}\sum_{\alpha=1}^{K}\left\\{\frac{1}{2^{M}}\underset{\\{S_{i}\\}}{\mathrm{Tr}}\;\Psi_{\alpha}\exp[\sqrt{\frac{\mu_{1}}{M}}\bm{y}\cdot\bm{\Psi}]\right\\}^{2}.$ Now varying the free energy with respect to $m_{1}$, we have $\displaystyle\frac{M}{4}\beta^{2}q^{2}_{1}+\frac{1}{m_{1}^{2}}\ln\int D^{K}\bm{y}\;\zeta^{m_{1}}(\bm{y},\mu_{1})$ $\displaystyle-\frac{1}{m_{1}}\frac{\int D^{K}\bm{y}\;\zeta^{m_{1}}(\bm{y},\mu_{1})\ln\zeta(\bm{y},\mu_{1})}{\int D^{K}\bm{y}\;\zeta^{m_{1}}(\bm{y},\mu_{1})}=0.$ (37) In summary, Eqs. (35), (36), and (37) are the saddle point equations one has to solve for the 1RSB state. Note that when $m_{1}=1$, we can explicitly evaluate $\displaystyle\int D^{K}\bm{y}\;\zeta(\bm{y},\mu_{1})=\exp\left[\frac{K}{2M}\mu_{1}\right].$ (38) From Eq. (34), we see that when $m_{1}=1$, the 1RSB free energy is equal to the RS one: $\frac{\beta F_{\rm 1RSB}}{N}\underset{m_{1}\to 1}{\rightarrow}-C\beta^{2}-M\ln 2=\frac{\beta F_{\rm RS}}{N}.$ (39) To determine the transition temperature $T_{c}^{\rm 1RSB}$ to the 1RSB state, we set $m_{1}=1$ in Eqs. (35), (36) and (37) and solve for $\beta$. 
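Eq. (38) is an exact Gaussian identity: since each $\Psi_{\alpha}^{2}=1$, the $K$ one-dimensional $y_{\alpha}$ integrals factorize into factors of $e^{\mu_{1}/2M}$. A small numerical sketch for $M=3$ (our own check) confirms it:

```python
import itertools
import math

import numpy as np

M, K = 3, 3  # K = binom(M, 2)
pairs = list(itertools.combinations(range(M), 2))
# All 2^M Ising configurations and their pair products Psi_alpha = S_i S_j.
spins = np.array(list(itertools.product([-1, 1], repeat=M)))
Psi = np.array([[s[i] * s[j] for (i, j) in pairs] for s in spins])  # (8, K)

# Gauss-Hermite rule: int D^K y f(y) = pi^(-K/2) sum_i w_i f(sqrt(2) x_i).
x, w = np.polynomial.hermite.hermgauss(16)

def zeta_integral(mu1):
    """Evaluate int D^K y zeta(y, mu1), with zeta as in Eq. (18)."""
    sigma = math.sqrt(mu1 / M)
    total = 0.0
    for idx in itertools.product(range(len(x)), repeat=K):
        y = math.sqrt(2.0) * x[list(idx)]
        weight = np.prod(w[list(idx)]) / math.pi ** (K / 2)
        total += weight * np.exp(sigma * Psi @ y).mean()  # mean = 2^(-M) Tr
    return total

mu1 = 0.8
print(abs(zeta_integral(mu1) - math.exp(K * mu1 / (2 * M))))  # essentially zero
```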
For $m_{1}=1$, we can combine these three equations into the single equation $f_{M}(\sigma)=0$ for the parameter $\displaystyle\sigma\equiv\sqrt{\frac{\mu_{1}}{M}},$ (40) where $\displaystyle f_{M}(\sigma)$ $\displaystyle\equiv e^{-K\sigma^{2}/2}\int D^{K}\bm{y}\;\Big{[}\zeta(\bm{y},\mu_{1})\ln\zeta(\bm{y},\mu_{1})$ (41) $\displaystyle-\frac{\sigma^{2}}{4}\frac{\sum_{\alpha=1}^{K}\left\\{2^{-M}\mathrm{Tr}\;\Psi_{\alpha}\exp\left[\sigma\bm{y}\cdot\bm{\Psi}\right]\right\\}^{2}}{\zeta(\bm{y},\mu_{1})}\Big{]}-\frac{K}{2}\sigma^{2}.$ Note that $\zeta(\bm{y},\mu_{1})$ is a function of $\sigma$. If there exists a nonzero solution $\sigma$ of $f_{M}(\sigma)=0$, one can obtain a nonzero $q_{1}$ from Eq. (36) and the transition temperature $T_{c}^{\rm 1RSB}$ from Eq. (35). Figure 1: $f_{M}(\sigma)$ defined in Eq. (41) for $M=3$. A nonzero solution $\sigma$ of $f_{M}(\sigma)=0$ would signal a discontinuous transition into the 1RSB state. Figure 2: Same as Fig. 1 with $M=4$. We solve this equation by numerically evaluating the multi-dimensional integrals in Eq. (41). In Figs. 1 and 2, $f_{M}$ is plotted as a function of $\sigma$ for $M=3$ and $M=4$. As we can see from the figures, $f_{M}(\sigma)$ starts off very flat and increases monotonically for large values of $\sigma$. For $M=3$, Fig. 1 clearly shows a monotonic increase as a function of $\sigma$, so we can conclude that the only solution of $f_{3}(\sigma)=0$ is $\sigma=0$. From Eq. (36), we then have $q_{1}=0$ and thus no discontinuous transition in this case. For $M=4$, we have to evaluate six-dimensional ($K=6$) integrals in Eq. (41). For that, we use Monte Carlo methods, and the results are shown in Fig. 2. The error bars reflect the statistical uncertainty from the random sampling of points in the Monte Carlo evaluation of the integrals. We have averaged over 30 trials for each data point. 
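As an independent cross-check of the $M=3$ result (a sketch of our own, not the paper's code), $K=3$ is small enough that the Gaussian integral in Eq. (41) can be evaluated by deterministic Gauss–Hermite quadrature, with the trace over the $2^{M}$ spin configurations done exactly:

```python
import itertools
import math

import numpy as np

M, K = 3, 3  # K = binom(M, 2)
pairs = list(itertools.combinations(range(M), 2))
# All 2^M Ising configurations and their pair products Psi_alpha = S_i S_j.
spins = np.array(list(itertools.product([-1, 1], repeat=M)))
Psi = np.array([[s[i] * s[j] for (i, j) in pairs] for s in spins])  # (8, K)

# Gauss-Hermite rule: int D^K y f(y) = pi^(-K/2) sum_i w_i f(sqrt(2) x_i).
x, w = np.polynomial.hermite.hermgauss(24)

def f_M(sigma):
    """f_M(sigma) of Eq. (41), via tensor-product quadrature."""
    total = 0.0
    for idx in itertools.product(range(len(x)), repeat=K):
        y = math.sqrt(2.0) * x[list(idx)]
        weight = np.prod(w[list(idx)]) / math.pi ** (K / 2)
        boltz = np.exp(sigma * Psi @ y)               # exp(sigma y . Psi)
        zeta = boltz.mean()                           # 2^(-M) Tr exp(...)
        tr_psi = (Psi * boltz[:, None]).mean(axis=0)  # 2^(-M) Tr Psi_a exp(...)
        total += weight * (zeta * math.log(zeta)
                           - 0.25 * sigma ** 2 * np.dot(tr_psi, tr_psi) / zeta)
    return math.exp(-K * sigma ** 2 / 2) * total - K * sigma ** 2 / 2
```

Evaluating $f_{3}$ on a grid of $\sigma$ reproduces the monotonic behavior of Fig. 1; for $M=4$ the same tensor-product approach with $K=6$ becomes expensive, which is why Monte Carlo sampling is the natural choice there.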
Since $f_{4}(\sigma)$ stays very flat for small $\sigma$ before increasing to large positive values, it is quite difficult to determine from this plot alone whether a nonzero solution $\sigma$ exists. To understand the situation more clearly, we study the behavior of $f_{M}(\sigma)$ for small $\sigma$. We can show (see Appendix B for details) that for small $\sigma$, the leading order in the small-$\sigma$ expansion of $f_{M}(\sigma)$ is $O(\sigma^{6})$. In fact, if we write $f_{M}(\sigma)=\sum_{i=0}^{\infty}c_{i}(M)\sigma^{i}$, we find that $c_{i}=0$ for $i$ odd, $c_{0}=c_{2}=c_{4}=0$ and $\displaystyle c_{6}(M)=-\frac{M}{24}(M-1)(M-3)$ (42) for $M=3,4,5,\cdots$. Therefore, for $M=3$, the leading order is actually $O(\sigma^{8})$. The next-order coefficient is given by $\displaystyle c_{8}(M)=-\frac{M}{48}(M-1)(3M^{2}-27M+47),$ (43) for $M\geq 3$. Some steps needed to obtain these are given in Appendix B. We note that $c_{8}(M=3)>0$. This is consistent with the monotonic increase of $f_{3}(\sigma)$ shown in Fig. 1. For $M>3$, $c_{6}$ becomes negative. Combining this fact with the monotonic increase for large $\sigma$, we can conclude that there exists a nonzero solution of $f_{M}(\sigma)=0$ and that a discontinuous transition for $M>3$ is expected. From Eq. (43), we find that $c_{8}(M)>0$ for $M\lesssim 6.64$; for these values of $M$, we can therefore estimate the solution as $\sigma\simeq\sqrt{-c_{6}(M)/c_{8}(M)}$. This program, however, fails when $c_{8}(M)<0$, that is, for $M\gtrsim 6.64$ (recall that $c_{6}<0$ for $M>3$). We need to go to higher order to study the 1RSB transition beyond this value of $M$. We find, however, that the method in Appendix B becomes too cumbersome for obtaining $c_{10}$. The Landau expansion of the free energy given in Eq. (25) provides a more useful tool. Since $\sigma^{2}\sim\mu_{1}\sim q_{1}$, $O(\sigma^{6})$ and $O(\sigma^{8})$ correspond to the cubic and quartic orders in $q_{ab}$, respectively, and we need quintic-order terms in $q_{ab}$ to evaluate $c_{10}$. 
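The coefficients of Eqs. (42) and (43) are simple polynomials in $M$, so the quoted boundaries follow directly; the sketch below (our own, with $\beta$ set to 1 since it cancels in the ratios) also checks the identification $c_{6}=-(M^{3}/2\beta^{6})(w_{2}-w_{1})$ of Eq. (46) against the cubic coefficients of Eq. (28):

```python
import math

def c6(M):
    return -(M / 24) * (M - 1) * (M - 3)                     # Eq. (42)

def c8(M):
    return -(M / 48) * (M - 1) * (3 * M ** 2 - 27 * M + 47)  # Eq. (43)

# c8 changes sign at the larger root of 3 M^2 - 27 M + 47:
M_hi = (27 + math.sqrt(27 ** 2 - 4 * 3 * 47)) / 6
print(round(M_hi, 2))  # 6.64

def sigma_root_estimate(M):
    # Small-sigma estimate sqrt(-c6 / c8), valid for 3 < M < M_hi.
    return math.sqrt(-c6(M) / c8(M))

def c6_from_w(M):
    # Consistency with Eq. (28) via Eq. (46), beta = 1.
    K = M * (M - 1) / 2
    w1 = K / (6 * M ** 3)
    w2 = (M - 2) * w1
    return -(M ** 3 / 2) * (w2 - w1)
```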
In Appendix C, we apply the 1RSB form directly to $q_{ab}$ in Eq. (25). When $m_{1}=1$, the saddle point equations can be combined into the form $\displaystyle-\frac{1}{2}(w_{2}-w_{1})q^{3}_{1}-(y_{1}-y_{3}+y_{5})q_{1}^{4}-\frac{3}{2}z_{1}^{\rm eff}q^{5}_{1}=0,$ (44) where $\displaystyle z_{1}^{\rm eff}\equiv z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9}.$ (45) Recalling that $q_{1}=\mu_{1}/\beta^{2}=M\sigma^{2}/\beta^{2}$ and using the values of $w_{i}$ and $y_{i}$ given in Appendix A, we can identify the first two terms in Eq. (44) with the small-$\sigma$ expansion of $f_{M}(\sigma)$, since we can rewrite $\displaystyle c_{6}(M)=-\frac{M^{3}}{2\beta^{6}}(w_{2}-w_{1}),$ (46) and $\displaystyle c_{8}(M)=-\frac{M^{4}}{\beta^{8}}(y_{1}-y_{3}+y_{5}).$ (47) It follows that the last term in Eq. (44) gives $\displaystyle c_{10}(M)=-\frac{3M^{5}}{2\beta^{10}}z_{1}^{\rm eff}.$ (48) The explicit expression as a function of $M$ is given in Eqs. (C) and (144) in Appendix C. In Fig. 3, $(y_{1}-y_{3}+y_{5})/\beta^{8}$ and $z_{1}^{\rm eff}/\beta^{10}$ are displayed as functions of $M$. We note that $y_{1}-y_{3}+y_{5}$ is negative (and $c_{8}$ is positive) for $2.35\lesssim M\lesssim 6.64$. Therefore, as mentioned above, we can find the 1RSB solution for $m_{1}=1$ for $3<M\lesssim 6.64$ within the quartic theory. The result for the 1RSB transition temperature obtained in this way is shown as a solid red line in Fig. 4 (a). We note, however, that the result becomes unreliable as we approach the boundary value $M\simeq 6.64$, where it shows a spurious divergent behavior. We now study how the quintic theory may improve this result. The quintic contribution can be summarized by $z_{1}^{\rm eff}$, which is negative for $4.37\lesssim M\lesssim 12.46$ (and for the narrow region $2\leq M\lesssim 2.12$). Since $c_{10}$ is positive in that interval, we have a chance to extend the result of the quartic theory to larger values of $M$. As one can see in Fig. 
4 (a), the 1RSB transition line calculated within the quintic theory indeed extends to larger values of $M$. However, since Eq. (44) for $q_{1}\neq 0$ becomes a quadratic equation for $q_{1}$, there are intervals of $M$ where no real solution exists. We find that for $3.27\lesssim M\lesssim 3.98$ and for $M\gtrsim 14.41$, the solutions of this equation become complex and no 1RSB solution can be obtained. This can be seen in Fig. 4 (b), where a segment of the 1RSB transition line is missing. Also, as in the quartic theory, the transition line displays an apparent divergent behavior as we approach the boundary value $M\simeq 14.41$. Therefore, we can conclude that it is possible to obtain the 1RSB transition line using truncated models, but the truncation of the free energy at a specific order produces some unphysical features. Comparing the results of the quartic and quintic theories in Fig. 4 (a), we expect that a systematic improvement may occur if we go to even higher orders. We also note that the 1RSB transition temperatures obtained in this way always stay above $T_{*}$. The 1RSB transition line discussed above is obtained by setting $m_{1}=1$, where the 1RSB free energy coincides with that of the high-temperature RS phase (with $q=0$). Using the results in Appendix C, we can obtain 1RSB solutions for general values of $0\leq m_{1}\leq 1$ for the truncated model. Rather unexpectedly, we find that for a given $M$, the 1RSB solution ceases to exist below a certain finite temperature, at which $m_{1}=0$. We note that if $m_{1}=0$, the 1RSB free energy becomes that of the RS phase with nonzero $q$ (see Eq. (132)). Therefore, below that temperature, we only have the RS solution with nonzero $q$. This is illustrated in Fig. 5, where we plot the free energies of both the 1RSB and RS solutions calculated within a truncated model. One can clearly see that the 1RSB solution exists only in a finite temperature interval. 
Within that interval, the system is in the 1RSB phase which has a higher free energy than the RS one with nonzero $q$. However, below that interval, there is no 1RSB solution, so the system returns to the RS phase. We believe that this rather unusual behavior is caused by the truncation of the model in an arbitrary order. In the large-$M$ limit considered in Sec. II.4, where one can find the 1RSB solutions without truncation, we find that the 1RSB solution continues down to zero temperature and has a higher free energy than the RS one. Figure 3: $(y_{1}-y_{3}+y_{5})/\beta^{8}$ (dashed line) and $z_{1}^{\rm eff}/\beta^{10}$ (solid line) as functions of $M$. In the large-$M$ limit, they approach 1/16 and 1/20, respectively. Figure 4: (a) Red and blue solid lines are the 1RSB transition temperatures $T_{c}^{\rm 1RSB}$ as functions of $M$ for the $p=4$ balanced $M$-$p$ model expanded up to quartic (red) and to quintic (blue) orders in the order parameter. Dashed and dot-dashed lines are $T_{*}$ (Eq. (24)) and $T_{c}$ (Eq. (27)), respectively. Two closely-spaced horizontal lines are the large-$M$ limits of $T_{*}$ (lower one, Eq. (62)) and $T_{c}^{\rm 1RSB}$ (upper one, Eq. (70)). (b) Close-up of the same plot for $3\leq M\leq 4$. There is a gap in the solid blue line in the interval $3.27\lesssim M\lesssim 3.98$, where no 1RSB solution exists at $m_{1}=1$ for the quintic theory. The red line corresponds to the quartic theory, which has no gap. The dot-dashed line is $T_{c}$. Figure 5: Dimensionless free energies per spin of the 1RSB solution (solid line) and the RS solution with $q\neq 0$ (dashed line) as functions of temperature calculated for the quartic $M=4$ model. For each case, the free energy difference ($\Delta F$) from that of the high-temperature RS solution ($q=0$, Eq. (22)) is plotted. The 1RSB solution exists only in the temperature interval $0.212\leq T\leq 0.311$. 
### II.4 The Large-$M$ Limit In this subsection, we consider the situation where we take the limit $M\to\infty$ from the start. In the large-$M$ limit, Eq. (8) can be rewritten as $\displaystyle L(\underline{\mu})$ $\displaystyle=\underset{\\{S_{i}^{a}\\}}{\mathrm{Tr}}\exp\left[\frac{1}{4M}\sum_{a\neq b}\mu_{ab}\left\\{\left(\sum^{M}_{i}S^{a}_{i}S^{b}_{i}\right)^{2}-M\right\\}\right]$ $\displaystyle\simeq\underset{\\{S_{i}^{a}\\}}{\mathrm{Tr}}\exp\left[\frac{M}{4}\sum_{a\neq b}\mu_{ab}\left(\frac{1}{M}\sum^{M}_{i}S^{a}_{i}S^{b}_{i}\right)^{2}\right],$ (49) where we have neglected the subleading terms in the large-$M$ limit. We now introduce the delta function $\delta(MQ_{ab}-\sum_{i}^{M}S_{i}^{a}S_{i}^{b})$ using the integral representation with the variable $\lambda_{ab}$. Then we have from Eq. (6) $\displaystyle\overline{Z^{n}}=$ $\displaystyle e^{nN\beta^{2}C}\int\prod_{a<b}dq_{ab}d\mu_{ab}dQ_{ab}d\lambda_{ab}$ $\displaystyle\times$ $\displaystyle\exp\Big{[}-NM\Big{\\{}-\frac{1}{4}\beta^{2}\sum_{a\neq b}q^{2}_{ab}+\frac{1}{2}\sum_{a\neq b}\mu_{ab}q_{ab}$ $\displaystyle-\frac{1}{4}\sum_{a\neq b}\mu_{ab}Q^{2}_{ab}+\frac{1}{2}\sum_{a\neq b}\lambda_{ab}Q_{ab}-\ln\tilde{L}(\underline{\lambda})\Big{\\}}\Big{]}$ (50) where $\displaystyle\widetilde{L}(\underline{\lambda})=\underset{\\{S^{a}\\}}{\mathrm{Tr}}\exp\left[\frac{1}{2}\sum_{a\neq b}\lambda_{ab}S^{a}S^{b}\right].$ (51) In the large-$M$ limit, the integral is dominated by the saddle points. In particular, the saddle point equations obtained by varying $q_{ab}$ and $\mu_{ab}$ are, respectively, $\displaystyle\mu_{ab}=\beta^{2}q_{ab}$ (52) and $\displaystyle q_{ab}=\frac{1}{2}Q^{2}_{ab}.$ (53) Inserting this into the above equation, we can rewrite Eq. 
(50) as $\displaystyle\overline{Z^{n}}=e^{nN(\beta J)^{2}C}\int\prod_{a<b}dQ_{ab}d\lambda_{ab}\;\exp[-NM\widetilde{G}(\underline{Q},\underline{\lambda})],$ (54) with $\widetilde{G}(\underline{Q},\underline{\lambda})=-\frac{1}{16}(\beta J)^{2}\sum_{a\neq b}Q^{4}_{ab}+\frac{1}{2}\sum_{a\neq b}\lambda_{ab}Q_{ab}-\ln\tilde{L}(\underline{\lambda}).$ (55) The free energy in the large-$M$ limit is then given by $\displaystyle\frac{\beta F}{NM}=-(\beta J)^{2}C_{\infty}+\lim_{n\to 0}\frac{1}{n}\widetilde{G}(\underline{Q},\underline{\lambda}),$ (56) where $\displaystyle C_{\infty}=\lim_{M\to\infty}\frac{C}{M}=\frac{1}{16}.$ (57) Note that we have restored $J^{2}$, which sets the variance in Eq. (2), explicitly. This free energy is exactly the same as the one for the fully connected $p$-spin glass model with $p=4$, which is given by the Hamiltonian $\displaystyle H=-\sum_{1\leq x_{1}<\cdots<x_{p}\leq N}J_{x_{1},x_{2},\cdots,x_{p}}S(x_{1})S(x_{2})\cdots S(x_{p}),$ (58) for the Ising spin $S(x)$ at site $x$. The bonds $J_{x_{1},x_{2},\cdots,x_{p}}$ are independent random variables drawn from the Gaussian distribution with zero mean and variance $\displaystyle\frac{p!\tilde{J}^{2}}{2N^{p-1}}.$ (59) The free energy for this model is exactly the same as Eq. (56) with $\tilde{J}^{2}=J^{2}/4$. (The formula for this correspondence for general $p$ is $\tilde{J}^{2}=4C_{\infty}J^{2}$.) We can readily use the known results for this model. The replica symmetric phase with $\lambda=Q=0$ has the free energy per site $\frac{\beta F_{\rm RS}}{N}=-\frac{(\beta\tilde{J})^{2}}{4}-\ln 2.$ (60) The entropy per site is then given by $\frac{S_{\rm RS}}{N}=\ln 2-\frac{(\beta\tilde{J})^{2}}{4},$ (61) which becomes negative for temperatures $T/\tilde{J}<T^{\infty}_{*}/\tilde{J}\equiv 1/(2\sqrt{\ln 2})$. Therefore in the original units $\displaystyle T^{\infty}_{*}/J=\frac{1}{4\sqrt{\ln 2}}\simeq 0.30028.$ (62) This is the same value as that obtained in the $M\to\infty$ limit of Eq. (24). 
If we use the 1RSB form for $Q_{ab}$ and $\lambda_{ab}$ in Eq. (56), the free energy becomes $\displaystyle\frac{\beta F^{\infty}_{\rm 1RSB}}{N}=$ $\displaystyle-\frac{(\beta\tilde{J})^{2}}{4}[1+(m_{1}-1)Q_{1}^{p}]+\frac{1}{2}(m_{1}-1)\lambda_{1}Q_{1}$ $\displaystyle+$ $\displaystyle\frac{\lambda_{1}}{2}-\ln 2-\frac{1}{m_{1}}\ln\int Dy\;\cosh^{m_{1}}(\sqrt{\lambda_{1}}y).$ (63) The saddle point equations are as follows: $\lambda_{1}=\frac{(\beta\tilde{J})^{2}}{2}pQ_{1}^{p-1},$ (64) and $Q_{1}=\frac{\int Dy\;\cosh^{m_{1}}(\sqrt{\lambda_{1}}y)\tanh^{2}(\sqrt{\lambda_{1}}y)}{\int Dy\;\cosh^{m_{1}}(\sqrt{\lambda_{1}}y)}.$ (65) There is another saddle point equation which is obtained by varying the free energy with respect to $m_{1}$: $\displaystyle\frac{(\beta\tilde{J})^{2}}{4}Q_{1}^{p}(p-1)+\frac{1}{m_{1}^{2}}\ln\int Dy\;\cosh^{m_{1}}(\sqrt{\lambda_{1}}y)$ $\displaystyle~{}~{}-\frac{1}{m_{1}}\frac{\int Dy\;\cosh^{m_{1}}(\sqrt{\lambda_{1}}y)\ln(\cosh(\sqrt{\lambda_{1}}y))}{\int Dy\;\cosh^{m_{1}}(\sqrt{\lambda_{1}}y)}=0.$ (66) Again, when $m_{1}=1$, $F_{\rm 1RSB}$ becomes equal to $F_{\rm RS}$. We determine the temperature $T^{\infty}_{\rm 1RSB}$ by setting $m_{1}=1$. Using $\int Dy\;\cosh(\sqrt{\lambda_{1}}y)=e^{\lambda_{1}/2}$, we can combine Eqs. (65), (66) and (64) to get $\displaystyle e^{-\lambda_{1}/2}\int Dy\;\cosh(\sqrt{\lambda_{1}}y)\Big{[}\ln\cosh(\sqrt{\lambda_{1}}y)$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\frac{p-1}{2p}\lambda_{1}\tanh^{2}(\sqrt{\lambda_{1}}y)\Big{]}-\frac{\lambda_{1}}{2}=0.$ (67) If we define $\displaystyle\nu\equiv\sqrt{\lambda_{1}},$ (68) then the above equation can be rewritten as $f_{\infty}(\nu)=0$ where $\displaystyle f_{\infty}(\nu)\equiv$ $\displaystyle~{}e^{-\nu^{2}/2}\int Dy\;\Big{[}\cosh(\nu y)\ln\cosh(\nu y)$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\frac{p-1}{2p}\nu^{2}\frac{\sinh^{2}(\nu y)}{\cosh(\nu y)}\Big{]}-\frac{\nu^{2}}{2}.$ (69) This is to be compared with the corresponding Eq. (41) for finite $M$. 
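Since Eq. (69) involves only a one-dimensional Gaussian integral, solving $f_{\infty}(\nu)=0$ is numerically straightforward. The sketch below (our own implementation, using Gauss–Hermite quadrature and bisection) also converts the root into the transition temperature via Eqs. (64) and (65) at $m_{1}=1$:

```python
import math

import numpy as np

p = 4
# int Dy f(y) = sum_i weights_i f(nodes_i), built from Gauss-Hermite points.
x, w = np.polynomial.hermite.hermgauss(80)
nodes = math.sqrt(2.0) * x
weights = w / math.sqrt(math.pi)

def f_inf(nu):
    """f_infinity(nu) of Eq. (69); note sinh^2/cosh = tanh^2 * cosh."""
    c = np.cosh(nu * nodes)
    integrand = c * np.log(c) - (p - 1) / (2 * p) * nu ** 2 * np.tanh(nu * nodes) ** 2 * c
    return math.exp(-nu ** 2 / 2) * np.dot(weights, integrand) - nu ** 2 / 2

# f_inf < 0 at small nu (Eq. (71)) and > 0 at large nu, so bisect for the root.
lo, hi = 0.5, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f_inf(mid) < 0 else (lo, mid)
nu_root = 0.5 * (lo + hi)

# Eqs. (64) and (65) at m_1 = 1 then give T in units of Jtilde.
lam1 = nu_root ** 2
Q1 = math.exp(-lam1 / 2) * np.dot(
    weights, np.cosh(nu_root * nodes) * np.tanh(nu_root * nodes) ** 2)
T_1RSB = math.sqrt(p * Q1 ** (p - 1) / (2 * lam1))
```

The root and the resulting temperature can be compared with the values quoted below; recall the rescaling $\tilde{J}=J/2$ when converting to the original units.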
In Fig. 6, $f_{\infty}(\nu)$ is plotted for $p=4$. From the nonzero solution, the corresponding $Q_{1}$ in Eq. (65) and the relation Eq. (64), we obtain $T^{\infty}_{\rm 1RSB}/\tilde{J}\simeq 0.61688$, or in the original units $\displaystyle T^{\infty}_{\rm 1RSB}/J\simeq 0.30844>T^{\infty}_{*}.$ (70) For $f_{\infty}(\nu)$, the small-$\nu$ expansion yields $\displaystyle f_{\infty}(\nu)=$ $\displaystyle\left(\frac{2-p}{4p}\right)\nu^{4}+\left(\frac{2p-3}{6p}\right)\nu^{6}$ $\displaystyle+\left(\frac{5(4-3p)}{24p}\right)\nu^{8}+O(\nu^{10}).$ (71) We can see that for $p>2$, $f_{\infty}(\nu)$ has a negative slope near the origin. For $p=2$, the leading-order term is $\nu^{6}$ with a positive coefficient. Figure 6: $f_{\infty}(\nu)$ defined in Eq. (69) for $p=4$. There is a nonzero solution $\nu\simeq 2.1163$ of the equation $f_{\infty}(\nu)=0$. ### II.5 The FRSB Solution Here we consider the FRSB solutions. We first write the free energy in terms of the Parisi function $q(x)$ for $0\leq x\leq 1$. 
It is given by $\displaystyle\frac{\beta F_{\rm FRSB}}{N}=-C\beta^{2}-M\ln 2-\tau\langle q^{2}\rangle$ $\displaystyle-w_{1}\int_{0}^{1}dx\;\left\{xq^{3}(x)+3q(x)\int_{0}^{x}dy\;q^{2}(y)\right\}+w_{2}\langle q^{3}\rangle$ $\displaystyle+y_{1}\langle q^{4}\rangle+y_{2}\Big\{\langle q^{4}\rangle-2\langle q^{2}\rangle^{2}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\int_{0}^{1}dx\int_{0}^{x}dy\;(q^{2}(x)-q^{2}(y))^{2}\Big\}$ $\displaystyle-y_{3}\Big\{2\langle q\rangle\langle q^{3}\rangle+\int_{0}^{1}dx\;q^{2}(x)\int_{0}^{x}dy\;(q(x)-q(y))^{2}\Big\}$ $\displaystyle-y_{5}\Big\{\langle q^{2}\rangle^{2}-4\langle q\rangle^{2}\langle q^{2}\rangle$ $\displaystyle~{}~{}~{}~{}~{}-4\langle q\rangle\int_{0}^{1}dx\;q(x)\int_{0}^{x}dy\;(q(x)-q(y))^{2}$ $\displaystyle~{}~{}~{}~{}~{}-\int_{0}^{1}dx\int_{0}^{x}dy\int_{0}^{x}dz\;(q(x)-q(y))^{2}(q(x)-q(z))^{2}\Big\}$ $\displaystyle+z_{1}\langle q^{5}\rangle,$ (72) where $\displaystyle\langle q^{k}\rangle=\int_{0}^{1}q^{k}(x)dx,$ (73) and we have kept only the first quintic term. The FRSB expressions for the rest of the quintic terms are given in Appendix E. Because the stationarity equations of the FRSB functional are so cumbersome, we have relegated them to Appendices D and E. We can only make progress in solving these equations at the quintic level by making simplifications. The full set of quintic terms is given in Appendix E, but in Eq. (72) we have reduced them from nine terms to just one. We choose the numerical value of that $z_{1}$ to equal $z_{1}^{\rm{eff}}$ in Eq. (C). A second simplification was to set $y_{5}=0$. When this is done, the differential equation of Eq. (153) can be solved analytically. We do not think that setting $y_{5}$ to zero does much harm to the physics of the problem. For example, the Goldbart-Elderfield singularity Goldbart and Elderfield (1985) still arises. 
Fortunately, at quartic level, that is, if we set $z_{1}=0$, one can solve the differential equation for $q(x)$, Eq. (153), analytically. There is no need to set $y_{5}$ to zero when working at quartic level. Because it is a first-order differential equation, its solution depends on one adjustable constant $x_{0}$. The result is $\displaystyle q(x)=\frac{w_{1}y_{3}-2w_{2}y_{5}-\frac{2(y_{3}-2xy_{5})(y_{3}^{2}-4y_{1}y_{5})x_{0}}{\sqrt{y_{1}-xy_{3}+x^{2}y_{5}}}}{2(-y_{3}^{2}+4y_{1}y_{5})}.$ (74) Figure 7: Plots of $q(x)$ for the FRSB solution (red) and the 1RSB solution (black) at $M=2.25$ and $\tau=-0.001$. The FRSB state is the equilibrium state as it has the higher free energy. These plots are for the quartic theory. $x_{1}$ for the FRSB solution is where $q(x)$ goes to zero, $x_{1}\approx 0.24437$, while the upper breakpoint $x_{2}\approx 0.25870$. Figure 8: Plots of $q(x)$ for the FRSB solution (red) and the 1RSB solution (black) at $M=2.50$ for $\tau=-0.01$. This calculation has been done at quintic level, with just one quintic coefficient, with $z_{1}=z_{1}^{\rm{eff}}$ and with $y_{5}=0$, for both the FRSB and 1RSB solutions, in order to simplify the numerical work in the FRSB case. At this value of $M$, the first transition is to the 1RSB state at $\tau=0$, but below the Gardner transition temperature $T_{G}$ (which corresponds to a value of $\tau_{G}\approx-0.0078$) there is a transition to a state with FRSB. Below $T_{G}$, this FRSB state has a higher free energy than the corresponding 1RSB state. Physical requirements on the choice of $x_{0}$ are that for some interval $0<x_{1}<x<x_{2}<1$, $q(x)$ is real, positive, and an increasing function of $x$. For the solutions discussed in this paper, $x_{1}$ is the point where $q(x_{1})=0$; solving this equation gives $x_{1}$ as a function of $x_{0}$. The upper breakpoint, $x_{2}$, is where $q(x)$ reaches the constant value $q(x_{2})$ that it takes in the interval $x_{2}<x<1$. 
Its value as a function of $x_{0}$ is determined by solving Eq. (152) at the value $x=x_{2}$, which relates $x_{2}$ to $x_{0}$. The value of $x_{0}$ itself can be determined by choosing it so that the right-hand side of Eq. (145) vanishes, for any value of $x>x_{1}$. The FRSB solution for the case $M=2.25$ at a value of $\tau=-0.001$ is shown in Fig. 7. It is contrasted with the form of $q(x)$ for the 1RSB case at the same values of $M$ and $\tau$. Note that there is an inverse square root singularity in $q(x)$ when $x=x_{s}$, where $y_{1}-x_{s}y_{3}+x_{s}^{2}y_{5}=0$, but this singularity, the Goldbart-Elderfield singularity Goldbart and Elderfield (1985), causes no problem so long as it occurs at a value of $x_{s}$ which is greater than $x_{2}$ or less than $x_{1}$. In the limit $\tau\to 0$, $q(x)$ also goes to zero ($\sim|\tau|$), so Eq. (152) fixes $x_{2}\to w_{2}/w_{1}=(M-2)$. Hence a FRSB solution can only exist if $x_{2}<x_{s}$, which translates to $M\leq M^{**}=2+\sqrt{2}/3\approx 2.47140$. The free energies of the FRSB and the 1RSB states differ at order $\tau^{5}$, and we have found numerically that the coefficient of this term goes towards zero as $M\to M^{**}$. One might have thought that one could not ignore the quintic terms when determining $M^{**}$, as they too give a contribution of $O(\tau^{5})$. However, in the limit when $\tau\to 0$, both the 1RSB and the FRSB solutions have their upper breakpoints at $w_{2}/w_{1}$, and at small $\tau$ the value of $q(x)$ on the plateau is the same for both solutions (see Fig. 7). The forms of $q(x)$ for the two solutions differ only in the interval between $x_{1}$ and $x_{2}$, and $x_{2}-x_{1}\sim|\tau|$ itself, so in the integrals for the free energy, Eq. (E1), the plateau regions give a contribution of $O(|\tau|^{5})$, which is the same for both solutions, and the region of $x$ where the solutions differ contributes only at higher order in $\tau$. 
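The value of $M^{**}$ can be checked independently with a short symbolic computation. Using the quartic coefficients of Appendix A in the $2\leq M\leq 3$ branch (the common $\beta^{8}$ factor cancels in the condition $Y=0$) and setting $x=w_{2}/w_{1}=M-2$ in $Y(x)=y_{1}-xy_{3}+x^{2}y_{5}$, one recovers $M^{**}=2+\sqrt{2}/3$. A sketch using sympy:

```python
import sympy as sp

M = sp.symbols('M', positive=True)
K = M * (M - 1) / 2
# Quartic coefficients for 2 <= M <= 3 (Eqs. (104)-(106)); the overall beta^8
# factor is common to y_1, y_3, y_5 and drops out of Y(x) = 0.
y1 = K / (12 * M**4)                       # Eq. (104), 2 <= M <= 3 branch
y3 = M * (M - 1) * (M - 2) / (4 * M**4)    # = tilde{y}_3, Eq. (92)
y5 = K / (8 * M**4)                        # = tilde{y}_5, Eq. (93)
x = M - 2                                  # upper break point x_2 -> w_2/w_1 as tau -> 0
Y = sp.simplify(y1 - x * y3 + x**2 * y5)
roots = sp.solve(sp.Eq(Y, 0), M)
print(roots)   # the physical root is 2 + sqrt(2)/3, approx 2.47140
```

Here $Y$ reduces to $M(M-1)\big[\tfrac{1}{24}-\tfrac{3}{16}(M-2)^{2}\big]/M^{4}$, whose relevant zero is $(M-2)^{2}=2/9$.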
For $3>M>M^{**}$ the continuous transition is to the 1RSB state. For $M>3$, that is, for $w_{2}/w_{1}>1$, the transition is discontinuous and is to the 1RSB state. We were unable to find a solution with FRSB which had a higher free energy than the 1RSB solution at the discontinuous transition itself. While the quintic terms are not needed to determine the value of $M^{**}$, it was pointed out years ago that they are needed to obtain the Gardner transition Gross _et al._ (1985). This is the transition which arises in the 1RSB state and it is to a state with FRSB. Provided we set $y_{5}$ to zero and retain just one of the quintic terms, $z_{1}$, MATHEMATICA can analytically solve the first-order differential equation, but the solution's explicit form is so long that we have not included it in this paper. In Fig. 8 we show the resulting FRSB solution and the 1RSB solution with the same parameters when $M=2.50$ at a temperature below the Gardner transition temperature, so that the FRSB state has a higher free energy than the 1RSB state. Curiously, the form of the FRSB solution is nothing like that given in Ref. Gross _et al._ (1985). They claimed that the continuously varying feature of $q(x)$ grew from the upper plateau. However, our solution is very similar to the FRSB solution for $M<M^{**}$, and it seems natural to us that at low enough temperature that solution should smoothly extend into the region $M>M^{**}$ as $M$ is increased. A feature of the Gardner solution is that right at the critical temperature $T_{G}$, where the Gardner state has a free energy just equal to that of the 1RSB state, its $q(x)$ is such that the derivative $dq(x)/dx$ is infinite right at the lower break point $x_{1}$. This is because at $T_{G}$ the Goldbart-Elderfield singularity of the quintic order solution is just at $x_{1}$. As the temperature is reduced below $T_{G}$, this singularity occurs below $x_{1}$, and $dq(x)/dx$ is finite at $x_{1}$ (as in Fig. 8). 
For $T>T_{G}$, the FRSB solution ceases to exist. ## III Discussion of Fluctuation Corrections and Behavior in Finite Dimensions Most of this paper has been concerned with calculations at mean-field level. Our motivation for studying these was that we wished to move towards the inclusion of fluctuations about the mean-field solutions by using RG equations to renormalize the numerous coupling constants ($\tau$, $w_{1}$, $w_{2}$, $y_{1}$, $\cdots$, $y_{5}$, $z_{1}$, $\cdots$, $z_{9}$) until they lie in the region where fluctuations have become small and mean-field theory becomes accurate. This is the same program as that followed by Höller and Read Höller and Read (2020) for the de Almeida-Thouless (AT) transition de Almeida and Thouless (1978). This is the transition of the Ising spin glass in a field $h$; in the $h-T$ phase diagram there is a line, the de Almeida-Thouless line, which separates the high-temperature replica-symmetric paramagnetic phase from a state with some version of replica symmetry breaking. The field theory of our problem, Eq. (25), is identical to theirs and the reader should consult their paper for details. However, since their paper was written, new simulations have suggested a possible extension of their approach, which we describe. We begin by briefly summarizing some of their results and procedures. For dimensions $8>d>6$ the quartic coefficients $y_{1}$, $y_{2}$, $y_{3}$, $y_{4}$ and $y_{5}$ are dominated by the “box” diagrams, and their bare values become negligible compared to the contribution of the box diagrams, which can be expressed in terms of the values of $w_{1}$ and $w_{2}$. For $d>8$, a good approximation to their values is provided by the bare values of these coefficients. 
The important combination of coefficients $\displaystyle\tilde{y}(x)=Y(x)=y_{1}-xy_{3}+x^{2}y_{5},$ (75) at the value of $x$ corresponding to the upper break point $x_{2}$ (which in the limit $\tau\to 0$ has the value $w_{2}/w_{1}$) plays a key role in determining the nature of the state below the transition. When $\tilde{y}(\rho)$ is positive (where $\rho=w_{2}/w_{1}$), the transition is to a state with FRSB, but if it is negative the transition is to a state with 1RSB. (This is how the value of $M^{**}$ was determined in the mean-field calculations, by setting $x=\rho=M-2$ and solving $Y(x)=0$ for $M$.) Höller and Read found from the box diagrams that $\displaystyle\tilde{y}(\rho)=K_{d}w_{1}^{4}\rho^{2}(22-48\rho+32\rho^{2}-8\rho^{3}+\rho^{4})/(8-d),$ (76) where $K_{d}=2/(\Gamma(d/2)(4\pi)^{d/2})$ (provided $\rho<1$). Höller and Read studied in particular the RG flow equations in dimensions $d=6+\epsilon$, where they could employ the Bray and Roberts Bray and Roberts (1980) RG recursion relations. Using these recursion relations, one finds that under the RG transforms $w_{1}$ and $w_{2}$ scale down towards zero as $\exp[-\frac{1}{2}\epsilon l]$. As $l\to\infty$ both $w_{1}$ and $w_{2}$ approach their fixed point value (which is $0$), but their ratio $\rho=w_{2}/w_{1}$ approaches a constant as the RG scale parameter $l$ goes to infinity. It is the numerical value of this constant in the large-$l$ limit which determines whether $\tilde{y}(\rho)$ is positive or negative. The polynomial in Eq. (76) is such that $\tilde{y}(\rho)$ is positive provided $\rho<0.8418$. Höller and Read left the ratio $\rho$ undetermined, supposing that its value was related in some way to the bare values of the constants. We shall argue that its value is universal and that $\rho=\frac{1}{2}$. Then, as $\frac{1}{2}<0.8418$, the state formed will have FRSB and so is in the universality class of the Ising spin glass in a field. 
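The quoted threshold is easy to verify numerically. Taking the quartic factor of Eq. (76) to be $22-48\rho+32\rho^{2}-8\rho^{3}+\rho^{4}$ (the sign pattern consistent with the quoted value $0.8418$), it is positive at $\rho=1/2$ and first vanishes near $\rho\approx 0.8418$; a minimal check in plain Python:

```python
# Quartic factor of Eq. (76); the sign pattern used here is the one consistent
# with the quoted sign change at rho = 0.8418.
def P(rho):
    return 22 - 48 * rho + 32 * rho**2 - 8 * rho**3 + rho**4

print(P(0.5) > 0)     # True: rho = 1/2 lies in the FRSB (positive) region

# Locate the first zero in (0.5, 1) by bisection (P(1) = -1 < 0)
a, b = 0.5, 1.0
for _ in range(60):
    m = 0.5 * (a + b)
    if P(a) * P(m) <= 0:
        b = m
    else:
        a = m
print(0.5 * (a + b))  # close to 0.8418
```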
The key to understanding this is the real space RG calculation of Angelini and Biroli Angelini and Biroli (2015). This suggested that the transition at the AT line in high dimensions might be controlled by a zero-temperature fixed point. They found, in a simple real-space RG approximation, that in high enough dimensions the RG flows of $h$ and of $J$ (the standard deviation of the bond distribution), starting close to their values on the AT line at some non-zero temperature, first flow close to their values on the AT line at zero temperature, but then veer away up the $h$-axis at $T=0$. Then the flow is away from the fixed point at $T=0$ and $h=h_{AT}$, where $h_{AT}$ is the value of the field $h$ on the AT line at $T=0$. In other words, the RG flow is controlled by a zero temperature fixed point. Because their RG procedure (the Migdal-Kadanoff approximation) works well only in low dimensions, it was uncertain whether their zero-temperature fixed point scenario in high dimensions should be trusted. However, we believe that the recent simulation in six dimensions in Ref. Aguilar-Janita _et al._ (2023) strongly suggests that it should be believed. These simulations showed that in six dimensions the renormalized vertices related to the “bare” couplings $w_{1}$ and $w_{2}$ were such that their ratio was close to $1/2$. But this is the same value (i.e. $1/2$) as was found at $T=0$ in the mean-field-like Bethe lattice calculation of the same renormalized vertices in Ref. Parisi _et al._ (2014). We shall therefore take it that the renormalized value of $\rho$ which should be inserted into Eq. (76) is $1/2$. As a consequence, the continuous transition from the high-temperature phase should be to a state with FRSB, and for $d<8$ the continuous 1RSB transition should no longer occur. The same line of argument will also apply to the AT transition of spin glasses in a field. 
These have been extensively studied by simulations, and the most recent of these is that of Bharadwaj et al. Vedula _et al._ (2023). They found numerical evidence that the AT line might not exist below six dimensions. The absence of the AT line below six dimensions was argued for in Ref. Moore (2012), where it was suggested that as $d\to 6$, $h_{AT}^{2}\sim(d-6)$, where $h_{AT}$ is the AT field at $T=0$. If this is correct, then in three dimensions there would be no phase transition to a state with replica symmetry breaking, but there could be long length scales according to the droplet picture of spin glasses in a field McMillan (1984); Bray and Moore (1986); Fisher and Huse (1988) and the Imry-Ma argument Imry and Ma (1975), especially if the field is small. That structural glasses might behave as the Ising spin glass in a field was suggested many years ago Moore and Yeo (2006). RG calculations are only useful when there exist long length scales in the physics. For $w_{2}/w_{1}>1$ the transition to the 1RSB state at mean-field level is via a discontinuous transition at which there are no long length scales. The question then arises as to what the fluctuation corrections do. Our suspicion is that the effect of such fluctuations is to drive the system to at least smaller values of $w_{2}/w_{1}$, and possibly into the region where the transition, when it exists, might be continuous. Certainly there is no sign of a discontinuous transition in real space RG calculations such as that of Ref. Yeo and Moore (2012b). At present we cannot really exclude the possibility of a discontinuous transition in physical dimensions, but we note once more that the simulations of Ref. Campellone _et al._ (1998) found no evidence for such a transition at $M=4$ in three dimensions. The chief omission of our work is therefore a stronger conclusion on the possible existence of a discontinuous transition and its dependence on the dimensionality $d$ of the system. 
The only way forward for investigating this question, especially in high dimensions close to or above $d=6$, would seem to be simulations of the one-dimensional proxy models. In these proxy models the form of the long range interactions between the spins can be tuned to mimic behavior in $d$ dimensions. Indeed for the case $p=3$, $M=2$ that has already been done Larson _et al._ (2010). Alas, at mean-field level this model has $w_{2}/w_{1}<1$, so it would not be expected to have a discontinuous transition, and indeed there was no sign of one in the simulation. The case with $p=3$ and $M=3$ has $w_{2}/w_{1}=2$ Caltagirone _et al._ (2011) and so might be a good model to simulate, as it should have a clear discontinuous transition. The model of the type studied in this paper, $p=4$ but with $M=4$, could also be a good model to simulate using the one-dimensional proxy model: it also has $w_{2}/w_{1}=2$. ###### Acknowledgements. We would like to thank Jairo de Almeida for sharing his notes dating from the seventies on the quintic terms in the presence of FRSB for the Ising spin glass. ## Appendix A Expansion of the free energy to quintic order in the order parameter We expand Eq. (8) to $O(\mu^{5})$. We first write $L\equiv 2^{nM}L^{\prime}$, where $\displaystyle L^{\prime}\equiv\mathrm{Tr}^{\prime}_{\{S^{a}_{i}\}}\exp\left[\frac{1}{2M}\sum_{(a,b)}\mu_{ab}f_{ab}\right].$ (77) Here $\mathrm{Tr}^{\prime}\equiv 2^{-nM}\mathrm{Tr}$ satisfies $\mathrm{Tr}^{\prime}_{\{S^{a}_{i}\}}1=1$, and we define $\displaystyle f_{ab}\equiv\sum_{\alpha=1}^{K}\Psi^{a}_{\alpha}\Psi^{b}_{\alpha},$ (78) where $\bm{\Psi}^{a}=(S^{a}_{1}S^{a}_{2},S^{a}_{1}S^{a}_{3},\cdots,S^{a}_{M-1}S^{a}_{M})$ is a $K$-dimensional vector for each replica index $a$ with components $\Psi^{a}_{\alpha}$, $\alpha=1,2,\cdots,K\equiv M(M-1)/2$. 
The expansion of $L^{\prime}$ to $O(\mu^{5})$ has the following structure: $\displaystyle L^{\prime}=1+\tilde{t}_{2}\sum_{(a,b)}\mu^{2}_{ab}+\tilde{w}_{1}\sum_{(a,b,c)}\mu_{ab}\mu_{bc}\mu_{ca}+\tilde{w}_{2}\sum_{(a,b)}\mu^{3}_{ab}$ $\displaystyle+\tilde{y}_{1}\sum_{a,b}\mu^{4}_{ab}+\tilde{y}_{2}\sum_{(a,b,c)}\mu^{2}_{ab}\mu^{2}_{bc}+\tilde{y}_{3}\sum_{(a,b,c)}\mu^{2}_{ab}\mu_{bc}\mu_{ca}$ $\displaystyle+\tilde{y}_{5}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}+\tilde{d}_{1}\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu^{2}_{cd}$ $\displaystyle+\tilde{z}_{1}\sum_{(a,b)}\mu^{5}_{ab}+\tilde{z}_{2}\sum_{(a,b,c)}\mu^{3}_{ab}\mu^{2}_{bc}+\tilde{z}_{3}\sum_{(a,b,c)}\mu^{3}_{ab}\mu_{bc}\mu_{ca}$ $\displaystyle+\tilde{z}_{4}\sum_{(a,b,c)}\mu^{2}_{ab}\mu^{2}_{bc}\mu_{ca}+\tilde{z}_{5}\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{da}$ $\displaystyle+\tilde{z}_{6}\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{db}+\tilde{z}_{7}\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu^{2}_{cd}$ $\displaystyle+\tilde{z}_{8}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}\mu_{ac}+\tilde{z}_{9}\sum_{(a,b,c,d,e)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{de}\mu_{ea}$ $\displaystyle+\tilde{d}_{2}\sum_{(a,b,c,d)}\mu^{3}_{ab}\mu^{2}_{cd}+\tilde{d}_{3}\sum_{(a,b,c,d,e)}\mu^{2}_{ab}\mu_{cd}\mu_{de}\mu_{ec}.$ (79) Here $(a,b)$, $(a,b,c)$, $(a,b,c,d)$ etc. indicate that the sums are over all distinct replica indices. The coefficients are obtained by taking the trace over the spins, as we explain below. In order to calculate the free energy, we have to take the logarithm of $L^{\prime}$ and expand $\ln(1+x)$ to $O(\mu^{5})$. There are three contributions at this order coming from the $-(1/2)x^{2}$ part. 
They are $\displaystyle-\frac{1}{2}\tilde{t}^{2}_{2}\sum_{(a,b)}\mu^{2}_{ab}\sum_{(c,d)}\mu^{2}_{cd}$ (80) $\displaystyle=$ $\displaystyle-\frac{1}{2}\tilde{t}^{2}_{2}\left[2\sum_{(a,b)}\mu^{4}_{ab}+4\sum_{(a,b,c)}\mu^{2}_{ab}\mu^{2}_{bc}+\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu^{2}_{cd}\right],$ $\displaystyle-\frac{1}{2}\cdot 2\tilde{t}_{2}\tilde{w}_{1}\sum_{(a,b)}\mu^{2}_{ab}\sum_{(c,d,e)}\mu_{cd}\mu_{de}\mu_{ec}$ (81) $\displaystyle=$ $\displaystyle-\tilde{t}_{2}\tilde{w}_{1}\Big{[}6\sum_{(a,b,c)}\mu^{3}_{ab}\mu_{bc}\mu_{ca}+6\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{db}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\sum_{(a,b,c,d,e)}\mu^{2}_{ab}\mu_{cd}\mu_{de}\mu_{ec}\Big{]},$ and $\displaystyle-\frac{1}{2}\cdot 2\tilde{t}_{2}\tilde{w}_{2}\sum_{(a,b)}\mu^{2}_{ab}\sum_{(c,d)}\mu^{3}_{cd}$ (82) $\displaystyle=$ $\displaystyle-\tilde{t}_{2}\tilde{w}_{2}\Big{[}2\sum_{(a,b)}\mu^{5}_{ab}+4\sum_{(a,b,c)}\mu^{2}_{ab}\mu^{3}_{bc}+\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu^{3}_{cd}\Big{]}.$ Note that the last terms in Eqs. (80), (81) and (82), as well as the terms in Eq. (79) with coefficients $\tilde{d}_{i}$, $i=1,2,3$, have disconnected parts. When we take the trace over the spins, we have to keep in mind that the Ising spins must be paired to give a nonvanishing contribution. For example, we have $\mathrm{Tr}^{\prime}f_{ab}=0$ for $a\neq b$. We evaluate the first few sets of coefficients as follows. 
$\displaystyle\tilde{t}_{2}=\frac{1}{2!}\frac{2}{(2M)^{2}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}=\frac{1}{2!}\frac{1}{(2M)^{2}}2K=\frac{K}{4M^{2}},$ (83) $\displaystyle\tilde{w}_{1}=\frac{1}{3!}\frac{8}{(2M)^{3}}\;\mathrm{Tr}^{\prime}f_{ab}f_{bc}f_{ca}=\frac{1}{3!}\frac{1}{(2M)^{3}}8K=\frac{K}{6M^{3}},$ (84) $\displaystyle\tilde{w}_{2}=\frac{1}{3!}\frac{4}{(2M)^{3}}\;\mathrm{Tr}^{\prime}f^{3}_{ab}=\frac{1}{3!}\frac{1}{(2M)^{3}}4M(M-1)(M-2)$ $\displaystyle~{}~{}~{}=\frac{K}{6M^{3}}(M-2),$ (85) and $\displaystyle\tilde{d}_{1}$ $\displaystyle=\frac{1}{4!}\frac{12}{(2M)^{4}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}f^{2}_{cd}=\frac{1}{4!}\frac{1}{(2M)^{4}}12K^{2},$ (86) $\displaystyle\tilde{d}_{2}$ $\displaystyle=\frac{1}{5!}\frac{80}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f^{3}_{ab}f^{2}_{cd}$ $\displaystyle=\frac{1}{5!}\frac{1}{(2M)^{5}}80KM(M-1)(M-2),$ (87) $\displaystyle\tilde{d}_{3}$ $\displaystyle=\frac{1}{5!}\frac{160}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f_{ab}f_{bc}f_{ca}f^{2}_{de}=\frac{1}{5!}\frac{1}{(2M)^{5}}160K^{2},$ (88) Here all replica indices are distinct. One can see that $\tilde{d}_{1}=\tilde{t}^{2}_{2}/2$, $\tilde{d}_{2}=\tilde{t}_{2}\tilde{w}_{2}$ and $\tilde{d}_{3}=\tilde{t}_{2}\tilde{w}_{1}$. Therefore all the disconnected terms in $\ln L^{\prime}$ vanish. 
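The cancellation of the disconnected terms can be confirmed directly from Eqs. (83)-(88) with a few lines of sympy; this is a sketch of that check:

```python
import sympy as sp

M = sp.symbols('M', positive=True)
K = M * (M - 1) / 2
t2 = K / (4 * M**2)                                                      # Eq. (83)
w1 = K / (6 * M**3)                                                      # Eq. (84)
w2 = K * (M - 2) / (6 * M**3)                                            # Eq. (85)
d1 = sp.Rational(1, 24) * 12 * K**2 / (2 * M)**4                         # Eq. (86)
d2 = sp.Rational(1, 120) * 80 * K * M * (M - 1) * (M - 2) / (2 * M)**5   # Eq. (87)
d3 = sp.Rational(1, 120) * 160 * K**2 / (2 * M)**5                       # Eq. (88)
# Each disconnected coefficient cancels against the -(1/2) x^2 ln-expansion terms:
print(sp.simplify(d1 - t2**2 / 2))   # 0
print(sp.simplify(d2 - t2 * w2))     # 0
print(sp.simplify(d3 - t2 * w1))     # 0
```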
We therefore have $\displaystyle\ln L^{\prime}=$ $\displaystyle~{}\tilde{t}_{2}\sum_{(a,b)}\mu^{2}_{ab}+\tilde{w}_{1}\sum_{(a,b,c)}\mu_{ab}\mu_{bc}\mu_{ca}+\tilde{w}_{2}\sum_{(a,b)}\mu^{3}_{ab}$ $\displaystyle+$ $\displaystyle\left(\tilde{y}_{1}-\tilde{t}^{2}_{2}\right)\sum_{a,b}\mu^{4}_{ab}+\left(\tilde{y}_{2}-2\tilde{t}^{2}_{2}\right)\sum_{(a,b,c)}\mu^{2}_{ab}\mu^{2}_{bc}+\tilde{y}_{3}\sum_{(a,b,c)}\mu^{2}_{ab}\mu_{bc}\mu_{ca}+\tilde{y}_{5}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}$ $\displaystyle+$ $\displaystyle\left(\tilde{z}_{1}-2\tilde{t}_{2}\tilde{w}_{2}\right)\sum_{(a,b)}\mu^{5}_{ab}+\left(\tilde{z}_{2}-4\tilde{t}_{2}\tilde{w}_{2}\right)\sum_{(a,b,c)}\mu^{3}_{ab}\mu^{2}_{bc}+\left(\tilde{z}_{3}-6\tilde{t}_{2}\tilde{w}_{1}\right)\sum_{(a,b,c)}\mu^{3}_{ab}\mu_{bc}\mu_{ca}+\tilde{z}_{4}\sum_{(a,b,c)}\mu^{2}_{ab}\mu^{2}_{bc}\mu_{ca}$ $\displaystyle+$ $\displaystyle\tilde{z}_{5}\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{da}+\left(\tilde{z}_{6}-6\tilde{t}_{2}\tilde{w}_{1}\right)\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{db}+\tilde{z}_{7}\sum_{(a,b,c,d)}\mu^{2}_{ab}\mu_{bc}\mu^{2}_{cd}$ $\displaystyle+$ $\displaystyle\tilde{z}_{8}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}\mu_{ac}+\tilde{z}_{9}\sum_{(a,b,c,d,e)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{de}\mu_{ea}.$ (89) The first quartic coefficient is given by $\displaystyle\tilde{y}_{1}=\frac{1}{4!}\frac{8}{(2M)^{4}}$ $\displaystyle\;\mathrm{Tr}^{\prime}f^{4}_{ab}$ $\displaystyle=\frac{1}{4!}\frac{8}{(2M)^{4}}$ $\displaystyle\Big{[}K+3K(K-1)$ $\displaystyle+3M(M-1)(M-2)(M-3)\Big{]}.$ (90) This is valid for $M\geq 3$. For $2\leq M\leq 3$, there are not enough spins whose combination makes the second term in the square bracket. Therefore, the square bracket must be just $K+3K(K-1)$ for $2\leq M\leq 3$. 
The rest of them are $\displaystyle\tilde{y}_{2}=\frac{1}{4!}\frac{48}{(2M)^{4}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}f^{2}_{bc}=\frac{1}{4!}\frac{48}{(2M)^{4}}K^{2},$ (91) $\displaystyle\tilde{y}_{3}=$ $\displaystyle\frac{1}{4!}\frac{96}{(2M)^{4}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}f_{bc}f_{ca}$ $\displaystyle=$ $\displaystyle\frac{1}{4!}\frac{96}{(2M)^{4}}M(M-1)(M-2),$ (92) and $\displaystyle\tilde{y}_{5}=\frac{1}{4!}\frac{48}{(2M)^{4}}\;\mathrm{Tr}^{\prime}f_{ab}f_{bc}f_{cd}f_{da}=\frac{1}{4!}\frac{48}{(2M)^{4}}K.$ (93) These are valid for $M\geq 2$. We obtain the first quintic coefficient as $\displaystyle\tilde{z}_{1}=\frac{1}{5!}\frac{16}{(2M)^{5}}$ $\displaystyle\;\mathrm{Tr}^{\prime}f^{5}_{ab}$ (94) $\displaystyle=\frac{1}{5!}\frac{16}{(2M)^{5}}$ $\displaystyle\Big{[}10M(M-1)(M-2)K$ $\displaystyle+$ $\displaystyle 12M(M-1)(M-2)(M-3)(M-4)\Big{]}.$ This is valid for $M\geq 4$. For $2\leq M\leq 4$, the second term in the square bracket should be dropped for the same reason as given for $\tilde{y}_{1}$. The next coefficient is given for $M\geq 2$ as $\displaystyle\tilde{z}_{2}$ $\displaystyle=\frac{1}{5!}\frac{320}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f^{3}_{ab}f^{2}_{bc}$ $\displaystyle=\frac{1}{5!}\frac{320}{(2M)^{5}}M(M-1)(M-2)K,$ (95) The third and fourth quintic coefficients are given by $\displaystyle\tilde{z}_{3}=\frac{1}{5!}\frac{320}{(2M)^{5}}$ $\displaystyle\;\mathrm{Tr}^{\prime}f^{3}_{ab}f_{bc}f_{ca}$ $\displaystyle=\frac{1}{5!}\frac{320}{(2M)^{5}}$ $\displaystyle\Big{[}K+3K(K-1)$ $\displaystyle+$ $\displaystyle 3M(M-1)(M-2)(M-3)\Big{]},$ (96) and $\displaystyle\tilde{z}_{4}=\frac{1}{5!}\frac{480}{(2M)^{5}}$ $\displaystyle\;\mathrm{Tr}^{\prime}f^{2}_{ab}f^{2}_{bc}f_{ca}$ $\displaystyle=\frac{1}{5!}\frac{480}{(2M)^{5}}$ $\displaystyle\Big{[}2M(M-1)(M-2)$ $\displaystyle+$ $\displaystyle 2M(M-1)(M-2)(M-3)\Big{]}.$ (97) Again these expressions are valid only for $M\geq 3$. For $2\leq M\leq 3$, the second terms in the square brackets in Eqs. (96) and (97) do not appear. 
The remaining quintic coefficients are given by $\displaystyle\tilde{z}_{5}$ $\displaystyle=\frac{1}{5!}\frac{960}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}f_{bc}f_{cd}f_{da}$ $\displaystyle=\frac{1}{5!}\frac{960}{(2M)^{5}}M(M-1)(M-2),$ (98) $\displaystyle\tilde{z}_{6}=\frac{1}{5!}\frac{960}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}f_{bc}f_{cd}f_{db}=\frac{1}{5!}\frac{960}{(2M)^{5}}K^{2},$ (99) $\displaystyle\tilde{z}_{7}=\frac{1}{5!}\frac{480}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f^{2}_{ab}f_{bc}f^{2}_{cd}=0,$ (100) $\displaystyle\tilde{z}_{8}$ $\displaystyle=\frac{1}{5!}\frac{960}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f_{ab}f_{bc}f_{cd}f_{da}f_{ac}$ $\displaystyle=\frac{1}{5!}\frac{960}{(2M)^{5}}M(M-1)(M-2),$ (101) and $\displaystyle\tilde{z}_{9}=\frac{1}{5!}\frac{384}{(2M)^{5}}\;\mathrm{Tr}^{\prime}f_{ab}f_{bc}f_{cd}f_{de}f_{ea}=\frac{1}{5!}\frac{384}{(2M)^{5}}K.$ (102) These expressions are valid for all $M\geq 2$. We now convert the summations over replica indices in Eq. (89) into those without any restriction. 
We obtain $\displaystyle\ln L^{\prime}=t^{\prime}_{2}\sum_{a,b}\mu^{2}_{ab}+w^{\prime}_{1}\sum_{a,b,c}\mu_{ab}\mu_{bc}\mu_{ca}+w^{\prime}_{2}\sum_{a,b}\mu^{3}_{ab}$ $\displaystyle+y^{\prime}_{1}\sum_{a,b}\mu^{4}_{ab}+y^{\prime}_{2}\sum_{a,b,c}\mu^{2}_{ab}\mu^{2}_{bc}+y^{\prime}_{3}\sum_{a,b,c}\mu^{2}_{ab}\mu_{bc}\mu_{ca}$ $\displaystyle+y^{\prime}_{5}\sum_{a,b,c,d}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}+z^{\prime}_{1}\sum_{a,b}\mu^{5}_{ab}+z^{\prime}_{2}\sum_{a,b,c}\mu^{3}_{ab}\mu^{2}_{bc}$ $\displaystyle+z^{\prime}_{3}\sum_{a,b,c}\mu^{3}_{ab}\mu_{bc}\mu_{ca}+z^{\prime}_{4}\sum_{a,b,c}\mu^{2}_{ab}\mu^{2}_{bc}\mu_{ca}$ $\displaystyle+z^{\prime}_{5}\sum_{a,b,c,d}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{da}+z^{\prime}_{6}\sum_{a,b,c,d}\mu^{2}_{ab}\mu_{bc}\mu_{cd}\mu_{db}$ $\displaystyle+z^{\prime}_{7}\sum_{a,b,c,d}\mu^{2}_{ab}\mu_{bc}\mu^{2}_{cd}+z_{8}^{\prime}\sum_{a,b,c,d}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}\mu_{ac}$ $\displaystyle+z^{\prime}_{9}\sum_{a,b,c,d,e}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{de}\mu_{ea},$ (103) where $t^{\prime}_{2}=\tilde{t}_{2}$, $w^{\prime}_{1}=\tilde{w}_{1}$ and $w^{\prime}_{2}=\tilde{w}_{2}$. The first two quartic coefficients are $\displaystyle y^{\prime}_{1}$ $\displaystyle=\tilde{y}_{1}-\tilde{t}^{2}_{2}-\left(\tilde{y}_{2}-2\tilde{t}^{2}_{2}\right)+\tilde{y}_{5}$ (104) $\displaystyle=\left(\frac{K}{24M^{4}}\right)\begin{cases}2,&\text{if}\ 2\leq M\leq 3\\\ (3M^{2}-15M+20),&\text{if}\ M\geq 3\end{cases}$ and $\displaystyle y^{\prime}_{2}=\tilde{y}_{2}-2\tilde{t}^{2}_{2}-2\tilde{y}_{5}=-\left(\frac{K}{4M^{4}}\right).$ (105) The rest of them are the same as when the summations are restricted. 
$\displaystyle y^{\prime}_{3}=\tilde{y}_{3},~{}~{}~{}y^{\prime}_{5}=\tilde{y}_{5}.$ (106) The quintic coefficients are given by $\displaystyle z^{\prime}_{1}$ $\displaystyle=\tilde{z}_{1}-2\tilde{t}_{2}\tilde{w}_{2}-\left(\tilde{z}_{2}-4\tilde{t}_{2}\tilde{w}_{2}\right)+\tilde{z}_{5}+\tilde{z}_{7}$ (107) $\displaystyle=\left(\frac{K}{10M^{5}}\right)\begin{cases}5(M-2),&\text{if}\ 2\leq M\leq 4\\\ (M-2)(M^{2}-7M+17),&\text{if}\ M\geq 4\end{cases}$ $\displaystyle z^{\prime}_{2}$ $\displaystyle=\tilde{z}_{2}-4\tilde{t}_{2}\tilde{w}_{2}-2\tilde{z}_{5}-2\tilde{z}_{7}$ $\displaystyle=-\left(\frac{K}{M^{5}}\right)(M-2),$ (108) $\displaystyle z^{\prime}_{3}$ $\displaystyle=\tilde{z}_{3}-6\tilde{t}_{2}\tilde{w}_{1}-2\left(\tilde{z}_{6}-6\tilde{t}_{2}\tilde{w}_{1}\right)+5\tilde{z}_{9}$ (109) $\displaystyle=\left(\frac{K}{6M^{5}}\right)\begin{cases}2,&\text{if}\ 2\leq M\leq 3\\\ (3M^{2}-15M+20),&\text{if}\ M\geq 3\end{cases}$ $\displaystyle z^{\prime}_{4}$ $\displaystyle=\tilde{z}_{4}-\tilde{z}_{7}-\tilde{z}_{8}$ (110) $\displaystyle=\left(\frac{K}{2M^{5}}\right)\begin{cases}0,&\text{if}\ 2\leq M\leq 3\\\ (M-2)(M-3),&\text{if}\ M\geq 3\end{cases}$ and $\displaystyle z^{\prime}_{6}=\tilde{z}_{6}-6\tilde{t}_{2}\tilde{w}_{1}-5\tilde{z}_{9}=-\left(\frac{K}{2M^{5}}\right).$ (111) The other coefficients are unchanged, namely, $\displaystyle z^{\prime}_{i}=\tilde{z}_{i}$ (112) for $i=5,7,8$ and $9$. Finally, the free energy is now given by Eq. (11) with Eq. (7). One of the saddle point equations gives $\mu_{ab}=\beta^{2}q_{ab}$. Inserting this relation into Eq. (11), we obtain the free energy in the form given in Eq. (25) with $\displaystyle w_{i}\equiv\beta^{6}w^{\prime}_{i},~{}~{}~{}~{}~{}y_{j}\equiv\beta^{8}y^{\prime}_{j},~{}~{}~{}~{}~{}z_{k}\equiv\beta^{10}z^{\prime}_{k},$ (113) for $i=1,2$, $j=1,2,3,5$ and $k=1,2,\cdots,9$. 
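The combinations above can also be checked symbolically. For example, starting from the tilde coefficients, the expressions for $y^{\prime}_{2}$, $z^{\prime}_{2}$ and $z^{\prime}_{6}$ in Eqs. (105), (108) and (111) follow directly; a sympy sketch:

```python
import sympy as sp

M = sp.symbols('M', positive=True)
K = M * (M - 1) / 2
t2 = K / (4 * M**2)                                                       # Eq. (83)
w1 = K / (6 * M**3)                                                       # Eq. (84)
w2 = K * (M - 2) / (6 * M**3)                                             # Eq. (85)
ty2 = sp.Rational(1, 24) * 48 * K**2 / (2 * M)**4                         # Eq. (91)
ty5 = sp.Rational(1, 24) * 48 * K / (2 * M)**4                            # Eq. (93)
tz2 = sp.Rational(1, 120) * 320 * M * (M - 1) * (M - 2) * K / (2 * M)**5  # Eq. (95)
tz5 = sp.Rational(1, 120) * 960 * M * (M - 1) * (M - 2) / (2 * M)**5      # Eq. (98)
tz6 = sp.Rational(1, 120) * 960 * K**2 / (2 * M)**5                       # Eq. (99)
tz7 = 0                                                                   # Eq. (100)
tz9 = sp.Rational(1, 120) * 384 * K / (2 * M)**5                          # Eq. (102)

y2p = sp.simplify(ty2 - 2 * t2**2 - 2 * ty5)               # should equal -K/(4 M^4)
z2p = sp.simplify(tz2 - 4 * t2 * w2 - 2 * tz5 - 2 * tz7)   # should equal -K(M-2)/M^5
z6p = sp.simplify(tz6 - 6 * t2 * w1 - 5 * tz9)             # should equal -K/(2 M^5)
print(y2p, z2p, z6p)
```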
## Appendix B Small-$\sigma$ behavior of $f_{M}(\sigma)$ Here we present some steps leading to the small-$\sigma$ expansion of $f_{M}(\sigma)$ defined in Eq. (41). As mentioned in the main text, we expand $f_{M}(\sigma)$ up to $O(\sigma^{8})$. There are numerous terms to be evaluated. In the following, for brevity, we only list the quantities needed for the calculation of the $O(\sigma^{6})$ coefficient. We first write $\displaystyle\zeta(\bm{y},\mu_{1})\equiv\frac{1}{2^{M}}\underset{\{S_{i}\}}{\mathrm{Tr}}\;\exp\left[\sigma\bm{y}\cdot\bm{\Psi}\right]=\sum_{j=0}^{\infty}\frac{\sigma^{j}}{j!}\zeta_{j}(\bm{y}),$ (114) where $\sigma\equiv\sqrt{\mu_{1}/M}$. We immediately see that $\zeta_{1}(\bm{y})=0$ since $\mathrm{Tr}\;\Psi_{\alpha}=0$. Using the fact that $\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}=0$ for $\alpha\neq\beta$, we find that $\zeta_{2}(\bm{y})=\sum_{\alpha}^{K}y^{2}_{\alpha}$ and $\zeta_{3}(\bm{y})=\sum^{K}_{(\alpha,\beta,\gamma)}y_{\alpha}y_{\beta}y_{\gamma}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}$. 
Higher order contributions are $\displaystyle\zeta_{4}(\bm{y})$ $\displaystyle=\sum_{\alpha}^{K}y^{4}_{\alpha}+3\sum^{K}_{\alpha\neq\beta}y^{2}_{\alpha}y^{2}_{\beta}$ $\displaystyle+\sum^{K}_{(\alpha,\beta,\gamma,\delta)}y_{\alpha}y_{\beta}y_{\gamma}y_{\delta}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta},$ (115) $\displaystyle\zeta_{5}(\bm{y})$ $\displaystyle=10\sum^{K}_{(\alpha,\beta,\gamma)}y^{3}_{\alpha}y_{\beta}y_{\gamma}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}$ $\displaystyle+10\sum^{K}_{(\alpha,\beta,\gamma,\delta)}y^{2}_{\alpha}y_{\beta}y_{\gamma}y_{\delta}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}$ (116) $\displaystyle+\sum^{K}_{(\alpha,\beta,\gamma,\delta,\sigma)}y_{\alpha}y_{\beta}y_{\gamma}y_{\delta}y_{\sigma}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}\Psi_{\sigma},$ and $\displaystyle\zeta_{6}(\bm{y})$ $\displaystyle=\sum_{\alpha}^{K}y^{6}_{\alpha}+15\sum^{K}_{\alpha\neq\beta}y^{4}_{\alpha}y^{2}_{\beta}+15\sum^{K}_{(\alpha,\beta,\gamma)}y^{2}_{\alpha}y^{2}_{\beta}y^{2}_{\gamma}$ (117) $\displaystyle+20\sum^{K}_{(\alpha,\beta,\gamma,\delta)}y^{3}_{\alpha}y_{\beta}y_{\gamma}y_{\delta}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}$ $\displaystyle+15\sum^{K}_{(\alpha,\beta,\gamma,\delta,\sigma)}y^{2}_{\alpha}y_{\beta}y_{\gamma}y_{\delta}y_{\sigma}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}\Psi_{\sigma}$ $\displaystyle+\sum^{K}_{(\alpha,\beta,\gamma,\delta,\sigma,\mu)}y_{\alpha}y_{\beta}y_{\gamma}y_{\delta}y_{\sigma}y_{\mu}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}\Psi_{\sigma}\Psi_{\mu}.$ Here $(\alpha,\beta,\gamma)$, etc indicate the summation is over all distinct indices and $K\equiv{M\choose 2}$. 
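The Gaussian averages of the $\zeta_{j}$ listed next in Eqs. (118)-(122) can be spot-checked by brute-force quadrature. For instance, for $K=3$ one has $\int D^{K}\bm{y}\,\zeta_{2}^{2}=3K+K(K-1)=15$ and $\int D^{K}\bm{y}\,\zeta_{2}^{3}=15K+9K(K-1)+K(K-1)(K-2)=105$; a sketch using a tensor-product Gauss-Hermite rule:

```python
import numpy as np
from itertools import product

K = 3
x, w = np.polynomial.hermite.hermgauss(6)   # exact for polynomials of degree <= 11
y = np.sqrt(2.0) * x                        # rescale to the standard Gaussian measure Dy
wts = w / np.sqrt(np.pi)

def gauss_avg(f):
    """Average of f over the K-dimensional standard Gaussian measure."""
    return sum(np.prod(wts[list(idx)]) * f(y[list(idx)])
               for idx in product(range(len(y)), repeat=K))

zeta2 = lambda v: np.sum(v * v)             # zeta_2(y) = sum_alpha y_alpha^2
print(gauss_avg(lambda v: zeta2(v)**2))     # 3K + K(K-1) = 15
print(gauss_avg(lambda v: zeta2(v)**3))     # 15K + 9K(K-1) + K(K-1)(K-2) = 105
```

Since $\zeta_{2}$ is a chi-square variable with $K$ degrees of freedom, these values also agree with the standard moments $K(K+2)$ and $K(K+2)(K+4)$.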
Performing the Gaussian integrals, we have $\int D^{K}\bm{y}\;\zeta_{j}(\bm{y})=0$ for $j$ odd, $\int D^{K}\bm{y}\;\zeta_{2}(\bm{y})=K$, $\int D^{K}\bm{y}\;\zeta_{4}(\bm{y})=3K+3K(K-1)$, and $\displaystyle\int D^{K}\bm{y}\;\zeta_{6}(\bm{y})$ $\displaystyle=15K+45K(K-1)$ $\displaystyle+15K(K-1)(K-2).$ (118) For the calculation up to $O(\sigma^{6})$, we also need the following quantities: $\displaystyle\int D^{K}\bm{y}\;\zeta^{2}_{2}(\bm{y})=$ $\displaystyle 3K+K(K-1),$ (119) $\displaystyle\int D^{K}\bm{y}\;\zeta^{3}_{2}(\bm{y})=$ $\displaystyle 15K+9K(K-1)$ $\displaystyle+K(K-1)(K-2),$ (120) $\displaystyle\int D^{K}\bm{y}\;\zeta_{2}(\bm{y})\zeta_{4}(\bm{y})=$ $\displaystyle 15K+21K(K-1)$ $\displaystyle+3K(K-1)(K-2),$ (121) $\displaystyle\int D^{K}\bm{y}\;\zeta^{2}_{3}(\bm{y})=$ $\displaystyle 6M(M-1)(M-2).$ (122) These expressions are valid when $K\geq 2$ or $M=3,4,5,\cdots$. Now in the second term inside the integral in Eq. (41), we can write by symmetry $\displaystyle\sum_{\alpha=1}^{K}\left[\frac{1}{2^{M}}\mathrm{Tr}\;\Psi_{\alpha}\exp\left[\sigma\bm{y}\cdot\bm{\Psi}\right]\right]^{2}$ $\displaystyle=$ $\displaystyle K\left[\frac{1}{2^{M}}\mathrm{Tr}\;\Psi_{1}\exp\left[\sigma\bm{y}\cdot\bm{\Psi}\right]\right]^{2}.$ (123) We then define $\displaystyle\frac{1}{2^{M}}\mathrm{Tr}\;\Psi_{1}\exp\left[\sigma\bm{y}\cdot\bm{\Psi}\right]\equiv\sum_{j=1}^{\infty}\frac{\sigma^{j}}{j!}\eta_{j}(\bm{y}).$ (124) We find that $\eta_{1}(\bm{y})=y_{1}$, $\displaystyle\eta_{2}(\bm{y})=\sum_{(\alpha,\beta)}y_{\alpha}y_{\beta}2^{-M}\mathrm{Tr}\Psi_{1}\Psi_{\alpha}\Psi_{\beta},$ (125) and $\displaystyle\eta_{3}(\bm{y})$ $\displaystyle=y^{3}_{1}+3y_{1}\sum_{\alpha\neq 1}y^{2}_{\alpha}$ $\displaystyle+\sum^{K}_{(\alpha,\beta,\gamma)}y_{\alpha}y_{\beta}y_{\gamma}\;\frac{1}{2^{M}}\mathrm{Tr}\Psi_{1}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}.$ (126) For the calculation up to $O(\sigma^{6})$, we need $\displaystyle\int D^{K}\bm{y}\;\eta^{2}_{2}(\bm{y})=4(M-2),$ (127) $\displaystyle\int
D^{K}\bm{y}\;\eta_{1}(\bm{y})\eta_{3}(\bm{y})=3K,$ (128) $\displaystyle\int D^{K}\bm{y}\;\eta^{2}_{1}(\bm{y})\zeta_{2}(\bm{y})=K+2.$ (129) It is now a matter of Taylor expanding the functions inside the integral in Eq. (41) and using the above results to get the expansion coefficients in $f_{M}(\sigma)=\sum_{j=0}^{\infty}c_{2j}(M)\sigma^{2j}$. We find that $c_{0}=c_{2}=c_{4}=0$ and the leading order term is $O(\sigma^{6})$. We obtain $\displaystyle c_{6}(M)=-\frac{M}{24}(M-1)(M-3).$ (130) As mentioned in the main text, it becomes negative for $M>3$. To go up to $O(\sigma^{8})$, we need results of more Gaussian integrals similar to Eqs. (118)-(122) and to Eqs. (127)-(129). After a rather long calculation with the help of symbolic algebra packages in MATHEMATICA, we obtain $\displaystyle c_{8}(M)=-\frac{M}{48}(M-1)(3M^{2}-27M+47),$ (131) which is valid for $K\geq 3$ or $M=3,4,5,\cdots$. We note that $c_{8}(M=3)=7/8>0$. ## Appendix C The 1RSB equations for the quintic Landau free energy Here we consider the 1RSB saddle point equations corresponding to the free energy expanded up to quintic order as given in Eq. (25). Let us assume that $q_{ab}$ takes the 1RSB form having values $q_{1}$ on $n/m_{1}$ diagonal blocks of size $m_{1}$ and $q_{0}=0$ outside the blocks. We can then express the cubic, quartic, and quintic terms in $q_{ab}$ in terms of $q_{1}$ and $m_{1}$ as we have done in Eqs. (29) and (30) for the quadratic terms. 
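As an aside, the coefficients (130) and (131) obtained in Appendix B can be cross-checked against the Landau coefficients through Eqs. (138) and (139) and the relations (142) and (143) given below; a short verification with sympy, using the $M\geq 3$ branch of Eq. (139):

```python
import sympy as sp

M, beta = sp.symbols('M beta', positive=True)

# Small-sigma coefficients, Eqs. (130)-(131)
c6 = -sp.Rational(1, 24) * M * (M - 1) * (M - 3)
c8 = -sp.Rational(1, 48) * M * (M - 1) * (3*M**2 - 27*M + 47)

# Landau coefficient combinations, Eqs. (138)-(139), branch M >= 3
w2_minus_w1 = beta**6 / (12 * M**2) * (M - 1) * (M - 3)
y_combo = beta**8 / (48 * M**3) * (M - 1) * (3*M**2 - 27*M + 47)

# Relations (142)-(143): c6 = -(M^3/2beta^6)(w2-w1), c8 = -(M^4/beta^8)(y1-y3+y5)
assert sp.simplify(c6 + M**3 / (2 * beta**6) * w2_minus_w1) == 0
assert sp.simplify(c8 + M**4 / beta**8 * y_combo) == 0

# c6 vanishes at M = 3 and turns negative for M > 3; c8(3) = 7/8 > 0
assert c6.subs(M, 3) == 0
assert c6.subs(M, 4) < 0
assert c8.subs(M, 3) == sp.Rational(7, 8)
```

The last three assertions reproduce the values $c_{6}(3)=0$ and $c_{8}(3)=7/8$ quoted above.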
We obtain $\displaystyle\frac{\beta F_{\rm 1RSB}}{N}=$ $\displaystyle-C\beta^{2}-M\ln 2+\tau(m_{1}-1)q_{1}^{2}-w_{1}(m_{1}-1)(m_{1}-2)q^{3}_{1}-w_{2}(m_{1}-1)q^{3}_{1}$ $\displaystyle- y_{1}(m_{1}-1)q^{4}_{1}-y_{2}(m_{1}-1)^{2}q^{4}_{1}-y_{3}(m_{1}-1)(m_{1}-2)q^{4}_{1}-y_{5}(m_{1}-1)(m_{1}^{2}-3m_{1}+3)q^{4}_{1}$ $\displaystyle- z_{1}(m_{1}-1)q^{5}_{1}-z_{2}(m_{1}-1)^{2}q^{5}_{1}-z_{3}(m_{1}-1)(m_{1}-2)q^{5}_{1}-z_{4}(m_{1}-1)(m_{1}-2)q^{5}_{1}$ $\displaystyle- z_{5}(m_{1}-1)(m_{1}^{2}-3m_{1}+3)q^{5}_{1}-z_{6}(m_{1}-1)^{2}(m_{1}-2)q^{5}_{1}-z_{7}(m_{1}-1)^{3}q_{1}^{5}$ $\displaystyle- z_{8}(m_{1}-1)(m_{1}-2)^{2}q^{5}_{1}-z_{9}(m_{1}-1)(m_{1}-2)(m_{1}^{2}-2m_{1}+2)q^{5}_{1}.$ (132) The saddle point equations are obtained by varying the free energy with respect to $q_{1}$ and $m_{1}$. They are given by $\displaystyle 2\tau q_{1}=$ $\displaystyle 3\Big{[}w_{1}(m_{1}-2)+w_{2}\Big{]}q^{2}_{1}+4\Big{[}y_{1}+y_{2}(m_{1}-1)$ $\displaystyle+y_{3}(m_{1}-2)+y_{5}(m_{1}^{2}-3m_{1}+3)\Big{]}q^{3}_{1}$ $\displaystyle+$ $\displaystyle 5\Big{[}z_{1}+z_{2}(m_{1}-1)+z_{3}(m_{1}-2)+z_{4}(m_{1}-2)$ $\displaystyle+z_{5}(m_{1}^{2}-3m_{1}+3)+z_{6}(m_{1}-1)(m_{1}-2)$ $\displaystyle+z_{7}(m_{1}-1)^{2}+z_{8}(m_{1}-2)^{2}$ $\displaystyle+z_{9}(m_{1}-2)(m_{1}^{2}-2m_{1}+2)\Big{]}q^{4}_{1}$ (133) and $\displaystyle\tau q^{2}_{1}=$ $\displaystyle\Big{[}w_{1}(2m_{1}-3)+w_{2}\Big{]}q^{3}_{1}+\Big{[}y_{1}+2y_{2}(m_{1}-1)$ $\displaystyle+y_{3}(2m_{1}-3)+y_{5}(3m^{2}_{1}-8m_{1}+6)\Big{]}q^{4}_{1}$ $\displaystyle+$ $\displaystyle\Big{[}z_{1}+2z_{2}(m_{1}-1)+z_{3}(2m_{1}-3)+z_{4}(2m_{1}-3)$ $\displaystyle+z_{5}(3m^{2}_{1}-8m_{1}+6)+z_{6}(3m^{2}_{1}-8m_{1}+5)$ $\displaystyle+3z_{7}(m_{1}-1)^{2}+z_{8}(3m^{2}_{1}-10m_{1}+8)$ $\displaystyle+z_{9}(4m^{3}_{1}-15m^{2}_{1}+20m_{1}-10)\Big{]}q^{5}_{1}$ (134) Combining the above equations with the condition $q_{1}\neq 0$, we have $\displaystyle 0=$ $\displaystyle\Big{[}-m_{1}w_{1}+w_{2}\Big{]}+2\Big{[}y_{1}-y_{3}+y_{5}m_{1}(2-m_{1})\Big{]}q_{1}$ 
$\displaystyle+$ $\displaystyle\Big{[}3z_{1}+z_{2}(m_{1}-1)+z_{3}(m_{1}-4)+z_{4}(m_{1}-4)$ $\displaystyle+$ $\displaystyle z_{5}(-m^{2}_{1}+m_{1}+3)+z_{6}m_{1}(1-m_{1})-z_{7}(m_{1}-1)^{2}$ $\displaystyle+$ $\displaystyle z_{8}(4-m^{2}_{1})+z_{9}m_{1}(-3m^{2}_{1}+10m_{1}-10)\Big{]}q^{2}_{1}$ (135) The 1RSB transition temperature is determined by setting $m_{1}=1$ in the above equation. We obtain $\displaystyle(w_{2}-w_{1})+2(y_{1}-y_{3}+y_{5})q_{1}$ $\displaystyle+3(z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9})q^{2}_{1}=0.$ (136) Equivalently, we have an equation without factors of $\beta$ as $\displaystyle(w^{\prime}_{2}-w^{\prime}_{1})+2(y^{\prime}_{1}-y^{\prime}_{3}+y^{\prime}_{5})\mu_{1}$ $\displaystyle+3(z^{\prime}_{1}-z^{\prime}_{3}-z^{\prime}_{4}+z^{\prime}_{5}+z^{\prime}_{8}-z^{\prime}_{9})\mu^{2}_{1}=0.$ (137) From Appendix A, the coefficients are given by $\displaystyle w_{2}-w_{1}=\frac{\beta^{6}}{12M^{2}}(M-1)(M-3),$ (138) and $\displaystyle y_{1}-y_{3}+y_{5}$ (139) $\displaystyle=$ $\displaystyle\begin{cases}-\frac{\beta^{8}}{48M^{3}}(M-1)(12M-29),&\text{if}\ 2\leq M\leq 3\\\ \frac{\beta^{8}}{48M^{3}}(M-1)(3M^{2}-27M+47),&\text{if}\ M\geq 3.\end{cases}$ In Sec. II.5, we have defined the effective quintic coefficient $z_{1}^{\rm eff}$ as the one that appears in the above equation, which can be calculated from the results in Appendix A as $\displaystyle z_{1}^{\rm eff}$ $\displaystyle\equiv z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9}$ (140) $\displaystyle=\begin{cases}\frac{\beta^{10}}{60M^{4}}(M-1)(45M-103),\\\ -\frac{\beta^{10}}{60M^{4}}(M-1)(30M^{2}-195M+283),\\\ \frac{\beta^{10}}{60M^{4}}(M-1)(3M^{3}-57M^{2}+273M-355),\end{cases}$ In the above equation, the three cases from top to bottom correspond to the regions, $2\leq M\leq 3$, $3\leq M\leq 4$ and $M\geq 4$, respectively. This is related to the small-$\sigma$ expansion of $f_{M}(\sigma)$ discussed in Sec. II.3 as follows. If we multiply Eq. 
(136) by $-q^{3}_{1}/2$ and use $q_{1}=\mu_{1}/\beta^{2}=M\sigma^{2}/\beta^{2}$, Eq. (136) becomes $\displaystyle c_{6}(M)\sigma^{6}+c_{8}(M)\sigma^{8}+c_{10}(M)\sigma^{10}=0,$ (141) where $\displaystyle c_{6}(M)=-\frac{M^{3}}{2\beta^{6}}(w_{2}-w_{1}),$ (142) $\displaystyle c_{8}(M)=-\frac{M^{4}}{\beta^{8}}(y_{1}-y_{3}+y_{5}),$ (143) and $\displaystyle c_{10}(M)=-\frac{3M^{5}}{2\beta^{10}}(z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9}).$ (144) ## Appendix D FRSB equations for the free energy with one quintic term Taking a functional derivative of the free energy in Eq. (72) with respect to $q(x)$, we have $\displaystyle 0=$ $\displaystyle\frac{\delta}{\delta q(x)}\left(\frac{\beta F_{\rm FRSB}}{N}\right)=-2\tau q(x)-w_{1}\left\\{3xq^{2}(x)+3\int_{0}^{x}dy\;q^{2}(y)+6q(x)\int_{x}^{1}dy\;q(y)\right\\}+3w_{2}q^{2}(x)$ $\displaystyle+4y_{1}q^{3}(x)-4y_{2}\langle q^{2}\rangle q(x)-y_{3}\left\\{2\langle q^{3}\rangle+6\langle q\rangle q^{2}(x)+2\langle q^{2}\rangle q(x)+4xq^{3}(x)-6q^{2}(x)\int_{0}^{x}dy\;q(y)-2\int_{x}^{1}dyq^{3}(y)\right\\}$ $\displaystyle-y_{5}\Bigg{\\{}4\langle q^{2}\rangle q(x)-8\langle q\rangle^{2}q(x)-8\langle q\rangle\langle q^{2}\rangle-4\int_{0}^{1}dx^{\prime}\;q(x^{\prime})\int_{0}^{x^{\prime}}dy\;(q(x^{\prime})-q(y))^{2}$ $\displaystyle~{}~{}~{}-4\langle q\rangle\Big{[}3xq^{2}(x)-4q(x)\int_{0}^{x}dy\;q(y)-2\int_{x}^{1}dy\;q^{2}(y)+\int_{0}^{x}dy\;q^{2}(y)+2q(x)\int_{x}^{1}dy\;q(y)\Big{]}$ $\displaystyle~{}~{}~{}-\Bigg{[}4x^{2}q^{3}(x)-12xq^{2}(x)\int_{0}^{x}dy\;q(y)-4\int_{x}^{1}dy\;yq^{3}(y)+4xq(x)\int_{0}^{x}dy\;q^{2}(y)+4q(x)\int_{x}^{1}dy\;yq^{2}(y)$ $\displaystyle~{}~{}~{}~{}~{}~{}-4\int_{0}^{x}dyq(y)\int_{0}^{x}dz\;q^{2}(z)-4\int_{x}^{1}dy\;q(y)\int_{0}^{y}dz\;q^{2}(z)-8q(x)\int_{x}^{1}dy\;q(y)\int_{0}^{y}dz\;q(z)+8q(x)\left[\int_{0}^{x}dy\;q(y)\right]^{2}$ $\displaystyle~{}~{}~{}~{}~{}~{}+8\int_{x}^{1}dy\;q^{2}(y)\int_{0}^{y}dz\;q(z)+4q(x)\int_{x}^{1}dy\;\int_{0}^{y}dz\;q^{2}(z)\Bigg{]}\Bigg{\\}}+5z_{1}q^{4}(x).$ (145) For $0\leq x\leq 1$ where 
$q^{\prime}(x)\neq 0$, we can take a derivative of the above equation and have $\displaystyle\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)}\left(\frac{\beta F_{\rm FRSB}}{N}\right)\right]=0.$ (146) This gives us $\displaystyle 0=$ $\displaystyle-2\tau- w_{1}\left\\{6xq(x)+6\int_{x}^{1}dy\;q(y)\right\\}+6w_{2}q(x)$ $\displaystyle+$ $\displaystyle 12y_{1}q^{2}(x)-4y_{2}\langle q^{2}\rangle- y_{3}\Bigg{\\{}12\langle q\rangle q(x)+12xq^{2}(x)$ $\displaystyle-12q(x)\int_{0}^{x}dy\;q(y)+2\langle q^{2}\rangle\Bigg{\\}}-y_{5}\Bigg{\\{}4\langle q^{2}\rangle-8\langle q\rangle^{2}$ $\displaystyle-4\langle q\rangle\Big{[}6xq(x)-4\int_{0}^{x}dy\;q(y)+2\int_{x}^{1}dy\;q(y)\Big{]}$ $\displaystyle-\Bigg{[}12x^{2}q^{2}(x)-24xq(x)\int_{0}^{x}dy\;q(y)+4x\int_{0}^{x}dy\;q^{2}(y)$ $\displaystyle~{}~{}+4\int_{x}^{1}dy\;yq^{2}(y)-8\int_{x}^{1}dy\;q(y)\int_{0}^{y}dz\;q(z)$ $\displaystyle~{}~{}+8\left[\int_{0}^{x}dyq(y)\right]^{2}+4\int_{x}^{1}dy\int_{0}^{y}dz\;q^{2}(z)\Bigg{]}\Bigg{\\}}$ $\displaystyle+$ $\displaystyle 20z_{1}q^{3}(x)$ (147) Taking one more derivative with respect to $x$ and divide by $q^{\prime}(x)$, we have for $x$ with $q^{\prime}(x)\neq 0$ $\displaystyle\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)}\left(\frac{\beta F_{\rm FRSB}}{N}\right)\right]=0.$ (148) This is given by $\displaystyle 0=$ $\displaystyle-6(w_{1}x-w_{2})+24Y(x)q(x)$ $\displaystyle+12Y^{\prime}(x)\int_{x}^{1}dy\;q(y)+60z_{1}q^{2}(x),$ (149) where $\displaystyle Y(x)\equiv y_{1}-xy_{3}+x^{2}y_{5}.$ (150) Taking a derivative of the above equation with respect to $x$ once again, we have $\displaystyle\frac{d}{dx}\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)}\left(\frac{\beta F_{\rm FRSB}}{N}\right)\right]=0,$ (151) This can be written as $\displaystyle 0=$ 
$\displaystyle-6w_{1}+24Y(x)q^{\prime}(x)+12Y^{\prime}(x)q(x)$ $\displaystyle+24y_{5}\int_{x}^{1}dy\;q(y)+120z_{1}q^{\prime}(x)q(x).$ (152) Eliminating $\int_{x_{0}}^{1}dy\;q(y)$ from Eqs. (149) and (152), we have $\displaystyle q^{\prime}(x)$ $\displaystyle=$ $\displaystyle\frac{-y_{3}w_{1}+2y_{5}w_{2}+2(-y^{2}_{3}+4y_{1}y_{5})q(x)+20z_{1}y_{5}q^{2}(x)}{4Y^{\prime}(x)(Y(x)+5z_{1}q(x))}.$ (153) ## Appendix E FRSB expressions for all quintic terms Here we present the expressions in terms of the Parisi function $q(x)$ for the quintic contributions to the free energy, which is denoted by $F^{(5)}_{\rm FRSB}$. We have $\displaystyle\frac{\beta F_{\rm FRSB}^{(5)}}{N}$ $\displaystyle=z_{1}\langle q^{5}\rangle-z_{2}\Big{[}-\langle q^{5}\rangle+2\langle q^{3}\rangle\langle q^{2}\rangle+\int_{0}^{1}dx\;\int_{0}^{x}dy\;(q^{3}(y)-q^{3}(x))(q^{2}(y)-q^{2}(x))\Big{]}$ $\displaystyle-z_{3}\left[2\langle q\rangle\langle q^{4}\rangle+\int_{0}^{1}dx\;q^{3}(x)\int_{0}^{x}dy\;(q(y)-q(x))^{2}\right]-z_{4}\left[2\langle q^{2}\rangle\langle q^{3}\rangle+\int_{0}^{1}dx\;q(x)\int_{0}^{x}dy\;\left(q^{2}(y)-q^{2}(x)\right)^{2}\right]$ $\displaystyle-z_{5}\Big{[}-4\langle q\rangle^{2}\langle q^{3}\rangle+\langle q^{2}\rangle\langle q^{3}\rangle-3\langle q\rangle\langle q^{2}h\rangle-\langle q^{3}\rangle\langle h\rangle-\int_{0}^{1}dx\;q^{2}(x)\int_{0}^{x}dy\;(q(y)-q(x))(h(y)-h(x))\Big{]},$ $\displaystyle-z_{6}\left[-2\langle q\rangle\langle q^{2}\rangle^{2}-\langle q^{2}\rangle\langle qh\rangle\right]-z_{7}\Big{[}2\langle q^{2}\rangle\langle q^{3}\rangle+\langle q\rangle\langle q^{4}\rangle-4\langle q\rangle\langle q^{2}\rangle^{2}-3\langle q^{2}\rangle\langle g\rangle+\langle q^{2}g\rangle$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-\langle q\rangle\int_{0}^{1}dx\int_{0}^{x}dy\;\left(q^{2}(y)-q^{2}(x)\right)^{2}-\int_{0}^{1}dx\int_{0}^{x}dy\;(g(y)-g(x))(q^{2}(y)-q^{2}(x))\Big{]},$ $\displaystyle-z_{8}\left[-4\langle q^{2}\rangle\langle q^{3}\rangle-4\langle q\rangle\langle q^{2}h\rangle-\langle 
qh^{2}\rangle\right]-z_{9}\Big{[}8\langle q\rangle^{3}\langle q^{2}\rangle-4\langle q^{2}\rangle^{2}\langle q\rangle+10\langle q\rangle^{2}\langle qh\rangle-2\langle q^{2}\rangle\langle qh\rangle$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}+2\langle q\rangle\langle q^{2}\rangle\langle h\rangle+3\langle q\rangle\langle h^{2}\rangle+\langle h\rangle\langle qh\rangle+2\langle q\rangle\int_{0}^{1}dx\;q(x)\int_{0}^{x}dy\;(q(y)-q(x))(h(y)-h(x))$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}+\int_{0}^{1}dx\;h(x)\int_{0}^{x}dy\;(q(y)-q(x))(h(y)-h(x))\Big{]},$ (154) where $\displaystyle h(x)=\int_{0}^{x}dy\;(q(y)-q(x))^{2}$ (155) and $\displaystyle g(x)=\int_{0}^{x}dy\;\left(q^{2}(y)-q^{2}(x)\right)(q(y)-q(x))$ (156) Stationary conditions for the free energy obtained from the quintic contributions are quite complicated. In this Appendix, we only present $\displaystyle 0=\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)}\left(\beta F_{\rm FRSB}^{(5)}/N\right)\right].$ (157) This is given by $\displaystyle 0$ $\displaystyle=60z_{1}q^{2}(x)-6z_{2}\langle q^{2}\rangle- z_{3}\left[6\int_{0}^{x}dy\;q^{2}(y)+48q(x)\int_{x}^{1}dy\;q(y)+60xq^{2}(x)\right]$ $\displaystyle- z_{4}\left[12\int_{x}^{1}dy\;q^{2}(y)+24q(x)\int_{x}^{1}dy\;q(y)+60xq^{2}(x)\right]-z_{5}\Big{[}-24\langle q\rangle^{2}+6\langle q^{2}\rangle-6\langle h\rangle-72\langle q\rangle xq(x)$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}+36\langle q\rangle\int_{0}^{x}dy\;q(y)-6x\int_{x}^{1}dy\;q^{2}(y)+60xq(x)\int_{0}^{x}dy\;q(y)-12\left(\int_{0}^{x}dy\;q(y)\right)^{2}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}-54x^{2}q^{2}(x)+6\int_{0}^{x}dy\;h(y)-6xh(x)\Big{]}$ $\displaystyle-z_{6}\left[-6\langle q^{2}\rangle x\right]-z_{8}\Big{[}-24\langle q^{2}\rangle-96\langle q\rangle xq(x)+48\langle q\rangle\int_{0}^{x}dy\;q(y)+96xq(x)\int_{0}^{x}dy\;q(y)$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}-24\left(\int_{0}^{x}dy\;q(y)\right)^{2}-60x^{2}q^{2}(x)-12x\int_{0}^{x}dy\;q^{2}(y)\Big{]}$
$\displaystyle-z_{9}\Big{[}12x\langle q\rangle^{2}+48x\langle q\rangle\\{xq(x)-\int_{0}^{x}dy\;q(y)\\}+12x^{2}h(x)+6x\left(\langle h\rangle-2\int_{0}^{x}dy\;h(y)\right)$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}+48x\left(xq(x)-\int_{0}^{x}dy\;q(y)\right)^{2}+6x\langle h\rangle+72x\langle q\rangle\int_{0}^{x}dy\;(q(x)-q(y))-12x\langle q^{2}\rangle+60x\langle q\rangle^{2}\Big{]}.$ (158)
# The Preservation of Convexity by Geodesics in the Space of Kähler Potentials on Complex Affine Manifolds Jingchen Hu ###### Abstract On a compact complex affine manifold with a constant coefficient Kähler metric $\omega_{0}$, we introduce a concept, $(S,\omega_{0})$-convexity, and show that $(S,\omega_{0})$-convexity is preserved by geodesics in the space of Kähler potentials. This implies that if two potentials are both strictly $(S,\omega_{0})$-convex, then the metrics along the geodesic connecting them are non-degenerate. ## 1 Introduction The results of this paper provide partial answers to the following questions. First, in the space of Kähler potentials, with the metric introduced by Semmes-Mabuchi-Donaldson, any two points can be joined by a weak geodesic, but the metrics along the geodesic may be degenerate. The question is: can we impose conditions on the two endpoint potentials so that the metrics along the geodesic connecting them do not degenerate? Second, the maximum rank problem has been extensively studied for a general class of fully nonlinear elliptic equations, but the situation for degenerate elliptic equations, for example the degenerate complex Monge-Ampère equation, has remained unexplored. One question is whether, under suitable conditions, the maximum rank property holds for solutions of degenerate complex Monge-Ampère equations. In this paper, on a complex affine manifold with a constant coefficient metric $\omega_{0}$, we introduce a concept, $(S,\omega_{0})$-convexity, and show that if two potentials are both strictly $(S,\omega_{0})$-convex, then they can be connected by a geodesic with non-degenerate metric. Under a similar condition, we can show that the Hessian of the solution to the homogeneous complex Monge-Ampère equation on an $(n+1)$-dimensional product space has rank $n$.
In section 1.1 we introduce some basic concepts and recall some earlier results; in section 1.2 we introduce the concept of $(S,\omega_{0})$-convexity and an elliptic perturbation of the homogeneous complex Monge-Ampère equation, and present our main results; in section 1.3 we provide a bird’s-eye view of the results of each section and describe the structure of the paper; in section 1.4, further notation and conventions are introduced. ### 1.1 Background Given an $n$ dimensional Kähler manifold $(V,\omega_{0})$, we define the space of Kähler potentials: $\displaystyle\mathcal{H}=\\{\phi\in C^{\infty}(V)|\omega_{0}+\sqrt{-1}\partial\overline{\partial}\phi>0\\}.$ (1.1) A Riemannian metric can be introduced on this space. For $\psi_{1},\psi_{2}\in T_{\phi}\mathcal{H}$, let $\displaystyle<\psi_{1},\psi_{2}>_{\phi}=\int_{V}\psi_{1}\psi_{2}(\omega_{0}+\sqrt{-1}\partial\overline{\partial}\phi)^{n}.$ (1.2) With the Riemannian metric above, for a curve $\\{\varphi(t)|t\in[0,1]\\}\subset\mathcal{H}$, its energy is $\displaystyle E(\varphi)=\int_{0}^{1}\int_{V}\varphi_{t}^{2}(\omega_{0}+\sqrt{-1}\partial\overline{\partial}\varphi)^{n}dt.$ (1.3) In this paper we denote $\omega_{0}=\sqrt{-1}b_{\alpha{\overline{\beta}}}dz^{\alpha}\wedge\overline{dz^{\beta}}$, $\alpha,\beta\in\\{1,...,n\\}$, and $g_{\alpha{\overline{\beta}}}=b_{\alpha{\overline{\beta}}}+\varphi_{\alpha{\overline{\beta}}},\ g^{\theta{\overline{\beta}}}g_{\alpha{\overline{\beta}}}=\delta_{\theta\alpha}$.
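The passage below from the Euler-Lagrange equation (1.4) to the determinant equation (1.7) is an instance of the Schur complement identity, which we record here for convenience: when the matrix $(g_{\alpha{\overline{\beta}}})=(b_{\alpha{\overline{\beta}}}+\varphi_{\alpha{\overline{\beta}}})$ is invertible,

$\displaystyle\det\left(\begin{array}[]{cc}\varphi_{tt}&\varphi_{t{\overline{\beta}}}\\\ \varphi_{\alpha t}&g_{\alpha{\overline{\beta}}}\end{array}\right)=\det\left(g_{\alpha{\overline{\beta}}}\right)\left(\varphi_{tt}-\varphi_{t\alpha}g^{\alpha{\overline{\beta}}}\varphi_{t{\overline{\beta}}}\right),$

so, when $\omega_{0}+\sqrt{-1}\partial\overline{\partial}\varphi>0$, the vanishing of the bordered determinant is equivalent to the Euler-Lagrange equation.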
Then the Euler-Lagrange equation for the energy above is $\displaystyle\varphi_{tt}=\varphi_{t\alpha}g^{\alpha{\overline{\beta}}}\varphi_{t{\overline{\beta}}}.$ (1.4) When $\omega_{0}+\sqrt{-1}\partial\overline{\partial}\varphi>0$, equation (1.4) is equivalent to $\displaystyle\det\left(\begin{array}[]{cc}\varphi_{tt}&\varphi_{t{\overline{\beta}}}\\\ \varphi_{\alpha t}&\varphi_{\alpha{\overline{\beta}}}+b_{\alpha{\overline{\beta}}}\end{array}\right)=0.$ (1.7) Here the curve $\varphi$ is considered as a function defined on $[0,1]\times V$. Let $\displaystyle\mathcal{S}=\\{\tau=t+\sqrt{-1}\theta|0\leq t\leq 1\\}\subset\mathbb{C}.$ (1.8) We can consider $\varphi$ as a function on $\mathcal{S}\times V$ by letting $\varphi(\tau)=\varphi(t)$. Then equation (1.7) becomes a homogeneous complex Monge-Ampère equation: $\displaystyle\det\left(\begin{array}[]{cc}\varphi_{\tau\overline{\tau}}&\varphi_{\tau{\overline{\beta}}}\\\ \varphi_{\alpha{\overline{\tau}}}&\varphi_{\alpha{\overline{\beta}}}+b_{\alpha{\overline{\beta}}}\end{array}\right)=0.$ (1.11) Denote the projection from $\mathcal{S}\times V$ to $V$ by $\pi_{V}$ and denote $\pi_{V}^{\ast}(\omega_{0})$ by $\Omega_{0}$. Then equation (1.11) becomes $\displaystyle(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\varphi)^{n+1}=0.$ (1.12) This leads to the study of the following Dirichlet problem on $\mathcal{S}\times V$. ###### Problem 1.1 (Geodesic Problem).
Given $\varphi_{0},\varphi_{1}\in\mathcal{H}$, find $\Phi\in C^{1,1}(\mathcal{S}\times V)$ satisfying $\displaystyle(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)^{n+1}=0,$ $\displaystyle\ \ \ \ \text{ in }\mathcal{S}\times V;$ (1.13) $\displaystyle\ \ \ \Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi\geq 0,$ $\displaystyle\ \ \ \ \text{ in }\mathcal{S}\times V;$ (1.14) $\displaystyle\ \ \ \ \ \ \ \Phi_{\theta}=0,$ $\displaystyle\ \ \ \ \text{ in }\mathcal{S}\times V;$ (1.15) $\displaystyle\ \ \ \ \ \ \ \Phi=\varphi_{0},$ $\displaystyle\ \ \ \ \text{ on }\\{t=0\\}\times V;$ (1.16) $\displaystyle\ \ \ \ \ \ \ \Phi=\varphi_{1},$ $\displaystyle\ \ \ \ \text{ on }\\{t=1\\}\times V.$ (1.17) For a solution $\Phi$ to Problem 1.1, $\Phi(t,\ast)$ may not be in $\mathcal{H}$, so we consider it as a weak or generalized geodesic connecting $\varphi_{0}$ and $\varphi_{1}$. More generally, we can replace $\mathcal{S}$ by a Riemann surface $\mathcal{R}$ and consider the following Dirichlet problem. In this paper we only consider the case where $\mathcal{R}$ is a bounded domain in $\mathbb{C}$ with smooth boundary. ###### Problem 1.2 (A Homogeneous Monge-Ampère Equation on General Product Spaces). Given $\mathcal{R}$, a bounded domain in $\mathbb{C}$ with smooth boundary, and $F\in C^{\infty}(\partial\mathcal{R}\times V)$ satisfying $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}(F(\tau,\ast))>0,\ \ \ \ \text{ for any }\tau\in\partial\mathcal{R},$ (1.18) find $\Phi\in C^{1,1}(\mathcal{R}\times V)$ which satisfies $\displaystyle(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)^{n+1}=0,$ $\displaystyle\ \ \ \ \text{ in }\mathcal{R}\times V;$ (1.19) $\displaystyle\ \ \ \Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi\geq 0,$ $\displaystyle\ \ \ \ \text{ in }\mathcal{R}\times V;$ (1.20) $\displaystyle\ \ \ \ \ \ \ \Phi=F,$ $\displaystyle\ \ \ \ \text{ on }\partial\mathcal{R}\times V.$ (1.21) ###### Remark 1.1. Problem 1.1 can be reduced to Problem 1.2.
Let $f$ be a holomorphic covering map from $\mathcal{S}$ to an annulus $\\{\tau|\ 1<|\tau|<2\\}.$ If $\Phi$ is a solution to Problem 1.2 with $\mathcal{R}$ being the annulus $\\{\tau|\ 1<|\tau|<2\\}$ and $F|_{\\{|\tau|=1\\}}=\varphi_{0}$, $F|_{\\{|\tau|=2\\}}=\varphi_{1}$, then $\Phi(f(\tau),z)$ is a solution to Problem 1.1. Problems 1.1 and 1.2 were introduced by [M87] [S92] [D99]. The existence of a $C^{1,1}$ solution was established by [C00] [B12] [CTW19] [CTW17], and it was also shown by [LV13] and [DL12] that the optimal regularity of general solutions is $C^{1,1}$. Besides regularity, we may ask if $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast)>0,\ \ \ \ \ \ \text{for all }\tau\in\mathcal{R},$ (1.22) for a solution $\Phi$ to Problem 1.2. This is similar to the maximum rank problem, which asks if $\displaystyle\text{rank}(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast))=n.$ (1.23) It’s easy to see that (1.22) implies (1.23), but the converse implication may not be true. It turns out that, for a general solution, (1.22) or (1.23) may not be valid. An example was constructed in [RN], where $\mathcal{R}$ is a disc and $V$ is $\mathbb{C}P^{1}$. In this example, a solution $\Phi$ was constructed which satisfies $\displaystyle\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi=0,$ (1.24) in an open set in $\mathcal{R}\times V$. However, we may ask whether it is possible to find conditions on the boundary value $F$ which guarantee that (1.22) is valid. Theorem 1 of [D02] says that, when $\mathcal{R}$ is a disc, the set of smooth functions $F$ for which a smooth solution to Problem 1.2 exists is open in $C^{\infty}(\partial\mathcal{R}\times V)$. In fact, the proof implies that if the boundary value of $\Phi$ is in this set, then (1.22) is satisfied. The proof also suggests that this set is open in the $C^{2}$ topology.
The proof made use of the foliation structure associated to a solution to homogeneous complex Monge-Ampère equations. This technique was also used in [L81] to construct the pluri-complex Green’s function. In [CFH20], by partially generalizing this technique to the case where $\mathcal{R}$ is an annulus, we proved that if $|\varphi_{0}|_{5}+|\varphi_{1}|_{5}$ is small enough, then the geodesic connecting $\varphi_{0}$ and $\varphi_{1}$ is $C^{4}$ and $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(t,\ast)>0,\ \ \ \ \ \ \text{for all }t\in[0,1].$ (1.25) In a recent paper [H22], the author improved the result above, reducing the $C^{5}$ smallness condition to a $C^{2}$ smallness condition. However, [GP12M] [LV13] [H21] and Appendix A of [H22] all suggest that a proper condition on $F$ which implies (1.22) may be a convexity condition. In [GP12M], it was shown that, when $V$ is a 1-dimensional complex flat torus, if $\varphi_{1}$ and $\varphi_{0}$ both satisfy a convexity condition, then the geodesic connecting them has non-degenerate metric. In Appendix A of [H22], a similar result was proved, for solutions to Problem 1.2, with a very different method. Furthermore, computations of [LV13] and [H21] suggest that the convexity condition is also necessary. In this paper, when $V$ is a compact complex affine manifold with a constant coefficient metric, we introduce a concept, $(S,\omega_{0})$-convexity, and show that $(S,\omega_{0})$-convexity of the boundary values implies (1.22). ### 1.2 Notation, Constructions and Main Results In this paper, we will discuss the situation where $V$ is a compact complex affine manifold with a constant coefficient Kähler metric $\omega_{0}$. By the definition of a complex affine manifold, $V$ is equipped with an atlas such that all transition maps are affine and holomorphic.
Furthermore, we require that $\omega_{0}$ is a constant metric, by which we mean that, in any coordinate neighborhood with coordinates $\\{z^{\alpha}\\}_{\alpha=1}^{n}$, if $\displaystyle\omega_{0}=\sqrt{-1}b_{\alpha{\overline{\beta}}}dz^{\alpha}\wedge\overline{dz^{\beta}},$ (1.26) then $b_{\alpha{\overline{\beta}}}$ is constant for all $\alpha,\beta\in\\{1,...,n\\}.$ With these preparations, we can introduce the concept of $\omega_{0}$-convexity. ###### Definition 1.1 ($\omega_{0}$-Convexity and Strict $\omega_{0}$-Convexity). A function $\varphi\in C^{0}(V)$ is (strictly) $\omega_{0}$-convex if, in any coordinate chart with coordinates $\\{z^{\alpha}\\}$, $\displaystyle\varphi+b_{\alpha{\overline{\beta}}}z^{\alpha}\overline{z^{\beta}}$ (1.27) is a (strictly) convex function. The convexity above has been widely used in many works related to Hessian manifolds; for example, it was called local convexity in [CV01] and g-convexity in [GT21]. However, to estimate the convexity of solutions to Problem 1.2, it is necessary to extend this concept and introduce the following concept of $(S,\omega_{0})$-convexity. ###### Definition 1.2 ($(S,\omega_{0})$-Convexity and Strict $(S,\omega_{0})$-Convexity for $C^{0}$ Functions). Suppose $S$ is a constant section of $T_{2,0}^{\ast}(V)$. Then a function $\varphi\in C^{0}(V)$ is (strictly) $(S,\omega_{0})$-convex if, in any coordinate chart with coordinates $\\{z^{\alpha}\\}$, $\displaystyle\varphi+b_{\alpha{\overline{\beta}}}z^{\alpha}\overline{z^{\beta}}+\text{Re}(S_{{\alpha\beta}}z^{\alpha}z^{\beta})$ (1.28) is a (strictly) convex function. By a constant section we mean that, in any coordinate chart, the tensor components of $S$ are constant. Obviously, when $S=0$, $(S,\omega_{0})$-convexity is exactly $\omega_{0}$-convexity. Furthermore, to gauge the convexity, we introduce the concept of modulus of convexity. ###### Definition 1.3 (Modulus of $(S,\omega_{0})$-Convexity). Suppose $S$ is a constant section of $T_{2,0}^{\ast}(V)$. 
Then a function $\varphi\in C^{0}(V)$ is $(S,\omega_{0})$-convex of modulus $\geq\mu$ if, in any coordinate chart with coordinates $\\{z^{\alpha}\\}$, $\displaystyle\varphi+(1-\mu)b_{\alpha{\overline{\beta}}}z^{\alpha}\overline{z^{\beta}}+\text{Re}(S_{{\alpha\beta}}z^{\alpha}z^{\beta})$ (1.29) is a convex function. It is $(S,\omega_{0})$-convex of modulus $>\mu$ if (1.29) is strictly convex. ###### Remark 1.2. It is easy to see that if $\varphi$ is $(S,\omega_{0})$-convex of modulus $>\mu$, for some $\mu\geq 0$, then it is strictly $(S,\omega_{0})$-convex. Another fact is that if $\varphi$ is $(S,\omega_{0})$-convex of modulus $>\mu$, then it is $(S,\omega_{0})$-convex of modulus $\geq\mu$. In addition, since $V$ is compact, if $\varphi$ is $(S,\omega_{0})$-convex of modulus $\geq\mu$, then, for any $\mu^{\prime}<\mu$, $\varphi$ is $(S,\omega_{0})$-convex of modulus $>\mu^{\prime}$. When the function $\varphi$ is $C^{2}$, strict $(S,\omega_{0})$-convexity can be defined using complex second-order derivatives. Using Lemma A.1, we will see that Definition 1.2 is equivalent to the following. ###### Definition 1.4 (Strict $(S,\omega_{0})$-Convexity for $C^{2}$ Functions). Suppose $S$ is a constant section of $T_{2,0}^{\ast}(V)$. Then a function $\varphi\in C^{2}(V)$ is strictly $(S,\omega_{0})$-convex if $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\varphi>0$ (1.30) and the maximal eigenvalue of the tensor $\displaystyle K=(\varphi_{\theta\nu}-S_{\theta\nu})g^{\nu{\overline{\zeta}}}\overline{(\varphi_{\eta\zeta}-S_{\eta\zeta})}g^{\gamma{\overline{\eta}}}dz^{\theta}\otimes\frac{\partial}{\partial z^{\gamma}}$ (1.31) is smaller than 1. ###### Remark 1.3. Here $\varphi_{\alpha\beta}dz^{\alpha}\otimes dz^{\beta}$ is a well-defined section of $T_{2,0}^{\ast}(V)$, because the coordinate transition functions between charts are affine. In addition, using basic linear algebra, we know that the eigenvalues of $K$ are real and non-negative. 
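As a quick consistency check of this equivalence in the simplest case $n=1$, $S=0$ (a standard computation, recorded here for orientation; Lemma A.1 handles the general case):

```latex
% n=1, S=0: set g=b_{1\bar 1}+\varphi_{1\bar 1} and u=\varphi+b_{1\bar 1}z\bar z,
% so that u_{z\bar z}=g and u_{zz}=\varphi_{11}. For a real-valued C^2 function u,
% the eigenvalues of the real Hessian are
\lambda_{\pm}=2\left(u_{z\bar z}\pm|u_{zz}|\right)=2\left(g\pm|\varphi_{11}|\right),
% hence u is strictly convex if and only if g>|\varphi_{11}|, i.e. if and only if
g>0\quad\text{and}\quad
K=\varphi_{11}\,g^{-1}\,\overline{\varphi_{11}}\,g^{-1}=\frac{|\varphi_{11}|^{2}}{g^{2}}<1,
% which is exactly the condition of Definition 1.4 in this case.
```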
We introduce another measure of convexity: ###### Definition 1.5 (Degree of Convexity). Suppose $S$ is a constant section of $T^{\ast}_{2,0}(V)$ and $\delta$ is a positive number. Then a function $\varphi\in C^{2}(V)$ is $(S,\omega_{0})$-convex of degree $>\delta$ if, for any constant section $\Theta$ of $T_{2,0}^{\ast}(V)$ with $\displaystyle\text{maximum eigenvalue of }\left(\Theta_{\alpha\eta}b^{\eta{\overline{\gamma}}}\ \overline{\Theta_{\rho\gamma}}b^{\beta{\overline{\rho}}}dz^{\alpha}\otimes\frac{\partial}{\partial z^{\beta}}\right)\leq\delta^{2},$ (1.32) $\varphi$ is a strictly $(S+\Theta,\omega_{0})$-convex function. It turns out that the degree of convexity and the modulus of convexity coincide. In Lemma A.2, we show that, for $\varphi\in C^{2}(V)$ and $\delta\geq 0$, $\varphi$ is $(S,\omega_{0})$-convex of degree $>\delta$ if and only if it is $(S,\omega_{0})$-convex of modulus $>\delta$. The main results of the paper are the following. ###### Theorem 1.1 (Estimates for Geodesics). Given $\varphi_{0},\varphi_{1}\in\mathcal{H}$, suppose that there is a constant section $S$ of $T^{\ast}_{2,0}(V)$ such that $\varphi_{0}$ and $\varphi_{1}$ are both $(S,\omega_{0})$-convex of modulus $>\mu$, for some $\mu>0$. Let $\\{\varphi_{t}|\ t\in[0,1]\\}$ be the geodesic connecting $\varphi_{0}$ and $\varphi_{1}$. Then, for any $t\in(0,1)$, $\varphi_{t}$ is $(S,\omega_{0})$-convex of modulus $\geq\mu$ and, by definition, this implies $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\varphi_{t}\geq\mu\omega_{0},$ (1.33) in the weak sense. The theorem above is a particular case of the following theorem, according to Remark 1.1. ###### Theorem 1.2 (Estimates on Product Space). Suppose $F$ is a $C^{\infty}$ function on $\partial\mathcal{R}\times V$ and, for a constant $\mu>0$ and a constant section $S$ of $T^{\ast}_{2,0}(V)$, $F(\tau,\ast)$ is $(S,\omega_{0})$-convex of modulus $>\mu$, for any $\tau\in\partial\mathcal{R}$. 
Let $\Phi$ be the solution to Problem 1.2 with boundary value $F$. Then $\Phi(\tau,\ast)$ is $(S,\omega_{0})$-convex of modulus $\geq\mu$, for any $\tau\in\mathcal{R}$, and, as a consequence, $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}[\Phi(\tau,\ast)]\geq\mu\omega_{0},$ (1.35) in the weak sense. As we discussed, solutions to Problem 1.2 may only be $C^{1,1}$, but to implement our method we need to use up to fourth-order derivatives. Therefore, we consider an elliptic perturbation of Problem 1.2, for which there are smooth solutions. In many previous works, the following problem was studied: ###### Problem 1.3 (Non-Degenerate Monge-Ampère Equation on Product Space). Given $\mathcal{R}$, a bounded domain in $\mathbb{C}$ with smooth boundary, and $F\in C^{\infty}(\partial\mathcal{R}\times V)$ satisfying $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}(F(\tau,\ast))>0,$ $\displaystyle\ \ \ \ \text{ for any }\tau\in\partial\mathcal{R},$ (1.36) find $\Phi\in C^{1,1}(\mathcal{R}\times V)$ which satisfies $\displaystyle(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)^{n+1}$ $\displaystyle=\varepsilon\sqrt{-1}d\tau\wedge\overline{d\tau}\wedge\Omega_{0}^{n},$ $\displaystyle\ \ \ \ \text{ in }\mathcal{R}\times V;$ (1.37) $\displaystyle\ \ \ \Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi$ $\displaystyle>0,$ $\displaystyle\ \ \ \ \text{ in }\mathcal{R}\times V;$ (1.38) $\displaystyle\ \ \ \ \ \ \ \Phi$ $\displaystyle=F,$ $\displaystyle\ \ \ \ \text{ on }\partial\mathcal{R}\times V.$ (1.39) In the above, $\varepsilon$ is a positive constant. However, in this paper we introduce a different perturbation. ###### Problem 1.4 (An Elliptic Perturbation of the Homogeneous Complex Monge-Ampère Equation). 
Suppose $F\in C^{\infty}(\partial\mathcal{R}\times V)$ satisfies $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}(F(\tau,\ast))>0,\ \ \ \ \text{ for any }\tau\in\partial\mathcal{R}.$ (1.40) Find $\Phi\in C^{\infty}(\mathcal{R}\times V)$ satisfying: $\displaystyle(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)^{n+1}=\epsilon\sqrt{-1}d\tau\wedge\overline{d\tau}\wedge\Omega_{0}\wedge(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)^{n-1},$ $\displaystyle\text{ in\ \ \ }\mathcal{R}\times V;$ (1.41) $\displaystyle\ \ \ \ \ \ \ \ \ \ \Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi>0,$ $\displaystyle\text{ in\ \ \ }\mathcal{R}\times V;$ (1.42) $\displaystyle\ \ \ \ \ \ \ \ \ \ \ \ \ \ \Phi=F,$ $\displaystyle\text{ on \ }\partial\mathcal{R}\times V.$ (1.43) Here $\epsilon$ is a positive constant. ###### Remark 1.4. Equation (1.41) is equivalent to $\displaystyle\Phi_{\tau{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}=\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\beta}}},$ (1.44) provided $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}(\Phi(\tau,\ast))>0,\ \ \ \ \text{ for any }\tau\in\mathcal{R}.$ (1.45) We also notice that, when $n=1$, Problem 1.4 is exactly Problem 1.3, with $\varepsilon=\epsilon$. In a previous paper [H22], we proved a particular one-dimensional case of Theorem 1.2 by working with solutions to Problem 1.3. In Section 4, we prove that, under the condition of Theorem 1.2, a smooth solution to Problem 1.4 exists. However, we do not know whether a smooth solution always exists for a general boundary value $F$. The convergence of solutions as $\epsilon$ goes to zero is discussed in Section 5. 
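Returning to Remark 1.4, it may help to sketch why (1.41) reduces to the scalar equation (1.44) (a routine expansion of the $(n+1)$-forms; the positive dimensional constants $c_{n},c_{n}'$ below are our own bookkeeping and can be absorbed into $\epsilon$):

```latex
% Write g_{\alpha\bar\beta}=b_{\alpha\bar\beta}+\Phi_{\alpha\bar\beta} and let H be
% the (n+1)x(n+1) Hermitian matrix of \Omega_0+\sqrt{-1}\partial\bar\partial\Phi:
H=\begin{pmatrix}\Phi_{\tau\bar\tau}&\Phi_{\tau\bar\beta}\\ \Phi_{\alpha\bar\tau}&g_{\alpha\bar\beta}\end{pmatrix},
\qquad
\det H=\det(g)\left(\Phi_{\tau\bar\tau}-\Phi_{\tau\bar\beta}\,g^{\alpha\bar\beta}\,\Phi_{\alpha\bar\tau}\right)
% by the Schur-complement formula, using (1.45). Since d\tau\wedge\overline{d\tau}
% annihilates every term containing d\tau or \overline{d\tau}, only the
% V-components survive on the right-hand side of (1.41), and in local coordinates
(\Omega_0+\sqrt{-1}\partial\bar\partial\Phi)^{n+1}=c_n'\,\det H\;dV,
\qquad
\sqrt{-1}\,d\tau\wedge\overline{d\tau}\wedge\Omega_0\wedge(\Omega_0+\sqrt{-1}\partial\bar\partial\Phi)^{n-1}
=c_n\,b_{\alpha\bar\beta}\,g^{\alpha\bar\beta}\,\det(g)\;dV .
% Dividing both sides of (1.41) by \det(g)\,dV gives (1.44),
% after absorbing c_n/c_n' into \epsilon.
```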
### 1.3 Structure of the Paper In section 2, by differentiating equation (1.44), we find that the second-order derivatives of $\Phi$, $\displaystyle A_{{\alpha{\overline{\beta}}}}=\Phi_{\alpha{\overline{\beta}}}+b_{\alpha{\overline{\beta}}}\ \ \ \ \ \ \text{and}\ \ \ \ \ \ B_{{\alpha\beta}}=\Phi_{\alpha\beta},$ (1.46) satisfy two non-linear equations, (2.2) and (2.3). In section 3.1, using $A$ and $B$, we construct $M_{S}$, which measures the convexity, and $Q^{[p]}_{S}$, which is a smooth approximation of $M_{S}$. Then, using equations (2.2) and (2.3), we show that, for an elliptic operator $L^{i\overline{j}}\partial_{i\overline{j}}$ and $Q^{<p>}_{S}=\left(Q^{[p]}_{S}\right)^{p}$, $\displaystyle L^{i\overline{j}}\partial_{i\overline{j}}\left(Q^{<p>}_{S}\right)\geq 0,$ (1.47) provided $Q^{[p]}_{S}\leq 1-\frac{1}{2p}$. In section 3.2, using a continuity argument, we show that if $M_{S}<1$ on ${\partial\mathcal{R}\times V}$, then $M_{S}<1$ in ${\mathcal{R}\times V}$. This means that strict $(S,\omega_{0})$-convexity on $\partial\mathcal{R}\times V$ implies strict $(S,\omega_{0})$-convexity in $\mathcal{R}\times V$. In section 3.3, by altering $S$, we show that $(S,\omega_{0})$-convexity of modulus $>\mu$ on $\partial\mathcal{R}\times V$ implies $(S,\omega_{0})$-convexity of modulus $>\mu$ in $\mathcal{R}\times V$. We point out that the estimates in sections 3.1, 3.2 and 3.3 all depend on some a priori assumptions. These assumptions can be removed after we prove the existence of smooth solutions to Problem 1.4. In section 4, we establish $C^{0}$, $C^{1}$, $C^{2}$ and $C^{2,\alpha}$ estimates. The methods are standard in PDE theory. In the proof, the result of section 3.3 plays an important role: it basically says that equation (1.44) is a uniformly elliptic equation, provided $\epsilon>0$. With these estimates, we prove the existence of a $C^{\infty}$ solution by the method of continuity in section 4.6. 
Finally, we prove estimates for solutions to Problems 1.1 and 1.2 by letting $\epsilon\rightarrow 0.$ ### 1.4 A Convention for Tensor Contraction In this paper, we need to contract sequences of rank-$2$ tensors to form new tensors. The computation can be simplified by converting rank-$2$ tensors to matrices. In this section, we explain how to do this. Suppose $A$ is a section of $T_{1,1}^{\ast}(V)$ and $B$ is a section of $T_{2,0}^{\ast}(V)$. In a coordinate chart, $A$ and $B$ can be considered as matrix-valued functions, which we still denote by $A$ and $B$. Let $\displaystyle A=(A_{{\alpha{\overline{\beta}}}}),\ \ \ \ \ \ B=(B_{\alpha\beta}),$ (1.48) where $\alpha$ is the row index and $\beta$ is the column index. As a matrix, when $A$ is invertible, we denote $\displaystyle(A^{{\alpha{\overline{\beta}}}})=A^{-1},$ (1.49) where $\alpha$ is the column index and $\beta$ is the row index. Then we have $\displaystyle A^{\alpha{\overline{\beta}}}A_{\alpha{\overline{\theta}}}=\delta_{\theta\beta}$ (1.50) and we know that $\displaystyle A^{\alpha{\overline{\beta}}}\frac{\partial}{\partial z^{\alpha}}\otimes\overline{\frac{\partial}{\partial z^{\beta}}}$ (1.51) is a section of $T_{1,1}(V)$. Let $\displaystyle K_{\alpha}^{\beta}=B_{{\alpha\theta}}\overline{A^{\mu{\overline{\theta}}}}\ \overline{B_{\mu\rho}}A^{\beta{\overline{\rho}}},$ (1.52) then $K_{\alpha}^{\beta}dz^{\alpha}\otimes\frac{\partial}{\partial z^{\beta}}$ is a section of $T_{1,0}^{\ast}(V)\otimes T_{1,0}(V)$. Locally, we can consider $K$ as a matrix, with $\displaystyle K=(K_{\alpha}^{\beta}),$ (1.53) where $\alpha$ is the row index and $\beta$ is the column index. Then $\displaystyle K=B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}.$ (1.54) It is easy to see that the eigenvalues of $K$ do not change under coordinate transformations. Furthermore, for $p\in\mathbb{Z}^{+}$, we need to use $K^{p}$, which is also a section of $T_{1,0}^{\ast}(V)\otimes T_{1,0}(V)$. 
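To illustrate the two facts just quoted — the eigenvalues of $K$ are real and non-negative (Remark 1.3), and they do not change under coordinate transformations — here is a small numerical sketch; the random matrices and the seed are our own illustrative choices, standing in for $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random stand-ins for the tensors of Section 1.4 (illustrative, not from the paper):
# A Hermitian positive definite (the metric), B complex symmetric (Phi_{alpha beta}).
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = X @ X.conj().T + n * np.eye(n)   # Hermitian positive definite
Y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = (Y + Y.T) / 2                    # complex symmetric

# K = B * conj(A^{-1}) * conj(B) * A^{-1}, as in (1.54).
Ainv = np.linalg.inv(A)
K = B @ Ainv.conj() @ B.conj() @ Ainv

eigs = np.linalg.eigvals(K)
assert np.allclose(eigs.imag, 0.0, atol=1e-10)   # eigenvalues are real ...
assert np.all(eigs.real >= -1e-10)               # ... and non-negative

# Invariance under an affine coordinate change w = P z:
# A -> P^{-T} A conj(P^{-1}) and B -> P^{-T} B P^{-1}, so K -> P^{-T} K P^{T},
# a similarity transformation, which preserves the spectrum.
P = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Pinv = np.linalg.inv(P)
A2 = Pinv.T @ A @ Pinv.conj()
B2 = Pinv.T @ B @ Pinv
K2 = B2 @ np.linalg.inv(A2).conj() @ B2.conj() @ np.linalg.inv(A2)
assert np.allclose(np.sort(np.linalg.eigvals(K2).real),
                   np.sort(eigs.real), atol=1e-6)
```

The invariance reflects that an affine change of coordinates turns $K$ into the similar matrix $P^{-T}KP^{T}$, so only the spectrum of $K$ is geometrically meaningful.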
In addition, for a section $\Theta$ of $T_{1,0}^{\ast}(V)\otimes T_{1,0}(V)$, we denote $\displaystyle\Theta<C,$ (1.55) for a constant $C$, if, at every point $p\in V$, the maximum eigenvalue of $\Theta(p)$ is smaller than $C$. ## 2 Equations for Second Order Derivatives In this paper we will mainly work on the product space $\mathcal{R}\times V$, where $\mathcal{R}$ is a compact domain in $\mathbb{C}$ with smooth boundary and $V$ is the compact complex affine manifold introduced above. We denote the coordinate on $\mathcal{R}$ by $\tau$ and the coordinates on $V$ by $\\{z^{\alpha}\\}_{\alpha=1}^{n}$. Coordinates on $V$ are indexed by Greek letters, except $\tau$. The coordinate $\tau$ on $\mathcal{R}$ will be considered as the $0$-th coordinate and, in some situations, we denote it by $z^{0}$. Thus, the coordinates on $\mathcal{R}\times V$ will be indexed by Roman letters, running from $0$ to $n$. Suppose $\Phi$ is a solution to Problem 1.4. Let $\displaystyle A_{\alpha{\overline{\beta}}}=\Phi_{\alpha{\overline{\beta}}}+b_{\alpha{\overline{\beta}}},\ \ \ \ \ \ \ \ B_{\alpha\beta}=\Phi_{\alpha\beta}.$ (2.1) As described in Section 1.4, $A$ and $B$ can locally be considered as matrices. 
In this section, by differentiating (1.44), we show that, as matrices, $A$ and $B$ satisfy the following equations: $\displaystyle L^{i{\overline{j}}}\partial_{i{\overline{j}}}A=L^{i{\overline{j}}}(\partial_{i}A)A^{-1}(\partial_{\overline{j}}A)+L^{i\overline{j}}(\partial_{\overline{j}}B)\overline{A^{-1}}(\partial_{i}\overline{B}),$ (2.2) $\displaystyle L^{i{\overline{j}}}\partial_{i{\overline{j}}}B=L^{i{\overline{j}}}(\partial_{i}A)A^{-1}(\partial_{\overline{j}}B)+L^{i\overline{j}}(\partial_{\overline{j}}B)\overline{A^{-1}}(\partial_{i}\overline{A}).$ (2.3) Here $L=L^{i{\overline{j}}}\partial_{i\overline{j}}$ is an elliptic operator on $\mathcal{R}\times V$, with $\displaystyle\left(\begin{array}[]{cc}L^{0\overline{0}}&L^{0{\overline{\beta}}}\\\ L^{\alpha\overline{0}}&L^{\alpha{\overline{\beta}}}\end{array}\right)=\left(\begin{array}[]{cc}1&-\Phi_{\mu{\overline{\tau}}}g^{\mu{\overline{\beta}}}\\\ -\Phi_{\tau{\overline{\mu}}}g^{\alpha{\overline{\mu}}}&\Phi_{\tau{\overline{\mu}}}g^{\alpha{\overline{\mu}}}\Phi_{\mu{\overline{\tau}}}g^{\mu{\overline{\beta}}}+\epsilon b_{\eta{\overline{\zeta}}}g^{\alpha{\overline{\zeta}}}g^{\eta{\overline{\beta}}}\end{array}\right).$ (2.8) The computations in this section are similar to those of Sections 2.1 and 2.2 of [H22]. However, the computations here are much simpler: since we are working on a flat affine manifold, the $\Phi_{\alpha\beta}$ are coordinate derivatives, while in [H22] we needed to compute covariant derivatives. 
Applying $\partial_{\theta}$ to (1.44), we get $\displaystyle\Phi_{\theta\tau{\overline{\tau}}}-\Phi_{\theta{\overline{\beta}}\tau}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}\theta}=-\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}.$ (2.9) Then applying $\partial_{\gamma}$ to (2.9), we get $\displaystyle\Phi_{\theta\gamma\tau\overline{\tau}}-\Phi_{\theta\gamma{\overline{\beta}}\tau}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}+\Phi_{\theta{\overline{\beta}}\tau}g^{\alpha{\overline{\mu}}}\Phi_{\gamma{\overline{\mu}}\rho}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}-\Phi_{\theta{\overline{\beta}}\tau}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\gamma{\overline{\tau}}}+\Phi_{\tau\gamma{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}$ (2.10) $\displaystyle-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}\gamma}g^{\zeta{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\theta\gamma\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}\gamma}g^{\zeta{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}$ (2.11) 
$\displaystyle+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha\gamma{\overline{\tau}}}-\Phi_{\tau\gamma{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\theta{\overline{\tau}}}+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\zeta{\overline{\mu}}\gamma}g^{\zeta{\overline{\beta}}}\Phi_{\theta\alpha{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\theta\gamma{\overline{\tau}}}$ (2.12) $\displaystyle\ \ =\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\zeta}}}\Phi_{\eta{\overline{\zeta}}\gamma}g^{\eta{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}-\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\theta\gamma\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}+\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\zeta}}}\Phi_{\eta{\overline{\zeta}}\gamma}g^{\eta{\overline{\beta}}}.$ (2.13) We will use the following convention. $(\ast.\ast)_{k}$ stands for the $k$-th term in the line $(\ast.\ast)$, including the sign. 
For example, $\displaystyle(\ref{eq:20221030-4})_{2}=-\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\theta\gamma\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}},\ \ \ \ (\ref{eq:20221030-1})_{1}=\Phi_{\theta\gamma\tau{\overline{\tau}}},\ \ \ \ (\ref{eq:20221030-3})_{4}=-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\theta\gamma{\overline{\tau}}}.$ (2.14) It is straightforward to verify the following equalities: $\displaystyle(\ref{eq:20221030-1})_{1}+(\ref{eq:20221030-1})_{2}+(\ref{eq:20221030-2})_{2}+(\ref{eq:20221030-3})_{4}-(\ref{eq:20221030-4})_{2}$ $\displaystyle=\Phi_{\theta\gamma i{\overline{j}}}L^{i{\overline{j}}},$ (2.15) $\displaystyle(\ref{eq:20221030-1})_{3}+(\ref{eq:20221030-1})_{4}+(\ref{eq:20221030-2})_{1}+(\ref{eq:20221030-3})_{1}-(\ref{eq:20221030-4})_{1}$ $\displaystyle=-\Phi_{\theta{\overline{\beta}}i}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\gamma{\overline{j}}}L^{i{\overline{j}}},$ (2.16) $\displaystyle(\ref{eq:20221030-1})_{5}+(\ref{eq:20221030-2})_{3}+(\ref{eq:20221030-3})_{2}+(\ref{eq:20221030-3})_{3}-(\ref{eq:20221030-4})_{3}$ $\displaystyle=-\Phi_{\theta\rho{\overline{j}}}g^{\rho{\overline{\zeta}}}\Phi_{\gamma{\overline{\zeta}}i}L^{i{\overline{j}}}.$ (2.17) This gives us (2.3). 
Similarly, we apply $\partial_{\overline{\gamma}}$ to (2.9) and get $\displaystyle\Phi_{\theta{\overline{\gamma}}\tau\overline{\tau}}-\Phi_{\theta{\overline{\gamma}}{\overline{\beta}}\tau}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}+\Phi_{\theta{\overline{\beta}}\tau}g^{\alpha{\overline{\mu}}}\Phi_{{\overline{\gamma}}{\overline{\mu}}\rho}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}-\Phi_{\theta{\overline{\beta}}\tau}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\gamma}}{\overline{\tau}}}+\Phi_{\tau{\overline{\gamma}}{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}$ (2.18) $\displaystyle-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}{\overline{\gamma}}}g^{\zeta{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\theta{\overline{\gamma}}\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}{\overline{\gamma}}}g^{\zeta{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}$ (2.19) $\displaystyle+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\gamma}}{\overline{\tau}}}-\Phi_{\tau{\overline{\gamma}}{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\theta{\overline{\tau}}}+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\zeta{\overline{\mu}}{\overline{\gamma}}}g^{\zeta{\overline{\beta}}}\Phi_{\theta\alpha{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{\alpha\theta{\overline{\gamma}}{\overline{\tau}}}$ (2.20) $\displaystyle\ \ =\epsilon 
b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\zeta}}}\Phi_{\eta{\overline{\zeta}}{\overline{\gamma}}}g^{\eta{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\beta}}}-\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\theta{\overline{\gamma}}\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}+\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\mu}}\theta}g^{\rho{\overline{\zeta}}}\Phi_{\eta{\overline{\zeta}}{\overline{\gamma}}}g^{\eta{\overline{\beta}}}.$ (2.21) It is straightforward to verify the following equalities: $\displaystyle(\ref{eq:20221032-1})_{1}+(\ref{eq:20221032-1})_{2}+(\ref{eq:20221032-2})_{2}+(\ref{eq:20221032-3})_{4}-(\ref{eq:20221032-4})_{2}$ $\displaystyle=\Phi_{\theta{\overline{\gamma}}i{\overline{j}}}L^{i{\overline{j}}},$ (2.22) $\displaystyle(\ref{eq:20221032-1})_{3}+(\ref{eq:20221032-1})_{4}+(\ref{eq:20221032-2})_{1}+(\ref{eq:20221032-3})_{1}-(\ref{eq:20221032-4})_{1}$ $\displaystyle=-\Phi_{\theta{\overline{\beta}}i}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\gamma}}{\overline{j}}}L^{i{\overline{j}}},$ (2.23) $\displaystyle(\ref{eq:20221032-1})_{5}+(\ref{eq:20221032-2})_{3}+(\ref{eq:20221032-3})_{2}+(\ref{eq:20221032-3})_{3}-(\ref{eq:20221032-4})_{3}$ $\displaystyle=-\Phi_{\theta\rho{\overline{j}}}g^{\rho{\overline{\zeta}}}\Phi_{{\overline{\gamma}}{\overline{\zeta}}i}L^{i{\overline{j}}}.$ (2.24) This gives us (2.2). ## 3 A Priori Estimates In this section, we will prove some estimates for solutions to Problem 1.4, under some a priori assumptions. These assumptions can be removed after we prove the existence of $C^{\infty}$ solutions to Problem 1.4 in section 4. One estimate in this section, Proposition 3.3, will play an indispensable role in our proof of the existence result. 
Suppose $S$ is a constant section of $T_{2,0}^{\ast}(V)$ and $F\in C^{\infty}(\partial\mathcal{R}\times V)$ satisfies $\displaystyle F(\tau,\ast)\text{ is strictly $(S,\omega_{0})$-convex, for any $\tau\in\partial\mathcal{R}$. }$ (3.1) For a solution $\Phi$ to Problem 1.4 with boundary value $F$, let $\displaystyle B$ $\displaystyle=(\Phi_{\alpha\beta}),$ (3.2) $\displaystyle A$ $\displaystyle=(\Phi_{\alpha{\overline{\beta}}}+b_{\alpha{\overline{\beta}}}),$ (3.3) $\displaystyle K_{S}$ $\displaystyle=(B-S)\overline{A^{-1}}\ \overline{(B-S)}{A^{-1}},$ (3.4) and, for any $(\tau,z)\in\mathcal{R}\times V$, $\displaystyle M_{S}(\tau,z)=\text{ Maximum Eigenvalue of }K_{S}(\tau,z).$ (3.5) Condition (3.1) implies that $\displaystyle M_{S}<1,\text{ on }\partial\mathcal{R}\times V.$ (3.6) We want to show that $\displaystyle M_{S}<1,\text{ in }\mathcal{R}\times V.$ (3.7) This implies that $\Phi(\tau,\ast)$ is strictly $(S,\omega_{0})$-convex for any $\tau\in\mathcal{R}$. However, it is difficult to work directly with $M_{S}$, since it may not be differentiable. We introduce the following approximation of $M_{S}$. Let $\displaystyle Q^{<p>}_{S}=\text{tr}(K_{S}^{p}),$ (3.8) $\displaystyle Q^{[p]}_{S}=\left(Q^{<p>}_{S}\right)^{\frac{1}{p}}.$ (3.9) According to basic calculus, for $\lambda_{1},\ ...\ ,\lambda_{n}\geq 0$, $\displaystyle\lim_{p\rightarrow+\infty}\left(\lambda_{1}^{p}+\ ...\ +\lambda_{n}^{p}\right)^{1/p}=\max\\{\lambda_{1},\ ...\ ,\lambda_{n}\\},$ (3.10) so $\displaystyle\lim_{p\rightarrow+\infty}Q^{[p]}_{S}=M_{S}.$ (3.11) If we can show that, for $p$ large enough, $\displaystyle Q^{[p]}_{S}\leq\max_{{\partial\mathcal{R}\times V}}Q^{[p]}_{S},\ \ \ \ \text{ in }\mathcal{R}\times V,$ (3.12) then we can let $p$ go to $\infty$ and prove (3.7). 
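The approximation just described rests on the elementary bounds $\max_{i}\lambda_{i}\leq\left(\sum_{i}\lambda_{i}^{p}\right)^{1/p}\leq n^{1/p}\max_{i}\lambda_{i}$, which give (3.10)-(3.11) with a quantitative rate. A small numerical sketch (the eigenvalues below are arbitrary illustrative values of our own choosing):

```python
import numpy as np

# Illustrative eigenvalues of K_S at a point (arbitrary non-negative values).
lam = np.array([0.1, 0.35, 0.72, 0.9])
M = lam.max()   # the quantity M_S of (3.5)

def Q_bracket(p):
    # Q^{<p>} = tr(K^p) = sum(lam**p) and Q^{[p]} = (Q^{<p>})^{1/p}, as in (3.8)-(3.9).
    return (lam ** p).sum() ** (1.0 / p)

# Elementary bounds: M <= Q^{[p]} <= n^{1/p} * M, so Q^{[p]} -> M_S as p -> infinity.
for p in [1, 2, 8, 32, 128]:
    q = Q_bracket(p)
    assert M - 1e-9 <= q <= len(lam) ** (1.0 / p) * M + 1e-9

# For large p the approximation is already very close to the maximum.
assert abs(Q_bracket(512) - M) < 1e-2
```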
In section 3.1, we prove that, for the elliptic operator $L=L^{i\overline{j}}\partial_{i\overline{j}}$ introduced in section 2, $\displaystyle L^{i\overline{j}}\left(Q^{<p>}_{S}\right)_{i\overline{j}}\geq 0,\ \ \ \ \ \text{ in }\mathcal{R}\times V,$ (3.13) provided $K_{S}\leq 1-\frac{1}{2p}$. In section 3.2, using a continuity argument, we prove $\displaystyle Q^{[p]}_{S}\leq\max_{{\partial\mathcal{R}\times V}}Q^{[p]}_{S},\ \ \ \ \text{ in }\mathcal{R}\times V.$ (3.14) In section 3.3, by altering $S$, we prove a convexity estimate, which implies a metric lower bound estimate $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast)\geq\mu\omega_{0},$ (3.15) for a constant $\mu>0$. In sections 3.2 and 3.3, we need the a priori assumption that, for any $\lambda\in[0,1]$, Problem 1.4 has a solution $\Phi^{\lambda}$ with boundary value $\lambda F$, and that $\\{\Phi^{\lambda}|\lambda\in[0,1]\\}$ is a continuous curve in $C^{4}(\overline{\mathcal{R}}\times V)$ with respect to the $C^{2}$ topology. ### 3.1 Computation Suppose $\Phi$ is a $C^{4}$ solution to Problem 1.4. In a local coordinate chart, let $\displaystyle K$ $\displaystyle=B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}$ (3.16) and, for a positive integer $p$, $\displaystyle Q^{<p>}=\text{tr}(K^{p}).$ (3.17) As matrices, $B$, $A$ and $K$ depend on the choice of coordinates, while $Q^{<p>}$ is a well-defined function on $\mathcal{R}\times V$. In this section, we show that $\displaystyle L^{i\overline{j}}\left(Q^{<p>}\right)_{i\overline{j}}\geq 0,\ \ \ \text{provided }K\leq 1-\frac{1}{2p}.$ (3.18) After proving this, we know that, for $\displaystyle K_{S}=(B-S)\overline{A^{-1}}\ \overline{(B-S)}{A^{-1}}$ (3.19) and $\displaystyle Q^{<p>}_{S}=\text{tr}(K_{S}^{p}),$ (3.20) we have $\displaystyle L^{i\overline{j}}\left(Q^{<p>}_{S}\right)_{i\overline{j}}\geq 0,\ \ \ \ \text{provided }K_{S}\leq 1-\frac{1}{2p}.$ (3.21) This is because $B-S$ and $A$ satisfy the same set of equations as $B$ and $A$ do. 
Similarly to (2.2) and (2.3), we have: $\displaystyle L^{i{\overline{j}}}\partial_{i{\overline{j}}}A$ $\displaystyle=L^{i{\overline{j}}}(\partial_{i}A)A^{-1}\partial_{\overline{j}}A+L^{i\overline{j}}\partial_{\overline{j}}(B-S)\overline{A^{-1}}\partial_{i}\overline{(B-S)};$ (3.22) $\displaystyle L^{i{\overline{j}}}\partial_{i{\overline{j}}}(B-S)$ $\displaystyle=L^{i{\overline{j}}}(\partial_{i}A)A^{-1}\partial_{\overline{j}}(B-S)+L^{i\overline{j}}\partial_{\overline{j}}(B-S)\overline{A^{-1}}(\partial_{i}\overline{A}).$ (3.23) The equations above are equivalent to (2.2) and (2.3) because $S$ is a constant section and all derivatives of $S$ are zero. Before computing $L^{i\overline{j}}\left(Q^{<p>}\right)_{i\overline{j}}$, we do some preparation. First, we note that the conjugate of equation (2.3) is equivalent to $\displaystyle L^{i\overline{j}}\left(\overline{A^{-1}}\ {\overline{B}}_{i}{A^{-1}}\right)_{{\overline{j}}}=0.$ (3.24) Then we introduce the quantity $\displaystyle{\mathcal{B}_{i}}=B_{i}-A_{i}{A^{-1}}B-B\overline{A^{-1}}\ \overline{A}_{i}.$ (3.25) The reason for introducing ${\mathcal{B}_{i}}$ is to combine some terms involving $B_{i}$, in order to simplify the computation. Although ${\mathcal{B}_{i}}$ can be considered as a tensor, we only need to consider it as a symmetric matrix-valued function defined in a local coordinate chart. Equation (3.25) is equivalent to $\displaystyle A^{-1}{\mathcal{B}_{i}}\overline{A^{-1}}=\partial_{i}\left({A^{-1}}B\overline{A^{-1}}\right).$ (3.26) When ${\mathcal{B}_{i}}$ is differentiated by $L^{i\overline{j}}\partial_{{\overline{j}}}$, using (2.2), (2.3) and the conjugate of (2.2), we find $\displaystyle L^{i\overline{j}}\partial_{{\overline{j}}}{\mathcal{B}_{i}}=L^{i\overline{j}}\left(-B_{{\overline{j}}}\overline{A^{-1}}\ {\overline{B}}_{i}{A^{-1}}B-B\overline{A^{-1}}\ {\overline{B}}_{i}{A^{-1}}B_{{\overline{j}}}\right).$ (3.27) Now we start to compute $L^{i\overline{j}}\left(Q^{<p>}\right)_{i\overline{j}}$. 
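The computation that follows repeatedly uses one matrix-calculus fact: by the product rule and the cyclicity of the trace, differentiating any one of the $p$ identical factors of a cyclic product contributes the same term, which produces the factors of $p$ below. A minimal numerical sketch of this fact (the families $M(t)$, $N(t)$ are our own stand-ins for the two kinds of factors in $\text{tr}(K^{p})$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, h = 3, 4, 1e-6

# Smooth matrix families M(t), N(t): arbitrary stand-ins for the two kinds of
# factors, (A^{-1} B conj(A^{-1})) and conj(B), appearing in the cyclic product.
M0, M1 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
N0, N1 = rng.normal(size=(n, n)), rng.normal(size=(n, n))

def f(t):
    # tr[(M(t) N(t))^p], the analogue of Q^{<p>}.
    return np.trace(np.linalg.matrix_power((M0 + t * M1) @ (N0 + t * N1), p))

# Product rule + cyclicity of the trace: differentiating any one of the p copies
# of M (resp. N) contributes the same term, so at t = 0:
core = np.linalg.matrix_power(M0 @ N0, p - 1)
exact = p * np.trace(M1 @ N0 @ core) + p * np.trace(M0 @ N1 @ core)

numeric = (f(h) - f(-h)) / (2 * h)   # central finite difference
assert abs(numeric - exact) < 1e-5 * max(1.0, abs(exact))
```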
In the expression of $Q^{<p>}$, we group some terms together to simplify the computation: $\displaystyle Q^{<p>}$ $\displaystyle=\text{tr}\left(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}\right)^{p}$ (3.28) $\displaystyle=\text{tr}\left[({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot...\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\right].$ (3.29) In (3.29), $({A^{-1}}B\overline{A^{-1}})$ and $\overline{B}$ both appear $p$ times. When $\partial_{i}$ acts on any one of the factors $({A^{-1}}B\overline{A^{-1}})$ (or $\overline{B}$), we get the same result. So $\displaystyle\partial_{i}Q^{<p>}=$ $\displaystyle p\cdot\text{tr}\left[\partial_{i}({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot...\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\right]$ (3.30) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[({A^{-1}}B\overline{A^{-1}})\cdot\partial_{i}{\overline{B}}\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot...\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\right].$ (3.31) We plug (3.26) into (3.30) and get $\displaystyle\partial_{i}Q^{<p>}=$ $\displaystyle p\cdot\text{tr}\left[{A^{-1}}{\mathcal{B}_{i}}\overline{A^{-1}}\cdot{\overline{B}}\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot...\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\right]$ (3.32) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[({A^{-1}}B\overline{A^{-1}})\cdot\partial_{i}{\overline{B}}\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\cdot...\cdot({A^{-1}}B\overline{A^{-1}})\cdot{\overline{B}}\right].$ (3.33) We reorganize the terms in the products of (3.32) and (3.33): $\displaystyle\partial_{i}Q^{<p>}=$ $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}\cdot(\overline{A^{-1}}\ {\overline{B}}{A^{-1}})\cdot B\cdot(\overline{A^{-1}}\ \overline{B}{A^{-1}})\cdot...\cdot B\cdot(\overline{A^{-1}}\ \overline{B}{A^{-1}})\right]$ (3.34) $\displaystyle+$ $\displaystyle 
p\cdot\text{tr}\left[(\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}})\cdot B\cdot(\overline{A^{-1}}\ \overline{B}{A^{-1}})\cdot B\cdot...\cdot(\overline{A^{-1}}\ \overline{B}{A^{-1}})\cdot B\right].$ (3.35) When applying $L^{i\overline{j}}\partial_{{\overline{j}}}$ to $Q^{<p>}_{i}$, $L^{i\overline{j}}\partial_{\overline{j}}$ acts on six kinds of terms: * (i) $L^{i\overline{j}}\partial_{\overline{j}}$ acts on ${\mathcal{B}_{i}}$ in (3.34); * (ii) $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $(\overline{A^{-1}}\ {\overline{B}}{A^{-1}})$ in (3.34); * (iii) $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $B$ in (3.34); * (iv) $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $(\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}})$ in (3.35); * (v) $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $B$ in (3.35); * (vi) $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $(\overline{A^{-1}}\ \overline{B}{A^{-1}})$ in (3.35). In the following, we carry out these computations separately. (i) When $L^{i\overline{j}}\partial_{\overline{j}}$ acts on ${\mathcal{B}_{i}}$ in (3.34), the result is $\displaystyle p\cdot\text{tr}\left[\partial_{\overline{j}}{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})\cdot\left(B\cdot(\overline{A^{-1}}\ \overline{B}{A^{-1}})\right)^{p-1}\right]L^{i\overline{j}}$ (3.36) $\displaystyle=$ $\displaystyle p\cdot\text{tr}\left[-B_{\overline{j}}\overline{A^{-1}}\ {\overline{B}}_{i}{A^{-1}}\cdot\left(B\cdot(\overline{A^{-1}}\ \overline{B}{A^{-1}})\right)^{p}\right]L^{i\overline{j}}$ (3.37) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[-\overline{A^{-1}}\ \overline{B}_{i}{A^{-1}}B_{\overline{j}}\cdot\left((\overline{A^{-1}}\ \overline{B}{A^{-1}})\cdot B\right)^{p}\right]L^{i\overline{j}}.$ (3.38) To get this, we need to use equation (3.27). 
(ii) When $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $(\overline{A^{-1}}\ {\overline{B}}{A^{-1}})$ in (3.34), we need to use the conjugate of (3.26): $\displaystyle\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}}=\partial_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}).$ (3.39) The result is the sum of $p$ terms: $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{0}\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}}(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{p-1}\right]L^{i\overline{j}}$ (3.40) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{1}\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}}(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{p-2}\right]L^{i\overline{j}}$ (3.41) $\displaystyle\ \ \ ...$ $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{p-1}\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}}(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{0}\right]L^{i\overline{j}}.$ (3.42) In the above, the fictitious terms $(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{0}$ and $(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{0}$ are added to make the pattern clearer. We will do the same in other parts of the computation. (iii) When $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $B$ in (3.34), we don’t need to use other equations.
The result is the sum of $p-1$ terms: $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{0}B_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{p-2}\right]L^{i\overline{j}}$ (3.43) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{1}B_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{p-3}\right]L^{i\overline{j}}$ (3.44) $\displaystyle\ \ \ \ ...$ $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[{\mathcal{B}_{i}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{p-2}B_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}})(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{0}\right]L^{i\overline{j}}.$ (3.45) (iv) When $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $(\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}})$ in (3.35), the result is zero. This is because of the equation (3.24). (v) When $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $B$ in (3.35), we don’t need to use other equations. 
Simply differentiating $B$, we get the following result, which is the sum of $p$ terms: $\displaystyle p\cdot\text{tr}\left[\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}}(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{0}B_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{p-1}\right]L^{i\overline{j}}$ (3.46) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}}(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{1}B_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{p-2}\right]L^{i\overline{j}}$ (3.47) $\displaystyle\ \ \ \ ...$ $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}}(B\overline{A^{-1}}\ {\overline{B}}{A^{-1}})^{p-1}B_{{\overline{j}}}(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{0}\right]L^{i\overline{j}}.$ (3.48) (vi) When $L^{i\overline{j}}\partial_{\overline{j}}$ acts on $(\overline{A^{-1}}\ \overline{B}{A^{-1}})$ in (3.35), we need to use (3.39). The result is the sum of $p-1$ terms: $\displaystyle p\cdot\text{tr}\left[(\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}})B(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{0}(\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}})B(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{p-2}\right]L^{i\overline{j}}$ (3.49) $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[(\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}})B(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{1}(\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}})B(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{p-3}\right]L^{i\overline{j}}$ (3.50) $\displaystyle\ \ \ \ ...$ $\displaystyle+$ $\displaystyle p\cdot\text{tr}\left[(\overline{A^{-1}}\partial_{i}{\overline{B}}{A^{-1}})B(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{p-2}(\overline{A^{-1}}\ \overline{{\mathcal{B}_{j}}}{A^{-1}})B(\overline{A^{-1}}\ \overline{B}{A^{-1}}B)^{0}\right]L^{i\overline{j}}.$ (3.51) At a point $(\tau_{0},z_{0})\in\mathcal{R}\times V$, we change coordinates on $V$ to
diagonalize $A,B$. As discussed in section 1.4, we need to find $P$, so that $\displaystyle PAP^{\ast}=I,\ \ \ PBP^{T}=\Lambda=\text{diag}(\Lambda_{1},\ ...\ ,\Lambda_{n})\geq 0.$ (3.52) This can be done by Lemma A.4. Denote $\displaystyle{\mathcal{B}_{i}}=(\mathcal{B}_{i;\alpha\beta}),\ \ \ \ \partial_{i}{\overline{B}}=(\overline{B}_{i;\alpha\beta}).$ (3.53) With this notation, the results of (i)-(vi) can be simplified. Result of (i) is: $\displaystyle p\left(-\overline{B}_{i;\alpha\beta}\overline{\overline{B}_{j;\alpha\beta}}(\Lambda_{\alpha}^{2p}+\Lambda_{\beta}^{2p})\right)L^{i\overline{j}};$ (3.54) result of (ii) is $\displaystyle p\left(\mathcal{B}_{i;\alpha\beta}\overline{\mathcal{B}_{j;\alpha\beta}}(\Lambda_{\alpha}^{2p-2}+\Lambda_{\alpha}^{2p-4}\Lambda_{\beta}^{2}+\ ...\ +\Lambda_{\beta}^{2p-2})\right)L^{i\overline{j}};$ (3.55) result of (iii) is $\displaystyle p\left(\mathcal{B}_{i;\alpha\beta}\overline{\overline{B}_{j;\alpha\beta}}(\Lambda_{\alpha}^{2p-3}\Lambda_{\beta}+\Lambda_{\alpha}^{2p-5}\Lambda_{\beta}^{3}+\ ...\ +\Lambda_{\alpha}\Lambda_{\beta}^{2p-3})\right)L^{i\overline{j}};$ (3.56) result of (iv) is still $0$; result of (v) is $\displaystyle p\left(\overline{B}_{i;\alpha\beta}\overline{\overline{B}_{j;\alpha\beta}}(\Lambda_{\alpha}^{2p-2}+\Lambda_{\alpha}^{2p-4}\Lambda_{\beta}^{2}+\ ...\ +\Lambda_{\beta}^{2p-2})\right)L^{i\overline{j}};$ (3.57) result of (vi) is $\displaystyle p\left(\overline{B}_{i;\alpha\beta}\overline{\mathcal{B}_{j;\alpha\beta}}(\Lambda_{\alpha}^{2p-3}\Lambda_{\beta}+\Lambda_{\alpha}^{2p-5}\Lambda_{\beta}^{3}+\ ...\ +\Lambda_{\alpha}\Lambda_{\beta}^{2p-3})\right)L^{i\overline{j}}.$ (3.58) Summing up the results of (i)-(vi), we get $\displaystyle L^{i\overline{j}}\left(Q^{<p>}\right)_{i\overline{j}}=p\sum_{i,j}\sum_{\alpha,\beta}L^{i\overline{j}}(\mathcal{B}_{i;\alpha\beta},\overline{B}_{i;\alpha\beta})\mathcal{W}_{\alpha\beta}\overline{\left(\begin{array}[]{c}\mathcal{B}_{j;\alpha\beta}\\\
\overline{B}_{j;\alpha\beta}\end{array}\right)},$ (3.61) where $\displaystyle\mathcal{W}_{\alpha\beta}=\left(\begin{array}[]{cc}\sum_{k=0}^{p-1}\Lambda_{\alpha}^{2k}\Lambda_{\beta}^{2p-2-2k}&\sum_{k=0}^{p-2}\Lambda_{\alpha}^{2k+1}\Lambda_{\beta}^{2p-3-2k}\\\ \sum_{k=0}^{p-2}\Lambda_{\alpha}^{2p-3-2k}\Lambda_{\beta}^{2k+1}&\sum_{k=0}^{p-1}\Lambda_{\alpha}^{2k}\Lambda_{\beta}^{2p-2-2k}-\Lambda_{\alpha}^{2p}-\Lambda_{\beta}^{2p}\end{array}\right).$ (3.64) It is clear that when $\mathcal{W}_{\alpha\beta}\geq 0$, (3.61) is $\geq 0$. In the following, we show that when $\Lambda_{\alpha},\Lambda_{\beta}\leq\sqrt{1-\frac{1}{2p}}$, $\mathcal{W}_{\alpha\beta}\geq 0.$ By linear algebra, the $2\times 2$ matrix $\mathcal{W}_{\alpha\beta}\geq 0$ if and only if $\text{tr}(\mathcal{W}_{\alpha\beta})\geq 0$ and $\det(\mathcal{W}_{\alpha\beta})\geq 0$. $\text{tr}(\mathcal{W}_{\alpha\beta})$ is easy to estimate: $\displaystyle\text{tr}(\mathcal{W}_{\alpha\beta})\geq\Lambda_{\alpha}^{2p-2}+\Lambda_{\beta}^{2p-2}-\Lambda_{\alpha}^{2p}-\Lambda_{\beta}^{2p}.$ (3.65) It is non-negative, providing $\displaystyle\Lambda_{\alpha},\Lambda_{\beta}<1.$ (3.66) To investigate the sign of $\det(\mathcal{W}_{\alpha\beta})$, we need to simplify the right-hand side of (3.64). When $\Lambda_{\alpha}=\Lambda_{\beta}$, denoting $\Lambda_{\alpha}=\Lambda_{\beta}=\lambda$, $\displaystyle\mathcal{W}_{\alpha\beta}=\left(\begin{array}[]{cc}p\lambda^{2p-2}&(p-1)\lambda^{2p-2}\\\ (p-1)\lambda^{2p-2}&p\lambda^{2p-2}-2\lambda^{2p}\end{array}\right).$ (3.69) The determinant follows by direct computation: $\displaystyle\det(\mathcal{W}_{\alpha\beta})=(2p-1)\lambda^{4p-4}\left[1-\frac{2p}{2p-1}\lambda^{2}\right].$ (3.70) It is non-negative, providing $\displaystyle\lambda^{2}\leq 1-\frac{1}{2p}.$ (3.71) When $\Lambda_{\alpha}\neq\Lambda_{\beta}$, we use the summation formula for geometric series to compute the summations in (3.64). Without loss of generality, we assume $\Lambda_{\alpha}^{2}>\Lambda_{\beta}^{2}$.
Note that we have $\Lambda_{\alpha},\Lambda_{\beta}\geq 0$, so when $\Lambda_{\alpha}\neq\Lambda_{\beta}$, $\Lambda^{2}_{\alpha}\neq\Lambda^{2}_{\beta}$. The result is $\displaystyle\mathcal{W}_{\alpha\beta}=\left(\begin{array}[]{cc}\frac{\Lambda_{\beta}^{2p}-\Lambda_{\alpha}^{2p}}{\Lambda^{2}_{\beta}-\Lambda^{2}_{\alpha}}&-\frac{\Lambda_{\beta}\Lambda_{\alpha}^{2p-1}-\Lambda_{\beta}^{2p-1}\Lambda_{\alpha}}{\Lambda^{2}_{\beta}-\Lambda^{2}_{\alpha}}\\\ -\frac{\Lambda_{\beta}\Lambda_{\alpha}^{2p-1}-\Lambda_{\beta}^{2p-1}\Lambda_{\alpha}}{\Lambda^{2}_{\beta}-\Lambda^{2}_{\alpha}}&\frac{\Lambda_{\beta}^{2p}-\Lambda_{\alpha}^{2p}}{\Lambda^{2}_{\beta}-\Lambda^{2}_{\alpha}}-\Lambda_{\alpha}^{2p}-\Lambda_{\beta}^{2p}\end{array}\right).$ (3.74) The determinant follows by straightforward computation: $\displaystyle\det(\mathcal{W}_{\alpha\beta})=\frac{(-\Lambda^{4p}_{\beta}+\Lambda^{4p-2}_{\beta})-(-\Lambda^{4p}_{\alpha}+\Lambda^{4p-2}_{\alpha})}{\Lambda^{2}_{\beta}-\Lambda^{2}_{\alpha}}.$ (3.75) Let $f(x)=-x^{2p}+x^{2p-1}.$ Then (3.75) can be simplified as $\displaystyle\det(\mathcal{W}_{\alpha\beta})=\frac{f(\Lambda_{\beta}^{2})-f(\Lambda_{\alpha}^{2})}{\Lambda^{2}_{\beta}-\Lambda^{2}_{\alpha}}.$ (3.76) It is non-negative, providing $\Lambda_{\alpha}^{2}$ and $\Lambda_{\beta}^{2}$ stay in an interval where $f$ is non-decreasing. Computing the derivative, $f^{\prime}(x)=x^{2p-2}\left((2p-1)-2px\right)$, we see that $f$ is non-decreasing on $[0,1-\frac{1}{2p}]$. Combining this with (3.66) and (3.71), we know $\det(\mathcal{W}_{\alpha\beta})\geq 0$, providing $\Lambda_{\alpha}^{2},\Lambda_{\beta}^{2}\leq 1-\frac{1}{2p}.$ Therefore, $\displaystyle L^{i\overline{j}}\left(Q^{<p>}\right)_{i\overline{j}}\geq 0,\ \ \ \ \text{ providing }K\leq 1-\frac{1}{2p}.$ (3.77) In the computation above, we can replace $B$ by $B-S$ and get $\displaystyle L^{i\overline{j}}\left(Q^{<p>}_{S}\right)_{i\overline{j}}\geq 0,\ \ \ \ \text{ providing }K_{S}\leq 1-\frac{1}{2p}.$ (3.78) As a result, we have the following proposition: ###### Proposition 3.1.
Suppose $\Phi$ is a $C^{4}$ solution to Problem 1.4 and $S$ is a constant section of $T_{2,0}^{\ast}(V)$. For $L,\ K_{S},\ Q^{<p>}_{S}$ and $\ Q^{[p]}_{S}$, defined by (2.8), (3.4), (3.8) and (3.9), respectively, we have $\displaystyle L^{i\overline{j}}\partial_{i\overline{j}}\left(Q^{<p>}_{S}\right)\geq 0,$ (3.79) providing $K_{S}\leq 1-\frac{1}{2p},$ and, as a consequence, $\displaystyle Q^{[p]}_{S}\leq\max_{{\partial\mathcal{R}\times V}}Q^{[p]}_{S},\ \ \ \ \ \ \text{ in }\mathcal{R}\times V,$ (3.80) providing $K_{S}\leq 1-\frac{1}{2p},$ in $\mathcal{R}\times V$. ###### Remark 3.1. Let $\displaystyle Q^{[\rho,p]}_{S}=\left(\text{tr}(K_{S}^{\rho p})\right)^{\frac{1}{p}}.$ (3.81) With more complicated computations, we can show $\displaystyle L^{i\overline{j}}\left(Q^{[\rho,p]}_{S}\right)_{i\overline{j}}\geq 0,$ (3.82) providing $K_{S}\leq 1-\frac{1}{2\rho}$. So, by letting $p$ go to $\infty$, we know $\displaystyle L^{i\overline{j}}\left(M_{S}^{\rho}\right)_{i\overline{j}}\geq 0,$ (3.83) providing $K_{S}\leq 1-\frac{1}{2\rho}$, in the sense of viscosity solutions (see section 6 of [CIL92]). However, the current result (3.77) is enough for our use. We can also consider $\displaystyle\text{tr}\left(e^{pK_{S}}\right)$ (3.84) and achieve a similar result. In [H22], (3.84) is used in the $n=1$ case. ### 3.2 Preservation of $(S,\omega_{0})$-Convexity by the Method of Continuity In addition to (3.1), in this section, we make the following assumption. ###### Assumption 3.1. For any $\sigma\in[0,1]$, Problem 1.4 with boundary value $\sigma F$ has a solution $\Phi^{\sigma}$ and $\\{\Phi^{\sigma}|\sigma\in[0,1]\\}$ is a continuous curve in $C^{4}(\overline{\mathcal{R}}\times V)$ in the $C^{2}$ topology. ###### Remark 3.2. According to the ellipticity of equation (1.44), which is proved in section 4.1, the solution to Problem 1.4 is unique. As a consequence, $\Phi^{0}$ must equal $0$.
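As an aside, the positivity of $\mathcal{W}_{\alpha\beta}$ under the threshold $\sqrt{1-\frac{1}{2p}}$, on which the proof of Proposition 3.1 rests, is easy to spot-check numerically. The following sketch (illustrative only; the choice $p=4$ and the grid are arbitrary and not part of the proof) builds $\mathcal{W}_{\alpha\beta}$ directly from the sums in (3.64) and verifies the $2\times 2$ trace and determinant criterion:

```python
# Spot-check of Proposition 3.1's threshold: the 2x2 matrix W_ab built
# from the sums in (3.64) should be positive semidefinite whenever
# Lambda_a, Lambda_b <= sqrt(1 - 1/(2p)).  Illustrative only.
def W(p, a, b):
    w11 = sum(a ** (2 * k) * b ** (2 * p - 2 - 2 * k) for k in range(p))
    w12 = sum(a ** (2 * k + 1) * b ** (2 * p - 3 - 2 * k) for k in range(p - 1))
    w21 = sum(a ** (2 * p - 3 - 2 * k) * b ** (2 * k + 1) for k in range(p - 1))
    w22 = w11 - a ** (2 * p) - b ** (2 * p)
    return w11, w12, w21, w22

p = 4
cap = (1 - 1 / (2 * p)) ** 0.5        # the threshold sqrt(1 - 1/(2p))
for i in range(21):
    for j in range(21):
        a, b = cap * i / 20, cap * j / 20
        w11, w12, w21, w22 = W(p, a, b)
        trace, det = w11 + w22, w11 * w22 - w12 * w21
        # 2x2 PSD criterion: trace >= 0 and determinant >= 0
        assert trace >= -1e-12 and det >= -1e-12
print("W_ab is PSD on the whole grid up to", round(cap, 4))
```

At the endpoint $\Lambda_{\alpha}=\Lambda_{\beta}=\sqrt{1-\frac{1}{2p}}$ the determinant vanishes, consistent with (3.70), which is why a small numerical tolerance is used.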
Let $\displaystyle A_{\sigma}$ $\displaystyle=(\Phi^{\sigma}_{\alpha{\overline{\beta}}}+b_{{\alpha{\overline{\beta}}}}),$ (3.85) $\displaystyle B_{S,\sigma}$ $\displaystyle=(\Phi^{\sigma}_{\alpha\beta}+\sigma S_{{\alpha\beta}}),$ (3.86) $\displaystyle K_{S,\sigma}$ $\displaystyle=B_{S,\sigma}\overline{A_{\sigma}^{-1}}\ \overline{B_{S,\sigma}}A_{\sigma}^{-1},$ (3.87) $\displaystyle Q^{<p>}_{S,\sigma}$ $\displaystyle=\text{tr}(K_{S,\sigma}^{p}),$ (3.88) $\displaystyle Q^{[p]}_{S,\sigma}$ $\displaystyle=\left(Q^{<p>}_{S,\sigma}\right)^{\frac{1}{p}},$ (3.89) and for any $(\tau,z)\in\mathcal{R}\times V$, $\displaystyle M_{S,\sigma}(\tau,z)=\text{Maximum Eigenvalue of }K_{S,\sigma}(\tau,z).$ (3.90) According to the assumption (3.1), $\displaystyle\max_{\partial\mathcal{R}\times V}M_{S,1}<1.$ (3.91) So we can choose $p$ large enough, such that $\displaystyle\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,1}<1-\frac{1}{2p}.$ (3.92) This can be done because when $p\rightarrow+\infty$, $\displaystyle 1-\frac{1}{2p}\rightarrow 1$ (3.93) and $\displaystyle\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,1}\rightarrow\max_{\partial\mathcal{R}\times V}M_{S,1}<1.$ (3.94) According to Lemma A.7, for any $(\tau,z)\in\partial\mathcal{R}\times V$, $Q^{[p]}_{S,\sigma}(\tau,z)$ is a monotone non-decreasing function of $\sigma$, so $\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma}$ is a monotone non-decreasing function of $\sigma$. Thus $\displaystyle\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma}<1-\frac{1}{2p},$ (3.95) for any $\sigma\in[0,1]$. 
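The choice of large $p$ in (3.92)-(3.94) relies on $Q^{[p]}$ being a good proxy for $M$: since the eigenvalues of $K_{S,\sigma}$ are non-negative, $M\leq Q^{[p]}\leq n^{1/p}M$, and $n^{1/p}\rightarrow 1$ as $p\rightarrow+\infty$. A quick numerical illustration (with made-up eigenvalues; not part of the argument):

```python
# Q^{[p]} = (sum of eigenvalues^p)^(1/p) squeezes down to the maximum
# eigenvalue M as p grows: M <= Q^{[p]} <= n^(1/p) * M for non-negative
# eigenvalues.  The eigenvalues below are made up for illustration.
lams = [0.93, 0.40, 0.05]     # hypothetical eigenvalues of K_{S,sigma}
M, n = max(lams), len(lams)

for p in [1, 5, 50, 500]:
    Qp = sum(l ** p for l in lams) ** (1.0 / p)
    assert M - 1e-9 <= Qp <= n ** (1.0 / p) * M + 1e-9
    print(p, Qp)
# Q^{[p]} decreases toward M = 0.93, so for p large enough a strict
# boundary bound on M transfers to Q^{[p]}, as in (3.92)-(3.94).
```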
Therefore, for no $\sigma\in[0,1]$ do we have $\displaystyle\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,\sigma}\in\left(\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma},1-\frac{1}{2p}\right).$ (3.96) This is because, if (3.96) held for some $\sigma=\sigma_{0}$, then $\displaystyle\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,\sigma_{0}}\leq 1-\frac{1}{2p},$ (3.97) and, as a consequence, $\displaystyle\max_{{\overline{\mathcal{R}}}\times V}M_{S,\sigma_{0}}\leq 1-\frac{1}{2p}.$ (3.98) So, using Proposition 3.1, we know $\displaystyle\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,\sigma_{0}}\leq\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma_{0}},$ (3.99) which contradicts (3.96). The function $\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,\sigma}$ is a continuous function of $\sigma$, because of Assumption 3.1. And $\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,0}=0$, according to the uniqueness of the $C^{2}$ solution. So $\displaystyle\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,\sigma}\leq\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma},\ \ \ \ \text{for any }\sigma\in[0,1].$ (3.100) Figure 1: Method of Continuity As illustrated by Figure 1, the dashed curve $\displaystyle\left\\{(\sigma,\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma})\big{|}\sigma\in[0,1]\right\\}$ (3.101) rises from left to right and stays below the line $\\{m=1-\frac{1}{2p}\\}$. The continuous curve $\displaystyle\left\\{(\sigma,\max_{{\overline{\mathcal{R}}}\times V}Q^{[p]}_{S,\sigma})\big{|}\sigma\in[0,1]\right\\},$ (3.102) whose left endpoint is $(0,0)$, cannot intersect the shaded area, so it has to stay below it. In fact, it has to coincide with the dashed curve.
Therefore, we have, for any $\sigma\in[0,1]$, $\displaystyle M_{S,\sigma}\leq\max_{\overline{\mathcal{R}}\times V}Q^{[p]}_{S,\sigma}=\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,\sigma}\leq\max_{\partial\mathcal{R}\times V}Q^{[p]}_{S,1},\ \ \ \ \ \ \text{ in }\mathcal{R}\times V.$ (3.103) Letting $p\rightarrow\infty$, we get $\displaystyle M_{S,\sigma}\leq\max_{\partial\mathcal{R}\times V}M_{S,1},\ \ \ \ \ \ \text{ in }\mathcal{R}\times V.$ (3.104) In sum, we have the following proposition. ###### Proposition 3.2. Suppose $S$ is a constant section of $T_{2,0}^{\ast}(V)$ and $F\in C^{2}({\partial\mathcal{R}\times V})$ satisfies that $\displaystyle F(\tau,\ast)\text{ is strictly $(S,\omega_{0})$-convex, for any $\tau\in\partial\mathcal{R}$.}$ (3.105) In addition, we assume that Assumption 3.1 is satisfied. Then the solution $\Phi$ to Problem 1.4, with boundary value $F$, satisfies $\displaystyle\Phi(\tau,\ast)\text{ is strictly $(S,\omega_{0})$-convex, for any $\tau\in\mathcal{R}$.}$ (3.106) ### 3.3 Convexity and Metric Lower Bound Estimates by Altering $S$ In this section, we assume that the conditions of Proposition 3.2 are satisfied. In addition, we assume that, for a constant $\delta>0$, $\displaystyle F(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of modulus $>\delta$, for any $\tau\in\partial\mathcal{R}$},$ (3.107) or equivalently, according to Lemma A.2, $\displaystyle F(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of degree $>\delta$, for any $\tau\in\partial\mathcal{R}$}.$ (3.108) Condition (3.108) says that, for any constant section $\Theta$ of $T_{2,0}^{\ast}(V)$ with $\displaystyle\Theta\overline{W^{-1}}\ \overline{\Theta}W^{-1}\leq\delta^{2},$ (3.109) we have $\displaystyle F(\tau,\ast)\text{ is strictly $(S+\Theta,\omega_{0})$-convex, for any $\tau\in\partial\mathcal{R}$}.$ (3.110) Here $W=(b_{\alpha{\overline{\beta}}})$.
So, by Proposition 3.2, for any constant section $\Theta$ of $T_{2,0}^{\ast}(V)$, satisfying (3.109), we have $\displaystyle\Phi(\tau,\ast)\text{ is strictly $(S+\Theta,\omega_{0})$-convex, for any $\tau\in\mathcal{R}$}.$ (3.111) Therefore, $\displaystyle\Phi(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of degree $>\delta$, for any $\tau\in\mathcal{R}$}.$ (3.112) By Lemma A.2, we know (3.112) is equivalent to $\displaystyle\Phi(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of modulus $>\delta$, for any $\tau\in\mathcal{R}$}.$ (3.113) Then, by the definition of modulus of convexity, Definition 1.3, we know $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast)>\delta\omega_{0},\ \ \ \ \ \ \text{for any }\tau\in\mathcal{R}.$ (3.114) As a result, we have the following convexity and metric lower bound estimate: ###### Proposition 3.3 (Apriori Convexity and Metric Lower Bound Estimate). Suppose that, for a constant $\delta>0$ and a constant section $S$ of $T_{2,0}^{\ast}(V)$, $F\in C^{\infty}({\partial\mathcal{R}\times V})$ satisfies $\displaystyle F(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of degree $>\delta$, for any $\tau\in\partial\mathcal{R}$}.$ (3.115) In addition, Assumption 3.1 is satisfied. Then a solution $\Phi$ to Problem 1.4, with boundary value $F$ satisfies $\displaystyle\Phi(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of degree $>\delta$, for any $\tau\in\mathcal{R}$}$ (3.116) and, as a consequence, $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast)>\delta\omega_{0}\text{, \ \ \ \ \ \ for any $\tau\in\mathcal{R}$}.$ (3.117) ## 4 Existence of Solutions to the Perturbed Equation In this section, we prove the existence of smooth solutions to Problem 1.4. In section 4.1, we discuss some basic properties of equation (1.44), including ellipticity and concavity. Then in section 4.2, we derive a directional partial $C^{2}$ estimate, in the direction of the affine manifold. 
The estimate allows us to derive $C^{0}$ and $C^{1}$ estimates in section 4.3. Then, with the metric lower bound estimate, Proposition 3.3, we prove $C^{2}$ and $C^{2,\alpha}$ estimates in sections 4.4 and 4.5. Finally, in section 4.6, we prove the existence of smooth solutions. ### 4.1 Basic Properties of the Elliptic Perturbation Equation First, equation (1.44) is elliptic. This has been indicated by (2.9). More precisely, let $\Phi^{\lambda}$ be a family of solutions of equation (1.44) with $\Phi^{0}=\Phi$ and $\frac{d}{d\lambda}\Phi^{\lambda}=\Psi$, at $\lambda=0$. Then differentiating $\displaystyle\Phi^{\lambda}_{\tau\overline{\tau}}-\Phi^{\lambda}_{\tau{\overline{\beta}}}g^{{\alpha{\overline{\beta}}}}_{\lambda}\Phi^{\lambda}_{{\overline{\tau}}\alpha}=\epsilon b_{{\alpha{\overline{\beta}}}}g^{{\alpha{\overline{\beta}}}}_{\lambda}$ (4.1) with respect to $\lambda$, at $\lambda=0$, gives $\displaystyle L^{i\overline{j}}\Psi_{i\overline{j}}=0,$ (4.2) where $L^{i\overline{j}}$ was introduced in section 2 by (2.8). In (4.1), $g_{\lambda}^{{\alpha{\overline{\beta}}}}$ is the inverse of $b_{{\alpha{\overline{\beta}}}}+\Phi^{\lambda}_{{\alpha{\overline{\beta}}}}$. Then we consider the concavity.
We will show $\displaystyle F(\Phi_{i\overline{j}})=\log(\Phi_{\tau{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha\overline{\beta}}\Phi_{\alpha{\overline{\tau}}})-\log(b_{\alpha\overline{\beta}}g^{\alpha\overline{\beta}})$ (4.3) is a concave function of $\Phi_{i\overline{j}}$, providing $\displaystyle\left(\begin{array}[]{cc}\Phi_{\tau{\overline{\tau}}}&\Phi_{\tau{\overline{\beta}}}\\\ \Phi_{\alpha{\overline{\tau}}}&\Phi_{{\alpha{\overline{\beta}}}}+b_{\alpha{\overline{\beta}}}\end{array}\right)>0.$ (4.6) Actually, if we denote $\displaystyle F_{1}(\Phi_{i\overline{j}})$ $\displaystyle=\log(\Phi_{\tau{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha\overline{\beta}}\Phi_{\alpha{\overline{\tau}}}),$ (4.7) $\displaystyle F_{2}(\Phi_{i\overline{j}})$ $\displaystyle=-\log(b_{\alpha\overline{\beta}}g^{\alpha\overline{\beta}}),$ (4.8) then we can show $F_{1}$ and $F_{2}$ are both concave. These computations are in Appendix A.3. Suppose $\Phi$ is a $C^{4}$ solution to Problem 1.4 and $X$ is a constant vector field in $\mathcal{R}\times V$. We apply $\partial_{X}$ to equation (1.44) and get $\displaystyle\Phi_{X\tau{\overline{\tau}}}-\Phi_{X{\overline{\beta}}\tau}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}+\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{X\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{X\alpha{\overline{\tau}}}=-\epsilon b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{X\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}.$ (4.9) With the linearized operator $L^{i\overline{j}}\partial_{i{\overline{j}}}$, (4.9) is simplified to $\displaystyle L^{i\overline{j}}\partial_{i{\overline{j}}}(\Phi_{X})=0.$ (4.10) Then, applying $\partial_{X}$ to equation (4.9), we get $\displaystyle L^{i\overline{j}}\partial_{i{\overline{j}}}(\Phi_{XX})\geq 0.$ (4.11) This is because of the concavity of (4.3).
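One conceptual way to see the concavity of $F_{1}$ (the detailed computation is in Appendix A.3) is through the variational formula for the Schur complement: the quantity $\Phi_{\tau{\overline{\tau}}}-\Phi_{\tau{\overline{\beta}}}g^{\alpha\overline{\beta}}\Phi_{\alpha{\overline{\tau}}}$ is the Schur complement of the block matrix in (4.6), hence a pointwise minimum of linear functionals of that matrix, hence concave; and the logarithm of a positive concave function is concave. A midpoint-concavity spot check in the simplest real symmetric $2\times 2$ case (illustrative only, not part of the proof):

```python
import math
import random

# The Schur complement s(H) = h00 - h01 * h11^{-1} * h10 of a positive
# definite matrix H equals the minimum of v^T H v over vectors v with
# v0 = 1, a pointwise minimum of linear functionals of H, hence concave
# in H; log of a positive concave function is concave.  Spot check of
# midpoint concavity for random 2x2 SPD matrices H = [[a, b], [b, d]].
random.seed(0)

def schur(a, b, d):
    return a - b * b / d          # Schur complement of the (0,0) entry

def rand_spd():
    # H = L L^T with lower-triangular L, so H is symmetric positive definite
    l11 = random.uniform(0.5, 2.0)
    l21 = random.uniform(-1.0, 1.0)
    l22 = random.uniform(0.5, 2.0)
    return l11 * l11, l11 * l21, l21 * l21 + l22 * l22

for _ in range(1000):
    a1, b1, d1 = rand_spd()
    a2, b2, d2 = rand_spd()
    mid = math.log(schur((a1 + a2) / 2, (b1 + b2) / 2, (d1 + d2) / 2))
    avg = (math.log(schur(a1, b1, d1)) + math.log(schur(a2, b2, d2))) / 2
    assert mid >= avg - 1e-12     # midpoint concavity
print("log(Schur complement) is midpoint-concave on 1000 random pairs")
```

In the text the lower-right block is Hermitian rather than real symmetric; the real case suffices for the illustration.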
We can also get (4.11) directly by replacing $\partial_{\theta}$ and $\partial_{\overline{\gamma}}$ in (2.10)-(2.13) by $\partial_{X}$. ### 4.2 Affine-Manifold-Directional $C^{2}$ Estimates Suppose $X$ is a constant real vector field in $\mathcal{R}\times V$, parallel to $V$. That is to say, if we denote the projection from $\mathcal{R}\times V$ to $\mathcal{R}$ by $\pi_{\mathcal{R}}$, then $(\pi_{\mathcal{R}})_{\ast}(X)=0$. By equation (4.11), we know $\displaystyle\Phi_{XX}\leq\max_{\partial\mathcal{R}\times V}\Phi_{XX},\ \ \ \ \ \text{ in }\mathcal{R}\times V.$ (4.12) Because $\omega_{0}$ has a lower bound and $\sqrt{-1}\partial\overline{\partial}F(\tau,\ast)$ has a uniform upper bound, we can find a constant $C>0$, so that $\displaystyle\max_{\partial\mathcal{R}\times V}\Phi_{XX}=\max_{\partial\mathcal{R}\times V}F_{XX}\leq C\omega_{0}(X,JX).$ (4.13) Therefore, $\displaystyle\sqrt{-1}\partial\overline{\partial}\Phi(X,JX)=\frac{\Phi_{XX}+\Phi_{JXJX}}{2}\leq C\omega_{0}(X,JX).$ (4.14) This implies $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast)\leq(1+C)\omega_{0},\ \ \ \ \ \ \text{for any }\tau\in\mathcal{R}$ (4.15) and equivalently $\displaystyle(g_{\alpha\overline{\beta}})\leq(1+C)(b_{\alpha\overline{\beta}}).$ (4.16) As a consequence, we have $\displaystyle b_{\alpha\overline{\beta}}g^{\alpha\overline{\beta}}\cdot\det(g_{\alpha\overline{\beta}})\leq n(1+C)^{n-1}\cdot\det(b_{\alpha\overline{\beta}}).$ (4.17) ### 4.3 $C^{0}$ and $C^{1}$ Estimates To do the $C^{0}$ and boundary $C^{1}$ estimates, we construct $\Psi$ and $\Phi^{0}$, so that $\displaystyle\Psi\leq\Phi\leq\Phi^{0},\ \ \ \ \ \text{ in }\mathcal{R}\times V,$ (4.18) and $\displaystyle\Psi=\Phi=\Phi^{0},\ \ \ \ \ \text{ on }\partial\mathcal{R}\times V.$ (4.19) We let $\Phi^{0}$ be the solution to Problem 1.2, with the boundary condition $\displaystyle\Phi^{0}=\Phi,\ \ \ \ \ \ \text{ on }{\partial\mathcal{R}\times V}.$ (4.20) We easily see that
$\displaystyle\Phi\leq\Phi^{0},\ \ \ \ \ \text{ in }\mathcal{R}\times V,$ (4.21) because $\Phi^{0}$ is a maximal $\Omega_{0}$-PSH function. For the construction of $\Psi$, we need to use the estimate (4.17). Locally, we have $\displaystyle\det(h_{i\overline{j}})=\epsilon b_{\alpha\overline{\beta}}g^{\alpha\overline{\beta}}\cdot\det(g_{\alpha\overline{\beta}})\leq\epsilon n(1+C)^{n-1}\cdot\det(b_{\alpha\overline{\beta}}).$ (4.22) So, for a solution $\Psi$ to Problem 1.3, with $\varepsilon=\epsilon n(1+C)^{n-1}$, we have $\displaystyle(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)^{n+1}\leq(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Psi)^{n+1}.$ (4.23) Thus $\Psi\leq\Phi$, given the boundary condition $\Psi=\Phi$ on ${\partial\mathcal{R}\times V}$. The global $C^{0}$ and $C^{1}$ estimates for $\Psi$ are known by [CTW17] and [B12]. Therefore, we have the global $C^{0}$ estimate and boundary $C^{1}$ estimate for $\Phi$. The $C^{1}$ interior estimate can be derived from boundary estimates with equation (4.10). ### 4.4 $C^{2}$ Estimates We first prove the boundary $C^{2}$ estimate; then we use equation (4.11) to derive the interior estimate. To do the boundary estimate, we need to flatten the boundary. Around $\tau_{0}\in\partial\mathcal{R}$, find a holomorphic map $\displaystyle f:B_{\delta^{\prime}}(\tau_{0})\cap\overline{\mathcal{R}}\rightarrow\mathbb{C}=\\{\zeta=\xi+\sqrt{-1}\eta\\},$ (4.24) for a small $\delta^{\prime}$. We require that $f^{\prime}\neq 0$, $f(\tau_{0})=0,$ $\displaystyle f(\partial\mathcal{R})\subset\left\\{\zeta|\text{Im}(\zeta)=0\right\\}$ (4.25) and $\displaystyle f(B_{\delta^{\prime}}(\tau_{0})\cap\overline{\mathcal{R}})\supset B_{\delta}^{+}(0)=\\{|\text{Re}(\zeta)|<\delta,0\leq\text{Im}(\zeta)<\delta\\},$ (4.26) for a small $\delta$. For a point $p_{0}\in V$, let $\\{z^{\alpha}\\}$ be a set of coordinates in $B_{r}(p_{0})\subset V$, for a small $r$. Without loss of generality, we can assume that $p_{0}=0$ in this coordinate chart.
We also require that the coordinates $z^{\alpha}$ are properly chosen so that the natural metric on $B_{r}(0)$, as a subset of $\mathbb{C}^{n}$, is the metric $\omega_{0}$. In the following, we will work in the coordinate chart $B^{+}_{\delta}(0)\times B_{r}(0)$ and estimate second order derivatives at $(0,0)$. For convenience, we denote $B^{+}_{\delta}(0)\times B_{r}(0)$ by $\mathcal{D}$ and denote $\\{\text{Im}(\zeta)=0\\}\times B_{r}(0)$ by $\Gamma$. In the coordinate chart $\mathcal{D}$, $\Phi(\zeta,\vec{z})$ satisfies $\displaystyle\Phi_{\zeta{\overline{\zeta}}}-\Phi_{\zeta{\overline{\beta}}}g^{\alpha\overline{\beta}}\Phi_{\alpha{\overline{\zeta}}}=\epsilon\cdot k(\zeta)\cdot b_{\alpha\overline{\beta}}g^{\alpha\overline{\beta}}.$ (4.27) Here $k=\frac{1}{|f^{\prime}|^{2}}$, so it is a positive and smooth function of $\zeta$. Thus $\displaystyle\frac{1}{K}<k<K,$ (4.28) for a constant $K$. The second order derivatives of $\Phi$ at $(0,0)$ in $\Gamma$ directions are known; they depend on the boundary value $F$. Let $X$ be a constant vector field parallel to $\Gamma$ and with $|X|=1$. We need to estimate $\Phi_{X\eta}$; then, using the equation, we can control $\Phi_{\eta\eta}$. The method of estimating $\Phi_{X\eta}$ is similar to the method used in [G98]. According to the estimate of section 4.3, there is a constant $C_{1}$, so that $\displaystyle|\Phi_{X}|\leq C_{1}.$ (4.29) In the following, we will show $\Phi_{X\eta}(0,0)\leq C_{2}$, for a constant $C_{2}$. First, we need to derive an equation satisfied by $\Phi_{X}$.
Applying $\partial_{X}$ to equation (4.27) gives $\displaystyle\Phi_{X\zeta{\overline{\zeta}}}-\Phi_{X{\overline{\beta}}\zeta}g^{\alpha{\overline{\beta}}}\Phi_{\alpha{\overline{\zeta}}}+\Phi_{\zeta{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{X\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}\Phi_{\alpha{\overline{\zeta}}}-$ $\displaystyle\Phi_{\zeta{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\Phi_{X\alpha{\overline{\zeta}}}$ (4.30) $\displaystyle=-\epsilon kb_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\mu}}}\Phi_{X\rho{\overline{\mu}}}g^{\rho{\overline{\beta}}}+\epsilon(\partial_{X}k)b_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\beta}}}.$ (4.31) We introduce the following operator $\mathcal{L}$, which, after the coordinate transformation, is a scalar-function multiple of $L$. Equivalently, $\mathcal{L}$ can also be considered as the linearization operator of (4.27): $\displaystyle\mathcal{L}=\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}},$ (4.32) with $\displaystyle\left(\mathcal{L}^{i\overline{j}}\right)=\left(\begin{array}[]{cc}\mathcal{L}^{0\overline{0}}&\mathcal{L}^{0{\overline{\beta}}}\\\ \mathcal{L}^{{\overline{\alpha}}0}&\mathcal{L}^{\alpha{\overline{\beta}}}\end{array}\right)=\left(\begin{array}[]{cc}1&-\Phi_{\rho{\overline{\zeta}}}g^{\rho{\overline{\beta}}}\\\ -\Phi_{\zeta{\overline{\mu}}}g^{\alpha{\overline{\mu}}}&\epsilon kb_{\mu{\overline{\rho}}}g^{\mu{\overline{\beta}}}g^{\alpha{\overline{\rho}}}+\Phi_{\zeta{\overline{\mu}}}g^{\alpha{\overline{\mu}}}\Phi_{\rho{\overline{\zeta}}}g^{\rho{\overline{\beta}}}\end{array}\right).$ (4.37) Here $i,j$ run from $0$ to $n$, and the $0$-th coordinate is $\zeta$. With this operator, equation (4.31) becomes $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi_{X})=\epsilon(\partial_{X}k)b_{{\alpha{\overline{\beta}}}}g^{{\alpha{\overline{\beta}}}}.$ (4.38) Since we have the metric lower bound estimate, Proposition 3.3, we know the right-hand side of (4.38) is bounded.
We can assume, for a constant $\tilde{C}$, $\displaystyle-\tilde{C}\leq\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi_{X}).$ (4.39) Then we construct a barrier function $u$, so that $u\geq\Phi_{X}$ and $u(0,0)=\Phi_{X}(0,0)$. Thus we can get an upper bound for $\Phi_{X\eta}$. The barrier function is $\displaystyle u=l+C_{3}\left(|z^{\alpha}|^{2}+\xi^{2}\right)+C_{4}\eta-C_{5}\eta^{2}+C_{6}(\Phi-\Psi).$ (4.40) In the above, $l$ is the $\Gamma$-directional linearization of $\Phi_{X}$ at $(0,0)$. That is to say, $\displaystyle l(0,0)$ $\displaystyle=\Phi_{X}(0,0),$ (4.41) $\displaystyle\partial_{\eta}l$ $\displaystyle=0$ (4.42) and $\displaystyle\nabla_{\Gamma}l(0,0)=\nabla_{\Gamma}\Phi_{X}(0,0).$ (4.43) $\Psi$ is a solution to Problem 1.3 with $\varepsilon=\epsilon n(1+C)^{n-1}+1$ and $\displaystyle\Psi=\Phi,\ \ \ \ \ \ \text{ on }{\partial\mathcal{R}\times V}.$ (4.44) The $\Psi$ constructed here is even smaller than the $\Psi$ constructed in section 4.3, so $\Psi\leq\Phi$. And according to the $C^{2}$ estimate for $\Psi$ [B12], we have $\displaystyle\left(\begin{array}[]{cc}\Psi_{\zeta{\overline{\zeta}}}&\Psi_{\zeta{\overline{\beta}}}\\\ \Psi_{\alpha{\overline{\zeta}}}&\Psi_{{\alpha{\overline{\beta}}}}+b_{\alpha{\overline{\beta}}}\end{array}\right)>\frac{1}{C_{7}}\left(\begin{array}[]{cc}1&\\\ &b_{\alpha{\overline{\beta}}}\end{array}\right).$ (4.49) In the following, we show that we can properly choose parameters $C_{3},C_{4},C_{5},C_{6}$, so that $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}u\leq-\tilde{C},\ \ \ \ \ \text{ in }\Omega,$ (4.50) and $\displaystyle u\geq\Phi_{X},\ \ \ \ \ \ \text{ on }\partial\Omega.$ (4.51) Then the comparison principle implies that $u\geq\Phi_{X}$ in $\Omega$. We compute and estimate $\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}u$ term by term: (i) $\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(l+C_{4}\eta)=0$, because $l+C_{4}\eta$ is a linear function.
(ii) Using the metric lower bound estimate, Proposition 3.3, we have $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(|z^{\alpha}|^{2}+\xi^{2})=\epsilon kb_{\theta{\overline{\gamma}}}g^{\theta{\overline{\alpha}}}g^{\alpha{\overline{\gamma}}}+g^{\theta{\overline{\alpha}}}\Phi_{\theta{\overline{\zeta}}}g^{\alpha{\overline{\gamma}}}\Phi_{\zeta{\overline{\gamma}}}+\frac{1}{2}\leq C_{8}\epsilon K+g^{\theta{\overline{\alpha}}}\Phi_{\theta{\overline{\zeta}}}g^{\alpha{\overline{\gamma}}}\Phi_{\zeta{\overline{\gamma}}}.$ (4.52) Here $C_{8}$ depends on the metric lower bound. (iii) $\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(-\eta^{2})=-\frac{1}{2}.$ (iv) For $\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi-\Psi)$, we split it into two terms: $\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi)+\mathcal{L}^{{\alpha{\overline{\beta}}}}b_{{\alpha{\overline{\beta}}}}$ and $-(\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Psi)+\mathcal{L}^{{\alpha{\overline{\beta}}}}b_{{\alpha{\overline{\beta}}}})$.
Using $b_{\alpha{\overline{\beta}}}+\Phi_{\alpha{\overline{\beta}}}=g_{\alpha{\overline{\beta}}}$, we get $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi)+\mathcal{L}^{{\alpha{\overline{\beta}}}}b_{{\alpha{\overline{\beta}}}}=2\epsilon kb_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\beta}}}.$ (4.53) For the right-hand side of (4.53), we use the metric lower bound estimate and get $\displaystyle 2\epsilon kb_{\alpha{\overline{\beta}}}g^{\alpha{\overline{\beta}}}\leq 2\epsilon KC_{9}.$ (4.54) For $\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Psi)+\mathcal{L}^{{\alpha{\overline{\beta}}}}b_{{\alpha{\overline{\beta}}}}$, we use (4.49) and get $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Psi)+\mathcal{L}^{{\alpha{\overline{\beta}}}}b_{{\alpha{\overline{\beta}}}}\geq\frac{1}{C_{7}}\left(g^{\alpha{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}}g^{\mu{\overline{\beta}}}\Phi_{\mu{\overline{\zeta}}}b_{\alpha{\overline{\beta}}}\right).$ (4.55) Note that we have already made the assumption, when choosing the coordinate chart, that, in $\mathcal{D}$, $b_{{\alpha{\overline{\beta}}}}=\delta_{\alpha{\overline{\beta}}}$. So $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Psi)+\mathcal{L}^{{\alpha{\overline{\beta}}}}b_{{\alpha{\overline{\beta}}}}\geq\frac{1}{C_{7}}\left(g^{\alpha{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}}g^{\mu{\overline{\alpha}}}\Phi_{\mu{\overline{\zeta}}}\right).$ (4.56) The right-hand side of (4.56) can be used to control the right-hand side of (4.52). 
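Explicitly, the control is just a relabeling of dummy indices: renaming $\mu\to\theta$ and $\eta\to\gamma$ in the right-hand side of (4.56) gives

```latex
g^{\alpha\overline{\eta}}\Phi_{\zeta\overline{\eta}}\,g^{\mu\overline{\alpha}}\Phi_{\mu\overline{\zeta}}
\;=\;
g^{\theta\overline{\alpha}}\Phi_{\theta\overline{\zeta}}\,g^{\alpha\overline{\gamma}}\Phi_{\zeta\overline{\gamma}},
```

which is exactly the quadratic term appearing in (4.52).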
In sum, we have $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}u\leq(C_{3}\cdot\epsilon KC_{8}+C_{6}\cdot 2\epsilon KC_{9}-\frac{C_{5}}{2})+g^{\alpha{\overline{\eta}}}\Phi_{\zeta{\overline{\eta}}}g^{\mu{\overline{\alpha}}}\Phi_{\mu{\overline{\zeta}}}(C_{3}-\frac{C_{6}}{C_{7}}).$ (4.57) We need to choose $C_{3},C_{5},C_{6}$ so that $\displaystyle C_{6}\geq C_{3}\cdot C_{7}$ (4.58) and $\displaystyle C_{5}\geq 2(C_{3}\cdot 2\epsilon KC_{8}+C_{6}\cdot 2\epsilon KC_{9}+\tilde{C}).$ (4.59) We also want $u\geq\Phi_{X}$ on $\partial\Omega$. We need to choose $C_{3}$ big enough, so that $\displaystyle C_{3}(\sum_{\alpha}|z^{\alpha}|^{2}+\xi^{2})+l\geq\Phi_{X}+(\sum_{\alpha}|z^{\alpha}|^{2}+\xi^{2}),\ \ \ \ \text{ on }\Gamma.$ (4.60) This requires $\displaystyle C_{3}\geq\max_{\Gamma}|D^{2}(\Phi_{X}|_{\Gamma})|+1.$ (4.61) We note that $\displaystyle C_{4}\eta-C_{5}\eta^{2}+C_{6}(\Phi-\Psi)=0,\ \ \ \ \ \ \text{ on }\Gamma,$ (4.62) so $u\geq\Phi_{X}$ on $\Gamma$, given (4.60) is valid. To make $u\geq\Phi_{X}$ on $\partial\Omega-\Gamma$, we choose $C_{4}$ big enough. Given (4.60), we have $\displaystyle u\geq\Phi_{X}+\delta_{1},\ \ \ \ \ \ \text{ on }\partial\Gamma,$ (4.63) for a small positive constant $\delta_{1}$. Then for a small $\delta_{2}\in(0,r)$, $\displaystyle l+C_{3}\left(|z^{\alpha}|^{2}+\xi^{2}\right)-C_{5}\eta^{2}+C_{6}(\Phi-\Psi)\geq\Phi_{X},\ \ \ \ \ \ \text{ on }\\{\eta\leq\delta_{2}\\}\cap\partial\Omega.$ (4.64) $\delta_{2}$ depends on $\delta_{1},C_{5},C_{6}$, second order derivatives of $F$ and the norms of gradients of $\Psi,\Phi$. We also have $\displaystyle l+C_{3}\left(|z^{\alpha}|^{2}+\xi^{2}\right)-C_{5}\eta^{2}+C_{6}(\Phi-\Psi)>\Phi_{X}-C_{10},\ \ \ \ \ \ \text{ in }\Omega,$ (4.65) for a constant $C_{10}$, depending on $C_{5},C_{6}$, second order derivatives of $F$, the $C^{1}$ norm of $\Phi$ and the $C^{0}$ norm of $\Psi$. 
We can choose $\displaystyle C_{4}>\frac{C_{10}}{\delta_{2}}.$ (4.66) Then $u\geq\Phi_{X}$ on $\partial\mathcal{D}$. In sum, we choose $C_{3}$ large enough with condition (4.61), then choose $C_{6}$ with condition (4.58), then choose $C_{5}$ with condition (4.59) and finally choose $C_{4}$ according to condition (4.66). Thus, we have an upper bound for $\Phi_{X\eta}$. To get the lower bound, we simply replace $\partial_{X}\Phi$ by $\partial_{-X}\Phi$. Then we can use equation (4.27) to get the estimate of $\Phi_{\eta\eta}$. Just note that $4\Phi_{\zeta{\overline{\zeta}}}=\Phi_{\xi\xi}+\Phi_{\eta\eta}$. So $\displaystyle\Phi_{\eta\eta}=-\Phi_{\xi\xi}+4\Phi_{\zeta{\overline{\beta}}}g^{\alpha\overline{\beta}}\Phi_{\alpha{\overline{\zeta}}}+4\epsilon\cdot k(\zeta)\cdot b_{\alpha\overline{\beta}}g^{\alpha\overline{\beta}}.$ (4.67) The estimate of the right-hand side of (4.67) depends on the boundary value $F$, the metric lower bound estimate and the estimate of $\Phi_{X\eta}$. Given the boundary estimate, we can go back to the original $(\tau,\vec{z})$ coordinates and use equation (4.11). Similar to section 4.2, for any constant vector $X$ in $\mathcal{R}\times V$, we have $\displaystyle\Phi_{XX}\leq\max_{\partial\mathcal{R}\times V}\Phi_{XX},\ \ \ \ \ \ \text{ in }\mathcal{R}\times V.$ (4.68) For the lower bound estimate of $\Phi_{XX}$, we have $\displaystyle-\Phi_{XX}$ $\displaystyle=\Phi_{JXJX}-2\sqrt{-1}\partial\overline{\partial}\Phi(X,JX)$ (4.69) $\displaystyle=\Phi_{JXJX}-2(\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi)(X,JX)+2\Omega_{0}(X,JX).$ (4.70) Then, using $\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi\geq 0$, we get the lower bound of $\Phi_{XX}$. In sum, we get $\displaystyle|\Phi|_{C^{2}(\overline{\mathcal{R}}\times V)}\leq C_{11},$ (4.71) for a constant $C_{11}$, which depends on $\epsilon$, $|F|_{C^{3}(\overline{\mathcal{R}}\times V)}$, $\omega_{0}$, the metric lower bound estimate and the boundary of $\mathcal{R}$. 
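For completeness, the identity $4\Phi_{\zeta{\overline{\zeta}}}=\Phi_{\xi\xi}+\Phi_{\eta\eta}$ used above is the standard Wirtinger computation for $\zeta=\xi+\sqrt{-1}\eta$:

```latex
\partial_{\zeta}=\tfrac{1}{2}\bigl(\partial_{\xi}-\sqrt{-1}\,\partial_{\eta}\bigr),
\qquad
\partial_{\overline{\zeta}}=\tfrac{1}{2}\bigl(\partial_{\xi}+\sqrt{-1}\,\partial_{\eta}\bigr),
\qquad\text{so}\qquad
4\,\partial_{\zeta}\partial_{\overline{\zeta}}=\partial_{\xi}^{2}+\partial_{\eta}^{2}.
```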
In particular, when $\epsilon\rightarrow 0$, the constant $C_{11}$ does not go to $\infty$. However, we do not need to use this fact. ### 4.5 $C^{2,\alpha}$ Estimates With the $C^{2}$ estimate in the previous section, we know the operators $\mathcal{L}$ and $L$ are uniformly elliptic. We only need to prove the boundary $C^{2,\alpha}$ estimate. Then, with the uniform ellipticity, concavity, $C^{2}$ estimate and the boundary $C^{2,\alpha}$ estimate, we can derive the interior $C^{2,\alpha}$ estimate by standard methods [H16]. For the boundary $C^{2,\alpha}$ estimate we need to flatten the boundary again. Adopting the notation of Section 4.4, we know that $\Phi_{X}$ satisfies the equation $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi_{X})=\epsilon(\partial_{X}k)b_{{\alpha{\overline{\beta}}}}g^{{\alpha{\overline{\beta}}}}\ \ \ \ \ \ \text{ in $\mathcal{D}$}.$ (4.72) We construct a function $\displaystyle\mathcal{F}(\zeta,\vec{z})=\partial_{X}F(\text{Re}(\zeta),\vec{z}).$ (4.73) Then $\displaystyle\Phi_{X}-\mathcal{F}$ $\displaystyle=0,$ $\displaystyle\text{ on }\Gamma;$ (4.74) $\displaystyle\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}(\Phi_{X}-\mathcal{F})$ $\displaystyle=\epsilon(\partial_{X}k)b_{{\alpha{\overline{\beta}}}}g^{{\alpha{\overline{\beta}}}}-\mathcal{L}^{i\overline{j}}\partial_{i\overline{j}}\mathcal{F},$ $\displaystyle\text{ in }\mathcal{D}.$ (4.75) The right-hand side of (4.75) is bounded, according to the $C^{2}$ estimate in the previous section, so we can use Theorem 1.2.16 of [H16] and get the $C^{\alpha}$ estimate for $\partial_{\eta}(\Phi_{X}-\mathcal{F})=\Phi_{X\eta}$ in a small neighborhood of $0$ in $\Gamma$, for an $\alpha\in(0,1)$. Using equation (4.67), we get the $C^{\alpha}$ estimate for $\Phi_{\eta\eta}$. ### 4.6 Existence of Smooth Solutions by the Method of Continuity Suppose $F\in C^{\infty}({\partial\mathcal{R}\times V})$ satisfies condition (3.1), for a constant section $S$ of $T_{2,0}^{\ast}(V)$.
Then according to Lemma A.7, $\displaystyle\sigma F(\tau,\ast)\text{ is strictly $(S,\omega_{0})$-convex for any $\tau\in\mathcal{R}$ and any $\sigma\in[0,1]$ }.$ (4.76) Consider the set $\displaystyle\mathscr{S}=\\{\sigma\in[0,1]\big{|}$ Problem 1.4 with boundary value $s\cdot F$ has a solution $\Phi^{s}$, for any $s\leq\sigma$, (4.77) $\displaystyle\ \ \ \text{and $\Phi^{s}$ is a continuous curve in $C^{4}(\overline{\mathcal{R}}\times V)$ with $C^{2}$ topology}\\}.$ (4.78) Obviously, $\mathscr{S}$ is non-empty, since it contains $0$. So, if $\mathscr{S}$ is both open and closed, then $\mathscr{S}=[0,1]$. Before we prove the openness and closedness, we point out that if a solution $\Phi$ is in $C^{2,\alpha}(\overline{\mathcal{R}}\times V)$ then we can use the standard bootstrap technique (Theorems 5.1.9 and 5.1.10 of [H16]) to show that $\Phi$ is actually in $C^{\infty}(\overline{\mathcal{R}}\times V)$. This is because of the condition (1.42) and the ellipticity of the equation (1.41). The openness can be proved with the standard implicit function theorem. This, again, is because of the condition (1.42) and the ellipticity. Without such a condition, the openness can be quite difficult to prove. For example, in [CFH20], we used the Nash-Moser inverse function theorem to prove an openness result for geodesic equations. For the closedness, given $\\{\sigma_{i}\\}_{i\in\mathbb{Z}^{+}}\subset\mathscr{S}$ with $\lim_{i\rightarrow\infty}\sigma_{i}=\sigma_{\infty}$, we need to show that $\sigma_{\infty}\in\mathscr{S}$. According to the $C^{2,\alpha}$ estimate, the sequence of solutions $\Phi^{\sigma_{i}}$ satisfies $\displaystyle|\Phi^{\sigma_{i}}|_{C^{2,\alpha}}\leq C$ (4.79) and $\displaystyle\Omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi^{\sigma_{i}}\geq\frac{1}{C}(\Omega_{0}+\sqrt{-1}d\tau\wedge\overline{d\tau}),$ (4.80) for a constant $C$, which depends on $\epsilon$. It is easy to see that $\Phi^{\sigma_{i}}$ is a Cauchy sequence in $C^{0}(\overline{\mathcal{R}}\times V)$.
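The interpolation used in the next step is the standard Hölder-norm interpolation inequality (see, e.g., Lemma 6.35 of [GT98]); schematically, for $0<\beta<\alpha$,

```latex
\|u\|_{C^{2,\beta}(\overline{\mathcal{R}}\times V)}
\le C\,\|u\|_{C^{0}}^{1-\theta}\,\|u\|_{C^{2,\alpha}}^{\theta},
\qquad
\theta=\frac{2+\beta}{2+\alpha}\in(0,1),
```

so a sequence that is bounded in $C^{2,\alpha}$ and Cauchy in $C^{0}$ is Cauchy in $C^{2,\beta}$; this is applied with $\beta=\alpha/2$.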
So, using interpolation, we can find $\Phi^{\sigma_{\infty}}\in C^{2,\frac{\alpha}{2}}(\overline{\mathcal{R}}\times V)$ and $\displaystyle\Phi^{\sigma_{i}}\rightarrow\Phi^{\sigma_{\infty}},\ \ \ \ \ \ \ \ \text{ in $C^{2,\frac{\alpha}{2}}$ norm.}$ (4.81) The $C^{2,\frac{\alpha}{2}}$ convergence implies that $\Phi^{\sigma_{\infty}}$ satisfies equation (1.41) and condition (1.42). Therefore, $\Phi^{\sigma_{\infty}}$ is a solution to Problem 1.4, with boundary value ${\sigma_{\infty}}F$. It remains to show that, for any $\lambda\in[0,\sigma_{\infty})$, there is a solution $\Phi^{\lambda}$ and $\\{\Phi^{\lambda}|\lambda\in[0,\sigma_{\infty}]\\}$ is a continuous curve with $C^{2}$ topology. In the following, when we say solution $\Phi^{\theta}$, we always mean a solution to Problem 1.4 with boundary value $\theta F$. For any $\nu<\sigma_{\infty}$, let $\delta_{\nu}=\frac{\sigma_{\infty}-\nu}{2}$. There is a $\sigma_{k}>\nu+\delta_{\nu}$ because $\sigma_{k}\rightarrow\sigma_{\infty}$. $\sigma_{k}$ being in $\mathscr{S}$ implies that the solution $\Phi^{\nu}$ with boundary value $\nu F$ exists and $\\{\Phi^{s}|s\in[0,\nu+\delta_{\nu}]\\}$ is a continuous curve with $C^{2}$ topology. So $\displaystyle\\{\Phi^{\lambda}|\lambda\in[0,\sigma_{\infty})\\}=\bigcup_{\nu<\sigma_{\infty}}\\{\Phi^{\lambda}|\lambda\in[0,\nu+\delta_{\nu})\\}$ (4.82) is $C^{2}$ continuous everywhere. Therefore, for any sequence $\nu_{k}\rightarrow\sigma_{\infty}$, the solutions $\Phi^{\nu_{k}}$ have a uniform $C^{2,\alpha}$ estimate. It is easy to see that $\Phi^{\nu_{k}}$ converges to $\Phi^{\sigma_{\infty}}$ in $C^{0}$ norm, so, by interpolation, we know $\Phi^{\nu_{k}}$ converges to $\Phi^{\sigma_{\infty}}$ in $C^{2}$ norm and the curve $\\{\Phi^{\lambda}|\lambda\in[0,\sigma_{\infty}]\\}$ is $C^{2}$ continuous everywhere. In sum, we have the following. ###### Theorem 4.1 (Existence of Smooth Solutions to Problem 1.4 and Convexity Estimates).
Suppose that, for a constant $\delta>0$ and a constant section $S$ of $T_{2,0}^{\ast}(V)$, $F\in C^{\infty}({\partial\mathcal{R}\times V})$ satisfies $\displaystyle F(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of modulus $>\delta$, for any $\tau\in\partial\mathcal{R}$}.$ (4.83) Then Problem 1.4 with boundary value $F$ has a unique and smooth solution $\Phi$. In addition, $\displaystyle\Phi(\tau,\ast)\text{ is $(S,\omega_{0})$-convex of modulus $>\delta$, for any $\tau\in\mathcal{R}$},$ (4.84) and, consequently, $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi(\tau,\ast)>\delta\omega_{0}\text{, \ \ \ \ \ \ for any $\tau\in\mathcal{R}$}.$ (4.85) ## 5 Estimates For Homogenous Monge-Ampère Equations In this section, we prove estimates for solutions to Problem 1.2 and Problem 1.1. Solutions we talk about in this section all have the same boundary value $F$ which satisfies condition (1.34). Suppose $\Phi^{\epsilon}$ is the solution to Problem 1.4 and $\Phi^{0}$ is the solution to Problem 1.2. We will show that $\Phi^{\epsilon}$ converges to $\Phi^{0}$ in $C^{0}$ norm and $\Phi^{0}$ satisfies estimates, which are satisfied by $\Phi^{\epsilon}$. Let $\Psi^{\epsilon}$ be the solution to Problem 1.3 with $\varepsilon=\epsilon n(1+C)^{n-1}$, where the constant $C$ is from (4.22). Then we know $\displaystyle\Psi^{\epsilon}\leq\Phi^{\epsilon}\leq\Phi^{0},\ \ \ \ \ \ \text{ in }{\mathcal{R}\times V}.$ (5.1) According to estimates, in [C00] or [B12], for solutions to Problem 1.3, $\Psi^{\epsilon}\rightarrow\Phi^{0}$ in $C^{0}$ norm, so $\Phi^{\epsilon}\rightarrow\Phi^{0}$ in $C^{0}$ norm. In particular, for any $\tau\in\mathcal{R}$, $\displaystyle\Phi^{\epsilon}(\tau,\ast)\rightarrow\Phi^{0}(\tau,\ast),\ \ \ \ \ \ \text{ in }C^{0}\text{ norm. 
}$ (5.2) According to Theorem 4.1, in every local coordinate chart, $\displaystyle\Phi^{\epsilon}(\tau,\ast)+(1-\mu)b_{\alpha{\overline{\beta}}}z^{\alpha}\overline{z^{\beta}}+\text{Re}\left(S_{\alpha\beta}z^{\alpha}z^{\beta}\right)$ (5.3) is a convex function, for any $\tau\in\mathcal{R}$. Then, because of the $C^{0}$ convergence of $\Phi^{\epsilon}\rightarrow\Phi^{0}$, we can replace $\Phi^{\epsilon}$ by $\Phi^{0}$ in (5.3) and get that $\displaystyle\Phi^{0}(\tau,\ast)+(1-\mu)b_{\alpha{\overline{\beta}}}z^{\alpha}\overline{z^{\beta}}+\text{Re}\left(S_{\alpha\beta}z^{\alpha}z^{\beta}\right)$ (5.4) is a convex function, for any $\tau\in\mathcal{R}$. Thus $\Phi^{0}(\tau,\ast)$ is $(S,\omega_{0})$-convex of modulus $\geq\mu$, for any $\tau\in\mathcal{R}$. For the metric lower bound estimate, the proof is standard: we only need an integration by parts. Theorem 4.1 implies that, for any positive function $\eta$, $\displaystyle\int_{V}(\omega_{0}(1-\mu)+\sqrt{-1}\partial\overline{\partial}\Phi^{\epsilon}(\tau,\ast))\wedge\omega_{0}^{n-1}\eta\geq 0,$ (5.5) for any $\tau\in\mathcal{R}$. Then for any $\eta$ with sufficiently small support, we can find $\rho_{0}$, so that $\displaystyle\omega_{0}=\sqrt{-1}\partial\overline{\partial}\rho_{0},\ \ \ \ \ \ \text{ in the support of $\eta$}.$ (5.6) Thus $\displaystyle\int_{V}(\rho_{0}(1-\mu)+\Phi^{\epsilon}(\tau,\ast))\wedge\omega_{0}^{n-1}\wedge\sqrt{-1}\partial\overline{\partial}\eta\geq 0,$ (5.7) for any $\tau\in\mathcal{R}$. Letting $\epsilon\rightarrow 0$, we get $\displaystyle\int_{V}(\rho_{0}(1-\mu)+\Phi^{0}(\tau,\ast))\wedge\omega_{0}^{n-1}\wedge\sqrt{-1}\partial\overline{\partial}\eta\geq 0,$ (5.8) for any $\tau\in\mathcal{R}$. So $\displaystyle\omega_{0}+\sqrt{-1}\partial\overline{\partial}\Phi^{0}(\tau,\ast)\geq\mu\omega_{0},$ (5.9) in the weak sense, for any $\tau\in\mathcal{R}$. Thus, Theorem 1.2 is proved. ###### Remark 5.1.
Here, we cannot use Lemma A.2 to derive the metric lower bound estimate directly from the convexity estimate, because $\Phi^{0}(\tau)$ may not be $C^{2}$ continuous and the degree of $(S,\omega_{0})$-convexity may not be well defined. ## Appendix A Algebra Lemmas ### A.1 Lemmas for Convexity Estimate In this appendix, we show that if $\varphi$ is $C^{2}$ then Definition 1.2 and Definition 1.4 are equivalent. Furthermore, the modulus of $(S,\omega_{0})$-convexity and the degree of $(S,\omega_{0})$-convexity also coincide. The main results are Lemma A.1 and Lemma A.2. ###### Lemma A.1 (Equivalent Definitions of Strict $(S,\omega_{0})$-Convexity). Suppose that $\varphi$ is a $C^{2}$ continuous function on $V$. Then it satisfies the condition of Definition 1.2 if and only if it satisfies the condition of Definition 1.4. ###### Lemma A.2 (Equivalence between Modulus of Convexity and Degree of Convexity). Suppose that $\varphi$ is a $C^{2}$ continuous function on $V$ and $S$ is a constant section of $T_{2,0}^{\ast}(V)$. Then $\varphi$ is $(S,\omega_{0})$-convex of degree $>\delta$ if and only if it is $(S,\omega_{0})$-convex of modulus $>\delta$. In the proofs of these lemmas, and also in other parts of the paper, we need the following Autonne-Takagi factorization and its corollary, Lemma A.4. The Autonne-Takagi factorization below is Corollary 4.4.4(c) of [HJ13]. ###### Lemma A.3 (Autonne-Takagi Factorization). Given a complex-valued symmetric matrix $S$, there is a unitary matrix $U$ such that $\displaystyle S=U^{T}\Sigma U$ (A.1) in which $\Sigma$ is a non-negative diagonal matrix. Consequently, the Hermitian matrix $S\overline{S}$ has the decomposition $\displaystyle S\overline{S}=U^{T}\Sigma^{2}\overline{U}.$ (A.2) With the Autonne-Takagi factorization, we can prove the following lemma. ###### Lemma A.4. Suppose $A$ is an $n\times n$ positive definite Hermitian matrix and $B$ is a complex $n\times n$ symmetric matrix.
Then we can find an $n\times n$ invertible matrix $P$, so that $\displaystyle PAP^{\ast}=I,\ \ \ \ \ \ \ \ \ PBP^{T}=\Lambda,$ (A.3) in which $\Lambda$ is a non-negative diagonal matrix. ###### Proof of Lemma A.4. Since $A$ is a positive definite Hermitian matrix, we can find an invertible matrix $Q$ so that $\displaystyle QAQ^{\ast}=I.$ (A.4) Then we apply the Autonne-Takagi factorization to $QBQ^{T}$ and find $R\in U(n)$ so that $\displaystyle R(QBQ^{T})R^{T}=\Lambda.$ (A.5) $P=RQ$ satisfies condition (A.3). ∎ The proof of Lemma A.1 essentially depends on the following lemma. ###### Lemma A.5. Suppose $U,V,W$ are real valued $n\times n$ matrices and $U,W$ are symmetric. Let $\displaystyle A=\frac{1}{4}(U+W)+\frac{\sqrt{-1}}{4}(V-V^{T}),$ (A.6) $\displaystyle B=\frac{1}{4}(U-W)-\frac{\sqrt{-1}}{4}(V+V^{T}).$ (A.7) Then $\displaystyle\left(\begin{array}[]{cc}U&V\\\ V^{T}&W\end{array}\right)>0$ (A.10) if and only if $\displaystyle A>0\text{\ \ \ and \ \ \ }B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}<1.$ (A.11) We first prove Lemma A.5, then Lemma A.1 follows easily. ###### Proof of Lemma A.5. ($\Rightarrow$) Given $U,V,W$ satisfying (A.10), we can construct a strictly convex quadratic polynomial on $\mathbb{C}^{n}$, with coordinates $z^{\alpha}=x^{\alpha}+\sqrt{-1}y^{\alpha}$, $\displaystyle H({\bf z})=U_{\alpha\beta}x^{\alpha}x^{\beta}+2V_{\alpha\beta}x^{\alpha}y^{\beta}+W_{\alpha\beta}y^{\alpha}y^{\beta}.$ (A.12) $H$ is a strictly PSH function, since it is strictly convex, and it is straightforward to check that $\displaystyle\partial_{\alpha{\overline{\beta}}}H=A_{\alpha{\overline{\beta}}}\text{ \ \ \ and \ \ \ \ }\partial_{\alpha\beta}H=B_{\alpha\beta}.$ (A.13) Therefore, we know $A>0$. Then, using Lemma A.4, we find a matrix $P$, so that $\displaystyle PAP^{\ast}=I,\ \ \ \ \ \ \ \ \ PBP^{T}=\Lambda,$ (A.14) in which $\Lambda=\text{diag}(\lambda_{1},\lambda_{2},\ ...\ ,\lambda_{n})$ is non-negative.
We consider a new set of coordinates $\\{\zeta^{\alpha}\\}$, with $\displaystyle z^{\alpha}=P^{\alpha}_{\beta}\zeta^{\beta}.$ (A.15) In these new coordinates, $\displaystyle(\partial_{\zeta^{\alpha}\zeta^{\beta}}H)=(P_{\alpha}^{\mu}P_{\beta}^{\rho}\partial_{z^{\mu}z^{\rho}}H)=PBP^{T}=\Lambda;$ (A.16) $\displaystyle(\partial_{\zeta^{\alpha}\zeta^{\overline{\beta}}}H)=(P_{\alpha}^{\mu}\overline{P_{\beta}^{\rho}}\partial_{z^{\mu}z^{\overline{\rho}}}H)=PAP^{\ast}=I.$ (A.17) In the above, $P=(P_{\alpha}^{\beta})$, where $\alpha$ is the row index and $\beta$ is the column index. The linear change of coordinates (A.15) does not affect the fact that $H$ is a strictly convex function. So the restriction of $H$ to a complex line $\displaystyle L_{\alpha}=\\{\zeta^{\alpha}=\tau,\zeta^{\mu}=0,\text{ for }\mu\neq\alpha|\tau\in\mathbb{C}\\}$ (A.18) is still strictly convex. Therefore, $\displaystyle H|_{L_{\alpha}}=\tau{\overline{\tau}}+\lambda_{\alpha}\frac{\tau^{2}+{\overline{\tau}}^{2}}{2}$ (A.19) is a strictly convex function of $\tau$. Writing $\tau=s+\sqrt{-1}t$, this is $(1+\lambda_{\alpha})s^{2}+(1-\lambda_{\alpha})t^{2}$, which implies $|\lambda_{\alpha}|<1$. So $\Lambda^{2}<1$ and $\displaystyle B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}=P^{-1}\Lambda^{2}P<1.$ (A.20) ($\Leftarrow$) For the other direction, we use Lemma A.4 to diagonalize $A,B$ simultaneously.
Since $A>0$, we can find $R$ so that $\displaystyle B=R\Lambda R^{T},\ \ \ \ \text{and }\ \ \ \ \ A=RR^{\ast}.$ (A.21) Let $R=R_{1}+\sqrt{-1}R_{2}.$ Then $\displaystyle B=R_{1}\Lambda R_{1}^{T}-R_{2}\Lambda R_{2}^{T}+\sqrt{-1}(R_{2}\Lambda R_{1}^{T}+R_{1}\Lambda R_{2}^{T});$ (A.22) $\displaystyle A=R_{1}R_{1}^{T}+R_{2}R_{2}^{T}+\sqrt{-1}(R_{2}R_{1}^{T}-R_{1}R_{2}^{T}).$ (A.23) We can get $U,V,W$ by adding (A.6) and (A.7) or subtracting one from another: $\displaystyle U=2\text{Re}(A+B),\ \ W=2\text{Re}(A-B),\ \ V=2\text{Im}(A-B).$ (A.24) Then plug (A.22) and (A.23) into (A.24), we get $\displaystyle\left(\begin{array}[]{cc}U&V\\\ V^{T}&W\end{array}\right)=2\left(\begin{array}[]{cc}R_{1}&R_{2}\\\ -R_{2}&R_{1}\end{array}\right)\left(\begin{array}[]{cc}I+\Lambda&\\\ &I-\Lambda\end{array}\right)\left(\begin{array}[]{cc}R_{1}^{T}&-R_{2}^{T}\\\ R_{2}^{T}&R_{1}^{T}\end{array}\right).$ (A.33) Therefore, $\Lambda^{2}<1$ implies (A.10). Similar to (A.20), we have $\displaystyle B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}=R\Lambda^{2}R^{-1}.$ (A.34) So $B\overline{A^{-1}}\ {\overline{B}}{A^{-1}}<1$ implies $\Lambda^{2}<1$. ∎ Lemma A.1 follows immediately, by letting $A=(\varphi_{\alpha{\overline{\beta}}})$ and $B=(\varphi_{{\alpha\beta}})+S.$ The following linear algebra lemma is essential for the equivalence between modulus of convexity and degree of convexity. ###### Lemma A.6. Suppose $A,G$ are positive definite $n\times n$ Hermitian matrices, $B$ is a complex valued symmetric matrix and $\mu$ is a positive constant. Then $\displaystyle A>\mu G\text{,\ \ and\ \ }B\overline{(A-\mu G)^{-1}}\ \overline{B}(A-\mu G)^{-1}<1$ (A.35) if and only if $\displaystyle A>0,\text{\ and\ }(B-\Theta)\overline{A^{-1}}\ \overline{B-\Theta}A^{-1}<1,\text{\ for any symmetric $\Theta$, with }\Theta\overline{G^{-1}}\ \overline{\Theta}G^{-1}\leq\mu^{2}.$ (A.36) ###### Proof of Lemma A.6. We first provide a proof in the $n=1$ case. 
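As an aside, the $n=1$ equivalence can be sanity-checked numerically. The sketch below (pure Python, our own illustration rather than part of the proof) assumes $G=1$ and uses the triangle-inequality fact that the worst $\Theta$ on the disc $|\Theta|\leq\mu$ is $\Theta=-\mu B/|B|$:

```python
import random

def cond_A35(A, B, mu):
    # (A.35) with n = 1, G = 1:  A - mu > 0  and  |B| / (A - mu) < 1
    return A > mu and abs(B) < A - mu

def cond_A36(A, B, mu):
    # (A.36) with n = 1, G = 1:  A > 0 and |B - Theta| < A for every
    # complex Theta with |Theta| <= mu.  By the triangle inequality the
    # worst Theta is -mu * B / |B|, where |B - Theta| = |B| + mu.
    if A <= 0:
        return False
    theta = -mu * B / abs(B) if B != 0 else complex(mu, 0.0)
    return abs(B - theta) < A

random.seed(0)
agree = all(
    cond_A35(A, B, mu) == cond_A36(A, B, mu)
    for A, B, mu in (
        (random.uniform(0.0, 3.0),
         complex(random.uniform(-1, 1), random.uniform(-1, 1)),
         random.uniform(0.0, 1.0))
        for _ in range(5000)
    )
)
print(agree)  # True
```

Both conditions reduce, for $n=1$, to the single inequality $A>|B|+\mu$, which is the cone picture of Figure 2.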
The lemma in this case is very intuitive and it best demonstrates the idea. In this case, $B,\Theta$ are complex numbers, $A,G$ are positive numbers, and, without loss of generality, we can assume $G=1$. As illustrated by Figure 2, the set $\displaystyle\\{(B,A)\big{|}A>0,|B-\Theta|<A\\}$ (A.37) is a cone with vertex at $(\Theta,0)$. Figure 2: Metric Lower Bound Estimate The condition that, for all $\Theta$ with $|\Theta|\leq\mu$, $\displaystyle(B,A)\in\\{(B,A)\big{|}A>0,|B-\Theta|<A\\}$ (A.38) is equivalent to $\displaystyle(B,A)\in\bigcap_{|\Theta|\leq\mu}\\{(B,A)\big{|}A>0,|B-\Theta|<A\\}.$ (A.39) Elementary geometry tells us that $\displaystyle\bigcap_{|\Theta|\leq\mu}\\{(B,A)\big{|}A>0,|B-\Theta|<A\\}=\\{A>\mu+|B|\\}.$ (A.40) Therefore, (A.39) is equivalent to $\displaystyle A>\mu\ \ \ \ \ \ \text{and}\ \ \ \ \ \ \left|\frac{B}{A-\mu}\right|<1,$ (A.41) which is condition (A.35). For general $n$, we need to construct quadratic polynomials from $A,B,G$, similar to the proof of Lemma A.1. Given a Hermitian matrix $H$ and a symmetric matrix $S$, let $\displaystyle K_{S}^{H}({\bf z})=H_{{\alpha{\overline{\beta}}}}z^{\alpha}z^{{\overline{\beta}}}+\text{Re}(S_{\alpha\beta}z^{\alpha}z^{\beta}).$ (A.42) According to Lemma A.1, $A,B,G$ and $\mu$ satisfying (A.35) is equivalent to the statement that $\displaystyle K_{B}^{A-\mu G}\text{\ is a strictly convex function on $\mathbb{C}^{n}$};$ (A.43) and $A,B,G$ and $\mu$ satisfying (A.36) is equivalent to the statement that $\displaystyle K_{B-\Theta}^{A}\text{\ is a strictly convex function on $\mathbb{C}^{n}$, for any symmetric $\Theta$, with $\Theta\overline{G^{-1}}\ \overline{\Theta}G^{-1}\leq\mu^{2}$}.$ (A.44) Because $K_{B-\Theta}^{A}$ and $K_{B}^{A-\mu G}$ are both quadratic polynomials, they are strictly convex if and only if they are positive on $\mathbb{C}^{n}-\\{0\\}$.
So we need to show $\displaystyle K_{B}^{A-\mu G}>0,\text{\ on \ }\mathbb{C}^{n}\backslash\\{0\\}$ (A.45) if and only if $\displaystyle K_{B-\Theta}^{A}>0,\text{\ on \ }\mathbb{C}^{n}\backslash\\{0\\}\text{, for any symmetric $\Theta$, with $\Theta\overline{G^{-1}}\ \overline{\Theta}G^{-1}\leq\mu^{2}$}.$ (A.46) Equivalently, we need to show $\displaystyle K_{B}^{A}({\bf z})>\mu G_{\alpha{\overline{\beta}}}z^{\alpha}z^{\overline{\beta}},\text{\ \ on \ \ }\mathbb{C}^{n}\backslash\\{0\\}$ (A.47) if and only if $\displaystyle K_{B}^{A}({\bf z})>\text{Re}(\Theta_{\alpha\beta}z^{\alpha}z^{\beta}),\text{\ on \ }\mathbb{C}^{n}\backslash\\{0\\}\text{, for any symmetric $\Theta$, with $\Theta\overline{G^{-1}}\ \overline{\Theta}G^{-1}\leq\mu^{2}$}.$ (A.48) This is valid because $\displaystyle\sup_{\Theta\text{ symmetric, }\Theta\overline{G^{-1}}\ \overline{\Theta}G^{-1}\leq\mu^{2}}\text{Re}(\Theta_{\alpha\beta}z^{\alpha}z^{\beta})=\mu G_{\alpha{\overline{\beta}}}z^{\alpha}z^{\overline{\beta}}.$ (A.49) To prove (A.49), we find $P$, so that $PGP^{\ast}=I$, and we let $\displaystyle\frac{P\Theta P^{T}}{\mu}=S.$ (A.50) Then we find (A.49) is equivalent to $\displaystyle\sup_{S\text{ symmetric, }S\overline{S}\ \leq 1}\text{Re}(S_{\alpha\beta}z^{\alpha}z^{\beta})=\delta_{\alpha{\overline{\beta}}}z^{\alpha}z^{\overline{\beta}}.$ (A.51) (A.51) can be easily proved with basic linear algebra. ∎ Lemma A.2 follows immediately, by letting $A=(\varphi_{\alpha{\overline{\beta}}})$ and $B=(\varphi_{{\alpha\beta}})+S.$ ### A.2 Monotonicity In this section, we prove the following algebra lemma. ###### Lemma A.7 (A Monotonicity Lemma). Suppose $A_{0}$ and $A$ are Hermitian matrices satisfying $\displaystyle A_{0}>0\ \ \ \text{and}\ \ \ A_{0}+A>0,$ (A.52) and $B$ is a symmetric matrix. Let $\displaystyle K_{t}=B\overline{(A_{0}+tA)^{-1}}\ {\overline{B}}(A_{0}+tA)^{-1}.$ (A.53) Then $t^{2p}\text{tr}(K_{t}^{p})$ and the maximum eigenvalue of $t^{2}K_{t}$ are both non-decreasing functions of $t$. 
Here $p$ is any positive integer. ###### Proof. First of all, we note that condition (A.52) implies $\displaystyle A_{0}+tA>0,\ \ \ \ \text{ for any }t\in(0,1).$ (A.54) So, in (A.53), ${(A_{0}+tA)^{-1}}$ exists. In the following we compute $\displaystyle\frac{d}{dt}\left[t^{2p}\text{tr}(K_{t}^{p})\right]$ (A.55) and show it’s non-negative. We need to simultaneously diagonalize $A$ and $A_{0}+tA$ to simplify the computation. For $t_{0}\in[0,1]$, find $P$ so that $\displaystyle PAP^{\ast}=\Lambda,\ \ \ \ P(A_{0}+t_{0}A)P^{\ast}=I.$ (A.56) Here $\Lambda=\text{diag}(\lambda_{1},\ ...\ ,\lambda_{n})$, with $\lambda_{\alpha}\in\mathbb{R}$. Let $\displaystyle H=PBP^{T},$ (A.57) then $H$ is a symmetric matrix. Plug (A.56) and (A.57) into (A.53) to simplify the expression of $K_{t_{0}}$. We get, at $t=t_{0}$, $\displaystyle K_{t_{0}}=P^{-1}HH^{\ast}P,$ (A.58) and so $\displaystyle PK_{t_{0}}^{p-1}P^{-1}=(HH^{\ast})^{p-1}.$ (A.59) Then we compute the derivative, $\displaystyle\frac{d}{dt}\left[\text{tr}(K_{t}^{p})t^{2p}\right]$ (A.60) $\displaystyle=$ $\displaystyle\frac{d}{dt}\left[t^{2p}\text{tr}\left(B\overline{(A_{0}+tA)^{-1}}\ {\overline{B}}(A_{0}+tA)^{-1}\right)^{p}\right]$ (A.61) $\displaystyle=$ $\displaystyle 2p\cdot t^{2p-1}\text{tr}(K_{t}^{p})-p\cdot t^{2p}\text{tr}\left[B\overline{(A_{0}+tA)^{-1}}\ {\overline{A}}\ \overline{(A_{0}+tA)^{-1}}\ {\overline{B}}(A_{0}+tA)^{-1}K_{t}^{p-1}\right]$ (A.62) $\displaystyle\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -p\cdot t^{2p}\text{tr}\left[B\overline{(A_{0}+tA)^{-1}}\ {\overline{B}}\ {(A_{0}+tA)^{-1}}\ A(A_{0}+tA)^{-1}K_{t}^{p-1}\right].$ (A.63) Plug (A.56) (A.57) and (A.59) into the expression above, we get, at $t=t_{0}$, $\displaystyle\frac{d}{dt}\left[t^{2p}\text{tr}(K_{t}^{p})\right]=p\cdot t_{0}^{2p-1}\left(2\cdot\text{tr}\left[(HH^{\ast})^{p}\right]-t_{0}\cdot\text{tr}\left[(H^{\ast}H)^{p}\Lambda\right]-t_{0}\cdot\text{tr}\left[(HH^{\ast})^{p}\Lambda\right]\right).$ (A.64) Using the fact that $H$ is symmetric, we know 
$\displaystyle\text{tr}\left[(H^{\ast}H)^{p}\Lambda\right]=\text{tr}\left[\Lambda(HH^{\ast})^{p}\right]=\text{tr}\left[(HH^{\ast})^{p}\Lambda\right].$ (A.65) So $\displaystyle\frac{d}{dt}\left[t^{2p}\text{tr}(K_{t}^{p})\right]$ $\displaystyle=2p\cdot t_{0}^{2p-1}\left(\text{tr}\left[(HH^{\ast})^{p}\right]-t_{0}\cdot\text{tr}\left[(HH^{\ast})^{p}\Lambda\right]\right)$ (A.66) $\displaystyle=2p\cdot t_{0}^{2p-1}\left(\text{tr}\left[(HH^{\ast})^{p}(I-t_{0}\Lambda)\right]\right).$ (A.67) It’s obvious that $HH^{\ast}$ is semi-positive definite, and, according to (A.56), $\displaystyle I-t_{0}\Lambda=PA_{0}P^{\ast}>0.$ (A.68) Therefore, we get $\displaystyle\frac{d}{dt}\left[t^{2p}\text{tr}(K_{t}^{p})\right]\geq 0.$ (A.69) ∎ ### A.3 Concavity In this appendix, we show the operator (4.3) is concave. Suppose $A$ is an $(n+1)\times(n+1)$ positive definite Hermitian matrix and $\mathcal{G}$ is an $n\times n$ positive definite Hermitian matrix. Denote the lower right $n\times n$ block of $A$ by $\mathcal{A}$. Let $\displaystyle F_{1}(A)=\log\left(A_{0\overline{0}}-A_{0{\overline{\beta}}}\mathcal{A}^{\alpha{\overline{\beta}}}A_{\alpha\overline{0}}\right)$ (A.70) and $\displaystyle F_{2}(A)=-\log\left(\mathcal{G}_{\alpha{\overline{\beta}}}\mathcal{A}^{\alpha{\overline{\beta}}}\right).$ (A.71) We will prove ###### Lemma A.8. $F_{1}$ is a concave function of $A$ in the space of positive definite $(n+1)\times(n+1)$ Hermitian matrices; $F_{2}$ is a concave function of $\mathcal{A}$ in the space of positive definite $n\times n$ Hermitian matrices. ###### Proof. For the concavity of $F_{1}$, actually, we can show $\displaystyle f_{1}(A)=A_{0\overline{0}}-A_{0{\overline{\beta}}}\mathcal{A}^{\alpha{\overline{\beta}}}A_{\alpha\overline{0}}$ (A.72) is a concave function of $A$. Let $H$ be an $(n+1)\times(n+1)$ Hermitian matrix. Similar to $A$, denote the lower-right block of $H$ by $\mathcal{H}$. Let $\displaystyle q(t)=f_{1}(A+tH),$ (A.73) for $t$ close to $0$. 
We will show that $q^{\prime\prime}(0)\leq 0.$ To simplify the computation, we diagonalize $\mathcal{A}$ and $\mathcal{H}$ simultaneously. Find an $n\times n$ matrix $\mathcal{P}$, so that $\displaystyle\mathcal{P}\mathcal{A}\mathcal{P}^{\ast}=I;\ \ \ \ \ \ \mathcal{P}\mathcal{H}\mathcal{P}^{\ast}=\Lambda=\text{diag}(\lambda_{1},\ ...\ ,\lambda_{n}).$ (A.74) Let $\displaystyle P=\left(\begin{array}[]{cc}1&0\\\ 0&\mathcal{P}\end{array}\right).$ (A.77) Note that another expression for $f_{1}$ is $\displaystyle f_{1}(A)=\frac{\det A}{\det{\mathcal{A}}}.$ (A.78) So $\displaystyle q(t)=\frac{\det(A+tH)}{\det(\mathcal{A}+t\mathcal{H})}=\frac{\det[P({{A}}+tH)P^{\ast}]}{\det[\mathcal{P}({\mathcal{A}}+t\mathcal{H})\mathcal{P}^{\ast}]}.$ (A.79) Denote that $\displaystyle PAP^{\ast}=\left(\begin{array}[]{cccc}a_{0\overline{0}}&\cdots&u_{0{\overline{\beta}}}&\cdots\\\ \vdots&&&\\\ u_{\alpha\overline{0}}&&I\\\ \vdots&\end{array}\right),\ \ \ \ \ PHP^{\ast}=\left(\begin{array}[]{cccc}h_{0\overline{0}}&\cdots&v_{0{\overline{\beta}}}&\cdots\\\ \vdots&&&\\\ v_{\alpha\overline{0}}&&\Lambda\\\ \vdots&\end{array}\right).$ (A.88) With these simplifications, $\displaystyle q(t)=a_{0\overline{0}}+th_{0\overline{0}}-\sum_{\alpha}\frac{(u_{0{\overline{\alpha}}}+tv_{0{\overline{\alpha}}})(u_{\alpha\overline{0}}+tv_{\alpha\overline{0}})}{1+t\lambda_{\alpha}}.$ (A.89) Straightforward computation gives $\displaystyle q^{\prime\prime}(0)=-\sum_{\alpha}(u_{0{\overline{\alpha}}}\lambda_{\alpha}-v_{0{\overline{\alpha}}})(u_{\alpha\overline{0}}\lambda_{\alpha}-v_{\alpha\overline{0}})\leq 0.$ (A.90) Therefore, $f_{1}$ is concave, and, consequently, $F_{1}=\log(f_{1})$ is concave. For the concavity of $F_{2}$, actually, we can show $\displaystyle f_{2}=\frac{1}{\text{tr}(\mathcal{G}\mathcal{A}^{-1})}$ (A.91) is a concave function of $\mathcal{A}$. 
This is a simple consequence of the well-known fact that $\displaystyle\frac{1}{\text{tr}(\mathcal{A}^{-1})}=\frac{\det(\mathcal{A})}{\sigma_{n-1}(\mathcal{A})}$ (A.92) is a concave function of $\mathcal{A}$, where $\sigma_{n-1}$ denotes the $(n-1)$-st elementary symmetric polynomial of the eigenvalues of $\mathcal{A}$. Indeed, since $\text{tr}(\mathcal{G}\mathcal{A}^{-1})=\text{tr}\big[(\mathcal{G}^{-1/2}\mathcal{A}\mathcal{G}^{-1/2})^{-1}\big]$ and $\mathcal{A}\mapsto\mathcal{G}^{-1/2}\mathcal{A}\mathcal{G}^{-1/2}$ is a linear bijection of the space of positive definite Hermitian matrices, the concavity of $f_{2}$ follows from that of $1/\text{tr}(\mathcal{A}^{-1})$. Therefore, $F_{2}=\log(f_{2})$ is concave. ∎ ## Acknowledgement This work is supported by the National Natural Science Foundation of China (No. 12288201) and the Project of Stable Support for Youth Team in Basic Research Field, CAS, (No. YSBR-001). The author would like to thank Li Chen, Xiuxiong Chen, Jianchun Chu, Jiyuan Han, Laszlo Lempert, Long Li, Yu Li, Guohuan Qiu, Zaijiu Shang, Li Sheng, Bing Wang, Youde Wang, Bin Xu for very helpful discussions. Jingchen Hu, Loo-Keng Hua Center for Mathematical Sciences, Email<EMAIL_ADDRESS>
# Progress, Justness and Fairness in Modal $\mu$-Calculus Formulae

Myrthe S. C. Spronck (https://orcid.org/0000-0003-2909-7515), Bas Luttik (https://orcid.org/0000-0001-6710-8436) and Tim A. C. Willemse (https://orcid.org/0000-0003-3049-7962)

Eindhoven University of Technology, The Netherlands

35th International Conference on Concurrency Theory (CONCUR 2024), September 9–13, 2024, Calgary, Canada (eds. Rupak Majumdar and Alexandra Silva).

###### Abstract When verifying liveness properties on a transition system, it is often necessary to discard spurious violating paths by making assumptions on which paths represent realistic executions. Capturing that some property holds under such an assumption in a logical formula is challenging and error-prone, particularly in the modal $\mu$-calculus. In this paper, we present template formulae in the modal $\mu$-calculus that can be instantiated to a broad range of liveness properties. We consider the following assumptions: progress, justness, weak fairness, strong fairness, and hyperfairness, each with respect to actions. The correctness of these formulae has been proven. ###### keywords: Modal $\mu$-calculus, Property specification, Completeness criteria, Progress, Justness, Fairness, Liveness properties ###### Acknowledgements. We thank an anonymous reviewer for the observation that weak and strong hyperfairness must be distinguished. ## 1 Introduction Formal verification through model checking requires a formalisation of the properties of the modelled system as formulae in some logic, such as LTL [33], CTL [18] or the modal $\mu$-calculus [30]. In this paper, we focus on the modal $\mu$-calculus, a highly expressive logic used in established model checkers such as mCRL2 [11] and CADP [20]. 
A frequently encountered problem when checking liveness properties is that spurious violations are found, such as paths on which some components never make progress. Often, such paths do not represent realistic executions of the system. It is then a challenge to restrict verification to those paths that do represent realistic system executions. For this, we use completeness criteria [22, 23]: predicates on paths that say which paths are to be regarded as realistic runs of the system. These runs are called complete runs. Examples of completeness criteria are progress, justness and fairness. It turns out that writing a modal $\mu$-calculus formula for a property being satisfied under a completeness criterion is non-trivial. Since the $\mu$-calculus is a branching-time logic, we cannot separately formalise when a path is complete and when it satisfies the property, and then combine the two formalisations with an implication. Instead, a more intricate integration of both aspects of a path is needed. Our aim is to achieve such an integration for a broad spectrum of liveness properties and establish the correctness of the resulting formulae. To this end, we shall consider a template property that can be instantiated to a plethora of liveness properties and, in particular, covers all liveness property patterns of [17]. Then, we present modal $\mu$-calculus formulae integrating the completeness criteria of progress, justness, weak fairness, strong fairness, and hyperfairness with this template property. As discussed in [24], for the formulation of realistic completeness criteria it is sometimes necessary to give special treatment to a set of blocking actions, i.e., actions that require cooperation of the environment in which the modelled system operates. Our template formulae are therefore parameterised with a set of blocking actions. 
We shall see that, given a set of blocking actions, there are two different interpretations of hyperfairness; we call these weak and strong hyperfairness. Regarding our presented formulae, the progress formula is similar to those commonly used for liveness properties even when completeness is not explicitly considered. Our formulae for justness, weak fairness and weak hyperfairness only subtly differ from each other. We characterise the similarities these three share and give a generic formula that can be adapted to represent all completeness criteria that meet these conditions. Lastly, we observe that strong fairness and strong hyperfairness do not meet these conditions. We give alternative formulae that are significantly more complex. Whether more efficient formulae for these completeness criteria exist remains an open problem. Modal $\mu$-calculus formulae are often hard to interpret. Accordingly, it is not trivial to see that our formulae indeed express the integration of liveness properties with completeness criteria. We have therefore included elaborate correctness proofs in the appendices. Our work is essentially a generalisation along two dimensions (viz., the completeness criterion and the liveness property) of the works of [35] and [7, 37]. In [35], the tool PASS is presented for automatically translating common property patterns into modal $\mu$-calculus formulae. Some of those patterns integrate an assumption that excludes paths deemed unrealistic, but since the exact assumption is not stated separately, we cannot make a formal comparison with our approach. In [7], a formula for justness is presented, covering one of the properties we cover. This formula forms the basis for our justness, weak fairness and weak hyperfairness formulae. Our formulae for strong fairness and strong hyperfairness are in part inspired by the formula for termination under strong fairness presented in [37]. The organisation of this paper is as follows. 
In section 2 we recap the relevant definitions on labelled transition systems, as well as the syntax and semantics of the modal $\mu$-calculus. In section 3, we motivate our work with an example, and in section 4 we give the completeness criteria we cover in this paper. In section 5, we formally identify the class of liveness properties we study and relate it to a popular class of properties. Our template formulae are presented in section 6, combining the completeness criteria from section 4 with the property template from section 5. We give a small application example in section 7 and discuss the scope of our work in section 8. Finally, we give our conclusions in section 9. ## 2 Preliminaries We represent models as labelled transition systems (LTSs). In this section, we briefly introduce the relevant definitions on LTSs, as well as the modal $\mu$-calculus. ### 2.1 Labelled Transition Systems ###### Definition 2.1. An LTS is a tuple $M=(\mathcal{S},s_{\mathit{init}},\mathit{Act},\mathit{Trans})$ where * • $\mathcal{S}$ is a set of states, * • $s_{\mathit{init}}\in\mathcal{S}$ is the initial state, * • $\mathit{Act}$ is a set of action labels, also referred to as the alphabet of the LTS, and * • $\mathit{Trans}\subseteq\mathcal{S}\times\mathit{Act}\times\mathcal{S}$ is a transition relation. In this paper, we only consider finite LTSs, such as the kind used in finite-state model checking. In particular, our formulae are proven correct under the assumption that $\mathit{Act}$ is finite. We write $s\xrightarrow{a}s^{\prime}$ as shorthand for $(s,a,s^{\prime})\in\mathit{Trans}$, and for a given transition $t=(s,a,s^{\prime})$ we write $\mathit{src}(\mathit{t})=s$, $\mathit{act}(\mathit{t})=a$ and $\mathit{trgt}(\mathit{t})=s^{\prime}$. For the definitions below, we fix an LTS $M=(\mathcal{S},s_{\mathit{init}},\mathit{Act},\mathit{Trans})$. ###### Definition 2.2. 
A _path_ is an $($alternating$)$ sequence $\pi=s_{0}t_{1}s_{1}t_{2}\ldots$ of states $s_{0},s_{1},\ldots\in\mathcal{S}$ and transitions $t_{1},t_{2},\ldots\in\mathit{Trans}$. A path must start with a state, and must be either infinite, or end in a state. In the latter case, the end of the path is referred to as the _final state_. For all $i\geq 0$, $t_{i+1}$ must satisfy $\mathit{src}(\mathit{t_{i+1}})=s_{i}$ and $\mathit{trgt}(\mathit{t_{i+1}})=s_{i+1}$. We sometimes refer to transitions on a path as steps. We say an action occurs on a path if a transition labelled with that action is on the path. We call a path on which no action in some set $\alpha$ occurs an _$\alpha$ -free_ path. One path can be appended to another: let $\pi^{\prime}=s_{0}^{\prime}t_{1}^{\prime}s_{1}^{\prime}\ldots t_{n}^{\prime}s_{n}^{\prime}$ and $\pi^{\prime\prime}=s_{0}^{\prime\prime}t_{1}^{\prime\prime}s_{1}^{\prime\prime}\ldots$, where $\pi^{\prime}$ must be finite and $\pi^{\prime\prime}$ may be finite or infinite. Then the path $\pi$ defined as $\pi^{\prime\prime}$ appended to $\pi^{\prime}$ is written as $\pi=\pi^{\prime}\cdot\pi^{\prime\prime}=s_{0}^{\prime}t_{1}^{\prime}s_{1}^{\prime}\ldots t_{n}^{\prime}s_{n}^{\prime}t_{1}^{\prime\prime}s_{1}^{\prime\prime}\ldots$. This is only allowed when $s_{n}^{\prime}=s_{0}^{\prime\prime}$. ###### Definition 2.3. We say that: * • A transition $t\in\mathit{Trans}$ is _enabled_ in a state $s\in\mathcal{S}$ if, and only if, $\mathit{src}(\mathit{t})=s$. * • An action $a\in\mathit{Act}$ is _enabled_ in a state $s\in\mathcal{S}$ if, and only if, there exists a transition $t\in\mathit{Trans}$ with $\mathit{act}(\mathit{t})=a$ that is enabled in $s$. * • An action $a\in\mathit{Act}$ is _perpetually enabled_ on a path $\pi$ if $a$ is enabled in every state of $\pi$. * • An action $a\in\mathit{Act}$ is _relentlessly enabled_ on a path $\pi$ if every suffix of $\pi$ contains a state in which $a$ is enabled. 
* • A state without enabled actions is called a _deadlock state_. Every action that is perpetually enabled on a path is also relentlessly enabled on that path. ### 2.2 Modal $\mu$-Calculus The modal $\mu$-calculus is given in [30]. Our presentation of the logic is based on [8, 9, 10, 27]. The syntax of the modal $\mu$-calculus is described by the following grammar, in which $a$ ranges over the set of actions $\mathit{Act}$, and $X$ ranges over a set of formal variables $\mathit{Var}$. $\phi,\psi::=\mathit{ff}\mid X\mid\neg\phi\mid\phi\lor\psi\mid\langle\mathit{a}\rangle\phi\mid\mu\mathit{X}.\mathit{\phi}$ Here $\mathit{ff}$ is false; $\neg$ represents negation; $\lor$ is disjunction; $\langle\mathit{\leavevmode\nobreak\ }\rangle$ is the diamond operator; and $\mu$ is the least fixpoint operator. We say that $\mu\mathit{X}.\mathit{\phi}$ binds $X$ in $\phi$. Variables that are unbound in a formula are free, and a formula without free variables is closed. A modal $\mu$-calculus formula $\phi$ must both adhere to this grammar and be _syntactically monotonic_ , meaning that for every occurrence of $\mu X.\psi$ in $\phi$, every free occurrence of $X$ in $\psi$ must always be preceded by an even number of negations. We give the semantics of a modal $\mu$-calculus formula $\phi$ with respect to an arbitrary LTS $M=(\mathcal{S},s_{\mathit{init}},\mathit{Act},\mathit{Trans})$ and environment $\mathit{e}:\mathit{Var}\to 2^{\mathcal{S}}$. 
$\displaystyle\llbracket\mathit{ff}\rrbracket_{\mathit{e}}^{M}=\emptyset$ $\displaystyle\llbracket\phi\lor\psi\rrbracket_{\mathit{e}}^{M}=\llbracket\phi\rrbracket_{\mathit{e}}^{M}\cup\llbracket\psi\rrbracket_{\mathit{e}}^{M}$ $\displaystyle\llbracket X\rrbracket_{\mathit{e}}^{M}=\mathit{e}(X)$ $\displaystyle\llbracket\langle\mathit{a}\rangle\phi\rrbracket_{\mathit{e}}^{M}=\left\\{s\in\mathcal{S}\mid\exists_{s^{\prime}\in\mathcal{S}}.s\xrightarrow{a}s^{\prime}\land s^{\prime}\in\llbracket\phi\rrbracket_{\mathit{e}}^{M}\right\\}$ $\displaystyle\llbracket\neg\phi\rrbracket_{\mathit{e}}^{M}=\mathcal{S}\setminus\llbracket\phi\rrbracket_{\mathit{e}}^{M}$ $\displaystyle\llbracket\mu\mathit{X}.\mathit{\phi}\rrbracket_{\mathit{e}}^{M}=\bigcap\left\\{\mathcal{S}^{\prime}\subseteq\mathcal{S}\mid\mathcal{S}^{\prime}\supseteq\llbracket\phi\rrbracket_{\mathit{e}[X:=\mathcal{S}^{\prime}]}^{M}\right\\}$ In contexts where the model is fixed, we drop the $M$ from $\llbracket\phi\rrbracket_{\mathit{e}}^{M}$. Additionally, we drop $\mathit{e}$ when the environment does not affect the semantics of the formula, e.g. with closed formulae. We use conjunction, $\land$, and implication, $\Rightarrow$, as the usual abbreviations. We also add several abbreviations: $\mathit{tt}=\neg\mathit{ff}$ for true; $[\mathit{a}]\phi=\neg\langle\mathit{a}\rangle\neg\phi$ for the box operator; and $\nu X.\phi=\neg\mu X.(\neg\phi[X:=\neg X])$ for the greatest fixpoint. To express formulae more compactly, we extend our syntax to allow regular expressions over finite sets of actions to be used in the box and diamond operators. Since we limit this to finite sets of actions, the syntactical extension does not increase the expressivity of the logic, it merely simplifies the presentation. This is a common extension of the $\mu$-calculus syntax, for instance shown in [27], based on the operators defined for PDL [19]. 
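Since we restrict attention to finite LTSs, the fixpoint semantics given above is directly computable: each least fixpoint can be evaluated by upward Knaster–Tarski iteration, which terminates because the semantic map is monotone on a finite lattice. The following sketch is ours, not the paper's; the tuple encoding of formulae and all names are illustrative.

```python
# Evaluating modal mu-calculus formulae on a finite LTS by fixpoint
# iteration. Formulae are nested tuples, e.g. ("dia", "a", ("var", "X"));
# the encoding and all names are illustrative, not from the paper.

def ev(phi, states, trans, env=None):
    """Return the set of states satisfying phi, under environment env."""
    env = env or {}
    op = phi[0]
    if op == "ff":
        return frozenset()
    if op == "var":
        return env[phi[1]]
    if op == "not":
        return frozenset(states) - ev(phi[1], states, trans, env)
    if op == "or":
        return ev(phi[1], states, trans, env) | ev(phi[2], states, trans, env)
    if op == "dia":  # <a> phi: states with an a-successor satisfying phi
        target = ev(phi[2], states, trans, env)
        return frozenset(s for (s, a, s2) in trans
                         if a == phi[1] and s2 in target)
    if op == "mu":  # mu X. phi: least fixpoint, iterate upward from empty
        x, body = phi[1], phi[2]
        cur = frozenset()
        while True:
            nxt = ev(body, states, trans, {**env, x: cur})
            if nxt == cur:
                return cur
            cur = nxt
    raise ValueError(f"unknown operator {op!r}")

# mu X. (<b>tt \/ <a>X): states that can reach a b-step via a-steps.
tt = ("not", ("ff",))
phi = ("mu", "X", ("or", ("dia", "b", tt), ("dia", "a", ("var", "X"))))
print(ev(phi, {0, 1, 2}, {(0, "a", 1), (1, "b", 2)}))  # frozenset({0, 1})
```

Monotonicity guarantees the inner loop stabilises after at most $|\mathcal{S}|$ rounds; greatest fixpoints can be handled dually by iterating downward from the full state set.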
We overload the symbol for a single action to also represent the singleton set containing that action. We use union, intersection, set difference, and set complement to describe sets of actions as usual. Regular expressions over sets of actions, henceforth referred to as _regular formulae_, are defined by the following grammar: $R,Q::=\varepsilon\mid\alpha\mid R\cdot Q\mid R+Q\mid\mathit{R}^{\star}$ The empty sequence is represented by $\varepsilon$, and $\alpha$ ranges over sets of actions. The symbol $\cdot$ represents concatenation, $+$ the union of formulae, and ${}^{\star}$ is closure under repetition. We define the meaning of the diamond operator over the new regular formulae as abbreviations of standard modal $\mu$-calculus formulae: $\displaystyle\langle\mathit{\varepsilon}\rangle\phi$ $\displaystyle=\phi$ $\displaystyle\langle\mathit{\alpha}\rangle\phi$ $\displaystyle=\bigvee_{a\in\alpha}\langle\mathit{a}\rangle\phi$ $\displaystyle\langle\mathit{R\cdot Q}\rangle\phi$ $\displaystyle=\langle\mathit{R}\rangle\langle\mathit{Q}\rangle\phi$ $\displaystyle\langle\mathit{R+Q}\rangle\phi$ $\displaystyle=\langle\mathit{R}\rangle\phi\lor\langle\mathit{Q}\rangle\phi$ $\displaystyle\langle\mathit{\mathit{R}^{\star}}\rangle\phi$ $\displaystyle=\mu\mathit{X}.(\mathit{\langle\mathit{R}\rangle X\lor\phi})$ The box operator is defined dually. We say a path $\pi$ _matches_ a regular formula $R$ if the sequence of actions on $\pi$ is in the language of $R$. ## 3 Motivation When analysing algorithms and systems, there are many different properties which may need to be checked. For instance, when model checking mutual exclusion algorithms we want to check linear properties such as mutual exclusion and starvation freedom, but also branching properties such as invariant reachability of the critical section. 
The modal $\mu$-calculus, which subsumes even CTL⋆, is able to express all these properties and more, and is therefore used in toolsets such as mCRL2 [11] and CADP [20]. An issue that is frequently encountered when checking liveness properties in particular is that the model admits executions that violate the property but do not represent realistic executions of the real system. For example, models of algorithms that contain a busy-waiting loop usually admit executions where processes do nothing except wait. Infinite loops can also be introduced by abstractions of reality, such as modelling a loop to represent an event that occurs an arbitrary, but finite, number of times. Counterexamples that are due to such modelling artefacts obscure whether the property is satisfied on all realistic executions. The problem we address in this paper is how to avoid such counterexamples and check properties only on realistic executions. We illustrate the problem with an example, which we also employ as a running example throughout this paper. ###### Example 3.1. Consider the coffee machine modelled in Figure 1. When a user places an $\mathit{order}$ for one or more cups of coffee, they are required to scan their payment $\mathit{card}$. If the user prefers using coinage, they switch the machine to its alternate mode ($\mathit{to\\_cash}$), and then pay in $\mathit{cash}$. In the alternate mode, the machine can be switched back using $\mathit{to\\_card}$. After payment, the machine will $\mathit{brew}$ the cup(s) of coffee. This is modelled as a non-deterministic choice between a looping and a final $\mathit{brew}$ action, since at least one cup was ordered. Finally, the coffee is $\mathit{deliver}$ed and the machine awaits the next order. We consider three example properties. 1. 
_Single order_: whenever an $\mathit{order}$ is made, there may not be a second $\mathit{order}$ until a $\mathit{deliver}$ has taken place, $[\mathit{Act}^{\star}\cdot\mathit{order}\cdot\overline{\mathit{deliver}}^{\star}\cdot\mathit{order}]\mathit{ff}$. 2. _Inevitable delivery_: whenever an $\mathit{order}$ is made, there will inevitably be an occurrence of $\mathit{deliver}$, $[\mathit{Act}^{\star}\cdot\mathit{order}]\mu X.(\langle\mathit{Act}\rangle\mathit{tt}\land[\overline{\mathit{deliver}}]X)$. 3. _Possible delivery_: it is invariantly possible to eventually execute the $\mathit{deliver}$ action, $[\mathit{Act}^{\star}]\langle\mathit{Act}^{\star}\cdot\mathit{deliver}\rangle\mathit{tt}$. The described problem occurs with _inevitable delivery_: $s_{0}t_{1}s_{1}t_{4}(s_{3}t_{6})^{\omega}$ is a violating path, on which infinitely many cups are part of the same order. Similarly, $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$ violates the property because the user never decides on a payment method. The first counterexample represents an impossible scenario, and the second gives information on problematic user behaviour but tells us little about the machine itself. Figure 1: The LTS for the running example, with states $s_{0},\dots,s_{4}$ and transitions $t_{1}\colon\mathit{order}$, $t_{2}\colon\mathit{to\\_cash}$, $t_{3}\colon\mathit{to\\_card}$, $t_{4}\colon\mathit{card}$, $t_{5}\colon\mathit{cash}$, $t_{6}\colon\mathit{brew}$, $t_{7}\colon\mathit{brew}$, $t_{8}\colon\mathit{deliver}$. The kind of spurious counterexamples discussed in the example above primarily occur when checking liveness properties. We therefore focus on liveness properties, such as _inevitable delivery_, in this paper. We will briefly discuss safety properties in section 8. There are ad-hoc solutions to exclude unrealistic counterexamples, e.g. 
altering the model to remove the unrealistic executions, or tailoring the formula to exclude specific problematic counterexamples [26]. Such ad-hoc solutions are undesirable because they clutter the model or the formula, and are therefore error-prone. We aim for a more generic solution, of which the correctness can be established once and for all. Such a generic solution requires, on the one hand, a general method to distinguish between realistic and unrealistic executions, and, on the other hand, a general class of liveness properties. A general method to distinguish between realistic and unrealistic executions is provided by _completeness criteria_ [22, 23], i.e., predicates on paths that label some as complete and all others as incomplete. If a property is satisfied on all complete paths, it is satisfied under the given completeness criterion. Completeness criteria give us a model-independent way to determine which paths are unrealistic, and therefore a generic solution to the stated problem. Depending on the property and the model, we may prefer a different completeness criterion. We therefore consider several criteria instead of fixing one specific criterion. These completeness criteria are discussed in section 4. To find a general class of liveness properties, we take the _property specification patterns_ (PSP) of [17] as a starting point. Since the modal $\mu$-calculus as presented in Section 2.2 supports references to action occurrences but not state information, we specifically interpret these patterns on action occurrences. Our first contribution, in section 5, will be to characterise a class of liveness properties that subsumes all liveness properties expressible in PSP. Our second and main contribution is then presented in section 6, where we combine the identified completeness criteria with our class of liveness properties, yielding template formulae for each combination. 
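To make the running example concrete, here is a minimal sketch encoding the LTS of Figure 1 as explicit data; the source and target states of each transition are reconstructed from the paths discussed above, so treat them as our reading of the figure rather than a verbatim reproduction. The check confirms that the lasso path $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$ is a spurious counterexample to _inevitable delivery_: an $\mathit{order}$ occurs, yet $\mathit{deliver}$ never occurs afterwards.

```python
# Figure 1's coffee machine as explicit data; transition endpoints are
# reconstructed from the example paths in the text.
TRANS = {
    "t1": ("s0", "order",   "s1"),
    "t2": ("s1", "to_cash", "s2"),
    "t3": ("s2", "to_card", "s1"),
    "t4": ("s1", "card",    "s3"),
    "t5": ("s2", "cash",    "s3"),
    "t6": ("s3", "brew",    "s3"),   # looping brew: another cup
    "t7": ("s3", "brew",    "s4"),   # final brew
    "t8": ("s4", "deliver", "s0"),
}

def actions_on(transition_names):
    """Actions occurring on a sequence of transition names."""
    return {TRANS[t][1] for t in transition_names}

# Lasso path s0 t1 (s1 t2 s2 t3)^omega: finite prefix plus repeated cycle.
prefix, cycle = ["t1"], ["t2", "t3"]

# An order occurs on the path, but deliver occurs nowhere on it, so the
# infinite path violates inevitable delivery.
assert "order" in actions_on(prefix + cycle)
assert "deliver" not in actions_on(prefix + cycle)
print("spurious counterexample confirmed")
```

Completeness criteria, introduced next, give a principled way to rule such paths out instead of patching the model or the formula case by case.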
## 4 Completeness Criteria It is often assumed, sometimes implicitly, that as long as a system is capable of executing actions, it will continue to do so [25]. One could consider this the “default” completeness criterion, also known as _progress_ [22]; it says that only paths that are infinite or end in a deadlock state model complete runs and are hence complete paths. We first present a modified version of the progress assumption that allows some actions to be blocked by the environment. We then define the other completeness criteria considered in this paper. As already remarked in the previous section, the modal $\mu$-calculus is most suited to reasoning about action occurrences. Hence, we focus on completeness criteria defined on action labels. For more general definitions on sets of transitions, see [25]. ### 4.1 Progress with Blocking Actions In [24], it is argued that it is useful to consider some actions of an LTS as blocking. A blocking action is an action that depends on participation by the environment of the modelled system. Consequently, even when such an action is enabled in a state because the system is willing to perform it, it may not be possible for the action to occur because the environment is uncooperative. In this paper, we refer to the set of blocking actions as $\mathcal{B}\subseteq\mathit{Act}$, and the set of non-blocking actions as $\overline{\mathit{\mathcal{B}}}=\mathit{Act}\setminus\mathcal{B}$. Which actions are in $\mathcal{B}$ is a modelling choice. The default progress assumption can be adapted to account for blocking actions [21, 25]. ###### Definition 4.1. A state $s\in\mathcal{S}$ is a _$\mathcal{B}$ -locked state_ if, and only if, all actions enabled in $s$ are in $\mathcal{B}$. A path $\pi$ is _$\mathcal{B}$ -progressing_ if, and only if, it is infinite or ends in a $\mathcal{B}$-locked state. We refer to the assumption that only $\mathcal{B}$-progressing paths represent complete executions as $\mathcal{B}$-progress. 
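On a finite LTS, $\mathcal{B}$-lockedness of a state is a simple inclusion check, so deciding whether a finite path is $\mathcal{B}$-progressing amounts to inspecting its final state. A sketch with illustrative names (not from the paper):

```python
# Checking B-lockedness: a state where every enabled action is blocking.
# A finite path is B-progressing iff it ends in a B-locked state;
# infinite paths are B-progressing by definition.

def enabled_actions(state, trans):
    return {a for (s, a, _) in trans if s == state}

def is_b_locked(state, trans, blocking):
    return enabled_actions(state, trans) <= blocking

trans = {("s0", "order", "s1"), ("s1", "card", "s3"),
         ("s3", "brew", "s3"), ("s3", "brew", "s4")}

# With B = {brew}, state s3 is B-locked: its only enabled action is brew.
# With B = {}, it is not, since an action is still enabled there.
assert is_b_locked("s3", trans, {"brew"})
assert not is_b_locked("s3", trans, set())
```

Note that a deadlock state is $\mathcal{B}$-locked for every choice of $\mathcal{B}$, so $\emptyset$-progress is the weakest instance of this family.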
The “default” completeness criterion is equivalent to $\emptyset$-progress. ###### Example 4.2. Consider Figure 1. Here, $\mathit{order}$ is an environment action, since it involves the user. If we do not assume that there will always be a next user, we should add $\mathit{order}$ to $\mathcal{B}$. In some cases, we may want to consider the possibility that the machine is broken and not capable of producing coffee. In those cases, we should add $\mathit{brew}$ to $\mathcal{B}$. Our choice of $\mathcal{B}$ affects which paths are progressing: $s_{0}t_{1}s_{1}t_{4}s_{3}$ is not $\emptyset$-progressing, but it is $\\{\mathit{brew}\\}$-progressing. All completeness criteria we discuss in this paper are parameterised with a set of blocking actions. The justness and fairness assumptions discussed in the remainder of this section label paths as incomplete if certain actions do not occur. Since it can never be assumed that the environment supports the occurrence of blocking actions, we do not want justness and fairness to label paths as incomplete due to the non-occurrence of blocking actions. For readability the prefix $\mathcal{B}$\- will sometimes be dropped from the names of the completeness criteria and their acronyms. From this point, we will always discuss completeness criteria with respect to a set of blocking actions. ### 4.2 Justness Justness [21, 25] is a natural extension of progress to exclude infinite paths instead of finite paths. The idea is that in addition to the system as a whole progressing, individual components in that system should also be able to make progress unless they are prevented from doing so by other components. It is a weaker, and hence frequently more justifiable, assumption than the fairness assumptions we cover in the next section. In its original presentation, justness is defined with respect to sets of transitions. 
Which components contribute to a transition, and how they do so, determines which transitions interfere with each other. We here consider justness defined with respect to actions instead, based on [7]. We do not go into detail here on how it is determined which actions interfere with each other. For discussions on this topic and when the two definitions coincide, see [6, 7, 21]. Intuitively, justness of actions says that if an action $a$ is enabled at some point of a path, then eventually some action that can interfere with the occurrence of $a$ must occur in that path. That action may be $a$ itself. In order to formalise the concept of interference, we require the concept of a _concurrency relation on actions_, $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$. ###### Definition 4.3. Relation $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}\subseteq\mathit{Act}\times\mathit{Act}$ is a _concurrency relation on actions_ if, and only if: 1. $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ is irreflexive, and 2. for all $a\in\mathit{Act}$, if $\pi$ is a path from a state $s$ in which $a$ is enabled to a state $s^{\prime}\in\mathcal{S}$ such that $a\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}b$ for all actions $b$ occurring in $\pi$, then $a$ is enabled in $s^{\prime}$. We write $\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}$ for the complement of $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$. Note that $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ may be asymmetric. Read $a\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}b$ as “$a$ is concurrent with $b$”, and $a\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}b$ as “$b$ interferes with $a$” or “$b$ eliminates $a$”. A labelled transition system can be extended with a concurrency relation on actions, which produces a _labelled transition system with concurrency_ (LTSC). 
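Condition 2 of Definition 4.3 quantifies over paths, but on a finite LTS it suffices (by induction on the length of $\pi$, in our reading) to check a one-step closure property: whenever $a$ is enabled in $s$ and $s\xrightarrow{b}s^{\prime}$ with $a$ concurrent with $b$, then $a$ must still be enabled in $s^{\prime}$. A sketch of such a validity check, with illustrative names:

```python
# Checking that a candidate concurrency relation on actions satisfies
# Definition 4.3 on a finite LTS, via the one-step closure property.

def concurrency_violations(trans, conc):
    """Yield (a, s, b, s2): a is enabled in s, s --b--> s2 with a
    declared concurrent with b, yet a is not enabled in s2; plus any
    reflexive pairs, which irreflexivity forbids."""
    enabled = {}
    for (s, a, _) in trans:
        enabled.setdefault(s, set()).add(a)
    for (s, b, s2) in trans:
        for a in enabled[s]:
            if (a, b) in conc and a not in enabled.get(s2, set()):
                yield (a, s, b, s2)
    for a in {a for (_, a, _) in trans}:
        if (a, a) in conc:
            yield (a, None, a, None)

# In the machine of Figure 1, card is enabled in s1 but not in s2, so
# declaring card concurrent with to_cash makes the relation invalid.
trans = {("s1", "to_cash", "s2"), ("s2", "to_card", "s1"),
         ("s1", "card", "s3"), ("s2", "cash", "s3")}
bad = list(concurrency_violations(trans, {("card", "to_cash")}))
assert bad == [("card", "s1", "to_cash", "s2")]
```

An empty result certifies the candidate relation as a valid concurrency relation on actions for the given LTS.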
We here present the definition for justness of actions with blocking actions. ###### Definition 4.4. A path $\pi$ satisfies _$\mathcal{B}$ -justness of actions_ _( $\mathcal{B}$-JA)_ if, and only if, for each action $a\in\overline{\mathit{\mathcal{B}}}$ that is enabled in some state $s$ in $\pi$, an action $a^{\prime}\in\mathit{Act}$ occurs in the suffix $\pi^{\prime}$ of $\pi$ starting in $s$ such that $a\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}a^{\prime}$. ###### Example 4.5. Consider Figure 1, specifically the path $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$. On this path the user keeps switching the mode of the machine, without paying. To see if this path satisfies $\emptyset$-JA, we need a concrete $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$. Consider a $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ such that $\mathit{card}\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}\mathit{to\\_cash}$, $\mathit{cash}\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}\mathit{to\\_card}$, and $a\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}a$ for all action labels $a$. These are all required for $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ to be a valid concurrency relation. This is because by 4.3, $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ must be irreflexive, and when an action is enabled it must remain enabled on any path on which no interfering action occurs. Since $\mathit{card}$ is enabled in $s_{1}$ but not $s_{2}$, it must be the case that $\mathit{card}\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}\mathit{to\\_cash}$. Similarly, we must have $\mathit{cash}\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}\mathit{to\\_card}$. With such a concurrency relation, the path satisfies $\emptyset$-JA since every action that is enabled is subsequently eliminated. 
In this LTS, there is no valid choice of $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ that makes this path violate $\emptyset$-JA. However, if we modify Figure 1 by replacing both $\mathit{card}$ and $\mathit{cash}$ with the action $\mathit{pay}$, then 4.3 does not enforce that $\mathit{to\\_cash}$ and $\mathit{to\\_card}$ interfere with the actions on $t_{4}$ and $t_{5}$, since $\mathit{pay}$ is enabled in both $s_{1}$ and $s_{2}$. We can choose whether $\mathit{pay}\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}\mathit{to\\_cash}$ and $\mathit{pay}\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}\mathit{to\\_card}$. If $\mathit{pay}$ is concurrent with both, then the path $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$ violates $\emptyset$-JA. If either interferes with $\mathit{pay}$, then the path satisfies $\emptyset$-JA. ### 4.3 Fairness There are situations where we want to exclude a larger set of infinite paths than those excluded by justness, or where we do not have a concurrency relation. For this, we can use what are called _fairness assumptions_ in the literature. These are a class of predicates on paths that distinguish between _fair_ and _unfair_ infinite paths. It is assumed that only the fair paths are complete. For an overview of many common fairness assumptions, see [25]. In this paper, we consider weak fairness of actions, strong fairness of actions, and (weak and strong) hyperfairness of actions. Each of the assumptions we discuss has the general shape, adapted from [3], “if it is sufficiently often possible for an action to occur, it will occur sufficiently often”. What it means for an action to be “sufficiently often possible” and “occur sufficiently often” depends on the exact assumption. We first discuss _weak fairness of actions_ , which says that actions that are always enabled must eventually occur. It is one of the most commonly discussed fairness assumptions. 
We define weak fairness of actions formally, with respect to a set of blocking actions $\mathcal{B}$. ###### Definition 4.6. A path $\pi$ satisfies _$\mathcal{B}$ -weak fairness of actions_ _( $\mathcal{B}$-WFA)_ if, and only if, for every suffix $\pi^{\prime}$ of $\pi$, every action $a\in\overline{\mathit{\mathcal{B}}}$ that is perpetually enabled in $\pi^{\prime}$ occurs in $\pi^{\prime}$. ###### Example 4.7. Consider again Figure 1, with $\mathit{card}$ and $\mathit{cash}$ both replaced by $\mathit{pay}$. Then the path $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$ violates $\emptyset$-WFA, since $\mathit{pay}$ is perpetually enabled in a suffix of this path without occurring. If there are two separate actions for paying with cash or card, the path satisfies $\emptyset$-WFA because no actions are perpetually enabled in any suffix. Next, _strong fairness of actions_ says that on a path, all actions that are enabled infinitely often, must occur infinitely often. Formally, we define strong fairness of actions as: ###### Definition 4.8. A path $\pi$ satisfies _$\mathcal{B}$ -strong fairness of actions_ _( $\mathcal{B}$-SFA)_ if, and only if, for every suffix $\pi^{\prime}$ of $\pi$, every action $a\in\overline{\mathit{\mathcal{B}}}$ that is relentlessly enabled in $\pi^{\prime}$ occurs in $\pi^{\prime}$. Strong fairness is a stronger assumption than weak fairness, since it classifies more paths as incomplete. This follows from perpetual enabledness implying relentless enabledness. ###### Example 4.9. The path $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$ in Figure 1 satisfies $\emptyset$-WFA since there are no perpetually enabled actions in any suffix of the path. However, $\mathit{cash}$ is relentlessly enabled in suffixes of this path, and yet does not occur. Hence, this path violates $\emptyset$-SFA. Finally, we discuss _hyperfairness of actions_. Informally, it says that on all fair paths, every action that can always become enabled must occur infinitely often. 
The idea is that if there is always a reachable future where the action occurs, then it is merely unlucky if the action does not occur infinitely often. The concept of hyperfairness is introduced and named in [4]. For our presentation of hyperfairness, we use the generalisation from [31]. We first formalise what it means that an action “can become” enabled, by defining _reachability_. ###### Definition 4.10. We say that: * • A state $s\in\mathcal{S}$ is _$\mathcal{B}$ -reachable_ from some state $s^{\prime}\in\mathcal{S}$ if, and only if, there exists a $\mathcal{B}$-free path starting in $s^{\prime}$ that ends in $s$. * • An action $a\in\mathit{Act}$ is _$\mathcal{B}$ -reachable_ from some state $s\in\mathcal{S}$ if, and only if, there exists a state $s^{\prime}\in\mathcal{S}$ that is $\mathcal{B}$-reachable from $s$ and in which $a$ is enabled. * • A state $s\in\mathcal{S}$ or action $a\in\mathit{Act}$ is _perpetually $\mathcal{B}$-reachable_ on a path $\pi$ if, and only if, it is $\mathcal{B}$-reachable from every state of $\pi$. * • A state $s\in\mathcal{S}$ or action $a\in\mathit{Act}$ is _relentlessly $\mathcal{B}$-reachable_ on a path $\pi$ if, and only if, every suffix of $\pi$ contains a state from which it is $\mathcal{B}$-reachable. From the intuitive description of hyperfairness, it is clear it is a variant of weak or strong fairness with reachability instead of enabledness, giving us two possible definitions of hyperfairness. We name the two interpretations weak hyperfairness and strong hyperfairness respectively. Both interpretations of hyperfairness are reasonable, and in fact when not considering blocking actions, they coincide [31]. However, this is not the case when blocking actions are included in the definitions. We therefore consider both variants. ###### Definition 4.11. 
A path $\pi$ satisfies _weak $\mathcal{B}$-hyperfairness of actions_ _( $\mathcal{B}$-WHFA)_ if, and only if, for every suffix $\pi^{\prime}$ of $\pi$, every action $a\in\overline{\mathit{\mathcal{B}}}$ that is perpetually $\mathcal{B}$-reachable in $\pi^{\prime}$ occurs in $\pi^{\prime}$. ###### Definition 4.12. A path $\pi$ satisfies _strong $\mathcal{B}$-hyperfairness of actions_ _( $\mathcal{B}$-SHFA)_ if, and only if, for every suffix $\pi^{\prime}$ of $\pi$, every action $a\in\overline{\mathit{\mathcal{B}}}$ that is relentlessly $\mathcal{B}$-reachable in $\pi^{\prime}$ occurs in $\pi^{\prime}$. Since enabledness implies reachability, WHFA is stronger than WFA, and SHFA is stronger than SFA. Perpetual reachability implies relentless reachability, so SHFA is also stronger than WHFA. However, as the next examples will show, SFA and WHFA are incomparable. ###### Example 4.13. The impact of hyperfairness can clearly be seen when non-determinism is used. Consider the path $s_{0}t_{1}s_{1}t_{4}(s_{3}t_{6})^{\omega}$ in Figure 1. This path satisfies $\emptyset$-SFA, since the only action that is relentlessly enabled on this path, $\mathit{brew}$, also occurs infinitely often. However, as long as $\mathit{deliver}\not\in\mathcal{B}$ and $\mathit{brew}\not\in\mathcal{B}$, this path does not satisfy $\mathcal{B}$-WHFA or $\mathcal{B}$-SHFA: $\mathit{deliver}$ is $\mathcal{B}$-reachable from $s_{3}$, and therefore is perpetually and relentlessly $\mathcal{B}$-reachable in a suffix of this path, but does not occur. We here see that $\mathcal{B}$-SFA does not imply $\mathcal{B}$-WHFA. ###### Example 4.14. In Figure 1, consider $s_{0}t_{1}(s_{1}t_{2}s_{2}t_{3})^{\omega}$ with $\mathcal{B}=\\{\mathit{order},\mathit{to\\_cash},\mathit{to\\_card}\\}$. This path satisfies $\mathcal{B}$-WHFA because $\mathit{card}$ and $\mathit{cash}$ are only $\mathcal{B}$-reachable from $s_{1}$ and $s_{2}$ respectively. 
They are not perpetually $\mathcal{B}$-reachable in any suffix of this path, therefore $\mathcal{B}$-WHFA is satisfied. However, they are relentlessly $\mathcal{B}$-reachable, so $\mathcal{B}$-SHFA is violated. This demonstrates that $\mathcal{B}$-WHFA and $\mathcal{B}$-SHFA do not coincide when blocking actions are considered. The actions $\mathit{card}$ and $\mathit{cash}$ are also relentlessly enabled, so $\mathcal{B}$-SFA is also violated. Hence, $\mathcal{B}$-WHFA does not imply $\mathcal{B}$-SFA.

## 5 A Generalisation of the Property Specification Liveness Patterns

Dwyer, Avrunin and Corbett observed that a significant majority of properties that are used in practice can be fit into a set of property specification patterns [17]. These patterns consist of a _behaviour_ that must be satisfied and a _scope_ within a path that delimits where the behaviour must be satisfied. We recall the behaviours and scopes presented in [17] in Appendix A. We focus on expressing properties that are captured by PSP. Of all behaviours considered in [17], only existence, existence at least, response and chain response represent pure liveness properties. The global and after scopes, when combined with any of these four behaviours, give liveness properties. We argue in Appendix A why only these patterns of PSP represent pure liveness properties. All other scopes result in safety properties or properties that combine safety and liveness. Of those, we cover the until and after-until scopes, since we can incorporate them into our formulae with little difficulty. For behaviours, existence at least says some action in a set $S_{r}$ must occur at least $k$ times in the scope; when $k=1$ we call this existence. The response behaviour requires that whenever an action in a set $S_{q}$ occurs, it must be followed by the occurrence of an action in $S_{r}$. When chains of action occurrences are used instead of individual action occurrences, this is called chain response. 
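In isolation, the two simplest behaviours are easy to state over a finite trace of actions. The sketch below is ours, with $S_{q}$ and $S_{r}$ assumed disjoint; it deliberately ignores scopes and completeness criteria, which are the subject of the remainder of the paper.

```python
# Existence-at-least and response behaviours on a finite action trace.
def existence_at_least(trace, S_r, k):
    """At least k occurrences of actions in S_r; k = 1 is plain existence."""
    return sum(a in S_r for a in trace) >= k

def response(trace, S_q, S_r):
    """Every occurrence of an action in S_q is followed by one in S_r."""
    pending = False
    for a in trace:
        if a in S_r:
            pending = False
        elif a in S_q:
            pending = True
    return not pending

print(existence_at_least(["order", "brew", "deliver"], {"deliver"}, 1))         # True
print(response(["order", "brew", "deliver", "order"], {"order"}, {"deliver"}))  # False
```

On infinite paths, these behaviours are only meaningful relative to a completeness criterion, which is exactly what the template formulae in the following sections provide.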
For the scopes, global refers to the full path and after to the path after the first occurrence of an action in a set $S_{a}$. The until scope refers to the path before the first occurrence of an action in a set $S_{b}$, or the full path if no such action occurs. Finally, after-until combines after and until, referring to every subpath of the path that starts after any occurrence of an action in $S_{a}$ and ends before the following occurrence of an action in $S_{b}$. If no action in $S_{b}$ occurs, the behaviour must still be satisfied after $S_{a}$. ###### Example 5.1. Consider again the properties we presented in 3.1. _Single order_ is absence after-until, with $S_{a}=\\{\mathit{order}\\}$, $S_{b}=\\{\mathit{deliver}\\}$ and $S_{r}=\\{\mathit{order}\\}$. _Inevitable delivery_ is global response with $S_{q}=\\{\mathit{order}\\}$ and $S_{r}=\\{\mathit{deliver}\\}$. _Possible delivery_ does not fit into the patterns on occurrences of actions, since it contains a requirement on states, specifically that the state admits a path on which $\mathit{delivery}$ occurs. We want to create formulae for all 16 combinations of the selected behaviours and scopes. To make our results more compact and generic, we first generalise these 16 patterns into a single template property. This template works by describing the shape of a violating path for a property that fits one of these patterns. Intuitively, this shape is: “after the occurrence of $\rho$, there are no occurrences of $\alpha_{\mathit{f}}$ up until the (optional) occurrence of $\alpha_{\mathit{e}}$”. For our template formulae to be syntactically correct, it is important that $\rho$ is a regular formula, describing the prefix that a violating path must have, whereas $\alpha_{\mathit{f}}$ and $\alpha_{\mathit{e}}$ are sets of actions. 
The actions in $\alpha_{\mathit{f}}$ are those that are forbidden from occurring after $\rho$ on a violating path, whereas the actions in $\alpha_{\mathit{e}}$ indicate the end of the scope in which $\alpha_{\mathit{f}}$ may not occur. We formalise this template as follows: ###### Definition 5.2. A path $\pi$ is _$(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$ -violating_ if, and only if, there exist $\pi_{\mathit{pre}}$ and $\pi_{\mathit{suf}}$ such that: 1. 1. $\pi=\pi_{\mathit{pre}}\cdot\pi_{\mathit{suf}}$, and 2. 2. $\pi_{\mathit{pre}}$ matches $\rho$, and 3. 3. $\pi_{\mathit{suf}}$ satisfies at least one of the following conditions: 1. (a) $\pi_{\mathit{suf}}$ is $\alpha_{\mathit{f}}$-free, or 2. (b) $\pi_{\mathit{suf}}$ contains an occurrence of an action in $\alpha_{\mathit{e}}$, and the prefix of $\pi_{\mathit{suf}}$ before the first occurrence of an action in $\alpha_{\mathit{e}}$ is $\alpha_{\mathit{f}}$-free. For readability, we frequently refer to $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating paths as violating paths. We sometimes summarise condition 3 as “$\pi_{\mathit{suf}}$ is $\alpha_{\mathit{f}}$-free up until the first occurrence of $\alpha_{\mathit{e}}$”. See Figure 2 for an illustration of what types of paths are considered violating. Figure 2: The four types of $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating paths: finite or infinite, and without or with $\alpha_{\mathit{e}}$. In each case, the path has a prefix matching $\rho$ and is $\alpha_{\mathit{f}}$-free up until the first occurrence of an action in $\alpha_{\mathit{e}}$. All 16 patterns can indeed be represented by the non-existence of $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating paths, albeit some more directly than others. 
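For finite paths, Definition 5.2 can be checked directly. In the sketch below (our own), the regular formula $\rho$ is approximated by a Python regular expression over the space-joined action names, and a path is represented by its action sequence; this is an illustration only, not the construction used in the paper.

```python
# Checking whether a finite action sequence is (rho, alpha_f, alpha_e)-violating
# per Definition 5.2; rho is approximated by a Python regex over action names.
import re

def is_violating(actions, rho, alpha_f, alpha_e):
    for k in range(len(actions) + 1):
        pre, suf = actions[:k], actions[k:]
        if not re.fullmatch(rho, " ".join(pre)):
            continue                                  # prefix must match rho
        ends = [i for i, a in enumerate(suf) if a in alpha_e]
        scope = suf[:ends[0]] if ends else suf        # up to the first alpha_e
        if all(a not in alpha_f for a in scope):      # alpha_f-free in scope
            return True
    return False

# Global response with S_q = {order}, S_r = {deliver}: rho describes any prefix
# ending in an order, alpha_f = {deliver}, and alpha_e is empty.
RHO = r"(\S+ )*order"
print(is_violating(["order", "to_cash", "to_card"], RHO, {"deliver"}, set()))      # True
print(is_violating(["order", "pay", "brew", "deliver"], RHO, {"deliver"}, set()))  # False
```

The merged check is faithful to condition 3: when no action in $\alpha_{\mathit{e}}$ occurs in the suffix, the scope is the whole suffix (case (a)); otherwise only the part before the first such occurrence must be $\alpha_{\mathit{f}}$-free (case (b)).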
It turns out that $\rho$, $\alpha_{\mathit{f}}$ and $\alpha_{\mathit{e}}$ can mostly be determined separately for behaviour and scope. For these patterns, $\alpha_{\mathit{f}}$ is only affected by behaviour and $\alpha_{\mathit{e}}$ only by scope. However, we must split up the regular formula $\rho$ into a behaviour component, $\rho_{\mathit{b}}$, and a scope component, $\rho_{\mathit{s}}$, such that $\rho=\rho_{\mathit{s}}\cdot\rho_{\mathit{b}}$. See 1(a) and 1(b) for how the variables should be instantiated for the four scopes and three of the four behaviours. For a compact representation, we use $\sum$ to generalise the union operator on regular formulae ($+$). We also use $x^{i}$ to represent $i$ concatenations of $x$, where $x^{0}=\varepsilon$. We do not include chain response in 1(b), since it does not fit into a single formula. However, it is possible to represent chain response as several response formulae placed in conjunction with each other. We include an example of this in Appendix B.

Table 1: Variable instantiation for templates.

(a) For scopes.

Scope | $\rho_{\mathit{s}}$ | $\alpha_{\mathit{e}}$
---|---|---
Global | $\varepsilon$ | $\emptyset$
Until | $\varepsilon$ | $S_{b}$
After | $\mathit{\overline{\mathit{S_{a}}}}^{\star}\cdot S_{a}$ | $\emptyset$
After-until | $\mathit{\mathit{Act}}^{\star}\cdot S_{a}$ | $S_{b}$

(b) For behaviours.

Behaviour | $\rho_{\mathit{b}}$ | $\alpha_{\mathit{f}}$
---|---|---
Existence | $\varepsilon$ | $S_{r}$
Existence at least $k$ | $\sum_{0\leq i<k}(\mathit{\overline{\mathit{\alpha_{\mathit{e}}\cup S_{r}}}}^{\star}\cdot S_{r})^{i}$ | $S_{r}$
Response | $\mathit{\overline{\mathit{\alpha_{\mathit{e}}}}}^{\star}\cdot S_{q}$ | $S_{r}$
Chain response | See Appendix B |

## 6 Template Formulae

In this section, we present the modal $\mu$-calculus formulae representing the non-existence of a violating path, as defined in section 5, that satisfies one of the completeness criteria from section 4. 
We express the non-existence of such a path, rather than expressing the equivalent notion that all complete paths satisfy the property, because we find the resulting formulae to be more intuitive. We first present a formula for $\mathcal{B}$-progress only. Subsequently, we give the formulae for weak fairness, weak hyperfairness and justness using a common structure all three share. Finally, we present the formulae for strong fairness and strong hyperfairness. In the justness and fairness formulae, $\mathcal{B}$-progress is also included: these assumptions eliminate unrealistic infinite paths, but we still need progress to discard unrealistic finite paths. The proofs of all theorems in this section are included in Appendix D. ### 6.1 Progress A formula for the non-existence of a violating path without progress is uninteresting. If progress is not assumed then all finite paths are complete, and therefore a path consisting of just $\rho$ is a violating path whenever $\alpha_{\mathit{f}}\neq\emptyset$. The non-existence of a violating path would then be captured by $\neg\langle\mathit{\rho}\rangle\mathit{tt}$. This is why we include progress in all our formulae. To represent progress, we must capture that as long as non-blocking actions are enabled, some transitions must still be executed. The following formula captures the non-existence of violating paths under $\mathcal{B}$-progress: $\neg\langle\mathit{\rho}\rangle\nu X.(\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X)$ (1) Intuitively, this formula says that there is no path that starts with a prefix matching $\rho$, after which infinitely often a transition can be taken that is not labelled with an action in $\alpha_{\mathit{f}}$, or such transitions can be taken finitely often before a state is reached that is $\mathcal{B}$-locked or in which $\alpha_{\mathit{e}}$ is enabled. 
In the former case there is a $\mathcal{B}$-progressing path on which no actions in $\alpha_{\mathit{f}}$ occur after $\rho$. If a state in which $\alpha_{\mathit{e}}$ is enabled is reached, then it is guaranteed a violating and $\mathcal{B}$-progressing path exists: by arbitrarily extending the path as long as non-blocking actions are still enabled, a $\mathcal{B}$-progressing and violating path can be constructed. ###### Theorem 6.1. A state in an LTS satisfies Formula 1 if, and only if, it does not admit $\mathcal{B}$-progressing paths that are $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. Since representing a liveness pattern without progress leads to uninteresting formulae, it is unsurprising that previous translations of PSP to the $\mu$-calculus have also implicitly included progress. For instance, the translations from [32] for the liveness patterns of PSP are very similar to Formula 1, albeit in positive form and without blocking actions. ### 6.2 Weak Fairness, Weak Hyperfairness and Justness For weak fairness, weak hyperfairness and justness, we employ a trick inspired by the formula for justness presented in [7] (which was in turn inspired by [12]): we translate a requirement on a full path into an invariant that can be evaluated within finitely many steps from every state of the path. We illustrate this using weak fairness. On every suffix of a weakly fair path, every perpetually enabled non-blocking action occurs. To turn this into an invariant, we observe that we can evaluate a property on all suffixes of a path by evaluating it from every state of the path instead. Next we must determine, within finitely many steps, if an action is perpetually enabled on a possibly infinite path. We do this by observing that if an action is not perpetually enabled, it must become disabled within finitely many steps. 
An equivalent definition of WFA therefore is: a path $\pi$ satisfies WFA if, and only if, for every state $s$ in $\pi$, every action $a\in\overline{\mathit{\mathcal{B}}}$ that is enabled in $s$ occurs or becomes disabled within finitely many steps on the suffix of $\pi$ starting in $s$. This translation of WFA determines three things for every non-blocking action $a$. First, which actions may need to occur because of $a$; in the case of WFA this is $a$ itself. Second, when those actions need to occur; for WFA this is when $a$ is enabled. We refer to this as the action being “on”. Finally, when those actions do not need to occur; for WFA this is when $a$ becomes disabled. We refer to this as the action being “off”. When an action that was previously on becomes off, or one of the required actions occurs, we say the action is “eliminated”. By choosing different definitions for an action being on or off, and when an action is eliminated, we can also represent justness and weak hyperfairness in the same way. We find that completeness criteria for which such a translation can be made can be represented using the same generalised formula. We will present this formula and how to instantiate it for WFA, WHFA and JA. However, we must first formalise what it means for a predicate on paths to be translatable to an invariant that can be evaluated within finitely many steps. We introduce the term _finitely realisable (path) predicates_ for this purpose. ###### Definition 6.2. A path predicate $P$ is _finitely realisable_ if, and only if, there exist mappings $\phi_{\mathit{on}}$ and $\phi_{\mathit{of}}$ from non-blocking actions to closed modal $\mu$-calculus formulae, and a mapping $\alpha_{\mathit{el}}$ from non-blocking actions to sets of actions, such that: 1. 1. 
A path $\pi$ satisfies predicate $P$ if, and only if, all states $s$ on $\pi$ satisfy the following: for all $a\in\overline{\mathit{\mathcal{B}}}$, if $s$ satisfies $\phi_{\mathit{on}}(a)$ then the suffix $\pi^{\prime}$ of $\pi$ starting in $s$ must contain an occurrence of some action in $\alpha_{\mathit{el}}(a)$ or a state that satisfies $\phi_{\mathit{of}}(a)$. 2. 2. A state $s$ is a $\mathcal{B}$-locked state if, and only if, $s\not\in\llbracket\phi_{\mathit{on}}(a)\rrbracket$ for all $a\in\overline{\mathit{\mathcal{B}}}$. 3. 3. For every state $s$ and for all $a\in\overline{\mathit{\mathcal{B}}}$, $s\in\llbracket\phi_{\mathit{on}}(a)\rrbracket$ implies $s\not\in\llbracket\phi_{\mathit{of}}(a)\rrbracket$. 4. 4. For all states $s$ and all $a\in\overline{\mathit{\mathcal{B}}}$ such that $s\in\llbracket\phi_{\mathit{on}}(a)\rrbracket$, if there exists a finite path $\pi$ from $s$ to a state $s^{\prime}$ such that there is no occurrence of an action in $\alpha_{\mathit{el}}(a)$ on $\pi$ and there is no state on $\pi$ that satisfies $\phi_{\mathit{of}}(a)$, then $s^{\prime}\in\llbracket\phi_{\mathit{on}}(a)\rrbracket$. We refer to these four properties as the _invariant property_ , the _locking property_ , the _exclusive property_ and the _persistent property_ , respectively. The general formula for finitely realisable predicates is as follows: $\neg\langle\mathit{\rho}\rangle\nu X.(\bigwedge_{a\in\overline{\mathit{\mathcal{B}}}}(\phi_{\mathit{on}}(a)\Rightarrow\langle\mathit{\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}^{\star}}\rangle(\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor(\phi_{\mathit{of}}(a)\land X)\lor\langle\mathit{\alpha_{\mathit{el}}(a)\setminus\alpha_{\mathit{f}}}\rangle X)))$ (2) This formula has similarities to Formula 1, particularly how $\rho$ and $\alpha_{\mathit{e}}$ are integrated. 
The important part is that after $\rho$, it must invariantly hold that all non-blocking actions for which $\phi_{\mathit{on}}(a)$ is satisfied are later eliminated. An action $a$ is eliminated if, within finitely many steps, $\phi_{\mathit{of}}(a)$ is satisfied or an action in $\alpha_{\mathit{el}}(a)$ occurs. In both cases, the invariant must once again hold. After $\rho$, no actions in $\alpha_{\mathit{f}}$ may occur. The formula works correctly for finite paths as well as infinite ones: if it is possible to reach a $\mathcal{B}$-locked state after $\rho$ without taking actions in $\alpha_{\mathit{f}}$, then $X$ is satisfied due to the locking property, and a violating path is found. Formula 2 is a template formula in two ways: $\rho$, $\alpha_{\mathit{f}}$ and $\alpha_{\mathit{e}}$ determine what property is captured, and $\phi_{\mathit{on}}$, $\phi_{\mathit{of}}$ and $\alpha_{\mathit{el}}$ determine the completeness criterion. In this paper, we only cover how to instantiate the formula for WFA, WHFA and JA, but it can also be used for other finitely realisable predicates. However, the correctness proof of the formula depends on the criterion being _feasible_. Feasibility on paths [3] is defined as follows. ###### Definition 6.3. A predicate on paths $P$ is _feasible_ if, and only if, for every LTS $M$, every finite path $\pi$ in $M$ can be extended to a path $\pi^{\prime}$ that satisfies $P$ and is still a valid path in $M$. That WFA, WHFA and JA are feasible for finite LTSs is proven in Appendix C. ###### Theorem 6.4. For all feasible and finitely realisable path predicates $P$, it holds that an LTSC satisfies Formula 2 if, and only if, its initial state does not admit $\mathcal{B}$-progressing paths that satisfy $P$ and are $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. By instantiating the theorem for each completeness criterion, we derive the following: ###### Corollary 6.5. 
A state in an LTS satisfies Formula 2 with $\phi_{\mathit{on}}(a)=\langle\mathit{a}\rangle\mathit{tt}$, $\phi_{\mathit{of}}(a)=[\mathit{a}]\mathit{ff}$ and $\alpha_{\mathit{el}}(a)=\\{a\\}$ for all $a\in\overline{\mathit{\mathcal{B}}}$ if, and only if, it does not admit $\mathcal{B}$-progressing paths that satisfy $\mathcal{B}$-weak fairness of actions and are $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. ###### Corollary 6.6. A state in an LTS satisfies Formula 2 with $\phi_{\mathit{on}}(a)=\langle\mathit{\mathit{\overline{\mathit{\mathcal{B}}}}^{\star}\cdot a}\rangle\mathit{tt}$, $\phi_{\mathit{of}}(a)=[\mathit{\mathit{\overline{\mathit{\mathcal{B}}}}^{\star}\cdot a}]\mathit{ff}$ and $\alpha_{\mathit{el}}(a)=\\{a\\}$ for all $a\in\overline{\mathit{\mathcal{B}}}$ if, and only if, it does not admit $\mathcal{B}$-progressing paths that satisfy weak $\mathcal{B}$-hyperfairness of actions and are $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. ###### Corollary 6.7. A state in an LTSC satisfies Formula 2 with $\phi_{\mathit{on}}(a)=\langle\mathit{a}\rangle\mathit{tt}$, $\phi_{\mathit{of}}(a)=\mathit{ff}$ and $\alpha_{\mathit{el}}(a)=\\{b\in\mathit{Act}\mid a\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}b\\}$ for all $a\in\overline{\mathit{\mathcal{B}}}$ if, and only if, it does not admit $\mathcal{B}$-progressing paths that satisfy $\mathcal{B}$-justness of actions and are $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. ### 6.3 Strong Fairness and Strong Hyperfairness SFA is not finitely realisable because we cannot observe within finitely many steps whether an action is relentlessly enabled: even if we observe several times that it is disabled, it may still be infinitely often enabled along the whole path. Hence, we cannot use Formula 2. Instead we observe that, on a path, actions that are not relentlessly enabled must eventually become perpetually disabled. 
If the path is strongly fair, then all relentlessly enabled non-blocking actions occur infinitely often. We can therefore say that a path is strongly fair if we can divide all non-blocking actions into two disjoint sets: those that occur infinitely often and those that eventually become perpetually disabled. This observation is also made in [37], where a $\mu$-calculus formula for termination under strong fairness is given. Using this idea, we give the following template formula for SFA: $\neg\langle\mathit{\rho\cdot\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}^{\star}}\rangle(\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\bigvee_{\emptyset\neq F\subseteq\overline{\mathit{\mathcal{B}}}}\nu X.(\bigwedge_{a\in F}\mu W.((\bigwedge_{b\in\overline{\mathit{\mathcal{B}}}\setminus F}[\mathit{b}]\mathit{ff})\land(\langle\mathit{a\setminus\alpha_{\mathit{f}}}\rangle X\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle W))))$ (3) The use of negation, the exclusion of $\alpha_{\mathit{f}}$, and $\rho$ in the diamond operator at the start of this formula are the same as in Formula 1. We explain the start of the formula after addressing the part starting with $\bigvee_{\emptyset\neq F\subseteq\overline{\mathit{\mathcal{B}}}}$. Here, we use that on a strongly fair path, all non-blocking actions can be divided into those that occur infinitely often and those that become perpetually disabled. The disjunction over subsets considers all possible ways of selecting some non-empty subset $F$ of $\overline{\mathit{\mathcal{B}}}$ that should occur infinitely often. The greatest fixpoint states that infinitely often, all those actions must indeed occur within finitely many steps. Additionally, at no point may a non-blocking action not in $F$ be enabled. We exclude $F=\emptyset$ because the logic of the greatest fixpoint formula we give relies on there being at least one $a$ in $F$. 
The special case that $F$ is empty, and therefore a $\mathcal{B}$-locked state should be reached, is instead covered by explicitly including $[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}$ earlier in the formula. Returning to the start of the formula, we allow a finite $\alpha_{\mathit{f}}$-free path before the greatest fixpoint is satisfied. The reason is that it may take several steps before all the non-blocking actions that are only finitely often enabled become perpetually disabled. Since we include a finite prefix already, we also add the cases that an action in $\alpha_{\mathit{e}}$ becomes enabled or that a $\mathcal{B}$-locked state is reached here, rather than deeper in the formula as in Formula 2. ###### Theorem 6.8. An LTS satisfies Formula 3 if, and only if, its initial state does not admit $\mathcal{B}$-progressing paths that satisfy $\mathcal{B}$-strong fairness of actions and are $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. Due to the quantification over subsets, the formula is exponential in the number of actions in $\overline{\mathit{\mathcal{B}}}$. Beyond small models, it is therefore not practical. However, it can serve as a basis for future work. For instance, if fairness is applied to sets of actions rather than individual actions, the formula is exponential in the number of sets instead, which may be smaller depending on how the sets are formed [36]. We can adapt the formula for strong fairness to a formula for strong hyperfairness, by replacing perpetual disabledness of non-blocking actions not in $F$ with perpetual unreachability. 
$\neg\langle\mathit{\rho\cdot\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}^{\star}}\rangle(\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\bigvee_{\emptyset\neq F\subseteq\overline{\mathit{\mathcal{B}}}}\nu X.(\bigwedge_{a\in F}\mu W.((\bigwedge_{b\in\overline{\mathit{\mathcal{B}}}\setminus F}[\mathit{\mathit{\overline{\mathit{\mathcal{B}}}}^{\star}\cdot b}]\mathit{ff})\land(\langle\mathit{a\setminus\alpha_{\mathit{f}}}\rangle X\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle W))))$ (4) ###### Theorem 6.9. An LTS satisfies Formula 4 if, and only if, its initial state does not admit a $\mathcal{B}$-progressing path that satisfies strong $\mathcal{B}$-hyperfairness of actions and is $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. Since we are not aware of other completeness criteria that fit the same structure, we do not provide a generalised formula here like we did with Formula 2, although we do prove a more general theorem in Section D.4. ## 7 Application Example We here give an example of an application of the template formulae. In [26], several mutual exclusion algorithms are analysed using the mCRL2 toolset. Their analysis of Dekker’s algorithm [15] presents the following modal $\mu$-calculus formula for starvation freedom of processes with id’s 0 and 1. For clarity, the notation has been adjusted to match the previous sections and action names have been simplified. $[\mathit{\mathit{\mathit{Act}}^{\star}}]\bigwedge_{i\in\\{0,1\\}}[\mathit{\\{\mathit{wish\\_flag}(i,b)\mid b\in\mathbb{B}\\}}]\mu X.([\mathit{\overline{\mathit{\mathit{enter}(i)}}}]X\land\langle\mathit{\mathit{Act}}\rangle\mathit{tt})$ (5) Starvation freedom is a global response property. In this case, the starvation freedom of a process $i$ is represented as an instantiation of the pattern with $S_{q}=\\{\mathit{wish\\_flag}(i,b)\mid b\in\mathbb{B}\\}$ and $S_{r}=\\{\mathit{enter}(i)\\}$. 
Indeed, the above formula is equivalent to: $\bigwedge_{i\in\\{0,1\\}}\neg\langle\mathit{\mathit{\mathit{Act}}^{\star}\cdot\\{\mathit{wish\\_flag}(i,b)\mid b\in\mathbb{B}\\}}\rangle\nu X.(\langle\mathit{\emptyset}\rangle\mathit{tt}\lor[\mathit{\mathit{Act}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\mathit{enter}(i)}}}\rangle X)$ (6) Observe that, when taking $\mathcal{B}=\emptyset$, the above matches a conjunction of two instances of Formula 1, taking $\rho$, $\alpha_{\mathit{e}}$ and $\alpha_{\mathit{f}}$ as suggested in Table 1 for global response. Thus, this formula captures starvation freedom under $\emptyset$-progress. In [26], it is reported that mCRL2 finds a violating path for this formula; a path which the authors note is unfair. The exact fairness assumption considered is not made concrete. As an ad-hoc solution, the modal $\mu$-calculus formula is adjusted to specifically ignore that counterexample. Subsequently, mCRL2 finds another counterexample, which the authors again claim is unfair. Instead of creating yet another formula, they move on to Peterson’s algorithm, which is deemed easier to analyse. Using our template formulae, we can easily produce a formula for starvation freedom under several different completeness criteria. We give the formula for $\emptyset$-WFA, as an example. $\bigwedge_{i\in\\{0,1\\}}\neg\langle\mathit{\mathit{\mathit{Act}}^{\star}\cdot\\{\mathit{wish\\_flag}(i,b)\mid b\in\mathbb{B}\\}}\rangle\\\ \nu X.(\bigwedge_{a\in\mathit{Act}}(\langle\mathit{a}\rangle\mathit{tt}\Rightarrow\langle\mathit{\mathit{\overline{\mathit{\mathit{enter}(i)}}}^{\star}}\rangle(\langle\mathit{\emptyset}\rangle\mathit{tt}\lor([\mathit{a}]\mathit{ff}\land X)\lor\langle\mathit{a\setminus\mathit{enter}(i)}\rangle X)))$ (7) We check this formula on the model from [26] using mCRL2. Since mCRL2 only supports quantification over data parameters and not over actions, the conjunction over $\mathit{Act}$ must be written out explicitly. 
The tool reports that the formula is violated. Examining the counterexample reveals that this is because actions in the model do not show which process performs the action. Therefore, process $i$ reading value $v$ from a register $r$ is labelled with the same action as process $j$ reading $v$ from $r$. We add the responsible process to each action label, and also define $\mathcal{B}=\\{\mathit{wish\\_flag}(i,i,b)\mid i\in\\{0,1\\},b\in\mathbb{B}\\}$, to capture that processes are allowed to remain in their non-critical section indefinitely. This was not considered in Formula 5, but it is part of the mutual exclusion problem [16, 23]. The tool reports that the modified formula is satisfied. We can therefore conclude that Dekker’s algorithm satisfies starvation freedom when assuming weak fairness of actions, as long as each action label records which process is responsible for the action. Our other formulae can be used in similar ways. An example of how to use the justness formula in mCRL2, including a method for encoding the concurrency relation, is given in [7]. ## 8 Discussion In this section, we briefly reflect on the coverage of the properties we consider, and on our choice to focus on the modal $\mu$-calculus. Firstly, we have exclusively addressed liveness properties in this paper thus far. As indicated previously, the problem we are considering primarily arises for these properties. This is because, as pointed out in [23], when a completeness criterion is feasible, whether or not the criterion is assumed has no impact on whether a safety property is satisfied. The reason is that for safety properties on paths, any path that violates the property must contain a finite prefix such that any extension of that prefix also violates the property [1]. Therefore, if a completeness criterion is feasible, then whenever a model contains incomplete paths that violate a safety property it also contains complete paths that violate the property. 
All completeness criteria discussed in Section 4 are feasible with respect to finite LTSs, and hence we do not need to consider patterns that capture safety properties. For modal $\mu$-calculus formulae for the safety properties of PSP, without integrated completeness criteria, we refer to [32] and [35]. For properties that are a combination of safety and liveness, the components can be turned into separate formulae and checked separately. Readers may also wonder about alternative methods of representing properties under completeness criteria, such as using LTL. As indicated in Section 3, there are many contexts where we also want to consider non-linear properties, and hence the modal $\mu$-calculus is preferred. Automatic translations from LTL to the modal $\mu$-calculus exist, but can be exponential in complexity [14], and it is unclear at this time whether this blow-up can be avoided in our setting. Anecdotal evidence [34] suggests this is not the case for existing translations. In [23], several completeness criteria are represented in LTL, but it is noted that this translation requires introducing new atomic propositions, which hides the complexity of the translation. The representation of hyperfairness in particular may be expensive, since atomic propositions for all reachable actions are required. It is also unclear how to combine LTL-based translations effectively with symbolic model checking approaches. For these reasons, a direct representation in the modal $\mu$-calculus is preferable. ## 9 Conclusion In this paper, we have presented formulae for liveness properties under several completeness criteria. As part of this, we defined a property template that generalises the liveness properties of PSP, which has been estimated to cover a majority of properties found in the literature [17]. 
The completeness criteria covered are progress, justness, weak fairness, strong fairness, and hyperfairness, all defined with respect to actions and parameterised with a set of blocking actions. The formulae have all been manually proven to be correct. For future work, one goal is to formalise the proofs in the appendices using a proof assistant. Another avenue for future work is extending our formulae to cover a wider range of completeness criteria and properties. We suggest some potential extensions here. One of our contributions is the identification of a shared common structure underlying justness, weak fairness and weak hyperfairness: they are finitely realisable path predicates. Our formula for such predicates can be adapted to arbitrary feasible finitely realisable path predicates. While we do not have such a generic formula for other completeness criteria, our characterisation of $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating paths can be used as a basis to express the non-existence of complete paths violating many common properties for different notions of completeness as well, as we demonstrate with strong fairness and strong hyperfairness. We are especially interested in extending our formulae to allow fairness over sets of actions, rather than individual actions, similar to the task-based definitions from [25]. In terms of properties, we can look at proposed extensions of PSP, such as those suggested in [13]. There is also the constrained chain behaviour, which is a modification of precedence chain and response chain given in [17]. There are extensions of PSP to real-time [5, 29] and probabilistic [28] contexts as well. Finally, in [6] the formula from [7] that formed the basis of Formula 2 is extended to also include state information. There are therefore many potentially useful extensions of the formulae presented in this paper. 
However, the presented template formulae already cover many completeness criteria and liveness properties, making them useful for model checking in practice. ## References * [1] Mack W. Alford, Leslie Lamport, and Geoff P. Mullery. Basic concepts. In Mack W. Alford, Jean-Pierre Ansart, Günter Hommel, Leslie Lamport, Barbara Liskov, Geoff P. Mullery, and Fred B. Schneider, editors, Distributed Systems: Methods and Tools for Specification, An Advanced Course, April 3-12, 1984 and April 16-25, 1985, Munich, Germany, volume 190 of Lecture Notes in Computer Science, pages 7–43. Springer, 1984. doi:10.1007/3-540-15216-4\\_12. * [2] Bowen Alpern and Fred B. Schneider. Defining liveness. Inf. Process. Lett., 21(4):181–185, 1985. doi:10.1016/0020-0190(85)90056-0. * [3] Krzysztof R. Apt, Nissim Francez, and Shmuel Katz. Appraising fairness in languages for distributed programming. Distributed Comput., 2(4):226–241, 1988. doi:10.1007/BF01872848. * [4] Paul C. Attie, Nissim Francez, and Orna Grumberg. Fairness and hyperfairness in multi-party interactions. In Frances E. Allen, editor, Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, San Francisco, California, USA, January 1990, pages 292–305. ACM Press, 1990. doi:10.1145/96709.96739. * [5] Pierfrancesco Bellini, Paolo Nesi, and Davide Rogai. Expressing and organizing real-time specification patterns via temporal logics. J. Syst. Softw., 82(2):183–196, 2009. doi:10.1016/j.jss.2008.06.041. * [6] Mark S. Bouwman. Supporting Railway Standardisation with Formal Verification. Phd Thesis 1 (Research TU/e / Graduation TU/e), Mathematics and Computer Science, Eindhoven University of Technology, 2023. https://pure.tue.nl/ws/portalfiles/portal/307965423/20231023_Bouwman_hf.pdf. * [7] Mark S. Bouwman, Bas Luttik, and Tim A. C. Willemse. Off-the-shelf automated analysis of liveness properties for just paths. Acta Informatica, 57(3-5):551–590, 2020. doi:10.1007/s00236-020-00371-w. 
* [8] Julian C. Bradfield and Colin Stirling. Modal logics and mu-calculi: an introduction. In Jan A. Bergstra, Alban Ponse, and Scott A. Smolka, editors, Handbook of Process Algebra, pages 293–330. Elsevier Science, 2001. doi:10.1016/b978-044482830-9/50022-9. * [9] Julian C. Bradfield and Colin Stirling. Modal mu-calculi. In Patrick Blackburn, Johan Van Benthem, and Frank Wolter, editors, Handbook of Modal Logic, volume 3 of Studies in logic and practical reasoning, pages 721–756. Elsevier, 2007. doi:10.1016/s1570-2464(07)80015-2. * [10] Julian C. Bradfield and Igor Walukiewicz. The mu-calculus and model checking. In Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem, editors, Handbook of Model Checking, pages 871–919. Springer, 2018. doi:10.1007/978-3-319-10575-8\\_26. * [11] Olav Bunte, Jan Friso Groote, Jeroen J. A. Keiren, Maurice Laveaux, Thomas Neele, Erik P. de Vink, Wieger Wesselink, Anton Wijs, and Tim A. C. Willemse. The mCRL2 toolset for analysing concurrent systems. In Tomáš Vojnar and Lijun Zhang, editors, Tools and Algorithms for the Construction and Analysis of Systems - 25th International Conference, TACAS 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Prague, Czech Republic, April 6-11, 2019, Proceedings, Part II, volume 11428 of Lecture Notes in Computer Science, pages 21–39. Springer, 2019. doi:10.1007/978-3-030-17465-1\\_2. * [12] Edmund M. Clarke, Orna Grumberg, Kenneth L. McMillan, and Xudong Zhao. Efficient generation of counterexamples and witnesses in symbolic model checking. In Bryan Preas, editor, Proceedings of the 32st Conference on Design Automation, San Francisco, California, USA, Moscone Center, June 12-16, 1995, pages 427–432. ACM Press, 1995. doi:10.1145/217474.217565. * [13] Rachel L. Cobleigh, George S. Avrunin, and Lori A. Clarke. User guidance for creating precise and accessible property specifications. In Michal Young and Premkumar T. 
Devanbu, editors, Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2006, Portland, Oregon, USA, November 5-11, 2006, pages 208–218. ACM, 2006. doi:10.1145/1181775.1181801. * [14] Sjoerd Cranen, Jan Friso Groote, and Michel A. Reniers. A linear translation from CTL⋆ to the first-order modal $\mu$-calculus. Theor. Comput. Sci., 412(28):3129–3139, 2011. doi:10.1016/j.tcs.2011.02.034. * [15] Edsger W Dijkstra. Over de sequentialiteit van procesbeschrijvingen (EWD-35). EW dijkstra archive. Center for American History, University of Texas at Austin, 1962. URL: https://www.cs.utexas.edu/~EWD/ewd00xx/EWD35.PDF. * [16] Edsger W. Dijkstra. Solution of a problem in concurrent programming control. Commun. ACM, 8(9):569, 1965. doi:10.1145/365559.365617. * [17] Matthew B. Dwyer, George S. Avrunin, and James C. Corbett. Patterns in property specifications for finite-state verification. In Barry W. Boehm, David Garlan, and Jeff Kramer, editors, Proceedings of the 1999 International Conference on Software Engineering, ICSE’ 99, Los Angeles, CA, USA, May 16-22, 1999, pages 411–420. ACM, 1999. doi:10.1145/302405.302672. * [18] E. Allen Emerson and Edmund M. Clarke. Using branching time temporal logic to synthesize synchronization skeletons. Sci. Comput. Program., 2(3):241–266, 1982. doi:10.1016/0167-6423(83)90017-5. * [19] Michael J. Fischer and Richard E. Ladner. Propositional dynamic logic of regular programs. J. Comput. Syst. Sci., 18(2):194–211, 1979. doi:10.1016/0022-0000(79)90046-1. * [20] Hubert Garavel, Frédéric Lang, Radu Mateescu, and Wendelin Serwe. CADP 2011: a toolbox for the construction and analysis of distributed processes. Int. J. Softw. Tools Technol. Transf., 15(2):89–107, 2013. doi:10.1007/s10009-012-0244-z. * [21] Rob J. van Glabbeek. Justness - A completeness criterion for capturing liveness properties (extended abstract). 
In Mikołaj Bojańczyk and Alex Simpson, editors, Foundations of Software Science and Computation Structures - 22nd International Conference, FOSSACS 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Prague, Czech Republic, April 6-11, 2019, Proceedings, volume 11425 of Lecture Notes in Computer Science, pages 505–522. Springer, 2019. doi:10.1007/978-3-030-17127-8\\_29. * [22] Rob J. van Glabbeek. Reactive temporal logic. In Ornela Dardha and Jurriaan Rot, editors, Proceedings Combined 27th International Workshop on Expressiveness in Concurrency and 17th Workshop on Structural Operational Semantics, EXPRESS/SOS 2020, Online, 31 August 2020, volume 322 of EPTCS, pages 51–68. Open Publishing Association, 2020. doi:10.4204/EPTCS.322.6. * [23] Rob J. van Glabbeek. Modelling mutual exclusion in a process algebra with time-outs. Inf. Comput., 294:105079, 2023. doi:10.1016/j.ic.2023.105079. * [24] Rob J. van Glabbeek and Peter Höfner. CCS: It’s not fair! fair schedulers cannot be implemented in CCS-like languages even under progress and certain fairness assumptions. Acta Informatica, 52(2-3):175–205, 2015. doi:10.1007/s00236-015-0221-6. * [25] Rob J. van Glabbeek and Peter Höfner. Progress, Justness, and Fairness. ACM Comput. Surv., 52(4):69:1–69:38, 2019. doi:10.1145/3329125. * [26] Jan Friso Groote and Jeroen J. A. Keiren. Tutorial: designing distributed software in mCRL2. In Kirstin Peters and Tim A. C. Willemse, editors, Formal Techniques for Distributed Objects, Components, and Systems - 41st IFIP WG 6.1 International Conference, FORTE 2021, Held as Part of the 16th International Federated Conference on Distributed Computing Techniques, DisCoTec 2021, Valletta, Malta, June 14-18, 2021, Proceedings, volume 12719 of Lecture Notes in Computer Science, pages 226–243. Springer, 2021. doi:10.1007/978-3-030-78089-0\\_15. * [27] Jan Friso Groote and Mohammad Reza Mousavi. 
Modeling and Analysis of Communicating Systems. MIT Press, 08 2014. URL: https://mitpress.mit.edu/books/modeling-and-analysis-communicating-systems. * [28] Lars Grunske. Specification patterns for probabilistic quality properties. In Wilhelm Schäfer, Matthew B. Dwyer, and Volker Gruhn, editors, 30th International Conference on Software Engineering (ICSE 2008), Leipzig, Germany, May 10-18, 2008, pages 31–40. ACM, 2008. doi:10.1145/1368088.1368094. * [29] Sascha Konrad and Betty H. C. Cheng. Real-time specification patterns. In Gruia-Catalin Roman, William G. Griswold, and Bashar Nuseibeh, editors, 27th International Conference on Software Engineering (ICSE 2005), 15-21 May 2005, St. Louis, Missouri, USA, pages 372–381. ACM, 2005. doi:10.1145/1062455.1062526. * [30] Dexter Kozen. Results on the propositional $\mu$-calculus. Theor. Comput. Sci., 27(3):333–354, 1983. doi:10.1016/0304-3975(82)90125-6. * [31] Leslie Lamport. Fairness and hyperfairness. Distributed Comput., 13(4):239–245, 2000. doi:10.1007/PL00008921. * [32] Radu Mateescu. Property Pattern Mappings for RAFMC, 2019. Available at: https://cadp.inria.fr/resources/evaluator/rafmc.html (Accessed: 26 January 2024). * [33] Amir Pnueli. The temporal logic of programs. In 18th Annual Symposium on Foundations of Computer Science, Providence, Rhode Island, USA, 31 October - 1 November 1977, pages 46–57. IEEE Computer Society, 1977. doi:10.1109/SFCS.1977.32. * [34] Jaco van de Pol and Michael Weber. A multi-core solver for parity games. Electronic Notes in Theoretical Computer Science, 220(2):19–34, 2008. Proceedings of the 7th International Workshop on Parallel and Distributed Methods in verifiCation (PDMC 2008). doi:10.1016/j.entcs.2008.11.011. * [35] Daniela Remenska. Bringing Model Checking Closer To Practical Software Engineering. PhD thesis, Vrije U., Amsterdam, 2016. PhD Thesis, available at: https://hdl.handle.net/1871/53958. * [36] Myrthe S. C. Spronck. Fairness assumptions in the modal $\mu$-calculus, 2023. 
Master’s thesis, Eindhoven University of Technology, available at https://research.tue.nl/en/studentTheses/fairness-assumptions-in-the-modal-%C2%B5-calculus. * [37] Frank A. Stomp, Willem-Paul de Roever, and Rob T. Gerth. The $\mu$-calculus as an assertion-language for fairness arguments. Inf. Comput., 82(3):278–322, 1989. doi:10.1016/0890-5401(89)90004-7. ## Appendix A Property Patterns Here we recall the behaviours and scopes presented in [17]. The original presentation is not restricted to a particular logic and the patterns allow behaviour and scopes to be defined based on both states and actions. We give the definitions specifically with respect to occurrences of actions, since those are the properties we consider in this paper. We use $S_{a}$ (“after”), $S_{b}$ (“before”), $S_{q}$ (“query”) and $S_{r}$ (“required”/“response”) as placeholder names for property-specific sets of actions. We use $k$ for an arbitrary natural number. The following behaviours are given: * • Absence: no action in $S_{r}$ may occur. * • Existence: some action in $S_{r}$ must occur. * – Existence at least/at most/exactly: there must be at least/at most/exactly $k$ occurrences of actions in $S_{r}$. The existence pattern is an instantiation of existence at least with $k=1$. * • Universality: only actions in $S_{r}$ occur. * • Precedence: an occurrence of an action in $S_{r}$ must always be preceded by an occurrence of an action in $S_{q}$. * – Chain precedence: if actions from the sets $S_{r_{0}}$, $S_{r_{1}}$, $\ldots$, $S_{r_{n}}$ occur in that order (potentially with other actions in-between), then they must have been preceded by occurrences of actions from the sets $S_{q_{0}}$, $S_{q_{1}}$, $\ldots$, $S_{q_{m}}$, in that order. * • Response: an occurrence of an action in $S_{q}$ must be followed by the occurrence of an action from $S_{r}$. 
* – Chain response: if actions from the sets $S_{q_{0}}$, $S_{q_{1}}$, $\ldots$, $S_{q_{n}}$ occur in that order (potentially with other actions in between), then they must be followed by occurrences of actions from the sets $S_{r_{0}}$, $S_{r_{1}}$, $\ldots$, $S_{r_{m}}$, in that order. All of the behaviours only need to hold within the chosen scope. The following scopes are given: * • Global: the full path. * • Before: the prefix of the path before the first occurrence of an action in $S_{b}$. If no action in $S_{b}$ occurs on the path, then the behaviour does not need to be satisfied anywhere. * – Until: same as before, except that if no action in $S_{b}$ occurs, then the behaviour needs to hold on the full path. * • After: the suffix of the path after the first occurrence of an action in $S_{a}$. If no action in $S_{a}$ occurs on the path, then the behaviour does not need to be satisfied anywhere. * • Between: every subpath of the path that starts after an occurrence of an action in $S_{a}$ and ends before the first following occurrence of an action in $S_{b}$. If there is an occurrence of an action in $S_{a}$ that is not eventually followed by an action in $S_{b}$, the behaviour does not need to be satisfied after that $S_{a}$. This combines after and before, but unlike the default after scope considers any occurrence of $S_{a}$, not merely the first. * – After-until: same as between, except that if there is an occurrence of an action in $S_{a}$ that is not eventually followed by an action in $S_{b}$, the behaviour still needs to be satisfied after that occurrence of $S_{a}$. This combines after and until. The until scope does not appear in [17], but after-until does. We include the until scope from [35], there called before-variant, because it can be seen as a simpler form of after-until. We only consider liveness properties, so we must ask which combinations of behaviour and scope result in liveness properties. 
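To make the behaviours and scopes above concrete, the following sketch (ours, not from the paper) checks the response behaviour on finite action sequences under the global and after-until scopes. Note the caveat: on finite traces this only approximates the properties in the paper, which are evaluated on (possibly infinite) paths; we also assume the sets S_q, S_r, S_a, and S_b are pairwise disjoint.

```python
def response_holds(trace, S_q, S_r):
    """Global response on a finite trace: every occurrence of an action
    in S_q is eventually followed by an occurrence of an action in S_r."""
    pending = False
    for a in trace:
        if a in S_q:
            pending = True
        if a in S_r:
            pending = False
    return not pending

def response_after_until(trace, S_q, S_r, S_a, S_b):
    """Response under the after-until scope: the behaviour must hold on
    every segment opened by an action in S_a and closed by the next
    action in S_b, or by the end of the trace (the 'until' variant)."""
    active = False   # inside an open segment?
    pending = False  # unanswered S_q occurrence in the current segment?
    for a in trace:
        if active and a in S_b:
            if pending:          # segment closed with an unanswered query
                return False
            active = False
        elif active:
            if a in S_q:
                pending = True
            if a in S_r:
                pending = False
        if a in S_a:
            active = True        # segment starts after this occurrence
    return not (active and pending)
```

For example, the trace `["a", "q", "b"]` with `S_a = {"a"}`, `S_b = {"b"}`, `S_q = {"q"}`, `S_r = {"r"}` violates after-until response, since the segment closes before any `r` answers the `q`.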
To make this judgement, we need a formal definition of what makes a property a safety or liveness property. For the purposes of this paper, since all the properties we consider are defined on occurrences of actions, we can use the following definition of a property: ###### Definition A.1. A _property_ is a set of sequences of actions. A path $\pi$ _satisfies_ the property if its sequence of actions is in the set, otherwise it _violates_ the property. We adapt the formal definitions of safety and liveness properties from [1] and [2] respectively to this definition of properties. ###### Definition A.2. A property $P$ is a _safety_ property if, and only if, every infinite sequence of actions not in $P$ has a finite prefix that is not in $P$. The consequence of this is that an infinite path that violates a safety property always has a finite prefix that violates it as well. ###### Definition A.3. A property $P$ is a _liveness_ property if, and only if, for every finite sequence of actions $\mathit{f}$ there exists some infinite sequence of actions $\mathit{f}^{\prime}$ such that $\mathit{f}\mathit{f}^{\prime}$ is in $P$. In terms of paths, this means that for every finite path $\pi$ that violates a liveness property, there exists an infinite path $\pi^{\prime}$ of which $\pi$ is a prefix that satisfies the property. Note that it is not required for an LTS that admits $\pi$ to also admit $\pi^{\prime}$, only that such an extension could be made. We now discuss which patterns form liveness properties. First, we note that the before and between scopes will turn every behaviour into a safety property: whenever an action in $S_{b}$ occurs the behaviour should be satisfied before that occurrence, hence every path that violates the property will have a finite prefix, ending with the first occurrence of an action in $S_{b}$, that also violates the property. The global, until, after and after-until scopes remain relevant. 
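The two definitions can be illustrated with toy properties; the sketch below is ours, not from the paper. For the safety property "absence of $S_r$", every violation is witnessed by a finite prefix (Definition A.2); for the liveness property "existence of $S_r$", every finite sequence can be extended into a satisfying one (Definition A.3).

```python
def shortest_violating_prefix(trace, S_r):
    """Absence of S_r is a safety property: any violating sequence has a
    finite violating prefix, ending at the first S_r occurrence."""
    for i, a in enumerate(trace):
        if a in S_r:
            return trace[:i + 1]
    return None  # this finite trace does not (yet) violate absence

def satisfying_extension(prefix, S_r):
    """Existence of S_r is a liveness property: any finite sequence can be
    extended into the property, e.g. by appending an action from S_r."""
    return prefix + [next(iter(S_r))]
```

This also shows why no finite prefix can ever refute a liveness property on its own: `satisfying_extension` always succeeds, regardless of the prefix.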
Under these four scopes, the behaviours absence, existence at most, universality, precedence and chain precedence will always result in safety properties as well. For each of these behaviours, some actions may not occur under certain circumstances (be it at all, after there have already been a number of occurrences of those actions, or when some other actions have not yet occurred); therefore, every violating path has a finite prefix, ending with the occurrence of such an action, that also violates the property. This leaves us with existence, existence at least, existence exactly, response and chain response. We further drop existence exactly since it is merely a conjunction of existence at least and existence at most. Both parts of the pattern can be expressed separately, so a separate formula for existence exactly is superfluous. We could apply a similar argument to the until scope: it is merely a combination of global and before. Saying that an action in $S_{r}$ has to occur until $S_{b}$ (existence until), for instance, is the same as saying that an action in $S_{r}$ has to occur at all (global existence) and that if there is an occurrence of $S_{b}$, there must be an occurrence of $S_{r}$ before it (existence before). This extends to after-until as well, although it requires a bit more care than simply combining after and between, since after always applies to the first occurrence of an action in $S_{a}$, whereas after-until refers to every occurrence. This could be achieved with minor modifications to the patterns. However, it turns out we can relatively easily incorporate the until and after-until scopes into our formulae, so we include them for convenience. ## Appendix B Representing Chain Response We here illustrate how the chain response behaviour can be represented using our template formulae by combining several response formulae. 
Consider, for example, sequences of two sets each: if an occurrence of $S_{q_{0}}$ is eventually followed by an occurrence of $S_{q_{1}}$, then there must subsequently be an occurrence of $S_{r_{0}}$ followed by $S_{r_{1}}$. There are two possible violating paths here: either $S_{q_{0}},S_{q_{1}}$ is not followed by $S_{r_{0}}$, or $S_{q_{0}},S_{q_{1}},S_{r_{0}}$ is not followed by $S_{r_{1}}$. These two violations cannot be slotted directly into the form of a single $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating path. Instead, we have two different $(\rho,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating paths. In general, if we have chain response with $S_{q_{0}}$ to $S_{q_{n}}$ and $S_{r_{0}}$ to $S_{r_{m}}$, then we get $m+1$ different violating paths. Specifically, for all $0\leq i\leq m$, we get a violating path that consists of a sequence $S_{q_{0}}$, $\ldots$, $S_{q_{n}}$, $S_{r_{0}}$, $\ldots$, $S_{r_{i-1}}$ that may not be followed by an occurrence of $S_{r_{i}}$. For convenience, we write such violating paths as a sequence $S_{0}$, $S_{1}$, $\ldots$, $S_{n}$ that may not be followed by $S_{n+1}$. Such violating paths can be expressed through $\rho_{\mathit{b}}=\mathit{\overline{\mathit{\alpha_{\mathit{e}}}}}^{\star}\cdot S_{0}\cdot\mathit{\overline{\mathit{\alpha_{\mathit{e}}\cup S_{1}}}}^{\star}\cdot S_{1}\cdot\mathit{\overline{\mathit{\alpha_{\mathit{e}}\cup S_{2}}}}^{\star}\cdot S_{2}\ldots\cdot\mathit{\overline{\mathit{\alpha_{\mathit{e}}\cup S_{n}}}}^{\star}\cdot S_{n}$, and $\alpha_{\mathit{f}}=S_{n+1}$. Each violating path must be given its own formula, where 1(a) is still used for the scope, and all resulting formulae are placed in conjunction. This way chain response can be represented. ###### Example B.1. 
Say we want to express chain response with the scope after-until under WFA, and we take the chain that an occurrence of an action in $S_{q_{0}}$, if followed by an action in $S_{q_{1}}$, needs to be followed by an action in $S_{r_{0}}$ and then by an action in $S_{r_{1}}$. The violating paths are occurrences of actions in $S_{q_{0}},S_{q_{1}}$ not followed by an action in $S_{r_{0}}$, and occurrences of actions in $S_{q_{0}},S_{q_{1}},S_{r_{0}}$ with no subsequent occurrence of an action in $S_{r_{1}}$. Both must be after the first occurrence of an action in $S_{a}$, and before the next occurrence of an action in $S_{b}$. The formula we need is then: $\displaystyle\neg(\langle\mathit{\mathit{\mathit{Act}}^{\star}\cdot S_{a}\cdot\mathit{\overline{\mathit{S_{b}}}}^{\star}\cdot S_{q_{0}}\cdot\mathit{\overline{\mathit{S_{b}\cup S_{q_{1}}}}}^{\star}\cdot S_{q_{1}}}\rangle$ $\displaystyle\quad\nu X.(\bigwedge_{a\in\overline{\mathit{\mathcal{B}}}}(\langle\mathit{a}\rangle\mathit{tt}\Rightarrow\langle\mathit{\mathit{\overline{\mathit{S_{r_{0}}}}}^{\star}}\rangle(\langle\mathit{S_{b}}\rangle\mathit{tt}\lor([\mathit{a}]\mathit{ff}\land X)\lor\langle\mathit{a\setminus S_{r_{0}}}\rangle X))))$ $\displaystyle\land$ $\displaystyle\neg(\langle\mathit{\mathit{\mathit{Act}}^{\star}\cdot S_{a}\cdot\mathit{\overline{\mathit{S_{b}}}}^{\star}\cdot S_{q_{0}}\cdot\mathit{\overline{\mathit{S_{b}\cup S_{q_{1}}}}}^{\star}\cdot S_{q_{1}}\cdot\mathit{\overline{\mathit{S_{b}\cup S_{r_{0}}}}}^{\star}\cdot S_{r_{0}}}\rangle$ $\displaystyle\quad\nu X.(\bigwedge_{a\in\overline{\mathit{\mathcal{B}}}}(\langle\mathit{a}\rangle\mathit{tt}\Rightarrow\langle\mathit{\mathit{\overline{\mathit{S_{r_{1}}}}}^{\star}}\rangle(\langle\mathit{S_{b}}\rangle\mathit{tt}\lor([\mathit{a}]\mathit{ff}\land X)\lor\langle\mathit{a\setminus S_{r_{1}}}\rangle X))))$ ## Appendix C Proofs of Feasibility In Section 6.2 we claimed WFA, WHFA and JA are feasible with respect to finite LTSs. 
In the proofs of the SFA and SHFA formulae, we will need feasibility of SFA and SHFA as well. In this appendix, we give those proofs. All our proofs assume a fixed LTSC $M=(\mathcal{S},s_{\mathit{init}},\mathit{Act},\mathit{Trans},\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}})$, although the $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ is only relevant for JA. We also refer to an arbitrary environment $\mathit{e}$ and a set of blocking actions $\mathcal{B}\subseteq\mathit{Act}$. When we refer to an arbitrary state or transition in a path in our proofs, it should be understood that we are referring to specific occurrences of those states and transitions, unless explicitly stated otherwise. Recall 6.3. ###### Proposition C.1. $\mathcal{B}$-weak fairness of actions is feasible. ###### Proof C.2. It is proven in [25, Theorem 6.1] that if only countably many actions are enabled in each state of a transition system, then weak fairness of actions with $\mathcal{B}=\emptyset$ is feasible. We have assumed a finite set $\mathit{Act}$, hence this theorem applies in our case. This means that every finite path can be extended to a path that is $\emptyset$-weakly fair. A path that satisfies $\emptyset$-WFA also satisfies $\mathcal{B}$-WFA for arbitrary $\mathcal{B}$, since $\emptyset$-WFA requires all actions in $\mathit{Act}$ to occur in every suffix in which they are perpetually enabled, and $\mathit{Act}$ is a superset of $\overline{\mathit{\mathcal{B}}}$. We conclude that $\mathcal{B}$-weak fairness of actions is feasible. ###### Proposition C.3. $\mathcal{B}$-strong fairness of actions is feasible. ###### Proof C.4. It is proven in [25, Theorem 6.1] that if only countably many actions are enabled in each state of a transition system, then strong fairness of actions with $\mathcal{B}=\emptyset$ is feasible. We have assumed a finite set $\mathit{Act}$, hence this theorem applies. 
Similarly to WFA, as argued in C.1, $\emptyset$-SFA implies $\mathcal{B}$-SFA for arbitrary $\mathcal{B}$ because $\mathcal{B}$-SFA requires only actions in $\overline{\mathit{\mathcal{B}}}$ to occur when they are relentlessly enabled. We conclude that $\mathcal{B}$-strong fairness of actions is feasible. For the two forms of hyperfairness, we first prove a supporting lemma. ###### Lemma C.5. Every finite path $\pi$ can be extended to a path $\pi^{\prime}$ that satisfies weak $\mathcal{B}$-hyperfairness of actions, such that all occurrences of blocking actions in $\pi^{\prime}$ are part of $\pi$. ###### Proof C.6. Let $\pi$ be an arbitrary finite path. We prove that $\pi$ can be extended to a path $\pi^{\prime}$ that satisfies weak $\mathcal{B}$-hyperfairness of actions, such that there are no occurrences of blocking actions in the extension. We do this through construction of the path $\pi^{\prime}$. We will construct $\pi^{\prime}$ in steps. Let $\pi_{i}$ with $i\geq 0$ be the path constructed in the $i$’th iteration, with $\pi_{0}=\pi$. Let $s_{i}$ be the last state of $\pi_{i}$. For this construction, we use a queue $Q$ containing non-blocking actions. At the start of the construction, $Q$ is initialised with exactly one copy of all non-blocking actions $\mathcal{B}$-reachable from $s_{0}$, the final state of $\pi$, in some arbitrary order. The construction has the following invariants: $Q$ contains exactly one copy of every non-blocking action $\mathcal{B}$-reachable in the final state of the path constructed so far. It may contain zero or one copies of non-blocking actions not $\mathcal{B}$-reachable from this state. It contains no blocking actions. Additionally, the only occurrences of blocking actions in the path constructed thus far are in $\pi$. At each step $i>0$, we do the following: first, we determine whether $Q$ is empty. If it is empty, we take $\pi_{i}=\pi_{i-1}$ and the construction terminates. If $Q$ is not empty, we pop the head $a$ from $Q$. 
If $a$ is not $\mathcal{B}$-reachable from $s_{i-1}$, then we let $\pi_{i}=\pi_{i-1}$ and go to step $i+1$. The invariants are maintained because we only removed $a$ from $Q$, and $a$ was not $\mathcal{B}$-reachable from $s_{i-1}=s_{i}$. If $a$ is $\mathcal{B}$-reachable from $s_{i-1}$, then there exists some path $\pi_{i-1}^{\prime}$ consisting of only non-blocking actions starting in $s_{i-1}$ and ending in a state $s_{i-1}^{\prime}$ such that some transition $t_{a}$ with $\mathit{act}(\mathit{t_{a}})=a$ is enabled in $s_{i-1}^{\prime}$. Let $\pi_{i}=\pi_{i-1}\cdot\pi_{i-1}^{\prime}t_{a}\mathit{trgt}(\mathit{t_{a}})$ and append $a$ back to the end of $Q$. Then continue to step $i+1$. The invariants are maintained in this case as well. This is because every action $\mathcal{B}$-reachable in $s_{i}$ must also have been $\mathcal{B}$-reachable from $s_{i-1}$, since $s_{i}$ is $\mathcal{B}$-reachable from $s_{i-1}$. Since $Q$ at the start of this step contains the same actions as at the end of this step, and by the invariant it contained all actions at the start that are $\mathcal{B}$-reachable from $s_{i-1}$, it also contains all actions that are $\mathcal{B}$-reachable from $s_{i}$ at the end of the step. Finally, the segment we added did not contain any blocking actions, and $a$ itself is non-blocking because all actions in $Q$ are non-blocking. There are two potential outcomes to this construction: either $Q$ becomes empty and the construction terminates, or $Q$ never becomes empty and the construction continues infinitely. We prove that in either case, the path $\pi^{\prime}$ that is ultimately constructed is weakly $\mathcal{B}$-hyperfair of actions and does not contain occurrences of blocking actions beyond those already present in $\pi$. * • If $Q$ becomes empty and the construction terminates, then the final path $\pi^{\prime}$ is $\pi_{i}$ for the $i$ on which $Q$ was determined to be empty.
The final state $s^{\prime}$ of $\pi^{\prime}$ is then a state in which no non-blocking actions are $\mathcal{B}$-reachable. Hence, there are no non-blocking actions perpetually $\mathcal{B}$-reachable on any suffix of $\pi^{\prime}$ and so $\pi^{\prime}$ is trivially $\mathcal{B}$-WHFA. * • If $Q$ never becomes empty then the construction continues forever. Let $\pi^{\prime}$ be the infinite path $\pi_{\infty}$. Let $\pi^{\prime\prime}$ be an arbitrary suffix of $\pi^{\prime}$, and let $a$ be an arbitrary action in $\overline{\mathit{\mathcal{B}}}$ that is perpetually $\mathcal{B}$-reachable on $\pi^{\prime\prime}$. We prove $a$ occurs in $\pi^{\prime\prime}$. Consider that if $a$ is perpetually $\mathcal{B}$-reachable on $\pi^{\prime\prime}$, then it is $\mathcal{B}$-reachable in every state of $\pi^{\prime\prime}$. Consider also that, since $\pi^{\prime\prime}$ is a suffix of the infinite path $\pi^{\prime}$, $\pi^{\prime\prime}$ is also infinite. In our construction, we add only a finite number of steps to the path in every iteration. Therefore, $\pi^{\prime\prime}$ was created as a part of $\pi^{\prime}$ over infinitely many iterations. Since $a$ is $\mathcal{B}$-reachable in every state of $\pi^{\prime\prime}$, by the invariants $a$ must be in $Q$ at the start of all iterations of the construction that contributed to $\pi^{\prime\prime}$, with the possible exception of the first. Since $Q$ is a queue and $\mathit{Act}$ is finite, $a$ will be at the head of the queue during the construction of $\pi^{\prime\prime}$ infinitely many times. Whenever $a$ was at the head of the queue during the construction, a finite number of steps were added to the path that ended with a transition labelled with $a$. Hence, $a$ occurs in $\pi^{\prime\prime}$, and so $\pi^{\prime}$ is $\mathcal{B}$-WHFA. In both cases, that no new occurrences of blocking actions are added to the path comes directly from the invariants. ###### Proposition C.7. Weak $\mathcal{B}$-hyperfairness of actions is feasible.
###### Proof C.8. This follows from C.5, which proves a stronger property. ###### Proposition C.9. Strong $\mathcal{B}$-hyperfairness of actions is feasible. ###### Proof C.10. We prove that every finite path $\pi$ can be extended to a path $\pi^{\prime}$ that satisfies strong $\mathcal{B}$-hyperfairness of actions. Let $\pi$ be an arbitrary finite path; then by C.5 we know that there exists a path $\pi^{\prime}$ that extends $\pi$ and satisfies weak $\mathcal{B}$-hyperfairness of actions, and has no occurrences of blocking actions save those already present in $\pi$. We will use $\pi^{\prime}$ to witness that there exists a strongly $\mathcal{B}$-hyperfair extension of $\pi$, by proving $\pi^{\prime}$ satisfies strong $\mathcal{B}$-hyperfairness as well as weak $\mathcal{B}$-hyperfairness. Towards a contradiction, assume that $\pi^{\prime}$ does not satisfy strong $\mathcal{B}$-hyperfairness of actions. Then $\pi^{\prime}$ must have a suffix $\pi^{\prime\prime}$ such that there is an action $a\in\overline{\mathit{\mathcal{B}}}$ that is relentlessly $\mathcal{B}$-reachable in $\pi^{\prime\prime}$ and yet does not occur in $\pi^{\prime\prime}$. If $a$ is relentlessly $\mathcal{B}$-reachable in $\pi^{\prime\prime}$, it is also relentlessly $\mathcal{B}$-reachable in every suffix of $\pi^{\prime\prime}$. Let $\pi^{\prime\prime\prime}$ be a suffix of $\pi^{\prime\prime}$ such that $\pi^{\prime\prime\prime}$ does not contain any occurrences of blocking actions. That such a suffix exists follows from $\pi^{\prime}$ only having occurrences of blocking actions in the finite prefix $\pi$. We now have the path $\pi^{\prime\prime\prime}$ on which $a$ is relentlessly $\mathcal{B}$-reachable and that does not contain occurrences of blocking actions. Let $s$ be an arbitrary state on $\pi^{\prime\prime\prime}$.
Since $a$ is relentlessly $\mathcal{B}$-reachable, there must be a state $s^{\prime}$ on $\pi^{\prime\prime\prime}$ past $s$ such that $a$ is $\mathcal{B}$-reachable from $s^{\prime}$. And since there are no occurrences of blocking actions on $\pi^{\prime\prime\prime}$, $a$ is also $\mathcal{B}$-reachable from $s$. Hence, $a$ is $\mathcal{B}$-reachable from every state of $\pi^{\prime\prime\prime}$ and is therefore perpetually $\mathcal{B}$-reachable on $\pi^{\prime\prime\prime}$. We constructed $\pi^{\prime\prime\prime}$ as a suffix of $\pi^{\prime\prime}$ which is a suffix of $\pi^{\prime}$, so $\pi^{\prime\prime\prime}$ is a suffix of $\pi^{\prime}$ as well. We know that $\pi^{\prime}$ satisfies weak $\mathcal{B}$-hyperfairness, so since $a$ is perpetually $\mathcal{B}$-reachable on $\pi^{\prime\prime\prime}$, a suffix of $\pi^{\prime}$, $a$ also occurs in $\pi^{\prime\prime\prime}$. Since $\pi^{\prime\prime\prime}$ is a suffix of $\pi^{\prime\prime}$, we know that $a$ occurs on $\pi^{\prime\prime}$. However, we assumed previously that $a$ does not occur on $\pi^{\prime\prime}$. We have reached a contradiction and therefore conclude that $\pi^{\prime}$ satisfies strong $\mathcal{B}$-hyperfairness as well as weak $\mathcal{B}$-hyperfairness. ###### Proposition C.11. $\mathcal{B}$-justness of actions is feasible. ###### Proof C.12. Let $\pi$ be an arbitrary finite path. We prove $\pi$ can be extended to a path $\pi^{\prime}$ satisfying $\mathcal{B}$-justness of actions. We do this through construction of such a path $\pi^{\prime}$. We do this in steps, where $\pi_{i}$ with $i\geq 0$ represents the path constructed in step $i$. Let $\pi_{0}=\pi$. Let $s_{i}$ be the last state of $\pi_{i}$ for all $i\geq 0$. For this construction we use a queue $Q$. The initial contents of $Q$ are determined by $\pi_{0}$: it contains exactly one copy of every non-blocking action that is enabled in some state of $\pi_{0}$ but has not been subsequently eliminated.
The order of these actions is arbitrary. The construction has the following invariant: $Q$ contains exactly one copy of every non-blocking action that is enabled in some state of the path constructed so far, but has not been subsequently eliminated. Trivially, this invariant holds at initialisation. The construction proceeds as follows: in step $i$, with $i>0$, we construct $\pi_{i}$ from $\pi_{i-1}$ using $Q$. At this point, $Q$ contains exactly one copy of every non-blocking action that was enabled in some state of $\pi_{i-1}$ but has not subsequently been eliminated. If $Q$ is empty, let $\pi_{i}=\pi_{i-1}$ and the construction terminates. Otherwise, we pop the head of $Q$, let this action be $a$. The action $a$ must have been enabled in some state $s_{a}$ of $\pi_{i-1}$ such that the subpath $\pi_{a}$ of $\pi_{i-1}$ from $s_{a}$ to $s_{i-1}$ does not contain an occurrence of an action that eliminates $a$. By the second property of concurrency relations on actions, $a$ must still be enabled in $s_{i-1}$. Let $t_{a}$ be a transition enabled in $s_{i-1}$ with $\mathit{act}(\mathit{t_{a}})=a$, let $\pi_{i}=\pi_{i-1}t_{a}\mathit{trgt}(\mathit{t_{a}})$. We modify $Q$ in two steps: firstly, every action $b$ that is in $Q$ such that $b\mathbin{{\centernot\smile}^{\raisebox{-1.20552pt}{\tiny$\bullet$}}}a$ is removed from $Q$. Secondly, every non-blocking action that is enabled in $\mathit{trgt}(\mathit{t_{a}})=s_{i}$ that is not yet in $Q$ gets appended to $Q$ in some arbitrary order. At this point, the invariant is again satisfied: by removing all actions that are eliminated by $a$ from $Q$, we ensure that $Q$ no longer contains those actions that were not eliminated in $\pi_{i-1}$ but are eliminated in $\pi_{i}$. By afterwards adding those actions that are enabled in $s_{i}$, we include those actions that are newly enabled without being eliminated yet. We proceed to the next iteration. 
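The iteration just described behaves like a round-robin scheduler over pending obligations. The following Python sketch renders it on a toy transition system; the state space, the trivial concurrency relation (each action eliminates only itself), and all names are hypothetical illustrations rather than part of the formal development, and the sketch assumes the persistence property so that the scheduled action is still enabled when its turn comes.

```python
from collections import deque

# Toy LTS (hypothetical): each state maps to its outgoing (action, target) pairs.
TRANS = {
    "s0": [("a", "s1"), ("b", "s0")],
    "s1": [("b", "s2")],
    "s2": [],
}
BLOCKING = set()  # the set B of blocking actions

def interferes(b, a):
    # Minimal concurrency relation: an action eliminates only itself,
    # satisfying the first required property.
    return b == a

def enabled(state):
    return [act for act, _ in TRANS[state] if act not in BLOCKING]

def step(state, action):
    return next(tgt for act, tgt in TRANS[state] if act == action)

def extend_just(start, max_steps=100):
    """Schematic queue-based construction: schedule the head of Q,
    prune the actions it eliminates, then enqueue newly enabled ones."""
    path, state = [start], start
    queue = deque(enabled(start))  # actions enabled but not yet eliminated
    for _ in range(max_steps):
        if not queue:
            break  # no pending obligations: the final state is B-locked here
        a = queue.popleft()
        state = step(state, a)     # persistence: `a` is still enabled
        path += [a, state]
        queue = deque(b for b in queue if not interferes(b, a))
        for b in enabled(state):   # newly enabled actions not yet queued
            if b not in queue:
                queue.append(b)
    return path
```

On this toy system the construction terminates in the $\mathcal{B}$-locked state `s2`, mirroring the terminating case of the proof.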
This construction either terminates after finitely many steps, or continues forever; in the latter case it results in an infinite path. We show that in either case, the constructed path $\pi^{\prime}$ satisfies $\mathcal{B}$-justness of actions. * • If the construction terminates during step $i$, then $\pi^{\prime}=\pi_{i}$. By the invariant, $Q$ contains exactly those non-blocking actions that are enabled in $\pi^{\prime}$ without being subsequently eliminated, and $Q$ must be empty because the construction terminated. Let $s^{\prime}$ be the final state of $\pi^{\prime}$. If there are non-blocking actions enabled in $s^{\prime}$, then those actions are enabled in $\pi^{\prime}$ without being subsequently eliminated, since there are no further transitions in $\pi^{\prime}$. Since no such actions can exist, we know $s^{\prime}$ is a $\mathcal{B}$-locked state. If there were a non-blocking action enabled on $\pi^{\prime}$ that has not subsequently been eliminated through the occurrence of an interfering action, then by the second property of concurrency relations on actions that non-blocking action should still be enabled in $s^{\prime}$. Since $s^{\prime}$ is a $\mathcal{B}$-locked state, this is impossible and hence $\pi^{\prime}$ is $\mathcal{B}$-JA. * • If the construction never terminates, then we construct an infinite path $\pi_{\infty}=\pi^{\prime}$. Let $s$ be some arbitrary state of $\pi^{\prime}$ and let $a$ be an arbitrary non-blocking action that is enabled in $s$. We prove that $a$ is eliminated in the suffix $\pi_{a}^{\prime}$ of $\pi^{\prime}$ starting in $s$. Towards a contradiction, assume that $a$ is not eliminated in $\pi^{\prime}_{a}$. Let $i>0$ be the first iteration of the construction such that $s_{i-1}$ is in $\pi_{a}^{\prime}$.
Since $s_{i-1}$ is in $\pi_{a}^{\prime}$, it either is $s$ or comes after $s$, and since $a$ is not eliminated in $\pi_{a}^{\prime}$ it must be the case, by the second property of concurrency relations on actions, that $a$ is enabled in $s_{i-1}$. Hence, by the invariant, $Q$ must have contained $a$ at the start of iteration $i$. Since $Q$ contains at most one copy of every non-blocking action and $\mathit{Act}$ is finite, there are finitely many actions before $a$ in the queue. Every iteration, at least one action gets removed from $Q$ and new actions get appended. Hence, $a$ is either removed early or eventually becomes the head of the queue. If $a$ is removed early, this is because an action occurred that eliminates $a$, hence $a$ is eliminated in $\pi_{a}^{\prime}$. If $a$ becomes the head of the queue, then we ensure $a$ itself occurs. By the first property of concurrency relations on actions, $a$ eliminates itself. In this case too, $a$ is eliminated in $\pi_{a}^{\prime}$. This contradicts our assumption that $a$ was not eliminated. We conclude that all non-blocking actions that are enabled in some state of $\pi^{\prime}$ are subsequently eliminated. Hence, $\pi^{\prime}$ satisfies $\mathcal{B}$-justness of actions. In either case, the finite path $\pi$ can be extended to a path $\pi^{\prime}$ that satisfies $\mathcal{B}$-JA. We conclude $\mathcal{B}$-justness of actions is feasible. ## Appendix D Correctness of Formulae In this appendix, we provide the correctness proofs for the presented formulae. First, there is a supporting proposition we use repeatedly throughout the different proofs. All our proofs assume a fixed LTSC $M=(\mathcal{S},s_{\mathit{init}},\mathit{Act},\mathit{Trans},\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}})$, although the $\mathbin{\smile^{\raisebox{-0.60275pt}{\tiny$\bullet$}}}$ is only relevant for JA. We also refer to an arbitrary environment $\mathit{e}$ and set of blocking actions $\mathcal{B}\subseteq\mathit{Act}$. 
We define the length of a finite path to be the number of transitions occurring in it. A path of length $0$ contains only a single state and is called the empty path. ### D.1 Supporting Proposition The following proposition gives the semantics of a least fixed point formula that occurs in several of our presented formulae. ###### Proposition D.1. For all states $s\in\mathcal{S}$, formal variables $Y$, modal $\mu$-calculus formulae $\phi_{1}$ and $\phi_{2}$ that do not depend on $Y$, and sets of actions $\alpha$, it is the case that $s$ is in $\llbracket\mu\mathit{Y}.(\mathit{\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)})\rrbracket$ if, and only if, $s$ admits a finite path $\pi$ satisfying the following requirements: 1. 1. all actions occurring in $\pi$ are in $\alpha$, and 2. 2. all states in $\pi$ are in $\llbracket\phi_{1}\rrbracket$, and 3. 3. the final state of $\pi$ is in $\llbracket\phi_{2}\rrbracket$. We first need to prove a supporting lemma. To prove this lemma, we use an alternative characterisation of the semantics of least fixpoints from the one we presented in Section 2.2. We only give the definitions that we require for our proofs. The following presentation can be found in [8, 9], amongst others. Let $Y$ be an arbitrary formal variable, $\phi$ be an arbitrary modal $\mu$-calculus formula and $\mathit{e}$ an arbitrary environment.
Let $T$ be the transformer associated with $\mu Y.\phi$, defined as $T(\mathcal{F})=\\{s\in\mathcal{S}\mid s\in\llbracket\phi\rrbracket_{\mathit{e}[Y:=\mathcal{F}]}\\}$ And define $\displaystyle T^{0}(\mathcal{F})$ $\displaystyle=\mathcal{F}$ $\displaystyle T^{i+1}(\mathcal{F})$ $\displaystyle=T(T^{i}(\mathcal{F}))$ Then we can calculate the semantics of $\mu\mathit{Y}.\mathit{\phi}$ under $\mathit{e}$ as: $\displaystyle\llbracket\mu\mathit{Y}.\mathit{\phi}\rrbracket_{\mathit{e}}$ $\displaystyle=\bigcup_{0\leq i\leq|\mathcal{S}|}T^{i}(\emptyset)$ Note that this definition only works for finite systems, since it uses $|\mathcal{S}|$. A version exists for infinite systems, but is not relevant here. We call $T^{i}(\emptyset)$ the $i$’th approximation of $\phi$. For the subsequent lemmas, we fix formal variable $Y$. ###### Lemma D.2. For all environments $\mathit{e}$, states $s\in\mathcal{S}$, modal $\mu$-calculus formulae $\phi_{1}$ and $\phi_{2}$ that do not depend on $Y$, sets of actions $\alpha$, and natural numbers $0\leq i\leq|\mathcal{S}|$, it holds that: $s$ is in the $i$’th approximation of $\mu\mathit{Y}.(\mathit{\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)})$ under $\mathit{e}$ if, and only if, $s$ admits a finite path $\pi$ meeting the following conditions: 1. 1. $\pi$ has length at most $i-1$, and 2. 2. only actions in $\alpha$ occur in $\pi$, and 3. 3. all states in $\pi$ are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and 4. 4. the final state of $\pi$ is in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. ###### Proof D.3. Let $T$ be the transformer of $\mu\mathit{Y}.(\mathit{\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)})$. We prove that $s$ is in $T^{i}(\emptyset)$ if, and only if, $s$ admits a finite path $\pi$ of length at most $i-1$ on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state $s^{\prime}\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. 
We do this by induction on $i$. For the first _base_, take $i=0$. Note that $s$ is in the $0$’th approximation if, and only if, $s\in T^{0}(\emptyset)$. However, $T^{0}(\emptyset)=\emptyset$, so $s$ cannot be in the $0$’th approximation. Indeed, we cannot have a path of length at most $-1$. So in both directions of the bi-implication, the left side of the implication does not hold. For the second _base_, take $i=1$. We prove the bi-implication. * • First, assume $s$ is in the first approximation. Then $s\in T^{1}(\emptyset)=\\{s\in\mathcal{S}\mid s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=\emptyset]}\\}$. Hence, $s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=\emptyset]}$. Through the semantics of the modal $\mu$-calculus, and using that $\phi_{1}$ and $\phi_{2}$ do not depend on $Y$, this becomes $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}}\cap(\llbracket\phi_{2}\rrbracket_{\mathit{e}}\cup\llbracket\langle\mathit{\alpha}\rangle Y\rrbracket_{\mathit{e}[Y:=\emptyset]})$. Therefore, $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and $s$ is in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$ or $s\in\\{s\in\mathcal{S}\mid\exists_{s^{\prime}\in\mathcal{S}}.s\xrightarrow{\alpha}s^{\prime}\land s^{\prime}\in\emptyset\\}$. It is not possible for a state $s^{\prime}$ to exist that is in $\emptyset$, hence $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. Let $\pi$ be the empty path from $s$. No actions occur on $\pi$, so trivially all occurring actions are in $\alpha$. Since $s$ satisfies both $\phi_{1}$ and $\phi_{2}$ under $\mathit{e}$, the other conditions are met as well. We conclude that $s$ admits a finite path of length at most 0 on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$.
* • Second, assume $s$ admits a path of length at most $0$ on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. The only path starting in $s$ of length at most $0$ is the path consisting of only $s$. Hence, $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}}$ and $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. Since $\phi_{1}$ and $\phi_{2}$ do not depend on $Y$, we also have $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}[Y:=\emptyset]}$ and $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}[Y:=\emptyset]}$. If $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}[Y:=\emptyset]}$, it is also in the superset $\llbracket\phi_{2}\rrbracket_{\mathit{e}[Y:=\emptyset]}\cup\llbracket\langle\mathit{\alpha}\rangle Y\rrbracket_{\mathit{e}[Y:=\emptyset]}$. We conclude that $s\in\\{s\in\mathcal{S}\mid s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=\emptyset]}\\}=T^{1}(\emptyset)$ and hence $s$ is in the first approximation. The _induction hypothesis_ we use is that a state $s^{\prime}$ is in the $k$’th approximation of $\mu Y.(\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y))$ if, and only if, $s^{\prime}$ admits a finite path of length at most $k-1$ on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. This holds for all $k\geq 1$. For the _step_ case, we prove the claim for $k+1$. Let $S$ be the set of states that admit finite paths of length at most $k-1$ on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$ and which end in a state in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. By the induction hypothesis, $S=T^{k}(\emptyset)$. Since this lemma is a bi-implication, we prove both directions separately.
* • We assume $s\in T^{k+1}(\emptyset)$. We need to prove $s$ admits a path $\pi$ that is of length at most $k+1-1=k$, on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. We have $s\in T^{k+1}(\emptyset)=T(T^{k}(\emptyset))=T(S)$. Hence, $s\in\\{s\in\mathcal{S}\mid s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=S]}\\}$. This reduces to $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}\lor s\in\\{s\in\mathcal{S}\mid\exists_{s^{\prime}\in\mathcal{S}}.s\xrightarrow{\alpha}s^{\prime}\land s^{\prime}\in S\\}$, because $\phi_{1}$ and $\phi_{2}$ do not depend on $Y$. We do a case distinction on whether $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. * – If $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$, then the path $\pi$ consisting of only $s$ is a path of length 0 on which only actions in $\alpha$ occur, and all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state in $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. Since we assumed $k\geq 1$, we know $0\leq k$, hence $s$ admits a path meeting the requirements of length at most $k$. * – If $s\not\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$, then $s\in\\{s\in\mathcal{S}\mid\exists_{s^{\prime}\in\mathcal{S}}.s\xrightarrow{\alpha}s^{\prime}\land s^{\prime}\in S\\}$. Hence, there exists a state $s^{\prime}$ such that there exists an $\alpha$-transition $t$ from $s$ to $s^{\prime}$ and $s^{\prime}$ is in $S$. Since $s^{\prime}\in S$, we know $s^{\prime}$ admits a path $\pi^{\prime}$ of length at most $k-1$, on which only actions in $\alpha$ occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state satisfying $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. Let $\pi=st\pi^{\prime}$.
Since $\pi^{\prime}$ has length at most $k-1$ and we added one transition, $\pi$ has length at most $k$. Additionally, $t$ is an $\alpha$-transition, as are all transitions in $\pi^{\prime}$, so all transitions in $\pi$ are labelled with actions in $\alpha$. Since $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}}$ and all states in $\pi^{\prime}$ are as well, all states on $\pi$ meet this requirement. Finally, since $\pi^{\prime}$ ends in a state satisfying $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$, so does $\pi$. Hence, $\pi$ is a witness that $s$ admits a path meeting all requirements. In both cases $s$ admits such a path $\pi$ of length at most $k$. * • We assume $s$ admits a path $\pi$ of length at most $k$ such that all transitions on $\pi$ are labelled with actions in $\alpha$, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and $\pi$ ends in a state satisfying $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. We prove $s\in T^{k+1}(\emptyset)=T(T^{k}(\emptyset))=T(S)=\\{s\in\mathcal{S}\mid s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=S]}\\}$. We do a case distinction on whether the length of $\pi$ is zero. * – If the length of $\pi$ is zero, then $\pi=s$ and $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. Additionally, since all states on $\pi$ are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, so is $s$. Since $\phi_{1}$ and $\phi_{2}$ do not depend on $Y$, we also have $s\in\llbracket\phi_{1}\rrbracket_{\mathit{e}[Y:=S]}$ and $s\in\llbracket\phi_{2}\rrbracket_{\mathit{e}[Y:=S]}$, and hence also $s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=S]}$. Thus, $s\in T(S)=T^{k+1}(\emptyset)$. * – If the length of $\pi$ is greater than zero, then there is at least one transition in $\pi$. Let $t$ be the first transition of $\pi$. Since there are only $\alpha$-transitions in $\pi$, $t$ is an $\alpha$-transition. 
Let $s^{\prime}$ be the target of $t$, and let $\pi^{\prime}$ be the suffix of $\pi$ starting in $s^{\prime}$. Then since the length of $\pi$ is at most $k$, the length of $\pi^{\prime}$ is at most $k-1$. Hence, $\pi^{\prime}$ witnesses that $s^{\prime}$ admits a path of length at most $k-1$ on which only $\alpha$-transitions occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state satisfying $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. Hence, $s^{\prime}\in S$. So $s$ admits an $\alpha$-transition, namely $t$, to a state in $S$, namely $s^{\prime}$. Therefore $s\in\llbracket\langle\mathit{\alpha}\rangle Y\rrbracket_{\mathit{e}[Y:=S]}$ and hence also $s\in\llbracket\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)\rrbracket_{\mathit{e}[Y:=S]}$. We conclude that $s\in T(S)=T^{k+1}(\emptyset)$. In both cases we demonstrate that $s$ is in the $k+1$’th approximation. We have proven both sides of the bi-implication that $s$ is in the $k+1$’th approximation of $\mu Y.(\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y))$ if, and only if, $s$ admits a finite path of length at most $k$ on which only $\alpha$ actions occur, all states are in $\llbracket\phi_{1}\rrbracket_{\mathit{e}}$, and which ends in a state satisfying $\llbracket\phi_{2}\rrbracket_{\mathit{e}}$. This proves the step case. By induction, we have proven the claim holds for all $i\geq 0$. Therefore it also holds for all $0\leq i\leq|\mathcal{S}|$. We conclude the lemma holds. We can now prove the main claim, D.1: See D.1 ###### Proof D.4. This claim is a bi-implication, so we prove both directions. * • If $s$ is in the semantics of $\mu\mathit{Y}.(\mathit{\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)})$, then $s$ is in the least fixed point of the transformer $T$ matching this formula. Hence, there are one or more natural numbers $0\leq i\leq|\mathcal{S}|$ such that $s\in T^{i}(\emptyset)$.
Let $i$ be the smallest such number, then by D.2, $s$ admits a path of length at most $i-1$ that meets all three conditions. This path witnesses that $s$ indeed admits a path meeting all three conditions. * • Assume $s$ admits at least one finite path that satisfies all three conditions. Let $\pi$ be the shortest such path that $s$ admits. Let $k$ be the length of $\pi$. We first prove that $0\leq k+1\leq|\mathcal{S}|$. Trivially, a path has length at least $0$, so $0\leq k+1$. Towards a contradiction, assume $k+1>|\mathcal{S}|$. A path of length $j$ contains $j+1$ individual instances of states: the initial state of the path and the target of every transition on the path. The path $\pi$ has length $k$, and so contains at least $k+1$ individual occurrences of states, and $k+1>|\mathcal{S}|$. Hence, $\pi$ contains strictly more than $|\mathcal{S}|$ individual instances of states. Considering there are exactly $|\mathcal{S}|$ states in the LTS, by the pigeonhole principle there must be at least one state $s^{\prime}$ that is visited at least twice on $\pi$. Let $\pi_{1}$ be the prefix of $\pi$ up until the first occurrence of $s^{\prime}$, and let $\pi_{2}$ be the suffix of $\pi$ starting in the last occurrence of $s^{\prime}$. Now let $\pi^{\prime}=\pi_{1}\cdot\pi_{2}$. This is a valid path, since $\pi_{1}$ ends in state $s^{\prime}$ and $\pi_{2}$ starts in this state. $\pi^{\prime}$ contains a subset of the actions and states of $\pi$, and has the same final state as $\pi$, so it satisfies all three conditions. Since we chose $s^{\prime}$ to be a state that occurred more than once on $\pi$, $\pi^{\prime}$ contains at least one transition less than $\pi$ and hence $\pi\neq\pi^{\prime}$. In fact, $\pi^{\prime}$ is shorter than $\pi$. However, we chose $\pi$ to be the shortest path that meets all three conditions starting in $s$. We have reached a contradiction. Hence, we conclude that $k+1\leq|\mathcal{S}|$. 
This means that $s$ admits a path of length at most $k$ with $0\leq k+1\leq|\mathcal{S}|$ that meets the three conditions. By D.2, this means that $s$ is in the $k+1$’th approximation of $\mu\mathit{Y}.(\mathit{\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)})$. Since the semantics of the formula are the union of all approximations, we conclude $s$ is in $\llbracket\mu\mathit{Y}.(\mathit{\phi_{1}\land(\phi_{2}\lor\langle\mathit{\alpha}\rangle Y)})\rrbracket$. We have proven both directions of the bi-implication. ### D.2 Proof of Progress Formula We prove Theorem 6.1: See 6.1 Formula 1 is: $\neg\langle\mathit{\rho}\rangle\nu X.(\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X)$ We fix arbitrary $\mathcal{B}$, $\rho$, $\alpha_{\mathit{f}}$ and $\alpha_{\mathit{e}}$ for this proof. We first prove the formula without the $\neg\langle\mathit{\rho}\rangle$ at the start. For this, let $S_{P}$ be the set of states that admit $\mathcal{B}$-progressing paths that are $(\varepsilon,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. In other words, these are the states that admit paths that are $\mathcal{B}$-progressing and that are $\alpha_{\mathit{f}}$-free up until an occurrence of an action in $\alpha_{\mathit{e}}$. We first prove that $S_{P}$ is a fixed point of $\nu X.(\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X)$, and then that it is the greatest fixed point. ###### Lemma D.5. 
$S_{P}$ is a fixed point of the transformer $T_{P}$ defined by: $T_{P}(\mathcal{F})=\\{s\in\mathcal{S}\mid s\in\llbracket\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X\rrbracket_{\mathit{e}[X:=\mathcal{F}]}\\}$ ###### Proof D.6. To prove $S_{P}$ is a fixed point of $T_{P}$, we prove $T_{P}(S_{P})=S_{P}$. We do this through mutual set inclusion. * • Let $s$ be an arbitrary state in $T_{P}(S_{P})$. We therefore know that $s\in\llbracket\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X\rrbracket_{\mathit{e}[X:=S_{P}]}$. We do a case distinction on which of those conditions $s$ satisfies. * – If $s$ satisfies $\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}$, then there is a transition $t$ enabled in $s$ that is labelled with an action in $\alpha_{\mathit{e}}$. Let $\pi=st\mathit{trgt}(\mathit{t})$. This is a path that is $\alpha_{\mathit{f}}$-free up until the first occurrence of an action in $\alpha_{\mathit{e}}$. We now extend $\pi$ arbitrarily, either until a $\mathcal{B}$-locked state is reached or infinitely. This is always possible: as long as we are not in a $\mathcal{B}$-locked state there is always a non-blocking action enabled that can be appended to the path we are constructing. This way, a $\mathcal{B}$-progressing path that is $(\varepsilon,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating is constructed. * – If $s$ satisfies $[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}$, then $s$ is a $\mathcal{B}$-locked state. Hence, the empty path is a path that $s$ admits that is $\mathcal{B}$-progressing and on which, trivially, no actions in $\alpha_{\mathit{f}}$ occur.
Hence, $s$ admits a $\mathcal{B}$-progressing path that is $(\varepsilon,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. * – If $s$ satisfies $\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X$ with $X=S_{P}$, then $s$ admits a transition labelled with an action not in $\alpha_{\mathit{f}}$ to a state in $S_{P}$. Let $t$ be such a transition and $s^{\prime}=\mathit{trgt}(\mathit{t})$. Then since $s^{\prime}\in S_{P}$, $s^{\prime}$ admits a path $\pi^{\prime}$ that is $\mathcal{B}$-progressing and $\alpha_{\mathit{f}}$-free up until the first occurrence of an action in $\alpha_{\mathit{e}}$. Let $\pi=st\pi^{\prime}$: this path too is $\mathcal{B}$-progressing and $\alpha_{\mathit{f}}$-free up until the first occurrence of an action in $\alpha_{\mathit{e}}$. Hence, $\pi$ witnesses that $s$ admits a $\mathcal{B}$-progressing path that is $(\varepsilon,\alpha_{\mathit{f}},\alpha_{\mathit{e}})$-violating. Therefore $s\in T_{P}(S_{P})\Rightarrow s\in S_{P}$. * • Let $s$ be an arbitrary state in $S_{P}$, then $s$ admits a path $\pi$ that is $\mathcal{B}$-progressing and $\alpha_{\mathit{f}}$-free up until the first occurrence of $\alpha_{\mathit{e}}$. We prove $s\in\llbracket\langle\mathit{\alpha_{\mathit{e}}}\rangle\mathit{tt}\lor[\mathit{\overline{\mathit{\mathcal{B}}}}]\mathit{ff}\lor\langle\mathit{\overline{\mathit{\alpha_{\mathit{f}}}}}\rangle X\rrbracket_{\mathit{e}[X:=S_{P}]}$. First, we do a case distinction on whether $\pi$ is the empty path. * –
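For a finite labelled transition system, the greatest fixed point characterised here can be computed by iterating the transformer from the full state space until it stabilises (Knaster-Tarski). A minimal Python sketch; the tuple encoding of transitions and the concrete action sets below are illustrative assumptions, not from the paper:

```python
def violating_states(states, trans, a_e, a_f, blocking):
    """Greatest fixed point of  nu X.(<a_e>tt OR [~B]ff OR <~a_f>X)
    on a finite LTS.  trans: iterable of (src, action, tgt) triples;
    blocking: the set B of blocking actions."""
    X = set(states)                      # nu-iteration starts from the top element
    while True:
        def holds(s):
            out = [(a, t) for (src, a, t) in trans if src == s]
            if any(a in a_e for a, _ in out):            # <a_e>tt
                return True
            if all(a in blocking for a, _ in out):       # [~B]ff: s is B-locked
                return True
            return any(a not in a_f and t in X for a, t in out)  # <~a_f>X
        new_X = {s for s in states if holds(s)}
        if new_X == X:                   # stabilised: greatest fixed point reached
            return X
        X = new_X
```

On a three-state example with transitions $0\xrightarrow{b}1$, $1\xrightarrow{e}2$, $2\xrightarrow{f}2$ and $\alpha_{e}=\{e\}$, $\alpha_{f}=\{f\}$, $\mathcal{B}=\emptyset$, the iteration returns $\{0,1\}$: state 2 can only continue via the forbidden action $f$, while deadlocked states are $\mathcal{B}$-locked and hence included.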
# Strong limit of processes constructed from a renewal process Xavier Bardina and Carles Rovira111 X. Bardina and C. Rovira are supported by the grant PID2021-123733NB-I00 from SEIDI, Ministerio de Economia y Competividad. ###### Abstract We construct a family of processes, from a renewal process, that have realizations that converge almost surely to Brownian motion, uniformly on the unit time interval. Finally, we compute the rate of convergence in a particular case. MSC(2010): 60F17, 60G15 Keywords: strong convergence, renewal process, Brownian motion ## 1 Introduction In this paper we study realizations of processes that converge almost surely, uniformly on the unit time interval, to the standard Brownian motion. In the mathematical literature we can find papers studying the strong convergence of random walks or of the processes usually called uniform transport processes. Our aim is to deal with extensions of the uniform transport process. The uniform transport process, introduced by Kac in [10], can be written as $y_{n}(t)=\frac{1}{n}(-1)^{A}\int_{0}^{n^{2}t}(-1)^{N(u)}du,$ where $N=\\{N(t),\,t\geq 0\\}$ is a standard Poisson process and $A\sim\textrm{Bernoulli}\left(\frac{1}{2}\right)$ is independent of the Poisson process $N$. Griego, Heath and Ruiz-Moncayo [9] showed that these processes converge strongly and uniformly on bounded time intervals to Brownian motion. Gorostiza and Griego [8] and Csörgő and Horváth [2] obtained a rate of convergence. More precisely, in [8] it is proved that there exist versions of the transport processes $\tilde{y}_{n}$ on the same probability space as a given Brownian motion $(y(t))_{t\geq 0}$ such that, for each $q>0$, $P\left(\sup_{a\leq t\leq b}|\tilde{y}_{n}(t)-y(t)|>{C}n^{-\frac{1}{2}}\left(\log n\right)^{\frac{5}{2}}\right)=o\left({{n}^{-q}}\right),$ as $n\to\infty$, where $C$ is a positive constant depending on $a$, $b$ and $q$. These bounds are improved in [11] using an explicit computation in the Skorokhod embedding problem.
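The strong convergence of $y_{n}$ can be illustrated numerically. Since the integrand $(-1)^{N(u)}$ is constant between consecutive Poisson jump times, the integral can be evaluated exactly on each interval; the following sketch (grid and parameters are illustrative) simulates one realization:

```python
import numpy as np

def uniform_transport(n, t_grid, rng):
    """One realization of y_n(t) = (1/n) * (-1)^A * int_0^{n^2 t} (-1)^{N(u)} du,
    where N is a rate-1 Poisson process and A ~ Bernoulli(1/2)."""
    horizon = n**2 * t_grid[-1]
    gaps = rng.exponential(1.0, size=int(2 * horizon) + 100)
    jumps = np.cumsum(gaps)                       # Poisson jump times S_1 < S_2 < ...
    lengths = np.diff(np.concatenate(([0.0], jumps)))
    signs = (-1.0) ** np.arange(len(lengths))     # value of (-1)^{N(u)} on each interval
    F_at_jumps = np.concatenate(([0.0], np.cumsum(signs * lengths)))
    u = n**2 * t_grid
    k = np.searchsorted(jumps, u, side="right")   # number of jumps in [0, u]
    left = np.concatenate(([0.0], jumps))[k]      # last jump time before u
    F = F_at_jumps[k] + (-1.0) ** k * (u - left)  # exact value of the integral at u
    A = rng.integers(0, 2)
    return (1.0 / n) * (-1.0) ** A * F
```

Since $|(-1)^{N(u)}|=1$, every realization is Lipschitz with constant $n$, while for large $n$ the path becomes visually indistinguishable from a Brownian path.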
Furthermore, we can find several papers (see for instance [7], [1], [3], [4], [5], [6]) where the authors defined a sequence of processes, obtained as modifications of the uniform transport process, that converges strongly to some Gaussian processes uniformly on bounded intervals. Nevertheless, all these papers are based on processes built from a Poisson process. Let us recall that the Poisson process has jump times with exponential laws and so, we are able to use all the particular properties of this distribution. Our aim is to deal with jump times that do not have an exponential law. We consider extensions of the uniform transport process using a renewal reward process instead of a Poisson process. As far as the authors know, this type of process has not been studied. We consider $x_{n}(t)=h(n)\int_{0}^{g(n)t}(-1)^{T(u)}du,$ where $T=\\{T(t),\,t\geq 0\\}$ is a renewal reward process and $h$ and $g$ are nonnegative functions defined on ${\mathbb{N}}$. We will show that for a wide class of renewal reward processes we have, when $n$ goes to $\infty$, the strong convergence of these processes to a standard Brownian motion. We also deal with the rate of convergence. Unfortunately, we are not able to get a general result since the proofs heavily rely on the specific distribution of the jump times. We will compute the rate of convergence when the jump times have uniform distribution, showing that the method used in [8] can be adapted to non-exponential times. All these results give us new ways to simulate the behaviour of the standard Brownian motion. The paper is organized in the following way. Section 2 is devoted to defining the processes and giving the main results. In Section 3 we prove the strong convergence theorem. The study of the rate of convergence is given in Section 4. ## 2 Definitions and main result Let $(U_{m})_{m\geq 1}$ be a sequence of independent random variables which take on only nonnegative values.
We also assume that they are identically distributed with $P(U_{1}=0)<1$ and ${\mathbf{E}}((U_{1})^{4})<\infty.$ For each $k\geq 1$ consider the renewal sequence $S_{k}=U_{1}+\cdots+U_{k}$ and the counting renewal function $L(t)=\sum_{k=1}^{\infty}\mathbf{1}_{\,[0,t]}(S_{k}),$ that is, the counting function of the number of renewals in $[0,t].$ Let $\\{\eta_{m}\\}_{m\geq 0}$ be a sequence of independent identically distributed random variables with law Bernoulli($\frac{1}{2}$), independent of $\\{U_{m}\\}_{m\geq 1}$. Then, we will deal with the renewal reward process defined as $T(t)=\eta_{0}+\sum_{k=1}^{\infty}\eta_{k}\,\mathbf{1}_{\,[0,t]}(S_{k})=\sum_{l=0}^{L(t)}\eta_{l}.$ Given a strictly positive function $\beta$ we define $T_{n}(t)=T_{\beta(n)}(t)=T\big{(}\frac{t}{\beta(n)}\big{)}=\eta_{0}+\sum_{k=1}^{\infty}\eta_{k}\,\mathbf{1}_{\,[0,\frac{t}{\beta(n)}]}(S_{k})=\eta_{0}+\sum_{k=1}^{\infty}\eta_{k}\,\mathbf{1}_{\,[0,t]}(\beta(n)S_{k}).$ (1) Notice that, putting $U_{m}^{n}=\beta(n)\times U_{m}$ for all $m\geq 1$, we have that $\beta(n)\times S_{k}=U_{1}^{n}+\cdots+U_{k}^{n}.$ Our aim is to study the convergence of the processes $x_{n}(t)=\Big{(}\beta(n)\frac{{\mathbf{E}}((U_{1})^{2})}{{\mathbf{E}}(U_{1})}\Big{)}^{-\frac{1}{2}}\int_{0}^{t}(-1)^{T_{\beta(n)}(u)}du=\frac{1}{G(n)}\int_{0}^{t}(-1)^{T_{\beta(n)}(u)}du,$ (2) where $G(n)=\Big{(}\beta(n)\frac{{\mathbf{E}}((U_{1})^{2})}{{\mathbf{E}}(U_{1})}\Big{)}^{\frac{1}{2}},$ with $\sum_{n\geq 1}\beta(n)<\infty.$ Obviously, we can write $\displaystyle x_{n}(t)=\frac{1}{G(n)}\int_{0}^{t}(-1)^{T(\frac{u}{\beta(n)})}du=\frac{1}{G(n)}\beta(n)\int_{0}^{\frac{t}{\beta(n)}}(-1)^{T(v)}dv$ $\displaystyle\qquad=\Big{(}\frac{{\mathbf{E}}(U_{1})}{{\mathbf{E}}((U_{1})^{2})}\Big{)}^{\frac{1}{2}}\beta(n)^{\frac{1}{2}}\int_{0}^{\frac{t}{\beta(n)}}(-1)^{T(v)}dv.$ Our next result gives the strong convergence of realizations of our processes $\\{x_{n}(t);\,t\in[0,1]\\}$ and states as
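A realization of the process (2) can be simulated directly from these definitions: the sign $(-1)^{T(v)}$ is constant between renewal times and flips at $S_{k}$ exactly when $\eta_{k}=1$, so the integral is again exact on each interval. The sketch below takes $U_{m}\sim U(0,1)$ and $\beta(n)=n^{-k}$ as illustrative choices (the case studied in Section 4):

```python
import numpy as np

def x_n_path(n, t_grid, rng, k=2.0):
    """One realization of x_n(t) = (1/G(n)) int_0^t (-1)^{T_{beta(n)}(u)} du
    for U_m ~ U(0,1) and beta(n) = n^{-k} (illustrative choices)."""
    EU, EU2 = 0.5, 1.0 / 3.0                     # E(U_1) and E(U_1^2) for U(0, 1)
    beta = n ** -k
    G = np.sqrt(beta * EU2 / EU)
    horizon = t_grid[-1] / beta                  # x_n(t) = (beta/G) int_0^{t/beta} (-1)^{T(v)} dv
    m = int(2 * horizon / EU) + 100
    S = np.cumsum(rng.uniform(0.0, 1.0, size=m))         # renewal times S_1 < S_2 < ...
    eta = rng.integers(0, 2, size=m + 1)                 # eta_0, eta_1, ... ~ Bernoulli(1/2)
    signs = (-1.0) ** (np.cumsum(eta) % 2)               # (-1)^{T(v)} on [S_j, S_{j+1})
    lengths = np.diff(np.concatenate(([0.0], S)))
    F_at = np.concatenate(([0.0], np.cumsum(signs[:-1] * lengths)))
    v = t_grid / beta
    j = np.searchsorted(S, v, side="right")              # number of renewals in [0, v]
    left = np.concatenate(([0.0], S))[j]
    F = F_at[j] + signs[j] * (v - left)                  # exact integral of the sign process
    return (beta / G) * F
```

For moderate $n$ the empirical variance of $x_{n}(1)$ over independent realizations is already close to 1, consistent with the normalization by $G(n)$.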
follows: ###### Theorem 2.1. There exist realizations of the process $x_{n}$ on the same probability space as a standard Brownian motion $\\{x(t),t\geq 0\\}$ such that $\lim_{n\rightarrow\infty}\max_{0\leq t\leq 1}|x_{n}(t)-x(t)|=0\quad a.s.$ ###### Proof. See Section 3. ∎ Observe that we are assuming that the jumps occur at times given by a family of nonnegative independent identically distributed random variables $\\{U_{m}^{n}\\}_{m\geq 1}$. ## 3 Proof of strong convergence In this section, we will prove the strong convergence, when $n$ tends to $\infty$, of the processes $\\{x_{n}(t);\,t\in[0,1]\\}$ defined in Section 2. Proof of Theorem 2.1. We will follow the methodology used in [9]. Let $(\Omega,\cal{F},\cal{P})$ be the probability space for a standard Brownian motion $\\{x(t),t\geq 0\\}$ with $x(0)=0$ and let us define: 1. 1. for each $n>0$, $\\{\xi_{m}^{n}\\}_{m\geq 1}$ a sequence of nonnegative independent identically distributed random variables, independent of the Brownian motion $x$, such that $G(n)\times\xi_{m}^{n}\sim U_{m}^{n}.$ (3) 2. 2. $\\{k_{m}\\}_{m\geq 1}$ a sequence of independent identically distributed random variables such that $P(k_{1}=1)=P(k_{1}=-1)=\frac{1}{2}$, independent of $x$ and of $\\{\xi_{m}^{n}\\}_{m\geq 1}$ for all $n$.
Notice that $\xi_{m}^{n}\sim\frac{\beta(n)}{G(n)}\times U_{m}=\beta(n)^{\frac{1}{2}}\frac{{\mathbf{E}}(U_{1})^{\frac{1}{2}}}{{\mathbf{E}}((U_{1})^{2})^{\frac{1}{2}}}\times U_{m}.$ So ${\mathbf{E}}(\xi_{m}^{n})=\beta(n)^{\frac{1}{2}}\frac{{\mathbf{E}}(U_{1})^{\frac{3}{2}}}{{\mathbf{E}}((U_{1})^{2})^{\frac{1}{2}}},\qquad{\mathbf{E}}((\xi_{m}^{n})^{2})=\beta(n){\mathbf{E}}(U_{1})$ and ${\mathbf{E}}((\xi_{m}^{n})^{4})=\beta(n)^{2}\frac{{\mathbf{E}}((U_{1})^{4}){\mathbf{E}}(U_{1})^{2}}{{\mathbf{E}}((U_{1})^{2})^{2}}.$ By Skorokhod’s theorem ([12] page 163), for each $n\geq 1$ there exists a sequence $\sigma_{1}^{n},\sigma_{2}^{n},...$ of nonnegative independent random variables on $(\Omega,\cal{F},\cal{P})$ so that the sequence $x(\sigma_{1}^{n}),x(\sigma_{1}^{n}+\sigma_{2}^{n}),...,$ has the same distribution as $k_{1}\xi_{1}^{n},k_{1}\xi_{1}^{n}+k_{2}\xi_{2}^{n},...,$ and, for each $m$, 1. 1. ${\mathbf{E}}(\sigma_{m}^{n})=Var(k_{m}\xi_{m}^{n})={\mathbf{E}}((\xi_{m}^{n})^{2})=\beta(n){\mathbf{E}}(U_{1}),$ 2. 2. There exists $L_{2}$ such that $Var(\sigma_{m}^{n})\leq{\mathbf{E}}((\sigma_{m}^{n})^{2})\leq L_{2}{\mathbf{E}}((\xi_{m}^{n})^{4})=L_{2}\beta(n)^{2}\frac{{\mathbf{E}}((U_{1})^{4}){\mathbf{E}}(U_{1})^{2}}{{\mathbf{E}}((U_{1})^{2})^{2}}.$ For each $n$ we define $\gamma_{0}^{n}\equiv 0$ and, for each $m$, $\gamma_{m}^{n}=G(n)\left|x\left(\sum_{j=0}^{m}\sigma_{j}^{n}\right)-x\left(\sum_{j=0}^{m-1}\sigma_{j}^{n}\right)\right|,$ where $\sigma_{0}^{n}\equiv 0$. Then, from (3) it follows that the random variables $\gamma_{1}^{n},\gamma_{2}^{n},...,$ are independent with the same distribution as $U_{1}^{n},U_{2}^{n},\ldots,$ and ${\mathbf{E}}(\gamma_{m}^{n})={\mathbf{E}}(U_{m}^{n})=\beta(n){\mathbf{E}}(U_{1})$ and $Var(\gamma_{m}^{n})=\beta(n)^{2}Var(U_{1}).$ Now, we define $x_{n}(t),t\geq 0$, to be piecewise linear, satisfying $x_{n}\left(\sum_{j=1}^{m}\gamma_{j}^{n}\right)=x\left(\sum_{j=1}^{m}\sigma_{j}^{n}\right),\qquad m\geq 1$ (4) and $x_{n}(0)\equiv 0$.
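The Skorokhod embedding invoked here is easy to realise numerically for the symmetric increments $k_{m}\xi_{m}^{n}$: conditionally on $\xi_{m}^{n}=a$, the stopping time is the first exit time of the Brownian motion from $[-a,a]$, which satisfies ${\mathbf{E}}(\tau)=a^{2}$. A crude discretized sketch (step size and sample count are illustrative):

```python
import numpy as np

def embed_symmetric(a, rng, dt=1e-3):
    """First exit time of a discretized Brownian path from [-a, a]: this embeds
    the two-point law a * {+1, -1} (prob. 1/2 each), with E(tau) = a^2."""
    b, tau = 0.0, 0.0
    while abs(b) < a:
        b += np.sqrt(dt) * rng.standard_normal()   # Brownian increment
        tau += dt
    return tau, np.sign(b) * a                      # exit time and stopped value

rng = np.random.default_rng(0)
taus, stops = zip(*(embed_symmetric(1.0, rng) for _ in range(200)))
```

Averaging the exit times recovers ${\mathbf{E}}(\tau)=a^{2}$, matching the first property above with $a=\xi_{m}^{n}$; averaging the stopped values recovers the mean-zero symmetric increment.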
Observe that the process $x_{n}$ has slope $\pm G(n)^{-1}$ in the interval $[\sum_{j=1}^{m-1}\gamma_{j}^{n},\sum_{j=1}^{m}\gamma_{j}^{n}]$. On the other hand, let $\Gamma_{m}^{n}=\sum_{j=1}^{m}\gamma_{j}^{n}$. We get that the increments $\Gamma_{m}^{n}-\Gamma_{m-1}^{n}$, for each $m$, with $\Gamma_{0}^{n}\equiv 0$, are independent and have the law of $G(n)\times\xi_{m}^{n}\sim U_{1}^{n}$. Moreover, the probability that $x\left(\sum_{j=0}^{m}\sigma_{j}^{n}\right)-x\left(\sum_{j=0}^{m-1}\sigma_{j}^{n}\right)$ is positive is $\frac{1}{2}$, independent of the past up to time $\sum_{j=0}^{m-1}\sigma_{j}^{n}.$ Thus $x_{n}$ is a realization of the process (2). Set $H(n):=\beta(n){\mathbf{E}}(U_{1})$. Recalling that $\gamma_{0}^{n}\equiv\sigma_{0}^{n}\equiv 0$, by (4) and the uniform continuity of Brownian motion on $[0,1]$, we have almost surely $\displaystyle\lim_{n\rightarrow\infty}\,\,\max_{0\leq t\leq 1}\left|x_{n}(t)-x(t)\right|$ $\displaystyle=$ $\displaystyle\lim_{n\rightarrow\infty}\,\,\max_{0\leq m\leq\frac{1}{H(n)}}\left|x_{n}\left(\sum_{j=0}^{m}\gamma_{j}^{n}\right)-x\left(\sum_{j=0}^{m}\gamma_{j}^{n}\right)\right|$ $\displaystyle=$ $\displaystyle\lim_{n\rightarrow\infty}\,\,\max_{0\leq m\leq\frac{1}{H(n)}}\left|x\left(\sum_{j=0}^{m}\sigma_{j}^{n}\right)-x\left(\sum_{j=0}^{m}\gamma_{j}^{n}\right)\right|,$ which reduces the proof to checking that $\lim_{n\rightarrow\infty}\,\max_{1\leq m\leq\frac{1}{H(n)}}\left|\gamma_{1}^{n}+\dots+\gamma_{m}^{n}-mH(n)\right|=0\quad a.s.$ and that $\lim_{n\rightarrow\infty}\,\max_{1\leq m\leq\frac{1}{H(n)}}\left|\sigma_{1}^{n}+\dots+\sigma_{m}^{n}-mH(n)\right|=0\quad a.s.$ The first limit can be obtained easily by the Borel-Cantelli lemma since, by Kolmogorov’s inequality, for each $\alpha>0$, we have $\displaystyle P\left(\max_{1\leq m\leq\frac{1}{H(n)}}\left|\gamma_{1}^{n}+\dots+\gamma_{m}^{n}-mH(n)\right|\geq\alpha\right)\leq\frac{1}{\alpha^{2}}\sum_{m=1}^{[\frac{1}{H(n)}]}Var(\gamma_{m}^{n})$
$\displaystyle\qquad\leq\frac{1}{\alpha^{2}}\Big{[}\frac{1}{H(n)}\Big{]}\beta(n)^{2}Var(U_{1})\leq\frac{Var(U_{1})}{\alpha^{2}\,{\mathbf{E}}(U_{1})}\,\beta(n),$ which is summable in $n$ since $\sum_{n\geq 1}\beta(n)<\infty$, so the Borel-Cantelli lemma applies. We can study the second limit repeating the same arguments as before. Using the bounds obtained from Skorokhod’s theorem, for each $\alpha>0$, we have $\displaystyle P\left(\max_{1\leq m\leq\frac{1}{H(n)}}\left|\sigma_{1}^{n}+\dots+\sigma_{m}^{n}-mH(n)\right|\geq\alpha\right)$ $\displaystyle\leq$ $\displaystyle\frac{1}{\alpha^{2}}\sum_{m=1}^{[\frac{1}{H(n)}]+1}Var(\sigma_{m}^{n})\leq\frac{2L_{2}{\mathbf{E}}((U_{1})^{4}){\mathbf{E}}(U_{1})}{\alpha^{2}\,{\mathbf{E}}((U_{1})^{2})^{2}}\,\beta(n),$ which is again summable in $n$. $\square$ ## 4 Rate of convergence In this section we will prove the rate of convergence of the processes $x_{n}(t)$ in a particular case. We consider $U_{m}\sim U(0,1)$ for all $m\geq 1$ and $\beta(n)=n^{-k}$ with $k>1$. Then $\displaystyle G(n)=\frac{2^{\frac{1}{2}}}{3^{\frac{1}{2}}n^{\frac{k}{2}}},\qquad H(n)=\frac{1}{2n^{k}},$ and $U_{m}^{n}\sim U(0,n^{-k}),\qquad\gamma_{m}^{n}\sim U(0,n^{-k}),\qquad\xi_{m}^{n}\sim U(0,\frac{3^{\frac{1}{2}}}{2^{\frac{1}{2}}}n^{-\frac{k}{2}}).$ ###### Theorem 4.1. Assume $U_{m}\sim U(0,1)$ for all $m\geq 1$ and $\beta(n)=n^{-k}$ with $k>1$. Then, for all $q>0$, $P\left(\max_{0\leq t\leq 1}|x_{n}(t)-x(t)|>\alpha\,n^{-\frac{k}{4}}\left(\log{n}\right)^{\frac{3}{2}}\right)=o(n^{-q})\qquad\mbox{as}\quad n\rightarrow\infty$ where $\alpha$ is a positive constant depending on $q$. Since the proof follows the structure of part b) of Theorem 1 in [8], we give only a sketch of the proof. Proof of Theorem 4.1. Recall that $\gamma_{0}^{n}\equiv\sigma_{0}^{n}\equiv 0$ and define $\Gamma_{m}^{n}=\sum_{j=0}^{m}\gamma_{j}^{n}\qquad\mbox{and}\qquad\Lambda_{m}^{n}=\sum_{j=0}^{m}\sigma_{j}^{n}.$ Notice that $x_{n}(\Gamma_{m}^{n})=x(\Lambda_{m}^{n})$.
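As a quick sanity check, the displayed constants follow from ${\mathbf{E}}(U_{1})=\frac{1}{2}$ and ${\mathbf{E}}((U_{1})^{2})=\frac{1}{3}$ for $U_{m}\sim U(0,1)$; a short numerical verification (the values of $n$ and $k$ are arbitrary):

```python
import numpy as np

n, k = 10.0, 2.0                   # arbitrary illustrative values
EU, EU2 = 0.5, 1.0 / 3.0           # E(U_1) and E(U_1^2) for U_1 ~ U(0, 1)
beta = n ** -k
G = np.sqrt(beta * EU2 / EU)       # definition of G(n) from Section 2
H = beta * EU                      # definition of H(n) from Section 3
assert np.isclose(G, np.sqrt(2.0) / (np.sqrt(3.0) * n ** (k / 2)))   # G(n) = 2^{1/2}/(3^{1/2} n^{k/2})
assert np.isclose(H, 1.0 / (2.0 * n ** k))                           # H(n) = 1/(2 n^k)
```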
Set $J^{n}\equiv\max_{0\leq m\leq\frac{1}{H(n)}}\,\max_{0\leq r\leq\gamma_{m+1}^{n}}\big{|}x_{n}(\Gamma_{m}^{n}+r)-x(\Gamma_{m}^{n}+r)\big{|}.$ Since $x_{n}$ is piecewise linear and using the definition of $\gamma_{m}^{n}$, notice that $\displaystyle x_{n}(\Gamma_{m}^{n}+r)$ $\displaystyle=$ $\displaystyle x(\Lambda_{m}^{n})+\frac{x(\Lambda_{m+1}^{n})-x(\Lambda_{m}^{n})}{\gamma_{m+1}^{n}}\,r$ $\displaystyle=$ $\displaystyle x(\Lambda_{m}^{n})+\frac{1}{G(n)}\times\operatorname{sgn}\Big{(}x(\Lambda_{m+1}^{n})-x(\Lambda_{m}^{n})\Big{)}r.$ Thus, $\displaystyle J^{n}$ $\displaystyle\leq$ $\displaystyle\max_{0\leq m\leq\frac{1}{H(n)}}\Big{|}x(\Lambda_{m}^{n})-x\Big{(}mH(n)\Big{)}\Big{|}+\max_{0\leq m\leq\frac{1}{H(n)}}\Big{|}x(\Gamma_{m}^{n})-x\Big{(}mH(n)\Big{)}\Big{|}$ $\displaystyle+\max_{0\leq m\leq\frac{1}{H(n)}}\,\max_{0\leq r\leq\gamma_{m+1}^{n}}\big{|}x(\Gamma_{m}^{n})-x(\Gamma_{m}^{n}+r)\big{|}+\max_{1\leq m\leq\frac{1}{H(n)}+1}\frac{1}{G(n)}\gamma_{m}^{n}$ $\displaystyle:=$ $\displaystyle J_{1}^{n}+J_{2}^{n}+J_{3}^{n}+J_{4}^{n},$ and for any $a_{n}>0$, $P(J^{n}>a_{n})\leq\sum_{j=1}^{4}P\Big{(}J_{j}^{n}>\frac{a_{n}}{4}\Big{)}:=I_{1}^{n}+I_{2}^{n}+I_{3}^{n}+I_{4}^{n}.$ We will study the four terms separately. 1\. Study of the term $I_{4}^{n}.$ Since $\gamma_{m}^{n}$’s are independent variables with law $\sim U(0,n^{-k})$, $\displaystyle I_{4}^{n}$ $\displaystyle\leq$ $\displaystyle P\left(\max_{1\leq m\leq\frac{1}{H(n)}+1}\gamma_{m}^{n}>\frac{a_{n}}{2^{\frac{3}{2}}3^{\frac{1}{2}}n^{\frac{k}{2}}}\right)=1-P\left(\gamma_{m}^{n}\leq\frac{a_{n}}{2^{\frac{3}{2}}3^{\frac{1}{2}}n^{\frac{k}{2}}}\right)^{[\frac{1}{H(n)}]+1}=0,$ when $n$ is big enough for $a_{n}$ of the type $\alpha\,n^{-\frac{k}{4}}\left(\log n\right)^{\beta}$, with $\alpha$ and $\beta$ positive arbitrary fixed constants. 2\. 
Study of the term $I_{1}^{n}.$ Let $\delta_{n}>0$. We can write $\displaystyle I_{1}^{n}$ $\displaystyle\leq$ $\displaystyle P\left(\max_{0\leq m\leq\frac{1}{H(n)}}\,\max_{|s|\leq\delta_{n}}\Big{|}x\Big{(}mH(n)+s\Big{)}-x\Big{(}mH(n)\Big{)}\Big{|}>\frac{a_{n}}{4}\right)$ $\displaystyle+P\left(\max_{1\leq m\leq\frac{1}{H(n)}}\Big{|}\Lambda_{m}^{n}-mH(n)\Big{|}>\delta_{n}\right)$ $\displaystyle=$ $\displaystyle I_{11}^{n}+I_{12}^{n}.$ 2.1. Study of the term $I_{12}^{n}.$ Notice that $\displaystyle I_{12}^{n}$ $\displaystyle=$ $\displaystyle P\left(\max_{1\leq m\leq\frac{1}{H(n)}}\bigg{|}\sum_{j=1}^{m}\Big{(}\frac{1}{H(n)}\sigma_{j}^{n}-1\Big{)}\bigg{|}>\frac{\delta_{n}}{H(n)}\right)$ (5) $\displaystyle\leq$ $\displaystyle\left(\frac{H(n)}{\delta_{n}}\right)^{2p}\,{\mathbf{E}}\left[\left(\sum_{m=1}^{[\frac{1}{H(n)}]}\bigg{(}\frac{1}{H(n)}\sigma_{m}^{n}-1\bigg{)}\right)^{2p}\right],$ for any $p\geq 1$, by Doob’s martingale inequality. Set $Y_{m}:=\frac{1}{H(n)}\sigma_{m}^{n}-1$. Using Hölder’s inequality, we obtain $\displaystyle{\mathbf{E}}\left[\Bigg{(}\sum_{m=1}^{[\frac{1}{H(n)}]}Y_{m}\Bigg{)}^{2p}\right]$ $\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}|u|=2p\\\ u_{m}\neq 1\,\forall m\end{subarray}}{2p\choose u}{\mathbf{E}}\Big{(}Y_{1}^{u_{1}}\cdots Y_{[\frac{1}{H(n)}]}^{u_{[\frac{1}{H(n)}]}}\Big{)}$ $\displaystyle\leq$ $\displaystyle\sum_{\begin{subarray}{c}|u|=2p\\\ u_{m}\neq 1\,\forall m\end{subarray}}{2p\choose u}\big{[}{\mathbf{E}}\big{(}Y_{1}^{2p}\big{)}\big{]}^{u_{1}/2p}\cdots\big{[}{\mathbf{E}}\big{(}Y_{[\frac{1}{H(n)}]}^{2p}\big{)}\big{]}^{u_{[\frac{1}{H(n)}]}/2p},$ where $u=(u_{1},\dots,u_{[\frac{1}{H(n)}]})$ with $|u|=u_{1}+\cdots+u_{[\frac{1}{H(n)}]}$ and ${2p\choose u}=\frac{(2p)!}{u_{1}!\cdots u_{[\frac{1}{H(n)}]}!}.$ Notice that in the first equality we have used that if $u_{m}=1$ for some $m$, then ${\mathbf{E}}\big{(}Y_{1}^{u_{1}}\cdots Y_{[\frac{1}{H(n)}]}^{u_{[\frac{1}{H(n)}]}}\big{)}=0$.
On the other hand, by the estimates given by Skorokhod’s theorem (see [8]), we have $\displaystyle{\mathbf{E}}[(\sigma_{m}^{n})^{2p}]\leq 2(2p)!{\mathbf{E}}\Big{[}\,(k_{m}\xi_{m}^{n})^{4p}\Big{]}\leq 2(2p)!\,\frac{1}{(4p+1)}3^{2p}\left(\frac{1}{2n^{k}}\right)^{2p}.$ So, using the inequality $|a+b|^{2p}\leq 2^{2p}(|a|^{2p}+|b|^{2p})$, we obtain ${\mathbf{E}}\Big{(}Y_{m}^{2p}\Big{)}\leq(2p)!\,6^{2p}.$ (7) Finally, from a lemma on page 298 of [8] (see also Lemma 5-1 in [1]), we obtain that, for $p\leq 1+\frac{\log 2}{\log\big{[}1+(2n^{-k}-n^{-2k})^{\frac{1}{2}}\big{]}},$ (8) it holds that $\displaystyle\sum_{\begin{subarray}{c}|u|=2p\\\ u_{i}\neq 1\,\forall i\end{subarray}}{2p\choose u}\leq 2^{2p}(2p)!\left(2n^{k}\right)^{p}.$ (9) Therefore, for $p$ as above, putting together (5), (4), (7) and (9) and applying Stirling’s formula, $k!=\sqrt{2\pi}\,k^{k+\frac{1}{2}}e^{-k}e^{\frac{a}{12k}}$, with $0<a<1$, we obtain $\displaystyle I_{12}^{n}$ $\displaystyle\leq$ $\displaystyle\,(\delta_{n})^{-2p}\,n^{-kp}\,6^{2p}\,2^{p}\Big{[}\sqrt{2\pi}(2p)^{2p+\frac{1}{2}}e^{-2p}e^{\frac{a}{24p}}\Big{]}^{2}$ $\displaystyle\leq$ $\displaystyle K_{1}^{p}\,(\delta_{n})^{-2p}\,n^{-kp}\,p^{4p+1}$ where $K_{1}$ is a constant. Let us now impose $K_{1}^{p}\,(\delta_{n})^{-2p}\,n^{-kp}\,p^{4p+1}=n^{-2q}$ and $p=\big{[}\log n\big{]}$. Observe that this $p$ fulfills the condition on $p$ in inequality (8). We get $\displaystyle\delta_{n}=K_{2}\,n^{q/[\log{n}]-\frac{k}{2}}\,\left[\log{n}\right]^{2+1/(2[\log{n}])},$ (10) where $K_{2}=\sqrt{K_{1}}$ is a constant. Clearly, with this $\delta_{n}$, it follows that $I_{12}^{n}=o(n^{-q})$. 2.2.
Study of the term $I_{11}^{n}.$ As in Theorem 1 in [8], for big $n$ and using Doob’s martingale inequality for Brownian motion, we get $\displaystyle I_{11}^{n}$ $\displaystyle\leq$ $\displaystyle\frac{1}{H(n)}P\left(\max_{|s|\leq\delta_{n}}\big{|}x(s)\big{|}>\frac{a_{n}}{4}\right)\leq 8n^{k}P\left(\max_{0\leq s\leq\delta_{n}}x(s)>\frac{a_{n}}{4}\right)$ $\displaystyle\leq$ $\displaystyle 8n^{k}\exp\left(-\Big{(}\frac{a_{n}}{4}\Big{)}^{2}\frac{1}{2\delta_{n}}\right).$ The condition $8n^{k}\exp\big{(}-(a_{n})^{2}/(32\delta_{n})\big{)}=8n^{-2q}$ yields that $a_{n}=K_{3}\,n^{-k/4}\,n^{q/2[\log{n}]}\,\big{(}\log{n}\big{)}^{1+1/4[\log{n}]}\,\big{(}\log{n}\big{)}^{1/2},$ where $K_{3}$ is a constant depending on $q$. Notice that $a_{n}=\alpha\,n^{-k/4}\,\big{(}\log{n}\big{)}^{3/2},$ for big $n$, where $\alpha$ is a constant that depends on $q$, satisfies such a condition. Thus, with $\delta_{n}$ as in (10), it follows that $I_{11}^{n}=o(n^{-q})$. 3\. Study of the term $I_{2}^{n}$. For our $\delta_{n}>0$, we have $\displaystyle I_{2}^{n}$ $\displaystyle\leq$ $\displaystyle P\left(\max_{0\leq m\leq\frac{1}{H(n)}}\,\max_{|s|\leq\delta_{n}}\Big{|}x\Big{(}mH(n)+s\Big{)}-x\Big{(}mH(n)\Big{)}\Big{|}>\frac{a_{n}}{4}\right)$ $\displaystyle+P\left(\max_{0\leq m\leq\frac{1}{H(n)}}\Big{|}\Gamma_{m}^{n}-mH(n)\Big{|}>\delta_{n}\right)=I_{21}^{n}+I_{22}^{n}.$ On one hand, observe that $I_{21}^{n}=I_{11}^{n}$, thus $I_{21}^{n}=o(n^{-q})$. On the other hand, applying Doob’s martingale inequality, $\displaystyle I_{22}^{n}$ $\displaystyle=$ $\displaystyle P\left(\max_{0\leq m\leq\frac{1}{H(n)}}\Bigg{|}\sum_{j=0}^{m}\bigg{(}\frac{1}{H(n)}\gamma_{j}^{n}-1\bigg{)}\Bigg{|}>\frac{\delta_{n}}{H(n)}\right)$ $\displaystyle\leq$ $\displaystyle\left(\frac{H(n)}{\delta_{n}}\right)^{2p}\,{\mathbf{E}}\left[\left(\sum_{m=1}^{[\frac{1}{H(n)}]}\bigg{(}\frac{1}{H(n)}\gamma_{m}^{n}-1\bigg{)}\right)^{2p}\right].$ Set $V_{m}:=\frac{1}{H(n)}\gamma_{m}^{n}-1$.
Notice that the $V_{m}$’s are independent and centered random variables with $\displaystyle{\mathbf{E}}\Big{(}V_{m}^{2p}\Big{)}$ $\displaystyle\leq$ $\displaystyle 2^{2p}\left(\bigg{(}2n^{k}\bigg{)}^{2p}{\mathbf{E}}\big{[}(\gamma_{m}^{n})^{2p}\big{]}+1\right)\leq 2\cdot 4^{2p}\,\frac{1}{(2p+1)}.$ Then, using an inequality of the type of (4) and following the same arguments as in the study of $I_{12}^{n}$, we get that $I_{22}^{n}=o(n^{-q})$. 4\. Study of the term $I_{3}^{n}$. For $\delta_{n}>0$ defined in (10) and $a_{n}$ of the type $\alpha\,n^{-\frac{k}{4}}\left(\log n\right)^{\frac{3}{2}}$, $\displaystyle I_{3}^{n}$ $\displaystyle\leq$ $\displaystyle P\left(\max_{0\leq m\leq\frac{1}{H(n)}}\,\max_{|r|\leq\delta_{n}}\big{|}x(\Gamma_{m}^{n})-x(\Gamma_{m}^{n}+r)\big{|}>\frac{a_{n}}{4}\right)$ $\displaystyle+P\left(\max_{1\leq m\leq\frac{1}{H(n)}+1}\gamma_{m}^{n}>\delta_{n}\right):=I_{31}^{n}+I_{32}^{n}.$ On one hand, $I_{31}^{n}=o(n^{-q})$ is proved in the same way as $I_{11}^{n}$. On the other hand, for $n$ big enough, $I_{32}^{n}=0$, just as we proved for $I_{4}^{n}$. We have now checked that all the terms in our decomposition are of order $o(n^{-q})$. The proof of Theorem 4.1 can be completed following the same computations as in [8] (see also Theorem 3.2 in [1]). $\square$ ## References * [1] Bardina, X.; Binotto, G.; Rovira, C.: The complex Brownian motion as a strong limit of processes constructed from a Poisson process. J. Math. Anal. Appl. 444 (2016), no. 1, 700–720. * [2] Csörgő, M.; Horváth, L.: Rate of convergence of transport processes with an application to stochastic differential equations. Probab. Theory Related Fields 78 (1988), no. 3, 379–387. * [3] Garzón, J.; Gorostiza, L. G.; León, J. A.: A strong uniform approximation of fractional Brownian motion by means of transport processes. Stochastic Process. Appl. 119 (2009), no. 10, 3435–3452. * [4] Garzón, J.; Gorostiza, L.
G.; León, J.: A strong approximation of subfractional Brownian motion by means of transport processes. In: Malliavin calculus and stochastic analysis, 335–360, Springer Proc. Math. Stat., 34, Springer, New York, 2013. * [5] Garzón, J.; Gorostiza, L. G.; León, J.: Approximations of fractional stochastic differential equations by means of transport processes. Commun. Stoch. Anal. 5 (2011), no. 3, 433–456. * [6] Garzón, J.; Torres, S.; Tudor, C.A.: A strong convergence to the Rosenblatt process. J. Math. Anal. Appl. 391 (2012), 630–647. * [7] Gorostiza, L.G.; Griego, R.J.: Strong approximation of diffusion processes by transport processes. Journal of Mathematics of Kyoto University 19 (1979), no. 1, 91–103. * [8] Gorostiza, L.G.; Griego, R.J.: Rate of convergence of uniform transport processes to Brownian motion and application to stochastic integrals. Stochastics 3 (1980), 291–303. * [9] Griego, R.J.; Heath, D.; Ruiz-Moncayo, A.: Almost sure convergence of uniform transport processes to Brownian motion. Ann. Math. Stat. 42 (1971), no. 3, 1129–1131. * [10] Kac, M.: A stochastic model related to the telegrapher’s equation. Rocky Mountain J. Math. 4 (1974), 497–509. * [11] Nguyen, G. T.; Peralta, O.: An explicit solution to the Skorokhod embedding problem for double exponential increments. Statist. Probab. Lett. 165 (2020), 108867, 5 pp. * [12] Skorokhod, A.V.: Studies in the Theory of Random Processes. Addison-Wesley, Reading (1965). Xavier Bardina Departament de Matemàtiques, Facultat de Ciències, Edifici C, Universitat Autònoma de Barcelona, 08193 Bellaterra. <EMAIL_ADDRESS> Carles Rovira Facultat de Matemàtiques, Universitat de Barcelona, Gran Via 585, 08007 Barcelona. <EMAIL_ADDRESS>
# GAMA: Generative Adversarial Multi-Object Scene Attacks Abhishek Aich, Calvin-Khang Ta∗, Akash Gupta, Chengyu Song, Srikanth V. Krishnamurthy, M. Salman Asif, Amit K. Roy-Chowdhury University of California, Riverside, CA, USA Equal contribution. Corresponding author: AA ([email protected]). AG is currently with Vimaan AI, USA. ###### Abstract The majority of methods for crafting adversarial attacks have focused on scenes with a single dominant object (e.g., images from ImageNet). On the other hand, natural scenes include multiple dominant objects that are semantically related. Thus, it is crucial to explore attack strategies that look beyond learning on single-object scenes or attacking single-object victim classifiers. Because generative models craft perturbations with strong inherent transferability to unknown models, this paper presents the first approach that uses them for adversarial attacks on multi-object scenes. In order to represent the relationships between different objects in the input scene, we leverage the open-source pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), with the motivation of exploiting the semantics encoded in the language space along with the visual space. We call this attack approach Generative Adversarial Multi-object Attacks (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker’s tool to train formidable perturbation generators for multi-object scenes. Using the joint image-text features to train the generator, we show that GAMA can craft potent transferable perturbations in order to fool victim classifiers in various attack settings. For example, GAMA triggers $\sim$16% more misclassification than state-of-the-art generative approaches in black-box settings where both the classifier architecture and data distribution of the attacker are different from the victim’s.
Our code is available here: https://abhishekaich27.github.io/gama.html ## 1 Introduction Despite attaining significant results, decision-making of deep neural network models is brittle and can be surprisingly manipulated with adversarial attacks that add highly imperceptible perturbations to the system inputs [1, 2]. This has led to dedicated research in designing diverse types of adversarial attacks that lead to highly incorrect decisions on diverse state-of-the-art classifiers [3, 4, 5, 6, 7, 2, 8, 9, 10, 11, 12, 13]. The majority of such adversarial attacks [2, 8, 14, 15, 9, 16, 17, 10, 11, 12, 13, 18] has focused on scenes with a single dominant object (_e.g_., images from ImageNet [19]). However, natural scenes consist of multiple dominant objects that are semantically associated [20, 21, 22, 23, 24, 25]. This calls for attack methods that are effective in such multi-object scenes. A recent body of work in adversarial attacks [26, 27, 28, 29, 30] has shown the importance of exploring attack methodologies for real-world scenes (although designed for attacking object detectors). However, such methods are image-specific approaches that are known to have poor time complexity when perturbing large batches of images, as well as poor transferability to unknown models (more details in Section 2) due to their inherent property of perturbing images independently from one another. Different from such approaches, _our interest lies in the generative model-based approaches_ [10, 11, 12, 13] which are distribution-driven and craft perturbations by learning to fool a surrogate classifier for a large number of images. These generative adversarial attacks show stronger transferability of perturbations to unknown victim models and can perturb large batches of images in one forward pass through the generator demonstrating better time complexity [31, 11]. 
However, these generative attack methods have focused on learning from single-object scenes (_e.g_., ImageNet in [13, 11, 12], CUB-200-2011 [32] in [13]) or against single-object surrogate classifiers (_e.g_., ImageNet classifiers [33] in [13, 11, 12, 10]). When trained against multi-object (also known as multi-label) classifiers to learn perturbations on multi-object scenes, such methods perform poorly as they do not explicitly incorporate object semantics in the generator training (see Table 3 and Table 3). As real-world scenes usually consist of multi-object images, designing such attacks is of importance to victim model users who analyze complex scenes for making reliable decisions, _e.g_., self-driving cars [34]. To this end, _we propose the first generative attack approach, called Generative Adversarial Multi-object scene Attacks or GAMA, that focuses on adversarial attacks on multi-object scenes._ Figure 1: Using CLIP’s image-text aligning property, we compute the features of the least similar text description w.r.t. the clean image. Progress in recent vision-and-language (VL) models [35, 36, 37, 38, 39] that allow joint modelling of image and text has garnered interest in recent times due to their versatile applicability in various image downstream tasks like inpainting, editing, _etc_. [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. For the first time in the literature, we introduce the utility of a pre-trained open-source framework of the popular VL model named CLIP (Contrastive Language-Image Pre-training) [36] in generating adversarial attacks. Trained on 400 million image-text pairs collected from the internet, CLIP has been shown to provide robust joint representations of VL semantics [46, 40] and strong zero-shot image classification on diverse datasets [36, 44]. This allows us to access diverse VL features cheaply, without any training as end-users.
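The least-similar-prompt selection described in the Figure 1 caption reduces to an argmin over cosine similarities in CLIP's joint embedding space. The sketch below uses mock random vectors in place of actual CLIP encoder outputs; the helper name and the 512-dimensional feature size are assumptions for illustration:

```python
import numpy as np

def least_similar_prompt(image_feat, text_feats):
    """Index of the text feature least similar (cosine) to the image feature.
    image_feat: (d,) array; text_feats: (num_prompts, d) array."""
    img = image_feat / np.linalg.norm(image_feat)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return int(np.argmin(txt @ img))       # argmin over cosine similarities

# mock stand-ins for CLIP encoder outputs (512-dim, as in CLIP ViT-B/32)
rng = np.random.default_rng(0)
image_feat = rng.standard_normal(512)
text_feats = rng.standard_normal((8, 512))   # 8 candidate prompts
idx = least_similar_prompt(image_feat, text_feats)
```

In practice the rows of `text_feats` would come from CLIP's text encoder applied to prompts built from candidate class pairs, and `image_feat` from its image encoder.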
Our proposed GAMA attack employs the CLIP model to exploit the natural language semantics encoded in text features along with the vision features (due to its joint image-text alignment property). Different from prior works, GAMA utilizes the CLIP model’s knowledge, extracted from $\sim$400 million images, to maximize the feature differences of the perturbed image $\bm{x}_{p}$ against two different types of features computed from the clean image $\bm{x}_{c}$: (1) features of $\bm{x}_{c}$ computed from surrogate models, and (2) features of $\bm{x}_{c}$ computed from CLIP’s image encoder. Additionally, GAMA also guides $\bm{x}_{p}$ to contain different features compared to $\bm{x}_{c}$ by using features from CLIP’s text encoder via a contrastive loss function. For example in Figure 1, consider a clean image $\bm{x}_{c}$ with objects “sofa and bottle”. Using CLIP’s image-text aligning property, we estimate that $\bm{x}_{c}$ (with text features $\bm{\rho}_{c}$) is least similar to the text prompt “car and bicycle” (text features $\bm{\rho}_{p}$) among some randomly chosen candidates (indicated by dotted circles). GAMA uses $\bm{\rho}_{p}$, created from contextually consistent classes, to contrast and move the perturbed $\bm{x}_{p}$ away from $\bm{x}_{c}$ in feature space. Hence, the perturbed image features are comparatively robust to data distribution changes in victim models, as $\mathcal{G}_{\bm{\theta}}(\cdot)$ is optimized to create perturbations whose features differ from two different types of clean-image features. This allows GAMA to launch highly transferable attacks on unseen victim models (see Section 4). To summarize, we make the following contributions in this paper. 1. 1. Multi-object scene based generative attack aided by VL models. We propose the first multi-object scene based generative attack, GAMA, that is designed to consider object semantics through vision-and-language models. 2. 2. Pre-trained CLIP model as an attacker’s tool.
We propose the first generative attack on classifiers that utilizes the open-source pre-trained CLIP model as an attacker’s tool to train perturbation generators. 3. Extensive Attack Evaluations. Our extensive experiments in various black-box settings (where victims are multi-label/single-label classifiers and object detectors) show GAMA’s state-of-the-art transferability of perturbations (Table 3, 3, 5, 5, 6, and 4). Additionally, we show that GAMA outperforms its baselines in terms of attack robustness when the victim deploys state-of-the-art defenses (Table 8). ## 2 Related works Table 1: Characteristic comparison. Here, $\bm{f}(\cdot)$ denotes the surrogate classifier. $\bm{x}$ and $\bm{x}_{\bm{\delta}}$ denote a clean and a perturbed image, respectively. $k$ denotes output from a specific pre-defined layer of $\bm{f}(\cdot)$ (different for each method). 
In contrast to prior generative attacks [10, 11, 12, 13], GAMA leverages multi-modal (text and image) features $\bm{\rho}_{\texttt{txt}}$ and $\bm{\rho}_{\texttt{img}}$ extracted from a pre-trained CLIP [36] model to train the perturbation generator. 
Its learning objective aims to pull $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$ closer to a dissimilar text embedding $\bm{\rho}_{\texttt{txt}}$ (w.r.t. $\bm{x}$) while pushing it away from $\bm{f}_{k}(\bm{x})$ and $\bm{\rho}_{\texttt{img}}$. Further, GAMA analyzes attack scenarios where the surrogate model is a multi-label classifier with input scenes that usually contain multiple objects.

Attack | Venue | Generator training strategy | Analyzed input scene?
---|---|---|---
GAP [10] | CVPR2018 | maximize difference of $\bm{f}(\bm{x}_{\bm{\delta}})$ and $\bm{f}(\bm{x})$ | single object
CDA [11] | NeurIPS2019 | maximize difference of $\bm{f}(\bm{x}_{\bm{\delta}})-\bm{f}(\bm{x})$ and $\bm{f}(\bm{x})$ | single object
TAP [12] | NeurIPS2021 | maximize difference of $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$ and $\bm{f}_{k}(\bm{x})$ | single object
BIA [13] | ICLR2022 | maximize difference of $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$ and $\bm{f}_{k}(\bm{x})$ | single object
GAMA | Ours | contrast $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$ w.r.t. $\bm{\rho}_{\texttt{txt}}$, $\bm{\rho}_{\texttt{img}}$, and $\bm{f}_{k}(\bm{x})$ | single/multiple objects

Adversarial attacks on classifiers. Several state-of-the-art adversarial attacks [52, 2, 14, 15, 11, 53, 54, 8, 55, 10, 56, 13, 6, 12, 16, 17, 57, 58, 9, 59, 60] have been designed to disturb the predictions of classifiers. Broadly, these approaches fall into two categories: instance- (or image-) specific attacks and generative model-based attacks. Instance-specific attacks [52, 2, 14, 15, 53, 54, 8, 55, 56, 6, 16, 17, 57, 58, 9, 59, 60] create perturbations for every image individually. Specifically, these perturbations are computed by querying the victim model over multiple iterations to eventually alter the image imperceptibly (_e.g_., texture-level changes to the image [60]) and cause its misclassification. 
Due to this image-specific strategy, their time complexity for altering the decisions of a large set of images has been shown to be extremely poor [11, 13, 31]. Furthermore, learning perturbations from a single image generally restricts successful misclassification to the known models [11, 13]. To alleviate these drawbacks, a new category of attack strategies has been explored in [13, 12, 10, 11, 61], where a generative model is adversarially trained against a surrogate victim model (in other words, treated as a discriminator) to craft perturbations over the whole data distribution. This attack strategy allows one to perturb multiple images simultaneously once the generative model is optimized, and it also enhances the transferability of perturbations to unseen black-box models [10, 11]. For example, Generative Adversarial Perturbations or GAP [10] and Cross-Domain Attack or CDA [11] presented distribution-driven attacks that train a generative model to create adversarial examples using the cross-entropy loss and the relativistic cross-entropy loss [62] objective, respectively. Different from these, Transferable Adversarial Perturbations or TAP [12] and Beyond ImageNet Attack or BIA [13] presented attack methodologies that further enhance the transferability of perturbations using feature-separation loss functions (_e.g_., mean squared error loss) at mid-level layers of the surrogate model. Most of these methods create transferable perturbations under the assumption that the surrogate model is trained in the same domain as the target victim model [13]. Further, the mid-level layer is manually selected for each architecture and is also sensitive to the dataset (shown later in Section 4). Similarly, [61] proposes to change image attributes to create semantic manipulations from their disentangled representations via generative models. 
Most of these generative attacks employed classifiers that operate under the regime that input images contain a single dominant object. Some recent attacks [26, 27, 28, 29, 30] have focused on analyzing complex images containing multiple objects; however, they are instance-driven attacks and thus inherit the aforementioned drawbacks in transferability and time complexity. In contrast to these works, GAMA _is a generative model-based attack designed to craft imperceptible adversarial perturbations that can strongly disrupt both multi-label and single-label classifiers._ Moreover, GAMA uses a novel perturbation generation strategy that employs a pre-trained CLIP model [36] based framework to craft highly effective and transferable perturbations by leveraging multi-modal (image and text) embeddings. We summarize the differences between prior generative attacks and GAMA in Table 1. Applications of Vision-and-Language (VL) representations. Due to their robust zero-shot performance, jointly pre-trained vision-and-language models [35, 36, 37, 38, 39] have enabled new language-driven solutions for various downstream tasks [63, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. The differentiating attribute of VL models [36], compared to conventional image-based pre-trained models [33], is that they provide high-quality aligned visual and textual representations learnt from large-scale image-text pairs. In this work, we leverage one such powerful VL framework, CLIP [36], to an adversary’s advantage and show its utility in preparing a perturbation generator for formidable attacks across multiple distributions. Employing freely available pre-trained models for tasks other than those they were trained for is common practice (_e.g_., VGG [64] models in [65, 66], CLIP for domain adaptation of generators in [46]). To the best of our knowledge, the proposed attack is the first to use a VL model to subvert classifier decisions. 
## 3 Proposed Attack Methodology: GAMA #### Problem Statement. Our goal is to train a generative model $\mathcal{G}_{\bm{\theta}}(\cdot)$ (with weights $\bm{\theta}$) on a training distribution of images with multiple objects. Once $\bm{\theta}$ is optimized, $\mathcal{G}_{\bm{\theta}}(\cdot)$ can create perturbations on diverse types (multi-object or otherwise) of input images that lead to misclassification by an unknown victim classifier. Suppose we have access to a source dataset $\mathcal{D}$ consisting of $N$ training samples from $C$ classes, with each sample/image possibly carrying multiple object labels, i.e., multi-label images. Each $i^{th}$ sample in $\mathcal{D}$ is represented as $\bm{x}^{(i)}\in\mathbb{R}^{H\times W\times T}$ (with height $H$, width $W$, and channels $T$) containing labels $\bm{y}^{(i)}=[y_{1}^{(i)},\cdots,y_{C}^{(i)}]\in\mathcal{Y}\subseteq\{0,1\}^{C}$. More specifically, if sample $\bm{x}^{(i)}$ is associated with class $c$, then $y_{c}^{(i)}=1$ indicates the existence of an object from class $c$ in $\bm{x}^{(i)}$. Further, we have access to a surrogate multi-label classifier trained on $\mathcal{D}$, denoted $\bm{f}(\cdot)$, which is employed to optimize the perturbation generator $\mathcal{G}_{\bm{\theta}}(\cdot)$’s weights $\bm{\theta}$. For ease of exposition, we drop the superscript $i$ in the following discussion. Figure 2: Overview of GAMA. The perturbation generator $\mathcal{G}_{\bm{\theta}}(\cdot)$ crafts a perturbed image ($\ell_{\infty}$-budget constrained by the projection operator $\mathcal{P}(\cdot)$) from the clean image as input. Next, embeddings $\bm{z}$ from the clean image and $\widehat{\bm{z}}$ from the perturbed image are extracted from the surrogate model. 
A pre-trained CLIP model extracts the image embedding $\bm{\rho}_{\texttt{img}}$ from the clean image and the text embedding $\bm{\rho}_{\texttt{txt}}$ that is least similar to $\bm{\rho}_{\texttt{img}}$ (see details in Section 3.1). 
Finally, the loss functions $\mathcal{L}_{\texttt{img}}$ and $\mathcal{L}_{\texttt{txt}}$ utilize these embeddings to optimize the generator weights $\bm{\theta}$. The loss based solely on the surrogate model is not shown here for simplicity. We use the prefix ‘a photo depicts’ in all text prompts, following [67]. ### 3.1 Adversary Equipped with Pre-Trained CLIP We aim to train a generator $\mathcal{G}_{\bm{\theta}}(\cdot)$ that learns to create perturbations from its observations by fooling a surrogate classifier $\bm{f}(\cdot)$ during its training phase. 
Now, as $\mathcal{G}_{\bm{\theta}}(\cdot)$ learns to create perturbations $\bm{\delta}$ in accordance with $\bm{f}(\cdot)$, it is bounded by the features extracted from $\bm{f}(\cdot)$ when contrasting $\bm{x}$ and $\bm{x}_{\bm{\delta}}$ (_e.g_., final-layer logits in [10, 11] or mid-level features in [13, 12]). In this work, we explore the case where we additionally have access to a pre-trained vision-and-language model like CLIP that can be utilized as a loss network to train $\mathcal{G}_{\bm{\theta}}(\cdot)$. Our motivation for using CLIP is to exploit its joint text-image matching property and compute two embeddings: a clean-image embedding extracted from the image encoder and a dissimilar text embedding extracted from the text encoder. Specifically, we aim to encode the contextual relationships between multiple objects in a natural scene via language derivatives. We next describe GAMA’s method and present a novel strategy for using the CLIP model to train $\mathcal{G}_{\bm{\theta}}(\cdot)$. Note that we assume each image contains two co-occurring classes for creating text prompts, mainly restricted by the co-occurrence matrices of dimension $C\times C$ available for multi-label datasets. As we will see later, co-occurrence matrices allow us to discard pairs of classes that would not occur in real-world scenarios. #### GAMA Overview. 
Before training $\mathcal{G}_{\bm{\theta}}(\cdot)$, we first compute a text embedding matrix $\bm{A}_{\texttt{txt}}=[\bm{\rho}_{1},\bm{\rho}_{2},\cdots,\bm{\rho}_{N}]\in\mathbb{R}^{N\times K}$ with $\bm{\rho}_{n}\in\mathbb{R}^{K}$ (explained in detail later) using the CLIP text encoder $\mathcal{T}(\cdot)$. Here, $K$ is the size of the embeddings output by $\mathcal{T}(\cdot)$. During training of $\mathcal{G}_{\bm{\theta}}(\cdot)$, we start by feeding the clean image $\bm{x}$ to the CLIP image encoder $\mathcal{I}(\cdot)$ and computing an image embedding $\bm{\rho}_{\texttt{img}}=\mathcal{I}(\bm{x})\in\mathbb{R}^{K}$. 
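The text prompts behind $\bm{A}_{\texttt{txt}}$ are restricted to pairs of co-occurring classes. A minimal plain-Python sketch of building a co-occurrence matrix from multi-hot labels and discarding implausible pairs; the toy labels and the `min_count` threshold are illustrative assumptions, not the paper's exact procedure:

```python
def cooccurrence_matrix(labels, C):
    # labels: list of multi-hot vectors y in {0,1}^C
    # M[i][j] counts images containing both class i and class j
    M = [[0] * C for _ in range(C)]
    for y in labels:
        present = [c for c in range(C) if y[c] == 1]
        for i in present:
            for j in present:
                if i != j:
                    M[i][j] += 1
    return M

def plausible_pairs(M, min_count=1):
    # keep only class pairs observed together at least min_count times
    C = len(M)
    return [(i, j) for i in range(C) for j in range(i + 1, C)
            if M[i][j] >= min_count]

# toy dataset with C = 4 classes: sofa, bottle, car, bicycle
labels = [
    [1, 1, 0, 0],   # sofa + bottle
    [1, 1, 0, 0],
    [0, 0, 1, 1],   # car + bicycle
]
M = cooccurrence_matrix(labels, 4)
print(plausible_pairs(M))   # [(0, 1), (2, 3)]
```

Each surviving pair is turned into one prompt (with the ‘a photo depicts’ prefix) and encoded by $\mathcal{T}(\cdot)$ to populate $\bm{A}_{\texttt{txt}}$.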
Next, the particular vector $\bm{\rho}_{\texttt{txt}}\in\mathbb{R}^{K}$ that is least similar to $\bm{\rho}_{\texttt{img}}$ is retrieved from $\bm{A}_{\texttt{txt}}$. 
Then, we feed $\bm{x}$ to $\mathcal{G}_{\bm{\theta}}(\cdot)$ and create $\bm{x}_{\bm{\delta}}$, while ensuring it stays within a given $\ell_{\infty}$ perturbation budget $\epsilon$ via the projection operator $\mathcal{P}(\cdot)$. The clean and perturbed images are then fed to the surrogate classifier $\bm{f}(\cdot)$ to extract $K$-dimensional embeddings at a specific layer $k$, denoted by $\bm{f}_{k}(\bm{x})$ and $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$, respectively. 
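The projection operator $\mathcal{P}(\cdot)$ is only named above; a standard choice, assumed here rather than taken from the paper, clamps each pixel of the perturbed image into the $\ell_{\infty}$ ball of radius $\epsilon$ around the clean pixel and then into the valid pixel range. A plain-Python sketch over flattened pixels:

```python
def project(x_clean, x_pert, eps, lo=0.0, hi=1.0):
    # clamp each pixel of the perturbed image into the l_inf ball
    # of radius eps around the clean pixel, then into [lo, hi]
    out = []
    for xc, xp in zip(x_clean, x_pert):
        v = min(max(xp, xc - eps), xc + eps)   # l_inf constraint
        out.append(min(max(v, lo), hi))        # valid pixel range
    return out

x  = [0.20, 0.50, 0.95]          # clean pixels
xa = [0.40, 0.48, 1.20]          # raw generator output
print([round(v, 2) for v in project(x, xa, eps=0.10)])  # [0.3, 0.48, 1.0]
```

In a deep-learning framework the same operation is a pair of element-wise clamps applied after every generator forward pass.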
Finally, the aforementioned quadruplet of embeddings $(\bm{\rho}_{\texttt{txt}},\bm{\rho}_{\texttt{img}},\bm{f}_{k}(\bm{x}),\bm{f}_{k}(\bm{x}_{\bm{\delta}}))$ is used to compute a contrastive-learning-based CLIP text embedding-guided loss $\mathcal{L}_{\texttt{txt}}(\bm{\rho}_{\texttt{txt}},\bm{f}_{k}(\bm{x}),\bm{f}_{k}(\bm{x}_{\bm{\delta}}))$ and a regression-learning-based CLIP image embedding-guided loss $\mathcal{L}_{\texttt{img}}(\bm{\rho}_{\texttt{img}},\bm{f}_{k}(\bm{x}_{\bm{\delta}}))$, which contribute to the final objective $\mathcal{L}$. 
We also include a loss function that further maximizes the difference between $\bm{f}_{k}(\bm{x})$ and $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$ solely from the surrogate classifier’s perspective. This loss $\mathcal{L}$ is minimized to update the generator weights $\bm{\theta}$. The whole GAMA paradigm is illustrated in Figure 2 and summarized in Algorithm 1. The loss objectives $\mathcal{L}_{\texttt{img}}$ and $\mathcal{L}_{\texttt{txt}}$ (with the text embedding matrix $\bm{A}_{\texttt{txt}}$) are discussed next. #### CLIP text embedding-guided loss ($\mathcal{L}_{\texttt{txt}}$). Let $\bm{z}=\bm{f}_{k}(\bm{x})$ and $\widehat{\bm{z}}=\bm{f}_{k}(\bm{x}_{\bm{\delta}})$. The CLIP framework inherently learns the text-vision embedding association via a contrastive learning regime [36, 68], constraining the feature embeddings of an input image and its counterpart language description to be as similar as possible. 
Unlike CLIP’s image embedding $\bm{\rho}_{\texttt{img}}$, CLIP’s text embedding $\bm{\rho}_{\texttt{txt}}$ allows us to look beyond pixel-based features. More specifically, CLIP’s vision-and-language aligning ability allows us to utilize text features to craft transferable image perturbations. 
Hence, we can optimize $\mathcal{G}_{\bm{\theta}}(\cdot)$ to create perturbed images $\bm{x}_{\bm{\delta}}$ that do not follow the same text embedding alignment as their clean counterpart $\bm{x}$. To cause this text misalignment, we create a triplet of embeddings in which the anchor $\widehat{\bm{z}}$ is pushed away from $\bm{z}$ while being pulled closer to a text embedding $\bm{\rho}_{\texttt{txt}}$ that is least similar to the clean image $\bm{x}$. This triplet is computed in the following two steps. 
* • Before training, compute $\bm{A}_{\texttt{txt}}$. The goal is to create a dictionary or matrix of text embeddings that can be used to retrieve $\bm{\rho}_{\texttt{txt}}$ during the optimization of $\mathcal{G}_{\bm{\theta}}(\cdot)$. First, we generate language derivatives, or text prompts, using the classes of the source distribution. This means we only need to know all $C$ available classes in $\mathcal{D}$, not their specific association with $\bm{x}$. 
Second, under the assumption that each clean image $\bm{x}$ is associated with two classes, we can generate $C^{2}$ text prompts and create a matrix $\bm{A}_{\texttt{txt}}$ of size $C^{2}\times K$. For example, if the classes ‘cat’, ‘dog’, ‘person’, and ‘boat’ exist in $\mathcal{D}$, then one can create text prompts such as “a photo depicts cat and dog” or “a photo depicts person and boat” (see Figure 1 for 10 random examples extracted from CLIP’s ‘ViT-B/16’ model using Pascal-VOC’s classes). Here, the underlined part of the text prompt is a recommended ‘prefix’ common to all text prompts, as suggested in [67]. However, such an $\bm{A}_{\texttt{txt}}$ can contain embeddings from prompts generated from class pairs that never occur together in real images. To circumvent this, we utilize a binary object co-occurrence matrix $\mathcal{O}\in\mathbb{R}^{C\times C}$ to estimate the co-occurrence relationships between classes. 
Computed from the training data set containing $C$ classes, $\mathcal{O}$ is first initialized to all zeros. Then, an element $\mathcal{O}_{ij}$ (the $i$th row and $j$th column of $\mathcal{O}$) is set to 1 if objects from classes $y_{i}$ and $y_{j}$ appear together in at least one image. Computing such co-occurrence information is common practice in multi-object downstream problems [69, 26, 70, 71, 27, 72]; we use the $\mathcal{O}$ provided by [69]. Using this co-occurrence matrix, we only create text prompts from pairs of classes that occur together according to $\mathcal{O}$. This leads to a text embedding matrix $\bm{A}_{\texttt{txt}}$ of size $\|\mathcal{O}\|_{0}\times K$, where $\|\mathcal{O}\|_{0}$ denotes the number of non-zero elements. 
* • During training, compute $\bm{\rho}_{\texttt{txt}}$. CLIP’s training objective pushes the embeddings of associated image–text pairs closer together than those of non-matched pairs. 
We leverage this property to compute the least similar text embedding $\bm{\rho}_{\texttt{txt}}$ w.r.t. the image embedding $\bm{\rho}_{\texttt{img}}$. 
During each training epoch, we randomly sample $B$ candidates $[\bm{\rho}_{1},\bm{\rho}_{2},\cdots,\bm{\rho}_{B}]$ from $\bm{A}_{\texttt{txt}}$ and estimate $\bm{\rho}_{\texttt{txt}}$ as follows: 
$\displaystyle\bm{\rho}_{\texttt{txt}}=\min[\text{cs}(\bm{\rho}_{\texttt{img}},\bm{\rho}_{1}),\text{cs}(\bm{\rho}_{\texttt{img}},\bm{\rho}_{2}),\cdots,\text{cs}(\bm{\rho}_{\texttt{img}},\bm{\rho}_{B})]$ (1)

Here, $\text{cs}(\cdot)$ denotes cosine similarity. Next, we force $\widehat{\bm{z}}$ to align with $\bm{\rho}_{\texttt{txt}}$ while misaligning with $\bm{z}$. This is implemented as a contrastive learning [73, 74] objective as follows. 
$\displaystyle\mathcal{L}_{\texttt{txt}}=\min_{\bm{\theta}}~{}\nicefrac{{1}}{{K}}\Big{(}\|\widehat{\bm{z}}-\bm{\rho}_{\texttt{txt}}\|^{2}_{2}+\big{[}\alpha-\|\widehat{\bm{z}}-\bm{z}\|_{2}\big{]}_{+}\Big{)}$ (2)

where $\alpha>0$ is the desired margin between the clean and perturbed image embeddings, and $[v]_{+}=\max(0,v)$. 
$\mathcal{L}_{\texttt{txt}}$ pulls the embeddings of $\bm{x}$ and $\bm{x}_{\bm{\delta}}$ apart by enforcing the margin $\alpha$ between them, while pushing the dissimilar embeddings $\widehat{\bm{z}}$ and $\bm{\rho}_{\texttt{txt}}$ closer than the given margin. 
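As a concrete illustration, the least-similar selection of Equation 1 and the contrastive objective of Equation 2 can be sketched in NumPy as follows. The function names and array shapes are our own simplifications; in practice $\widehat{\bm{z}}$ is produced by the surrogate network, so $\mathcal{L}_{\texttt{txt}}$ is minimized over $\bm{\theta}$ by backpropagation rather than evaluated on fixed arrays.

```python
import numpy as np

def cosine_similarity(a, b):
    # cs(.) from Equation (1)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def least_similar_text_embedding(rho_img, candidates):
    """Equation (1): among the B sampled text embeddings, return the one
    with the lowest cosine similarity to the CLIP image embedding."""
    sims = [cosine_similarity(rho_img, rho) for rho in candidates]
    return candidates[int(np.argmin(sims))]

def text_guided_loss(z_hat, z, rho_txt, alpha=1.0):
    """Equation (2): attract the adversarial embedding z_hat to rho_txt,
    while keeping it at least a margin alpha away from the clean z."""
    K = z_hat.shape[0]
    attract = np.sum((z_hat - rho_txt) ** 2)             # ||z_hat - rho_txt||_2^2
    repel = max(0.0, alpha - np.linalg.norm(z_hat - z))  # [alpha - ||z_hat - z||_2]_+
    return (attract + repel) / K
```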
#### CLIP image embedding-guided loss ($\mathcal{L}_{\texttt{img}}$).

Because CLIP is trained on $\sim$400 million internet-retrieved images from diverse categories, and consequently exhibits strong zero-shot image recognition performance across different distributions [36, 41], we argue that its image encoder $\mathcal{I}(\cdot)$ outputs an embedding that captures the attributes of the input image with distinct, generalized visual features. GAMA leverages this to our advantage and maximizes the difference between $\widehat{\bm{z}}$ and $\bm{\rho}_{\texttt{img}}$, CLIP’s image-encoder embedding of the clean image. The aim of this objective is to increase the transferability of $\mathcal{G}_{\bm{\theta}}(\cdot)$’s perturbations using the generalized features computed by $\mathcal{I}(\cdot)$. 
This is realized using a regression-based loss:

$\displaystyle\mathcal{L}_{\texttt{img}}=\min_{\bm{\theta}}~{}-\big{(}\nicefrac{{1}}{{K}}\|\bm{\rho}_{\texttt{img}}-\widehat{\bm{z}}\|^{2}_{2}\big{)}$ (3)

#### Final Learning Objective ($\mathcal{L}$). 
The loss functions $\mathcal{L}_{\texttt{img}}$ and $\mathcal{L}_{\texttt{txt}}$ are finally added to a surrogate model loss $\mathcal{L}_{\text{surr}}$ that minimizes the cosine similarity of $\bm{z}$ and $\widehat{\bm{z}}$ [13]. The choice of layer $k$ depends on the feature outputs of the CLIP model employed. All embeddings are normalized before computing the loss functions. 
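To make the combination of the three terms concrete, a minimal NumPy sketch is given below. The function names and the inlined $\mathcal{L}_{\texttt{txt}}$ term are our own assumptions; in practice all three losses are minimized jointly over the generator weights $\bm{\theta}$ by backpropagation.

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

def surrogate_loss(z_hat, z):
    # L_surr: cosine similarity between the clean and perturbed
    # mid-level embeddings; minimizing it pushes the two apart.
    return float(l2_normalize(z_hat) @ l2_normalize(z))

def image_guided_loss(z_hat, rho_img):
    # Equation (3): negated (scaled) squared distance, so minimizing
    # the loss maximizes the gap to CLIP's clean-image embedding.
    K = z_hat.shape[0]
    return -np.sum((rho_img - z_hat) ** 2) / K

def gama_objective(z_hat, z, rho_img, rho_txt, alpha=1.0):
    # The final objective: the three terms are simply summed.
    K = z_hat.shape[0]
    txt = (np.sum((z_hat - rho_txt) ** 2)
           + max(0.0, alpha - np.linalg.norm(z_hat - z))) / K
    return surrogate_loss(z_hat, z) + image_guided_loss(z_hat, rho_img) + txt
```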
$\displaystyle\mathcal{L}=\min_{\bm{\theta}}~{}\big{(}\mathcal{L}_{\text{surr}}+\mathcal{L}_{\texttt{img}}+\mathcal{L}_{\texttt{txt}}\big{)}$ (4)

Overall, $\mathcal{L}_{\text{surr}}$ maximizes the difference between $\bm{x}$ and $\bm{x}_{\bm{\delta}}$ from the surrogate $\bm{f}(\cdot)$’s perspective, while $\mathcal{L}_{\texttt{img}}$ and $\mathcal{L}_{\texttt{txt}}$ enhance its transferability from CLIP’s perspective.

#### Attack evaluation.

We assume that the attacker has no knowledge of the victim classifier $\bm{g}(\cdot)$ or its data distribution $\mathcal{D}_{t}$. Further, there is a perturbation budget $\epsilon$ defined by an $\ell_{\infty}$ norm. To launch an attack, we input a clean image $\bm{x}_{t}$ from the target dataset $\mathcal{D}_{t}$ to the optimized $\mathcal{G}_{\bm{\theta}}(\cdot)$ and craft imperceptible perturbations $\bm{\delta}_{t}$ that alter the decision of the target victim classifier $\bm{g}(\cdot)$ (pre-trained on $\mathcal{D}_{t}$). Mathematically, this can be represented as $\bm{y}_{t}\neq\widehat{\bm{y}}_{t}$, where $\bm{y}_{t}=\bm{g}\big{(}\bm{x}_{t}\big{)}$ and $\widehat{\bm{y}}_{t}=\bm{g}\big{(}\bm{x}_{t}+\bm{\delta}_{t}\big{)}$ with $\|\bm{\delta}_{t}\|_{\infty}\leq\epsilon$. After training $\mathcal{G}_{\bm{\theta}}(\cdot)$ against $\bm{f}(\cdot)$ on $\mathcal{D}$, we can realize the following attack scenarios:

* • Scenario 1: a white-box attack, if $\bm{f}(\cdot)=\bm{g}(\cdot)$ and $\mathcal{D}=\mathcal{D}_{t}$ 
* • Scenario 2: a black-box attack, if either $\bm{f}(\cdot)\neq\bm{g}(\cdot)$ or $\mathcal{D}\neq\mathcal{D}_{t}$ 

A real-world attack is generally modeled by Scenario 2, as an adversary would not have knowledge of the victim model $\bm{g}(\cdot)$’s architecture, its training data distribution $\mathcal{D}_{t}$, or the task it performs, _e.g_. 
single-label classification, multi-label classification, or object detection, _etc_. The perturbations that make an attack successful in Scenario 2 should be highly transferable. Input : distribution $\mathcal{D}$, batch size $B$, perturbation $\ell_{\infty}$ bound $\epsilon$ Input : surrogate classifier $\bm{f}(\cdot)$, CLIP-encoders for text $\mathcal{T}(\cdot)$ and image $\mathcal{I}(\cdot)$ Output : optimized perturbation generator $\mathcal{G}_{\bm{\theta}}(\cdot)$’s weights $\bm{\theta}$ 1 Randomly initialize $\bm{\theta}$. Load (as well as freeze) $\bm{f}(\cdot)$, $\mathcal{T}(\cdot)$ and $\mathcal{I}(\cdot)$ with respective pre-trained weights 2 Create text embeddings matrix $\bm{A}_{\mathchoice{\raisebox{0.0pt}{\leavevmode\resizebox{10.61293pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\displaystyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.61293pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\textstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.6129pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.61296pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptscriptstyle\texttt{txt}\mathstrut$}}}}}}$ from $\mathcal{T}(\cdot)$ as described in Section 3.1 3 repeat 4 Input $\bm{x}$ to $\mathcal{I}(\cdot)$ and get $\bm{\rho}_{\mathchoice{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34195pt}{5.0pt}{\hbox{\raisebox{1.3611pt}{$\displaystyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34195pt}{5.0pt}{\hbox{\raisebox{1.3611pt}{$\textstyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34193pt}{5.0pt}{\hbox{\raisebox{0.95277pt}{$\scriptstyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12746pt}{\leavevmode\resizebox{9.34198pt}{5.0pt}{\hbox{\raisebox{0.68054pt}{$\scriptscriptstyle\texttt{img}\mathstrut$}}}}}}$ 5 Randomly sample $B$ vectors from 
$\bm{A}_{\mathchoice{\raisebox{0.0pt}{\leavevmode\resizebox{10.61293pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\displaystyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.61293pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\textstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.6129pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.61296pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptscriptstyle\texttt{txt}\mathstrut$}}}}}}$ and get least similar text embedding $\bm{\rho}_{\mathchoice{\raisebox{0.0pt}{\leavevmode\resizebox{10.61293pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\displaystyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.61293pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\textstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.6129pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{10.61296pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptscriptstyle\texttt{txt}\mathstrut$}}}}}}$ w.r.t. 
$\bm{\rho}_{\mathchoice{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34195pt}{5.0pt}{\hbox{\raisebox{1.3611pt}{$\displaystyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34195pt}{5.0pt}{\hbox{\raisebox{1.3611pt}{$\textstyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34193pt}{5.0pt}{\hbox{\raisebox{0.95277pt}{$\scriptstyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12746pt}{\leavevmode\resizebox{9.34198pt}{5.0pt}{\hbox{\raisebox{0.68054pt}{$\scriptscriptstyle\texttt{img}\mathstrut$}}}}}}$ 6 Input clean image $\bm{x}$ (from $\mathcal{D}$) to $\bm{f}(\cdot)$ and compute mid-level embedding $\bm{f}_{k}(\bm{x})$ 7 Input $\bm{x}$ to $\mathcal{G}_{\bm{\theta}}(\cdot)$ and project it within bound $\epsilon$ using $\mathcal{P}(\cdot)$ to obtain $\bm{x}_{\mathchoice{\raisebox{0.0pt}{\leavevmode\resizebox{3.2pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\displaystyle\bm{\delta}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{3.2pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\textstyle\bm{\delta}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{3.19998pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptstyle\bm{\delta}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{3.2pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptscriptstyle\bm{\delta}\mathstrut$}}}}}}$ 8 Input $\bm{x}_{\mathchoice{\raisebox{0.0pt}{\leavevmode\resizebox{3.2pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\displaystyle\bm{\delta}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{3.2pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\textstyle\bm{\delta}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{3.19998pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptstyle\bm{\delta}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{3.2pt}{5.0pt}{\hbox{\raisebox{0.0pt}{$\scriptscriptstyle\bm{\delta}\mathstrut$}}}}}}$ to $\bm{f}(\cdot)$ and compute mid-level embedding 
$\bm{f}_{k}(\bm{x}_{\bm{\delta}})$
9. Compute loss $\mathcal{L}$ by Equation 4 and minimize it to update $\bm{\theta}$ using Adam [75]
10. until _convergence_

Algorithm 1: GAMA pseudo-code

## 4 Experiments

In this section, we analyze the strength of GAMA under diverse practical attack settings. We also perform an ablation analysis of GAMA, test the robustness of the attack against various defenses ([76, 77], median blurring, and a context-consistency check), and evaluate the performance of attacks on different architecture designs. Note that we provide more black-box attack results in the supplementary material.

Baselines. As there are no prior works on generative attacks that learn on multi-object scenes using multi-label classifiers, we define our baselines by adapting the existing state-of-the-art generative attacks summarized in Table 1. Specifically, the cross-entropy loss in GAP [10] and CDA [11] is replaced with binary cross-entropy loss to handle the prediction of multiple labels during training.

Training Details. We use the multi-label datasets Pascal-VOC [78] and MS-COCO [79] to train generators for the baselines and our method. Unless otherwise stated, the perturbation budget is set to $\ell_{\infty}\leq 10$ for all experiments. We chose the following surrogate models $\bm{f}(\cdot)$ (Pascal-VOC or MS-COCO pre-trained multi-label classifiers): ResNet152 (Res152) [80], DenseNet169 (Den169) [81], and VGG19 [64]. For the CLIP model, we use the ‘ViT-B/16’ framework [36].
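Two core steps of Algorithm 1 — selecting the CLIP text embedding least similar to the clean-image embedding, and the $\ell_{\infty}$ projection $\mathcal{P}(\cdot)$ with budget $\epsilon$ — can be sketched as follows. This is a minimal pure-Python illustration with hypothetical helper names, not the authors' implementation:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def least_similar_text(rho_img, text_embeddings):
    """Pick the CLIP text embedding least similar to the clean-image embedding."""
    return min(text_embeddings, key=lambda rho_txt: cosine_sim(rho_img, rho_txt))

def project_linf(x_adv, x_clean, eps=10.0, lo=0.0, hi=255.0):
    """Project the perturbed image into the l_inf ball of radius eps
    around the clean image, then clip to the valid pixel range."""
    return [max(lo, min(hi, max(xc - eps, min(xc + eps, xa))))
            for xa, xc in zip(x_adv, x_clean)]
```

In practice these operations run on GPU tensors; the per-pixel clamp above mirrors the usual implementation of an $\ell_{\infty}\leq 10$ budget on 0–255 images.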
See the supplementary material for more training details.

Inference Metrics. We measure attack performance on multi-label classifiers using the Hamming score (%) defined in [82, 83]. For evaluations on single-label classifiers and object detectors, we use top-1 accuracy (%) and the bbox_mAP_50 $\in[0,1]$ metric, respectively. A lower score indicates a stronger attack. Best results are in bold. For reference, accuracy on clean images is provided as ‘No Attack’.

### 4.1 Results and Analysis

All trained perturbation generators (trained only on multi-label datasets) are extensively evaluated under the following victim model settings.

$\bullet$ White-box and black-box (multi-label classification, different model than $\bm{f}(\cdot)$): We evaluate the attacks in white-box and black-box settings on six victim multi-label classifiers (VGG16, VGG19, ResNet50 (Res50), Res152, Den169, and DenseNet121 (Den121)) in Table 2 and Table 3 (white-box attacks are marked with cell color). We outperform all baselines in the majority of cases, with an average absolute difference (w.r.t. the closest method) of $\sim$13 percentage points (pp) for Pascal-VOC and $\sim$4.46pp for MS-COCO.

$\bullet$ Black-box (single-label classification): We evaluate the attacks in a black-box setting with various single-label classifiers for CIFAR10/100 [84] (coarse-grained tasks [13]), CUB-200-2011 (CUB) [32], Stanford Cars (Car) [85], and FGVC Aircraft (Air) [86] (fine-grained tasks [13]) in Table 6, and ImageNet [87] (50K validation set) in Table 4 and Table 5. Following [13], the victim models for the coarse-grained tasks are taken from [88], the fine-grained task models (Res50, SENet154 (SeNet), and SE-ResNet101 (se-Res101) [89]) from [90], and the six ImageNet models from [33]. Here, we beat our closest baseline in all cases, by $\sim$13.33pp for Pascal-VOC and $\sim$5.83pp for MS-COCO on the six ImageNet models.
Note that the ImageNet results also demonstrate the drop in performance of the TAP [12] and BIA [13] attacks, which show close to 0% top-1 accuracy when $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained on ImageNet on the attacker side [13, 12]. We hypothesize that this drop in performance is due to the dataset sensitivity of the manually selected mid-level layer of $\bm{f}(\cdot)$ used by the attacker. We observe a similar trend when attacking non-ImageNet distributions, as suggested by BIA [13], in the coarse- and fine-grained tasks in Table 6. In this case, GAMA beats the prior attacks by an average of $\sim$13.33pp when $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained with Pascal-VOC.

$\bullet$ Black-box (object detection): We also evaluate a difficult black-box attack with state-of-the-art MS-COCO object detectors (Faster RCNN with Res50 backbone (FRCN) [91], RetinaNet with Res50 backbone (RNet) [92], DEtection TRansformer (DETR) [93], and Deformable DETR (D2ETR) [94]), available from [95], in Table 7. It can be observed that GAMA outperforms its competitors when $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained with Pascal-VOC.
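The Hamming score used above as the multi-label inference metric is commonly defined as the mean per-sample Jaccard overlap between the predicted and ground-truth label sets. A minimal sketch under that assumption (the exact definition follows [82, 83]):

```python
def hamming_score(y_true, y_pred):
    """Mean per-sample Jaccard overlap between true and predicted label sets.

    y_true, y_pred: lists of label sets, one set per image.
    Returns a percentage in [0, 100].
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        if not t and not p:
            total += 1.0  # both empty: count as a perfect match
        else:
            total += len(t & p) / len(t | p)
    return 100.0 * total / len(y_true)
```

For example, predicting {dog} for a {dog, person} image and {car, bus} for a {car} image yields overlaps of 1/2 each, i.e., a Hamming score of 50%.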
Table 2: Pascal-VOC $\rightarrow$ Pascal-VOC

| $\bm{f}(\cdot)$ | Method | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 | Average |
|---|---|---|---|---|---|---|---|---|
| | No Attack | 82.51 | 83.18 | 80.52 | 83.12 | 83.74 | 83.07 | 82.69 |
| | GAP [10] | 19.64 | 16.60 | 72.95 | 76.24 | 68.79 | 66.50 | 53.45 |
| | CDA [11] | 26.16 | 20.52 | 61.40 | 65.67 | 70.33 | 62.67 | 51.12 |
| | TAP [12] | 24.77 | 19.26 | 66.95 | 66.95 | 68.65 | 64.51 | 51.84 |
| | BIA [13] | 12.53 | 14.00 | 64.24 | 69.07 | 69.44 | 64.71 | 48.99 |
| VGG19 | GAMA | 6.11 | 5.89 | 41.17 | 45.57 | 53.11 | 44.58 | 32.73 |
| | GAP [10] | 56.93 | 56.20 | 65.58 | 72.26 | 75.22 | 69.54 | 65.95 |
| | CDA [11] | 41.07 | 47.60 | 53.84 | 47.22 | 67.50 | 59.65 | 52.81 |
| | TAP [12] | 52.92 | 58.24 | 56.52 | 53.61 | 71.55 | 64.56 | 59.56 |
| | BIA [13] | 45.34 | 49.74 | 51.98 | 50.27 | 67.75 | 61.05 | 54.35 |
| Res152 | GAMA | 33.42 | 39.42 | 32.39 | 20.46 | 49.76 | 49.54 | 37.49 |
| | GAP [10] | 62.09 | 59.55 | 68.60 | 72.81 | 76.09 | 72.70 | 68.64 |
| | CDA [11] | 52.28 | 53.75 | 59.65 | 67.23 | 69.60 | 67.37 | 61.64 |
| | TAP [12] | 58.48 | 58.55 | 58.14 | 63.42 | 52.66 | 62.57 | 58.97 |
| | BIA [13] | 48.52 | 53.77 | 56.15 | 63.33 | 54.01 | 58.85 | 55.77 |
| Den169 | GAMA | 44.25 | 52.89 | 48.83 | 53.25 | 45.00 | 50.96 | 49.19 |

Table 3: MS-COCO $\rightarrow$ MS-COCO

| $\bm{f}(\cdot)$ | Method | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 | Average |
|---|---|---|---|---|---|---|---|---|
| | No Attack | 65.80 | 66.48 | 65.64 | 67.95 | 67.59 | 66.39 | 66.64 |
| | GAP [10] | 8.31 | 10.61 | 39.49 | 48.00 | 41.00 | 38.12 | 30.92 |
| | CDA [11] | 6.57 | 8.57 | 37.38 | 43.56 | 38.41 | 35.59 | 28.34 |
| | TAP [12] | 3.45 | 6.14 | 25.77 | 29.56 | 20.05 | 21.15 | 17.68 |
| | BIA [13] | 2.47 | 4.01 | 30.76 | 37.34 | 26.40 | 27.95 | 21.48 |
| VGG19 | GAMA | 3.59 | 3.75 | 27.13 | 30.43 | 24.60 | 21.77 | 18.54 |
| | GAP [10] | 42.59 | 45.41 | 51.22 | 53.75 | 54.18 | 52.54 | 49.94 |
| | CDA [11] | 30.16 | 37.79 | 42.83 | 45.13 | 49.24 | 44.93 | 41.68 |
| | TAP [12] | 24.34 | 25.94 | 29.40 | 24.13 | 35.58 | 33.06 | 28.74 |
| | BIA [13] | 22.73 | 22.76 | 28.64 | 22.16 | 36.06 | 32.41 | 27.46 |
| Res152 | GAMA | 24.52 | 27.73 | 30.62 | 23.04 | 31.30 | 27.31 | 27.42 |
| | GAP [10] | 29.85 | 32.77 | 38.15 | 40.84 | 24.98 | 33.99 | 33.43 |
| | CDA [11] | 39.39 | 41.19 | 46.34 | 50.82 | 43.42 | 44.63 | 44.29 |
| | TAP [12] | 23.01 | 27.73 | 32.75 | 40.22 | 15.73 | 20.90 | 26.72 |
| | BIA [13] | 27.01 | 29.59 | 34.65 | 43.42 | 13.57 | 24.69 | 28.82 |
| Den169 | GAMA | 10.40 | 13.47 | 19.30 | 23.46 | 8.65 | 10.29 | 14.26 |

Table 4: Pascal-VOC $\rightarrow$ ImageNet

| $\bm{f}(\cdot)$ | Method | VGG16 | VGG19 | Res50 | Res152 | Den121 | Den169 | Average |
|---|---|---|---|---|---|---|---|---|
| | No Attack | 70.15 | 70.94 | 74.60 | 77.34 | 74.22 | 75.74 | 73.83 |
| | GAP [10] | 24.44 | 21.64 | 63.65 | 67.84 | 63.09 | 65.47 | 51.02 |
| | CDA [11] | 13.83 | 11.99 | 47.32 | 53.92 | 46.81 | 52.24 | 37.68 |
| | TAP [12] | 06.70 | 07.28 | 50.94 | 57.36 | 47.68 | 53.43 | 37.23 |
| | BIA [13] | 04.20 | 04.73 | 48.63 | 57.65 | 45.94 | 53.37 | 35.75 |
| VGG19 | GAMA | 03.07 | 03.41 | 22.32 | 34.04 | 24.51 | 30.35 | 19.61 |
| | GAP [10] | 34.04 | 34.67 | 52.85 | 61.61 | 58.09 | 59.24 | 50.08 |
| | CDA [11] | 29.33 | 34.88 | 44.28 | 46.05 | 46.91 | 51.62 | 42.17 |
| | TAP [12] | 33.25 | 37.53 | 41.18 | 42.14 | 50.96 | 56.45 | 43.58 |
| | BIA [13] | 22.82 | 27.44 | 34.66 | 36.74 | 45.48 | 51.26 | 36.40 |
| Res152 | GAMA | 16.43 | 17.02 | 21.93 | 17.07 | 31.63 | 30.57 | 22.44 |
| | GAP [10] | 42.79 | 45.01 | 57.79 | 65.42 | 63.02 | 65.31 | 56.55 |
| | CDA [11] | 36.67 | 37.51 | 52.30 | 61.78 | 54.68 | 57.85 | 50.13 |
| | TAP [12] | 28.92 | 30.19 | 38.36 | 50.92 | 45.88 | 40.78 | 39.17 |
| | BIA [13] | 26.12 | 27.42 | 37.06 | 51.30 | 40.63 | 37.56 | 36.68 |
| Den169 | GAMA | 18.16 | 20.93 | 28.04 | 41.85 | 26.11 | 21.67 | 26.12 |

Table 5: MS-COCO $\rightarrow$ ImageNet

| $\bm{f}(\cdot)$ | Method | VGG16 | VGG19 | Res50 | Res152 | Den121 | Den169 | Average |
|---|---|---|---|---|---|---|---|---|
| | No Attack | 70.15 | 70.94 | 74.60 | 77.34 | 74.22 | 75.74 | 73.83 |
| | GAP [10] | 15.55 | 15.06 | 49.50 | 56.07 | 47.65 | 53.49 | 39.55 |
| | CDA [11] | 13.05 | 12.59 | 46.77 | 52.58 | 43.55 | 50.03 | 36.42 |
| | TAP [12] | 02.33 | 02.93 | 19.28 | 35.20 | 19.45 | 23.42 | 17.10 |
| | BIA [13] | 02.51 | 03.09 | 29.72 | 43.98 | 30.37 | 36.53 | 24.36 |
| VGG19 | GAMA | 02.01 | 02.57 | 19.99 | 35.21 | 26.26 | 32.98 | 19.83 |
| | GAP [10] | 22.98 | 24.41 | 32.74 | 32.35 | 39.56 | 44.11 | 32.69 |
| | CDA [11] | 35.69 | 39.40 | 51.75 | 54.84 | 53.55 | 58.92 | 49.02 |
| | TAP [12] | 13.29 | 12.46 | 23.44 | 21.11 | 35.14 | 41.29 | 24.45 |
| | BIA [13] | 14.98 | 14.98 | 25.40 | 21.98 | 34.11 | 37.62 | 24.84 |
| Res152 | GAMA | 17.94 | 19.16 | 24.57 | 17.24 | 29.67 | 30.57 | 23.19 |
| | GAP [10] | 30.50 | 30.79 | 40.82 | 51.12 | 41.03 | 37.46 | 38.62 |
| | CDA [11] | 35.75 | 36.69 | 50.45 | 57.43 | 51.23 | 52.44 | 47.33 |
| | TAP [12] | 21.45 | 26.45 | 27.30 | 45.76 | 30.83 | 25.34 | 29.52 |
| | BIA [13] | 20.91 | 25.01 | 37.16 | 50.65 | 34.71 | 23.38 | 31.97 |
| Den169 | GAMA | 06.94 | 10.63 | 10.97 | 21.60 | 13.92 | 08.22 | 12.04 |

Table 6: Pascal-VOC $\rightarrow$ Coarse (CIFAR10/100) and Fine-grained (CUB, Car, Air) tasks

| $\bm{f}(\cdot)$ | Method | CIFAR10 [88] | CIFAR100 [88] | CUB Res50 | CUB SeNet | CUB se-Res101 | Car Res50 | Car SeNet | Car se-Res101 | Air Res50 | Air SeNet | Air se-Res101 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | No Attack | 93.79 | 74.28 | 87.35 | 86.81 | 86.54 | 94.35 | 93.36 | 92.97 | 92.23 | 92.05 | 91.90 | 89.60 |
| | GAP [10] | 73.58 | 39.10 | 78.94 | 79.79 | 80.41 | 82.33 | 85.71 | 87.19 | 81.19 | 81.82 | 79.99 | 77.27 |
| | CDA [11] | 70.40 | 44.68 | 54.76 | 64.74 | 68.99 | 70.87 | 75.64 | 81.78 | 42.87 | 74.38 | 77.20 | 66.02 |
| | TAP [12] | 73.18 | 35.41 | 72.42 | 74.39 | 73.94 | 78.40 | 77.08 | 84.59 | 78.91 | 78.94 | 75.52 | 72.98 |
| | BIA [13] | 59.82 | 27.84 | 68.31 | 65.64 | 73.70 | 75.61 | 67.90 | 81.83 | 75.88 | 66.13 | 76.75 | 67.22 |
| VGG19 | GAMA | 53.85 | 24.94 | 53.52 | 62.19 | 66.93 | 60.08 | 69.11 | 78.95 | 45.51 | 43.71 | 63.37 | 56.56 |
| | GAP [10] | 69.80 | 41.06 | 64.96 | 80.01 | 81.77 | 72.62 | 86.02 | 87.53 | 84.28 | 84.64 | 85.48 | 76.19 |
| | CDA [11] | 77.60 | 49.43 | 65.38 | 71.52 | 71.63 | 73.04 | 76.52 | 79.54 | 66.61 | 72.73 | 60.10 | 69.46 |
| | TAP [12] | 70.92 | 38.39 | 48.60 | 73.20 | 76.10 | 69.02 | 86.62 | 81.94 | 74.65 | 80.68 | 83.20 | 71.21 |
| | BIA [13] | 67.54 | 36.43 | 51.17 | 70.64 | 71.63 | 70.85 | 82.85 | 80.21 | 72.94 | 80.20 | 81.01 | 69.58 |
| Res152 | GAMA | 69.53 | 38.57 | 27.67 | 64.77 | 64.79 | 59.18 | 74.27 | 80.50 | 59.71 | 69.10 | 65.77 | 61.26 |
| | GAP [10] | 83.25 | 56.08 | 64.70 | 78.15 | 76.77 | 80.65 | 85.95 | 86.74 | 81.79 | 84.40 | 85.03 | 78.50 |
| | CDA [11] | 84.34 | 58.03 | 61.75 | 73.40 | 71.75 | 84.21 | 85.57 | 84.58 | 78.97 | 82.24 | 78.22 | 76.64 |
| | TAP [12] | 86.77 | 58.67 | 54.04 | 64.45 | 62.31 | 76.13 | 81.35 | 82.91 | 34.02 | 76.66 | 76.75 | 68.55 |
| | BIA [13] | 85.20 | 55.21 | 47.95 | 58.18 | 56.02 | 55.88 | 73.65 | 72.30 | 62.47 | 72.97 | 70.39 | 64.56 |
| Den169 | GAMA | 78.27 | 46.80 | 33.57 | 57.44 | 63.24 | 49.31 | 70.65 | 75.14 | 48.48 | 62.95 | 70.15 | 59.63 |

[Figure 3: bar charts of victim accuracy (%) under the GAP, CDA, TAP, BIA, and GAMA (Ours) attacks. (a) Custom architectures (both from [88]): CIFAR10 victims, with $\mathcal{G}_{\bm{\theta}}(\cdot)$ trained on MS-COCO (VGG19 and Den169 surrogates). (b) Standard architectures (left: Res50 on CUB, right: se-Res101 on Air), with $\mathcal{G}_{\bm{\theta}}(\cdot)$ trained on MS-COCO (Res152 and Den169 surrogates).]

Figure 3: Transferability on types of victim model designs. GAMA shows potent transferring attacks to victim networks that were custom designed (Figure 3(a)) and that contain standard blocks like Residual blocks [80] (Figure 3(b)).

Performance on Type of Architectures.
In Figure 3, we further study the transferability of attacks depending on the type of victim architecture: standard, which follows standard modules like Residual blocks [80] to build the classifier, and custom, where the victim classifier does not adhere to a specific pattern of network modules. In both cases, GAMA consistently maintains better attack rates than the other attacks. This shows convincing transferability of perturbations crafted from GAMA’s $\mathcal{G}_{\bm{\theta}}(\cdot)$ under diverse black-box settings. We provide additional results in the supplementary material.

Robustness of Attacks against Defenses. To analyze the robustness of all the methods, we launch misclassification attacks ($\mathcal{G}_{\bm{\theta}}(\cdot)$ trained on MS-COCO with Den169 as the surrogate model) when the victim deploys an input-processing defense such as median blur with a $3\times 3$ window, or the Neural Representation Purifier (NRP) [76], on three ImageNet models (VGG16, Res152, Den121). From Table 8(a) and Table 8(b), we can observe that the attack success of GAMA is better than that of prior methods even when the victim pre-processes the perturbed image before making decisions. In Figure 6(a), we observe that a Projected Gradient Descent (PGD) [77] assisted Res50 is difficult to break, with GAMA performing slightly better than the other methods. Finally, motivated by [96], we analyze an output-processing defense scenario where the victim checks the context consistency of the labels predicted on perturbed images using the co-occurrence matrix $\mathcal{O}$. In particular, if a perturbed image is misclassified showing a co-occurrence of labels not present in $\mathcal{O}$, we term this a detected attack. Otherwise, we call it an undetected attack. To measure this performance, we first compute the co-occurrence matrix $\mathcal{O}_{\delta}$ by perturbing all the test set images and estimate its precision w.r.t. the ground-truth $\mathcal{O}$.
To check for attacks that have both a high precision value $p$ and a high misclassification rate, we calculate a ‘context score’ (higher is better) that is the harmonic mean of $p$ and the misclassification rate ($1-$ accuracy). We show the attack performance against this context-consistency check in Figure 6(b) for both Pascal-VOC and MS-COCO, averaged over all surrogate models under white-box attacks. Clearly, GAMA presents itself as the best undetected attack compared to prior works.

[Figure 4: two bar charts of victim accuracy (%) for generators trained with $\mathcal{L}_{\texttt{img}}$ only, $\mathcal{L}_{\texttt{img}}+\mathcal{L}_{\texttt{txt}}$, and the full objective $\mathcal{L}$; left: Pascal-VOC $\rightarrow$ Pascal-VOC, right: Pascal-VOC $\rightarrow$ ImageNet; victim models VGG16 and Res50.]

Figure 4: Ablation analysis of loss objective.
We analyze the contribution due to the introduction of each loss function $\mathcal{L}_{\texttt{img}}$ and $\mathcal{L}_{\texttt{txt}}$ towards the final objective $\mathcal{L}$, both in the same distribution (left) and in a different distribution (right). The surrogate model is Res152.
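The context-consistency defense and the ‘context score’ described above can be sketched as follows. This is a simplified illustration (binary co-occurrence, helper names of our own), not the exact procedure of [96]:

```python
from itertools import combinations

def cooccurrence(label_sets, num_classes):
    """Binary C x C matrix O: O[i][j] = 1 iff classes i and j
    co-occur in at least one image's label set."""
    O = [[0] * num_classes for _ in range(num_classes)]
    for labels in label_sets:
        for i, j in combinations(sorted(labels), 2):
            O[i][j] = O[j][i] = 1
    return O

def context_score(precision, accuracy):
    """Harmonic mean of the co-occurrence precision p (of O_delta w.r.t.
    the ground-truth O) and the misclassification rate (1 - accuracy)."""
    misclass = 1.0 - accuracy
    if precision + misclass == 0.0:
        return 0.0
    return 2.0 * precision * misclass / (precision + misclass)
```

An attack that misclassifies often while predicting only label pairs that plausibly co-occur scores high on both terms, and hence on their harmonic mean.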
Table 7: Pascal-VOC $\rightarrow$ MS-COCO Object Detection task

| $\bm{f}(\cdot)$ | Method | FRCN | RNet | DETR | D2ETR | Average |
|---|---|---|---|---|---|---|
| | No Attack | 0.582 | 0.554 | 0.607 | 0.633 | 0.594 |
| | GAP [10] | 0.424 | 0.404 | 0.360 | 0.410 | 0.399 |
| | CDA [11] | 0.276 | 0.250 | 0.208 | 0.244 | 0.244 |
| | TAP [12] | 0.384 | 0.340 | 0.275 | 0.320 | 0.329 |
| | BIA [13] | 0.347 | 0.318 | 0.253 | 0.281 | 0.299 |
| VGG19 | GAMA | 0.234 | 0.207 | 0.117 | 0.122 | 0.170 |
| | GAP [10] | 0.389 | 0.362 | 0.363 | 0.408 | 0.380 |
| | CDA [11] | 0.305 | 0.274 | 0.256 | 0.281 | 0.279 |
| | TAP [12] | 0.400 | 0.348 | 0.288 | 0.350 | 0.346 |
| | BIA [13] | 0.321 | 0.275 | 0.205 | 0.256 | 0.264 |
| Res152 | GAMA | 0.172 | 0.138 | 0.080 | 0.095 | 0.121 |

Ablation Analysis. We dissect the contribution of each loss function in our proposed loss objective of Equation 4 in Figure 4, where $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained on Pascal-VOC with the Res152 surrogate model. We analyze the attack transferability to different victim models (VGG16, Res50). We observe that the introduction of each loss objective (left to right) increases the strength of the attack, both in the same distribution as the attacker (Pascal-VOC) and in an unknown distribution (ImageNet), on both victim classifiers. Finally, we visualize some perturbed image examples crafted by GAMA in Figure 5.

Figure 5: Qualitative Examples. We show some clean (top) and perturbed (bottom) images from GAMA. Best viewed in color/zoomed.
| Method | VGG16 | Res152 | Den121 | Average |
|---|---|---|---|---|
| No Attack | 64.57 | 74.04 | 71.68 | 69.92 |
| GAP [10] | 33.33 | 56.90 | 46.34 | 45.52 |
| CDA [11] | 37.89 | 58.98 | 56.19 | 51.02 |
| TAP [12] | 22.37 | 50.67 | 40.81 | 37.95 |
| BIA [13] | 25.09 | 54.45 | 46.34 | 41.96 |
| GAMA | 20.34 | 49.66 | 37.55 | 35.85 |

(a) Median Blur

| Method | VGG16 | Res152 | Den121 | Average |
|---|---|---|---|---|
| No Attack | 56.26 | 62.37 | 68.62 | 62.41 |
| GAP [10] | 31.08 | 45.11 | 37.85 | 38.01 |
| CDA [11] | 34.61 | 47.64 | 51.32 | 44.52 |
| TAP [12] | 20.06 | 36.54 | 19.70 | 25.43 |
| BIA [13] | 19.94 | 41.03 | 20.07 | 23.68 |
| GAMA | 7.38 | 19.00 | 7.87 | 11.41 |

(b) NRP

[Figure 6: (a) accuracy (%) of a PGD ($\epsilon=4$) assisted Res50 under No Attack, GAP, CDA, TAP, BIA, and GAMA (Ours); (b) context scores of each attack on MS-COCO and Pascal-VOC.]

Table 8: Robustness Analysis against various defenses. Our proposed attack GAMA consistently shows better performance compared to the baselines in scenarios where the victim deploys attack defenses.

## 5 Conclusion

In this paper, we propose a new generative attack, GAMA, that learns to create perturbations against multi-object scenes. For the first time in the generative attack literature, we show the utility of a pre-trained vision-and-language model, CLIP, for optimizing effective perturbation generators. Specifically, CLIP’s joint text and image aligning property allows us to use natural language semantics to handle the multi-object semantics of the input scene to be perturbed. To demonstrate GAMA’s efficacy, we perform extensive experiments that show state-of-the-art attacks across a wide range of black-box victim models (multi-label/single-label classifiers, and object detectors). We also evaluate the robustness of our attacks against various defense mechanisms. As part of future work, we will explore more complex methodologies to employ vision-language models both for adversarial attacks and for defense systems.
## 6 Limitations and Societal Impacts

Limitations. The pre-trained CLIP model ‘ViT-B/16’ outputs a 512-dimensional embedding, which restricts the features extracted from the surrogate model in our losses to the same size. Another limitation of our method is the use of co-occurrence matrices to extract the right pairs of classes that exist together in real-world scenes. In this paper, we make the assumption that text prompts are created using two classes that exist together according to the co-occurrence matrix of size $C\times C$ (for $C$ classes in the data distribution). However, we could also use a triplet of classes that exist together in an input scene, which would need a co-occurrence tensor of size $C\times C\times C$. Computing such a huge tensor to cover all the images provided in the train set (usually on the order of thousands) would be tedious.

Societal Impacts. Adversarial attacks are designed with the sole goal of subverting machine decisions by any means available. Our attack approach shows one such method, where a benign open-sourced vision-language model can be utilized by an attacker to create potent perturbations. This demonstrates the need for the victim to prepare for constantly evolving attacks that may cause major harm in real-world systems (_e.g._, person re-identification [97]). We believe that our work can help further propagate research into designing efficient and robust models that do not break down under attacks built upon multi-modal (in our case, text-image) features. Future researchers should also be aware of video generative models [98, 99] that can be used to create adversarial attacks on ubiquitous video classifiers built on the success of vision-language models.

Acknowledgement. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112090096. Approved for public release; distribution is unlimited.
Supplementary material for “GAMA: Generative Adversarial Multi-Object Scene Attacks”

###### CONTENTS

1. 1 Introduction
2. 2 Related works
3. 3 Proposed Attack Methodology: GAMA
1. 3.1 Adversary Equipped with Pre-Trained CLIP
4. 4 Experiments
1. 4.1 Results and Analysis
5. 5 Conclusion
6. 6 Limitations and Societal Impacts
7. A Additional Analysis on GAMA
8. B Additional Results w.r.t. Baselines
9. C Implementation Details

###### List of Tables

1. 1 Characteristic comparison. Here, $\bm{f}(\cdot)$ denotes the surrogate classifier. $\bm{x}$ and $\bm{x}_{\bm{\delta}}$ denote a clean and a perturbed image. $k$ denotes the output from a specific pre-defined layer of $\bm{f}(\cdot)$ (different for each method).
Going beyond prior generative attacks [10, 11, 12, 13], GAMA leverages multi-modal (text and image) features $\bm{\rho}_{\texttt{txt}}$ and $\bm{\rho}_{\texttt{img}}$ extracted from a pre-trained CLIP [36] model to train the perturbation generator.
Its learning objective aims to pull $\bm{f}_{k}(\bm{x}_{\bm{\delta}})$ closer to a dissimilar text embedding $\bm{\rho}_{\texttt{txt}}$ (w.r.t. $\bm{x}$) while pushing it away from $\bm{f}_{k}(\bm{x})$ and $\bm{\rho}_{\texttt{img}}$. Further, GAMA analyzes attack scenarios where the surrogate model is a multi-label classifier with input scenes that usually contain multiple objects. 2. 4.1 Results and Analysis 3. 4.1 Results and Analysis 4.
6 Pascal-VOC $\rightarrow$ Coarse (CIFAR10/100) and Fine-grained (CUB, Car, Air) tasks 5. 7 Robustness Analysis against various defenses. Our proposed attack GAMA consistently shows better performances compared to baselines in scenarios where the victim deploys attack defenses. 6. (a) Median Blur 7. (b) NRP 8. 1 Impact of CLIP model on GAMA 9. (a) Pascal-VOC $\rightarrow$ Pascal-VOC 10. (b) Pascal-VOC $\rightarrow$ ImageNet 11. Random Runs with Error Bars. 12. Effect of Surrogate Ensemble. 13. 5 Black box attacks: MS-COCO object detection (MS-COCO $\rightarrow$ MS-COCO) 14. Black-box Setting (Multi-Label Classification). 15. 8 Robustness analysis when $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained on Pascal-VOC 16. (a) Median Blur 17. (b) NRP ###### List of Figures 1. 1 Using CLIP’s image-text aligning property, we compute the features of the least similar text description w.r.t. to clean image. 2. 2 Overview of GAMA. The perturbation generator $\mathcal{G}_{\bm{\theta}}(\cdot)$ crafts a perturbed image ($\ell_{\infty}$-budget constrained by projection operator $\mathcal{P}(\cdot)$) from the clean image as input. Next, embeddings $\bm{z}$ from clean image and $\widehat{\bm{z}}$ from perturbed image are extracted from the surrogate model. 
A pre-trained CLIP model extracts the image embedding $\bm{\rho}_{\texttt{img}}$ from the clean image and the text embedding $\bm{\rho}_{\texttt{txt}}$ that is least similar to $\bm{\rho}_{\texttt{img}}$ (see details in Section 3.1).
Finally, the loss functions $\mathcal{L}_{\mathchoice{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34195pt}{5.0pt}{\hbox{\raisebox{1.3611pt}{$\displaystyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34195pt}{5.0pt}{\hbox{\raisebox{1.3611pt}{$\textstyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12747pt}{\leavevmode\resizebox{9.34193pt}{5.0pt}{\hbox{\raisebox{0.95277pt}{$\scriptstyle\texttt{img}\mathstrut$}}}}}{\raisebox{-1.12746pt}{\leavevmode\resizebox{9.34198pt}{5.0pt}{\hbox{\raisebox{0.68054pt}{$\scriptscriptstyle\texttt{img}\mathstrut$}}}}}}$ and $\mathcal{L}_{\mathchoice{\raisebox{0.0pt}{\leavevmode\resizebox{8.49034pt}{4.0pt}{\hbox{\raisebox{0.0pt}{$\displaystyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{8.49034pt}{4.0pt}{\hbox{\raisebox{0.0pt}{$\textstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{8.49033pt}{4.0pt}{\hbox{\raisebox{0.0pt}{$\scriptstyle\texttt{txt}\mathstrut$}}}}}{\raisebox{0.0pt}{\leavevmode\resizebox{8.49037pt}{4.0pt}{\hbox{\raisebox{0.0pt}{$\scriptscriptstyle\texttt{txt}\mathstrut$}}}}}}$ utilize these embeddings to optimize the generator weights $\bm{\theta}$. Loss solely based on a surrogate model not shown here for simplicity. We use a prefix=‘a photo depicts’ in all the text prompts following [67]. 3. 3 Transferability on types of victim model designs. GAMA shows potent transferring attacks to victim networks that were custom designed (Figure 3(a)) and that contain standard blocks like Residual blocks [80] (Figure 3(b)). 4. (a) Custom architecture (both from [88]) 5. (b) Standard architecture (left: Res50, right: se-Res101) 6. 4.1 Results and Analysis 7. 5 Qualitative Examples. We show some clean (top) and perturbed (bottom) images from GAMA. Best viewed in color/zoomed. 8. (a) PGD ($\epsilon=4$) 9. (b) Context check 10. 0 Black-box Setting Embedding Visualization for GAMA and TAP [12] 11. (a) PGD ($\epsilon=8$) 12. 
We present additional analysis of GAMA in the following sections to investigate its attack capabilities under various settings, including black-box embedding visualizations w.r.t. TAP [12], the impact of different types of CLIP models, and performance with an ensemble of surrogate models. We also demonstrate GAMA’s transfer-attack strength in comparison to prior methods under difficult black-box transfer attacks, including transfer to a different multi-label distribution, object detection, and robustness of the perturbations when the victim uses defense mechanisms to minimize classifier performance deterioration. All experiments are done with perturbation budget $\ell_{\infty}\leq 10$.

## A Additional Analysis on GAMA

Figure 0: Embedding visualization. GAMA uses the CLIP-extracted text and image embeddings to craft highly transferable adversarial examples. This can be seen in the embedding visualizations, where GAMA’s perturbed images lie convincingly farther away from the clean images, with better margins, than those of TAP [12]. Left and right plots show perturbed-image embeddings (both on 1000 random ImageNet images) when $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained with MS-COCO and Pascal-VOC, respectively. Surrogate and victim models are given in parentheses.

#### Black-box Setting Embedding Visualization.

To demonstrate that GAMA learns to create more potent perturbations than prior works, we perform Principal Component Analysis (PCA) of perturbed images extracted from GAMA and TAP [12] in Figure 0, with $\mathcal{G}_{\bm{\theta}}(\cdot)$ trained on MS-COCO and on Pascal-VOC. We choose PCA visualization as it preserves the global differences of high-dimensional data in low-dimensional regimes [100, 101]. Clearly, on an unseen distribution (ImageNet [87]), features obtained from GAMA’s perturbed images differ from those of clean images significantly more than TAP’s [12].

#### Impact of Different CLIP models.
We analyze the impact of the different open-source pre-trained CLIP models provided by OpenAI in Table 1 (surrogate model: Res152), both for same-domain and different-domain transfer attacks. We observe that the CLIP frameworks whose vision encoders use image transformers [102] (ViT-L/14, ViT-B/32, ViT-B/16) as their backbone perform better in our proposed setting than those with convolutional vision encoders (RN50, RN101). We attribute this to the effective representation capability of transformers [102].

Table 1: Impact of CLIP model on GAMA

(a) Pascal-VOC $\rightarrow$ Pascal-VOC

| | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 |
|---|---|---|---|---|---|---|
| No Attack | 82.51 | 83.18 | 80.52 | 83.12 | 83.74 | 83.07 |
| RN50 | 8.83 | 15.25 | 64.37 | 67.24 | 70.53 | 69.13 |
| RN101 | 21.74 | 9.45 | 60.56 | 68.53 | 67.01 | 66.17 |
| ViT-L/14 | 43.35 | 49.89 | 45.08 | 43.30 | 54.23 | 51.53 |
| ViT-B/32 | 10.58 | 15.18 | 67.07 | 70.34 | 69.14 | 68.02 |
| ViT-B/16 | 6.12 | 5.89 | 41.17 | 45.57 | 53.11 | 44.58 |

(b) Pascal-VOC $\rightarrow$ ImageNet

| | VGG16 | VGG19 | Res50 | Res152 | Den121 | Den169 |
|---|---|---|---|---|---|---|
| No Attack | 70.15 | 70.94 | 74.60 | 77.34 | 74.22 | 75.74 |
| RN50 | 3.13 | 2.06 | 46.25 | 52.01 | 49.33 | 45.91 |
| RN101 | 2.93 | 2.41 | 42.73 | 56.16 | 46.67 | 45.97 |
| ViT-L/14 | 16.63 | 20.04 | 26.41 | 23.18 | 31.10 | 32.53 |
| ViT-B/32 | 3.90 | 2.81 | 49.61 | 54.41 | 48.02 | 46.41 |
| ViT-B/16 | 3.07 | 3.41 | 22.32 | 34.04 | 24.51 | 30.35 |

#### Random Runs with Error Bars.

We report the mean and standard error in Table 2, along with an error-bar plot (mean and standard error). We observe that GAMA maintains its performance across runs with different random seed values. Here, $\mathcal{G}_{\bm{\theta}}(\cdot)$ was trained on Pascal-VOC with VGG19 as the surrogate.

Table 2: Pascal-VOC $\rightarrow$ Pascal-VOC (s.e. = standard error)

| | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 |
|---|---|---|---|---|---|---|
| No Attack | 82.51 | 83.18 | 80.52 | 83.12 | 83.74 | 83.07 |
| Run 1 | 5.86 | 5.18 | 45.43 | 50.88 | 52.61 | 43.44 |
| Run 2 | 6.00 | 4.99 | 42.30 | 47.54 | 49.82 | 40.82 |
| Run 3 | 6.08 | 4.88 | 40.71 | 46.64 | 50.31 | 42.73 |
| Run 4 | 5.95 | 5.15 | 42.28 | 45.52 | 51.46 | 40.92 |
| Run 5 | 6.01 | 4.84 | 41.47 | 45.77 | 49.33 | 39.47 |
| mean | 5.98 | 5.01 | 42.44 | 47.27 | 50.70 | 41.47 |
| s.e. | 0.035 | 0.067 | 0.800 | 0.966 | 0.590 | 0.711 |

(Error-bar plot: accuracy (%) with standard error for victim models VGG16, VGG19, Res50, Res152, Den169, and Den121; y-axis range 5–55%.)

#### Effect of Surrogate Ensemble.

We analyze the results when all the surrogates (VGG19, Res152, and Den169) are employed together to train the perturbation generator $\mathcal{G}_{\bm{\theta}}(\cdot)$ using GAMA. As can be seen in Table 3 and Table 4 (the ensemble is denoted as All), we do not observe any significant advantage when using multiple surrogates. The same observation has been noted by TAP [12] as well. We hypothesize that the mid-level features from multiple surrogates may not introduce complementary features for learning more powerful perturbations than single-classifier surrogates.
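To make the ensemble setup concrete, here is a small numerical sketch (our own illustration, not the paper’s actual loss): it averages the cosine similarity between clean and perturbed mid-layer features over several surrogates, the quantity a transfer attack would drive down. The function name and the toy 512-d features are hypothetical.

```python
import numpy as np

def ensemble_feature_loss(clean_feats, adv_feats):
    # Mean cosine similarity between clean and perturbed mid-layer features
    # across surrogates; an attack would *minimize* this (push features apart).
    sims = []
    for z, z_hat in zip(clean_feats, adv_feats):
        sims.append(np.dot(z, z_hat) / (np.linalg.norm(z) * np.linalg.norm(z_hat)))
    return float(np.mean(sims))

# toy example: three "surrogates", each producing 512-d mid-layer features
rng = np.random.default_rng(0)
clean = [rng.standard_normal(512) for _ in range(3)]
adv_same = [z.copy() for z in clean]  # unperturbed features -> similarity ~1
adv_flip = [-z for z in clean]        # fully opposed features -> similarity ~-1

print(ensemble_feature_loss(clean, adv_same))  # ~1.0
print(ensemble_feature_loss(clean, adv_flip))  # ~-1.0
```

Plain averaging treats each surrogate equally, which is consistent with the observation above that stacking surrogates need not add complementary information.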
Table 3: Ensemble comparison: VOC $\rightarrow$ VOC

| $\bm{f}(\cdot)$ | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 | Average |
|---|---|---|---|---|---|---|---|
| VGG19 | 6.11 | 5.89 | 41.17 | 45.57 | 53.11 | 44.58 | 32.74 |
| Res152 | 33.42 | 39.42 | 32.39 | 20.46 | 49.76 | 49.54 | 37.49 |
| Den169 | 44.25 | 52.89 | 48.83 | 53.25 | 45.00 | 50.96 | 49.19 |
| All | 16.46 | 21.67 | 51.97 | 58.52 | 54.51 | 58.20 | 43.55 |

Table 4: Ensemble comparison: COCO $\rightarrow$ COCO

| $\bm{f}(\cdot)$ | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 | Average |
|---|---|---|---|---|---|---|---|
| VGG19 | 3.59 | 3.75 | 27.13 | 30.43 | 24.60 | 21.77 | 18.54 |
| Res152 | 24.52 | 27.73 | 30.62 | 23.04 | 31.30 | 27.31 | 27.42 |
| Den169 | 10.40 | 13.47 | 19.30 | 23.46 | 8.65 | 10.29 | 14.26 |
| All | 10.08 | 10.75 | 23.83 | 35.23 | 29.57 | 30.45 | 23.32 |

## B Additional Results w.r.t. Baselines

Table 5: COCO $\rightarrow$ COCO Object Detection

| $\bm{f}(\cdot)$ | Method | FRCN | RNet | DETR | D2ETR | Average |
|---|---|---|---|---|---|---|
| | No Attack | 0.582 | 0.554 | 0.607 | 0.633 | 0.594 |
| VGG19 | GAP [10] | 0.347 | 0.312 | 0.282 | 0.304 | 0.311 |
| | CDA [11] | 0.370 | 0.347 | 0.312 | 0.282 | 0.327 |
| | TAP [12] | 0.130 | 0.120 | 0.099 | 0.104 | 0.113 |
| | BIA [13] | 0.266 | 0.229 | 0.185 | 0.211 | 0.223 |
| | GAMA | 0.246 | 0.214 | 0.134 | 0.155 | 0.187 |
| Res152 | GAP [10] | 0.187 | 0.145 | 0.097 | 0.108 | 0.134 |
| | CDA [11] | 0.322 | 0.301 | 0.237 | 0.274 | 0.283 |
| | TAP [12] | 0.167 | 0.151 | 0.087 | 0.123 | 0.132 |
| | BIA [13] | 0.152 | 0.144 | 0.101 | 0.121 | 0.129 |
| | GAMA | 0.154 | 0.128 | 0.086 | 0.100 | 0.117 |
| Den169 | GAP [10] | 0.308 | 0.261 | 0.201 | 0.213 | 0.245 |
| | CDA [11] | 0.325 | 0.293 | 0.238 | 0.255 | 0.277 |
| | TAP [12] | 0.181 | 0.155 | 0.126 | 0.147 | 0.152 |
| | BIA [13] | 0.265 | 0.236 | 0.185 | 0.214 | 0.225 |
| | GAMA | 0.078 | 0.064 | 0.037 | 0.047 | 0.056 |

#### Black-box Setting (Object Detection).
We evaluate a black-box transfer attack on state-of-the-art MS-COCO object detectors (Faster R-CNN with Res50 backbone (FRCN) [91], RetinaNet with Res50 backbone (RNet) [92], DEtection TRansformer (DETR) [93], and Deformable DETR (D2ETR) [94]), with models provided by [95], in Table 5. It can be observed that GAMA beats the baselines in the majority of scenarios when $\mathcal{G}_{\bm{\theta}}(\cdot)$ is trained with MS-COCO.

#### Black-box Setting (Multi-Label Classification).

We perform a black-box transfer attack on a multi-label domain different from that of $\mathcal{G}_{\bm{\theta}}(\cdot)$’s training set: Pascal-VOC $\rightarrow$ MS-COCO in Table 6 and MS-COCO $\rightarrow$ Pascal-VOC in Table 7. We outperform all baselines in the majority of cases, with an average absolute difference (w.r.t. the closest method) of $\sim$5 percentage points (pp) for Pascal-VOC $\rightarrow$ MS-COCO and $\sim$13.5 pp for MS-COCO $\rightarrow$ Pascal-VOC.

Table 6: Pascal-VOC $\rightarrow$ MS-COCO

| $\bm{f}(\cdot)$ | Method | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 | Average |
|---|---|---|---|---|---|---|---|---|
| | No Attack | 65.80 | 66.49 | 65.64 | 67.94 | 67.60 | 66.39 | 66.64 |
| VGG19 | GAP [10] | 20.14 | 20.61 | 54.12 | 58.71 | 53.68 | 50.87 | 43.02 |
| | CDA [11] | 18.87 | 15.93 | 41.96 | 48.09 | 47.62 | 42.74 | 35.86 |
| | TAP [12] | 7.84 | 10.03 | 45.96 | 48.46 | 43.40 | 39.76 | 32.57 |
| | BIA [13] | 8.56 | 10.06 | 41.32 | 49.07 | 46.03 | 40.60 | 32.60 |
| | GAMA | 2.92 | 3.83 | 23.37 | 28.26 | 22.07 | 17.69 | 16.35 |
| Res152 | GAP [10] | 32.90 | 33.63 | 46.70 | 54.18 | 53.71 | 51.40 | 45.41 |
| | CDA [11] | 27.28 | 32.25 | 41.32 | 44.59 | 48.33 | 45.10 | 39.81 |
| | TAP [12] | 31.68 | 37.33 | 36.09 | 36.85 | 47.77 | 45.59 | 39.22 |
| | BIA [13] | 26.99 | 29.83 | 33.86 | 35.35 | 45.87 | 41.70 | 35.59 |
| | GAMA | 21.43 | 28.59 | 29.54 | 24.95 | 32.92 | 29.89 | 27.89 |
| Den169 | GAP [10] | 42.64 | 44.07 | 50.14 | 57.48 | 57.01 | 53.16 | 50.75 |
| | CDA [11] | 39.60 | 39.13 | 44.85 | 53.07 | 50.01 | 47.52 | 45.69 |
| | TAP [12] | 38.96 | 40.87 | 40.86 | 47.01 | 28.67 | 40.62 | 39.50 |
| | BIA [13] | 31.86 | 37.59 | 37.98 | 44.93 | 28.25 | 36.15 | 36.13 |
| | GAMA | 26.43 | 32.64 | 32.30 | 38.88 | 22.06 | 30.62 | 30.49 |

Table 7: MS-COCO $\rightarrow$ Pascal-VOC

| $\bm{f}(\cdot)$ | Method | VGG16 | VGG19 | Res50 | Res152 | Den169 | Den121 | Average |
|---|---|---|---|---|---|---|---|---|
| | No Attack | 82.51 | 83.18 | 80.52 | 83.12 | 83.74 | 83.07 | 82.69 |
| VGG19 | GAP [10] | 17.07 | 15.01 | 61.14 | 67.17 | 69.30 | 63.04 | 48.78 |
| | CDA [11] | 15.23 | 13.19 | 58.81 | 63.80 | 67.43 | 62.23 | 46.78 |
| | TAP [12] | 15.35 | 12.74 | 42.12 | 42.52 | 48.61 | 42.23 | 33.93 |
| | BIA [13] | 8.10 | 8.82 | 52.85 | 55.82 | 63.05 | 56.58 | 40.87 |
| | GAMA | 6.60 | 7.08 | 44.16 | 49.20 | 57.49 | 52.52 | 36.17 |
| Res152 | GAP [10] | 27.09 | 28.45 | 45.91 | 37.28 | 58.07 | 51.28 | 41.34 |
| | CDA [11] | 53.45 | 55.82 | 64.68 | 64.12 | 70.74 | 65.04 | 62.31 |
| | TAP [12] | 42.21 | 41.26 | 41.02 | 35.35 | 58.99 | 54.77 | 45.60 |
| | BIA [13] | 37.04 | 36.46 | 44.91 | 36.12 | 54.60 | 49.95 | 43.18 |
| | GAMA | 36.86 | 40.62 | 38.23 | 23.52 | 48.56 | 48.03 | 39.30 |
| Den169 | GAP [10] | 48.37 | 46.35 | 58.04 | 60.73 | 52.89 | 57.83 | 54.03 |
| | CDA [11] | 58.51 | 58.20 | 67.61 | 69.73 | 67.26 | 65.88 | 64.53 |
| | TAP [12] | 46.83 | 47.88 | 46.98 | 57.68 | 44.95 | 43.99 | 48.05 |
| | BIA [13] | 42.14 | 49.84 | 54.47 | 62.05 | 48.75 | 50.91 | 51.34 |
| | GAMA | 19.68 | 20.29 | 23.22 | 33.57 | 26.33 | 16.37 | 23.25 |

#### Robustness Analysis.

We launch misclassification attacks ($\mathcal{G}_{\bm{\theta}}(\cdot)$ trained on Pascal-VOC with Den169 as the surrogate model) when the victim uses input-processing-based defenses, such as median blur with a $3\times 3$ window (Table 8(a)) and the Neural Representation Purifier (NRP) [76] (Table 8(b)), on three ImageNet models (VGG16, Res152, Den121). We can observe that the attack success of GAMA is better than that of prior methods even when the victim pre-processes the perturbed image.
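For concreteness, the median-blur pre-processing evaluated above can be sketched in a few lines; `median_blur_3x3` below is our own NumPy helper (edge padding, single-channel image), not code from the paper.

```python
import numpy as np

def median_blur_3x3(img):
    """3x3 median filter as a simple input-processing defense (illustrative).
    img: (H, W) array; borders are handled by edge padding."""
    padded = np.pad(img, 1, mode="edge")
    H, W = img.shape
    # gather the 9 neighbors of every pixel, then take the per-pixel median
    stack = np.stack([padded[i:i + H, j:j + W] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# a flat image with one isolated "adversarial" outlier pixel: the blur removes it
img = np.ones((5, 5))
img[2, 2] = 100.0
print(median_blur_3x3(img)[2, 2])  # 1.0
```

Such a filter suppresses isolated high-frequency perturbations, which is why it is a common (if weak) test-time defense.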
Further, in Figure 1(a), we see that a Res50 adversarially trained with Projected Gradient Descent (PGD) [77] is difficult to break, with GAMA performing slightly better than the other methods.

(a) Median Blur

| Method | VGG16 | Res152 | Den121 | Average |
|---|---|---|---|---|
| No Attack | 64.57 | 74.04 | 71.68 | 69.92 |
| GAP [10] | 47.91 | 65.64 | 61.69 | 58.41 |
| CDA [11] | 33.62 | 58.70 | 50.12 | 47.48 |
| TAP [12] | 23.92 | 48.89 | 43.66 | 38.82 |
| BIA [13] | 24.49 | 50.96 | 40.29 | 38.58 |
| GAMA | 22.84 | 52.10 | 36.19 | 37.04 |

(b) NRP

| Method | VGG16 | Res152 | Den121 | Average |
|---|---|---|---|---|
| No Attack | 56.26 | 62.37 | 68.62 | 62.41 |
| GAP [10] | 33.09 | 50.50 | 53.80 | 45.79 |
| CDA [11] | 33.48 | 48.28 | 49.74 | 43.83 |
| TAP [12] | 27.45 | 42.98 | 42.66 | 37.69 |
| BIA [13] | 24.62 | 41.81 | 37.91 | 34.78 |
| GAMA | 18.61 | 34.66 | 24.93 | 26.06 |

(Figure 1(a): bar plot of accuracy (%) for NA, GAP, CDA, TAP, BIA, and Ours against a PGD ($\epsilon=8$) trained Res50; y-axis range 44.5–45.5%.)

Table 8: Robustness Analysis against various defenses. GAMA consistently shows better robustness in cases where the victim uses attack defenses ($\mathcal{G}_{\bm{\theta}}(\cdot)$ trained on Pascal-VOC). ‘NA’ in Figure 1(a) denotes ‘No Attack’.

#### Evaluation of adversarial images on CLIP.

We evaluated CLIP (as a “zero-shot prediction” model) on perturbed images from Pascal-VOC and computed the top two associated labels in Figure 1 using CLIP’s image-text aligning property. Specifically, we used the whole class list of Pascal-VOC and computed the top-2 associated labels for both clean and perturbed images. We observe that the perturbations change the labels associated with the clean image.

Figure 1: Evaluation of adversarial images on CLIP. Surrogate model is VGG19 trained on Pascal-VOC.

#### Mid-layer selection from surrogate model for training the perturbation generator.

Our mid-layer from the surrogate model is chosen based on the embedding size of CLIP: e.g., if the embedding size of the CLIP encoder is 512, we select the layer of the surrogate model that outputs 512-dimensional features.
In comparison, the prior state-of-the-art generative attack TAP [12] manually searches for the optimal layer of the surrogate model for training the perturbation generator (see Limitations in Section 4.6 of their paper [12]). In particular, finding the optimal mid-layer (the one that gives the best attack results) requires searching over each block of $M$ layers (around $M=5$ layers on average [12]) for each surrogate model. Hence, to find the best layer to train a perturbation generator for a particular model, the computational cost of such an exhaustive search is $MN$ GPU hours, where $N$ is the total training time (in GPU hours) per layer. Moreover, our analysis shows that this layer might not yield the best attack results when the training data distribution varies, which would require a manual search over all combinations of surrogate model and data distribution. Such a search is time-consuming, impractical, and clearly not scalable. Finally, directly using TAP’s suggested layer is not possible because its embedding size does not match that of CLIP; this would require us to introduce embedding-modification mechanisms (e.g., Principal Component Analysis (PCA) or t-distributed stochastic neighbor embedding (t-SNE)), leading to an unreasonable increase in training time for every epoch. Note that if we do not consider the manual search for an optimal surrogate layer to train the perturbation generator, then the proper baseline on ImageNet would be CDA [11]. As evident throughout our analysis, we convincingly outperform it in all settings.

## C Implementation Details

We use two multi-label datasets to simulate the scenario of multi-object scenes: Pascal-VOC (training set: trainval from ‘VOC2007’ and ‘VOC2012’; testing set: ‘VOC2007_test’) and MS-COCO (training set: train2017; testing set: val2017). We follow prior works [11, 13] for the generator network $\mathcal{G}_{\bm{\theta}}(\cdot)$.
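The embedding-size-matching rule for mid-layer selection described above can be sketched as follows; the layer names and feature widths here are made up for illustration.

```python
def select_mid_layer(layer_output_dims, clip_embed_dim):
    """Pick the first surrogate layer whose (flattened) feature width equals
    the CLIP embedding size, instead of exhaustively searching over layers.
    layer_output_dims: ordered {layer_name: feature_dim} mapping."""
    for name, dim in layer_output_dims.items():
        if dim == clip_embed_dim:
            return name
    raise ValueError("no layer matches the CLIP embedding size")

# hypothetical surrogate with three candidate mid-layers
surrogate_dims = {"block3": 256, "block4": 512, "block5": 4096}
print(select_mid_layer(surrogate_dims, clip_embed_dim=512))  # block4
```

Because the rule is a single dictionary lookup per surrogate, it avoids the $MN$ GPU-hour search discussed above.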
To stabilize the training, we replace all ReLU [103] activation functions with the Fused Leaky ReLU [104] activation function (negative slope = 0.2, scale = $\sqrt{2}$). We use a margin $\alpha=1.0$ for the contrastive loss. All our training setups use ViT-B/16 as the CLIP model. We use the Adam optimizer [75] with a learning rate of $0.0001$, a batch size of 16, and exponential decay rates $(\beta_{1},\beta_{2})=(0.5,0.999)$. All images were resized to $224\times 224$. Training time was observed to be $\sim$1 hr for the Pascal-VOC dataset (10 epochs) and $\sim$10 hrs for the MS-COCO dataset (5 epochs) on one NVIDIA GeForce RTX 3090 GPU. PyTorch [105] is employed in all code implementations.

## References

* Szegedy et al. [2013] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing Properties of Neural Networks. _arXiv preprint arXiv:1312.6199_, 2013.
* Goodfellow et al. [2014] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. _arXiv preprint arXiv:1412.6572_, 2014.
* Jiang et al. [2019] Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, and Yu-Gang Jiang. Black-box adversarial attacks on video recognition models. In _Proceedings of the 27th ACM International Conference on Multimedia_, pages 864–872, 2019.
* Yan et al. [2020] Huanqian Yan, Xingxing Wei, and Bo Li. Sparse black-box video attack with reinforcement learning. _arXiv preprint arXiv:2001.03754_, 2020.
* Wei et al. [2020] Zhipeng Wei, Jingjing Chen, Xingxing Wei, Linxi Jiang, Tat-Seng Chua, Fengfeng Zhou, and Yu-Gang Jiang. Heuristic black-box adversarial attacks on video recognition models. In _Proceedings of the AAAI Conference on Artificial Intelligence_, pages 12338–12345, 2020.
* Li et al. [2021a] Shasha Li, Abhishek Aich, Shitong Zhu, Salman Asif, Chengyu Song, Amit Roy-Chowdhury, and Srikanth Krishnamurthy.
Adversarial attacks on black box video classifiers: Leveraging the power of geometric transformations. _Advances in Neural Information Processing Systems_ , 34, 2021a. * Zhang et al. [2020] Hu Zhang, Linchao Zhu, Yi Zhu, and Yi Yang. Motion-excited sampler: Video adversarial attack with sparked prior. In _European Conference on Computer Vision_. Springer, 2020. * Carlini and Wagner [2017] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In _2017 IEEE symposium on security and privacy (sp)_ , pages 39–57. IEEE, 2017. * Moosavi-Dezfooli et al. [2016] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: A Simple and Accurate Method to Fool Deep Neural Networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 2574–2582. IEEE, 2016. * Poursaeed et al. [2018] Omid Poursaeed, Isay Katsman, Bicheng Gao, and Serge Belongie. Generative Adversarial Perturbations. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 4422–4431. IEEE, 2018. * Naseer et al. [2019] Muzammal Naseer, Salman H Khan, Harris Khan, Fahad Shahbaz Khan, and Fatih Porikli. Cross-Domain Transferability of Adversarial Perturbations. _arXiv preprint arXiv:1905.11736_ , 2019. * Salzmann et al. [2021] Mathieu Salzmann et al. Learning transferable adversarial perturbations. _Advances in Neural Information Processing Systems_ , 34, 2021. * Zhang et al. [2022] Qilong Zhang, Xiaodan Li, YueFeng Chen, Jingkuan Song, Lianli Gao, Yuan He, and Hui Xue’. Beyond imagenet attack: Towards crafting adversarial examples for black-box domains. In _International Conference on Learning Representations_. International Conference on Learning Representations (ICLR), 2022. * Kurakin et al. [2016] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. _arXiv preprint arXiv:1611.01236_ , 2016.
# A model for the interaction of dislocations with planar defects based on Allen-Cahn type microstructure evolution coupled to strain gradient elasticity

M. Budnitzki<EMAIL_ADDRESS>S. Sandfeld<EMAIL_ADDRESS>Institute for Advanced Simulation (IAS-9: Materials Data Science and Informatics), Forschungszentrum Jülich GmbH, 52428 Jülich, Germany TU Bergakademie Freiberg, Institute of Mechanics and Fluid Dynamics, Lampadiusstr. 4, 09599 Freiberg

###### Abstract

In classical elasticity theory, the stress field of a dislocation is characterized by a $1/r$-type singularity. When such a dislocation is considered together with an Allen-Cahn-type phase-field description of microstructure evolution, this leads to singular driving forces for the order parameter, resulting in non-physical (and discretization-dependent) predictions for the interaction between dislocations and phase-, twin-, or grain-boundaries. We introduce a framework based on first strain gradient elasticity to regularize the dislocation core. It is shown that a strain energy density that is quadratic in the gradient of the elastic deformation results in non-singular stresses but may still yield singular driving forces, whereas a strain energy that is quadratic in the gradient of the full deformation tensor regularizes both the stresses and the driving forces for the order parameter and is therefore a suitable choice. The applicability of the framework is demonstrated using a comprehensive example.

###### keywords: strain gradient elasticity, phase field, dislocation

_Journal:_ JMPS

## 1 Introduction

Phase field approaches have proven to be very powerful for the investigation of the formation and evolution of microstructures due to solid-solid phase transformations and twinning. This appears to be the natural framework for the investigation of the interaction of planar crystal defects, such as phase- or twin-boundaries, with line defects (dislocations, disclinations).
A typical phase field model for diffusionless (martensitic) transformations comprises evolution equations of Allen-Cahn type for the order parameters $\phi_{\beta}$,

$M^{-1}\dot{\phi}_{\beta}=\alpha\Delta\phi_{\beta}-\rho\,\partial_{\phi_{\beta}}\psi\,,$ (1)

where $M$ and $\alpha$ are constants, $\rho$ denotes the mass density, and $\psi$ is a bulk specific free energy. The subscript $\beta$ indexes the phase, grain, or twin variant. Assuming a small perturbation setting, the linear strain tensor111Nomenclature: We denote vectors by bold lower-case latin $\boldsymbol{a}$ and greek $\boldsymbol{\alpha}$ letters. The dot operator “$\cdot$” denotes the scalar product. Second-order tensors are denoted by bold upper-case latin letters $\boldsymbol{A}$. We introduce a scalar product between second-order tensors, denoted by “:”, as $\boldsymbol{A}:\boldsymbol{B}:=\text{tr}(\boldsymbol{A}\cdot\boldsymbol{B}^{\top})$, where $\boldsymbol{B}^{\top}$ is the transpose of $\boldsymbol{B}$ and $\text{tr}(\cdot)$ denotes the trace operator. Similarly, we denote third-order tensors $\boldsymbol{\mathcal{A}}$ by bold calligraphic capital letters, and “$\,\smash{\vdots}\,$” is the corresponding scalar product. We use blackboard capital letters $\boldsymbol{\mathbb{C}}$ for fourth-order tensors. Whenever index notation is used, summation over latin indices appearing twice is implied, and spatial derivatives are denoted using the comma operator, e.g. $\partial_{x_{i}}y\equiv y^{,i}$. $\boldsymbol{E}$ can be additively decomposed into elastic $\boldsymbol{E}^{\text{e}}$ and inelastic (i.e., eigenstrain) $\boldsymbol{E}^{\text{in}}(\phi_{\beta})$ contributions, such that $\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)=\boldsymbol{E}-\boldsymbol{E}^{\text{in}}(\phi_{\beta})$.
Assuming linear elasticity, the stress $\boldsymbol{S}$ is given by $\boldsymbol{S}=\mathbb{C}:\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)$, and the specific free energy takes the form

$\psi\bigl{(}\boldsymbol{E}\,,\,\phi_{\beta}\,,\,\theta\bigr{)}=\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\mathbb{C}:\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)+\psi_{\text{b}}\bigl{(}\phi_{\beta}\,,\,\theta\bigr{)}\,.$ (2)

As a consequence, the evolution equation (1) can be rewritten as

$M^{-1}\dot{\phi}_{\beta}=\alpha\Delta\phi_{\beta}+\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\rho\,\partial_{\phi_{\beta}}\psi_{\text{b}}\,.$ (3)

In linear elastic Volterra theory, the stresses diverge as the dislocation line is approached; for dislocations the singularity is of $1/r$-type. As per Eq. (3), this results in singular driving forces for the evolution of the order parameters, effectively negating concepts such as a nucleation barrier or a pile-up stress. Different approaches to regularize the stress in the core region exist in the literature, based either on the concept of a distributed Burgers vector (Lothe, 1992; Cai et al., 2006), inspired by richer microscopic models for dislocations (Peierls, 1940; Nabarro, 1947), or on generalized continuum theories (Lazar et al., 2005, 2006; Lazar and Po, 2015; Po et al., 2018). However, the first strain gradient approach advocated by Po et al. (2018) has the advantage that the obtained regularization is independent of the type of defect in question and therefore does not require any defect-specific information for the determination of model parameters. In principle, these parameters can be obtained directly from atomistic interaction potentials (Admal et al., 2017).
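To make the Allen-Cahn dynamics of Eq. (1) concrete, the following minimal 1-D sketch performs explicit-Euler time stepping with an illustrative double-well bulk energy $\psi_{\text{b}}=\phi^{2}(1-\phi)^{2}$; the elastic driving force of Eq. (3), physical units, and careful boundary treatment are deliberately omitted, and all parameter values are arbitrary.

```python
import numpy as np

def allen_cahn_step(phi, dx, dt, M=1.0, alpha=1.0, rho=1.0):
    """One explicit-Euler step of M^{-1} phi_dot = alpha*Lap(phi) - rho*dpsi/dphi
    with psi_b = phi^2 (1-phi)^2 (illustrative choice) and periodic boundaries
    via np.roll (the wrap-around jump at the ends of a non-periodic profile is
    ignored in this toy example)."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    dpsi = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)  # d(psi_b)/d(phi)
    return phi + dt * M * (alpha * lap - rho * dpsi)

x = np.linspace(0.0, 1.0, 101)
phi = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.1))  # diffuse interface between wells
for _ in range(100):
    phi = allen_cahn_step(phi, dx=x[1] - x[0], dt=1e-5)
# the pure phases phi = 0 and phi = 1 are stationary points of the bulk term
```

The same structure carries over to Eq. (3): the extra driving force $\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}$ would simply be added inside the bracket, which is precisely where a singular stress field becomes problematic.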
The purpose of this work is to follow a micromorphic approach and derive a framework that consistently couples first strain gradient elasticity to Allen-Cahn-type microstructure evolution, ensuring non-singular driving forces on the order parameters in the presence of line defects.

## 2 Balance equations and boundary conditions

The principle of virtual power (PVP) provides a systematic way of deriving field equations and boundary conditions for arbitrary mechanical and coupled problems (cf. Maugin, 1980; Germain, 1973; Del Piero, 2009). In the present work it is used in the following form: the virtual power of the inertia forces $\mathscr{P}^{*}_{\text{a}}$ balances the virtual power $\mathscr{P}^{*}_{\text{int}}$ of the internal and $\mathscr{P}^{*}_{\text{ext}}$ of the external forces acting on any sub-domain $\mathscr{S}$ of the material body $\mathscr{B}$, for any admissible virtual velocity field $\boldsymbol{v}^{*}$ and virtual rate of order parameter field $\dot{\phi}^{*}_{\beta}$, i.e.,

$\mathscr{P}^{*}_{\text{a}}=\mathscr{P}^{*}_{\text{int}}+\mathscr{P}^{*}_{\text{ext}}\,.$ (4)

For the sake of simplicity we disregard any higher-order inertia terms (Mindlin, 1964) as well as inertial forces acting on the order parameter, resulting in

$\mathscr{P}^{*}_{\text{a}}=\int_{\mathscr{S}}\rho\dot{\boldsymbol{v}}\cdot\boldsymbol{v}^{*}\,\text{d}V\,.$ (5)

The power of internal forces is given by

$\mathscr{P}^{*}_{\text{int}}=-\int_{\mathscr{S}}\left(\boldsymbol{S}^{\top}:\boldsymbol{L}^{*}+\boldsymbol{\mathcal{T}}\,\smash{\vdots}\,\operatorname*{grad}{\boldsymbol{L}^{*}}-\pi_{\beta}\,\dot{\phi}^{*}_{\beta}+\boldsymbol{\xi}_{\beta}\cdot\operatorname*{grad}{\dot{\phi}^{*}_{\beta}}\right)\,\text{d}V\,,$ (6)

with $\boldsymbol{L}^{*}:=\operatorname*{grad}{\boldsymbol{v}^{*}}$.
Here $\boldsymbol{S}$ and $\boldsymbol{\mathcal{T}}$ are the Cauchy and higher-order stresses, respectively, while $\pi_{\beta}$ and $\boldsymbol{\xi}_{\beta}$ are thermodynamic forces that directly correspond to the internal microforce and microstress introduced by Gurtin (1996). We note that the invariance requirement of $\mathscr{P}^{*}_{\text{int}}$ with respect to superimposed rigid body motions is sufficiently satisfied by assuming $\boldsymbol{S}=\boldsymbol{S}^{\top}$ and $\boldsymbol{\mathcal{T}}\cdot\boldsymbol{a}=(\boldsymbol{\mathcal{T}}\cdot\boldsymbol{a})^{\top}$ for arbitrary vectors $\boldsymbol{a}$. For the power of external forces we consider the very simple case of no body or contact forces acting on $\boldsymbol{L}^{*}$ and $\operatorname*{grad}{\dot{\phi}^{*}_{\beta}}$, and only a contact (micro)force $\zeta_{\beta}$ acting on $\dot{\phi}^{*}_{\beta}$:

$\mathscr{P}^{*}_{\text{ext}}=\int_{\mathscr{S}}\boldsymbol{f}\cdot\boldsymbol{v}^{*}\rho\,\text{d}V+\int_{\partial\mathscr{S}}\left(\boldsymbol{t}\cdot\boldsymbol{v}^{*}+\zeta_{\beta}\,\dot{\phi}^{*}_{\beta}\right)\,\text{d}a\,.$ (7)

In order to obtain the consequences of the PVP, the integrals in Eq.
(6) are transformed using the following identities

$\displaystyle\operatorname*{div}{(\boldsymbol{S}\cdot\boldsymbol{v}^{*})}=(\operatorname*{div}{\boldsymbol{S}})\cdot\boldsymbol{v}^{*}+\boldsymbol{S}:\boldsymbol{L}^{*}\,,$ (8)
$\displaystyle\operatorname*{div}{(\boldsymbol{\mathcal{T}}:\boldsymbol{L}^{*})}=(\operatorname*{div}{\boldsymbol{\mathcal{T}}}):\boldsymbol{L}^{*}+\boldsymbol{\mathcal{T}}\,\smash{\vdots}\,\operatorname*{grad}{\boldsymbol{L}^{*}}\,,$ (9)
$\displaystyle\operatorname*{div}{\bigl{(}(\operatorname*{div}{\boldsymbol{\mathcal{T}}})\cdot\boldsymbol{v}^{*}\bigr{)}}=(\operatorname*{div}{\operatorname*{div}{\boldsymbol{\mathcal{T}}}})\cdot\boldsymbol{v}^{*}+(\operatorname*{div}{\boldsymbol{\mathcal{T}}}):\boldsymbol{L}^{*}\,,$ (10)
$\displaystyle\operatorname*{div}{(\boldsymbol{\xi}_{\beta}\,\dot{\phi}^{*}_{\beta})}=(\operatorname*{div}{\boldsymbol{\xi}_{\beta}})\,\dot{\phi}^{*}_{\beta}+\boldsymbol{\xi}_{\beta}\cdot\operatorname*{grad}{\dot{\phi}^{*}_{\beta}}\,,$ (11)

and the divergence theorem, resulting in

$\mathscr{P}^{*}_{\text{int}}=\int_{\mathscr{S}}\bigl{(}\operatorname*{div}{\boldsymbol{S}}-\operatorname*{div}{\operatorname*{div}{\boldsymbol{\mathcal{T}}}}\bigr{)}\cdot\boldsymbol{v}^{*}\,\text{d}V-\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\bigl{(}\boldsymbol{S}^{\top}-\operatorname*{div}{\boldsymbol{\mathcal{T}}}\bigr{)}\cdot\boldsymbol{v}^{*}\,\text{d}a-\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}:\boldsymbol{L}^{*}\,\text{d}a\\
+\int_{\mathscr{S}}\bigl{(}\pi_{\beta}+\operatorname*{div}{\boldsymbol{\xi}_{\beta}}\bigr{)}\,\dot{\phi}^{*}_{\beta}\,\text{d}V-\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\boldsymbol{\xi}_{\beta}\,\dot{\phi}^{*}_{\beta}\,\text{d}a\,.$ (12)

Introducing the surface gradient operator
$\operatorname*{grad_{S}}(\cdot)=\operatorname*{grad}(\cdot)-\partial_{\boldsymbol{n}}(\cdot)\otimes\boldsymbol{n}\,,$ (13)

where $\partial_{\boldsymbol{n}}$ is the directional derivative in the direction of the outward normal $\boldsymbol{n}$, the third integral in expression (12) can be rewritten as

$\displaystyle\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}:\boldsymbol{L}^{*}\,\text{d}a$ $\displaystyle=\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}:\operatorname*{grad_{S}}{\boldsymbol{v}^{*}}\,\text{d}a+\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}:\partial_{\boldsymbol{n}}\boldsymbol{v}^{*}\otimes\boldsymbol{n}\,\text{d}a$ (14) $\displaystyle=\int_{\partial\mathscr{S}}\operatorname*{div_{S}}{\bigl{(}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}\cdot\boldsymbol{v}^{*}\bigr{)}}\,\text{d}a-\int_{\partial\mathscr{S}}\operatorname*{div_{S}}{\bigl{(}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}\bigr{)}}\cdot\boldsymbol{v}^{*}\,\text{d}a+\int_{\partial\mathscr{S}}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}:\partial_{\boldsymbol{n}}\boldsymbol{v}^{*}\otimes\boldsymbol{n}\,\text{d}a\,.$

Finally, applying the surface divergence theorem and, for the sake of simplicity, neglecting any wedge line and corner contributions, we find

$\int_{\partial\mathscr{S}}\operatorname*{div_{S}}{\bigl{(}\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}\cdot\boldsymbol{v}^{*}\bigr{)}}\,\text{d}a=\int_{\partial\mathscr{S}}\bigl{(}\operatorname*{div_{S}}{\boldsymbol{n}}\bigr{)}\boldsymbol{n}\otimes\boldsymbol{n}:\boldsymbol{\mathcal{T}}\cdot\boldsymbol{v}^{*}\,\text{d}a\,.$ (15)

Enforcing Eq.
(4) we arrive after a number of straightforward algebraic manipulations at the following field equations on $\mathscr{B}$ $\displaystyle\rho\dot{\boldsymbol{v}}=\operatorname*{div}{\left(\boldsymbol{S}-\operatorname*{div}\boldsymbol{\mathcal{T}}\right)}+\rho\boldsymbol{f}\,,$ (16a) $\displaystyle 0=\operatorname*{div}{\boldsymbol{\xi}_{\beta}}+\pi_{\beta}\,,$ (16b) and boundary conditions on $\partial\mathscr{B}$ $\displaystyle\boldsymbol{t}=\left(\boldsymbol{S}-\operatorname*{div}\boldsymbol{\mathcal{T}}\right)\cdot\boldsymbol{n}-\operatorname*{div_{S}}{\left(\boldsymbol{n}\cdot\boldsymbol{\mathcal{T}}\right)}\,,$ (16c) $\displaystyle\zeta_{\beta}=\boldsymbol{\xi}_{\beta}\cdot\boldsymbol{n}\,.$ (16d) We note that, introducing the total stress $\boldsymbol{S}_{\text{t}}:=\boldsymbol{S}-\operatorname*{div}{\boldsymbol{\mathcal{T}}}\,,$ (17) the balance of linear momentum (16a) regains its standard form for simple materials $\displaystyle\rho\dot{\boldsymbol{v}}=\operatorname*{div}{\boldsymbol{S}_{\text{t}}}+\rho\boldsymbol{f}\,,$ (18) which is convenient for the numerical implementation. ## 3 Constitutive equations The following equations are formulated assuming a geometrically linear setting, i.e., the displacement gradient is considered to be small $||\operatorname*{grad}{\boldsymbol{u}}||\ll 1$. In this case the deformation is characterized by the linear strain tensor $\boldsymbol{E}=\frac{1}{2}\left(\operatorname*{grad}{\boldsymbol{u}}+(\operatorname*{grad}{\boldsymbol{u}})^{\top}\right)$. Its gradient will be denoted by $\boldsymbol{\mathcal{Y}}:=\operatorname*{grad}\boldsymbol{E}$.
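As a concrete illustration of these kinematic quantities, the strain $\boldsymbol{E}$ and its gradient $\boldsymbol{\mathcal{Y}}$ can be assembled numerically from a sampled displacement field. The sketch below uses a hypothetical smooth 2D field and central differences via `numpy.gradient`; it is illustrative only, not part of the model implementation.

```python
# Minimal numerical sketch of the kinematic quantities of Sec. 3:
# the linear strain E = (grad u + grad u^T)/2 and its gradient Y = grad E,
# assembled from a sampled 2D displacement field (hypothetical smooth field).
import numpy as np

n = 64
h = 1.0 / (n - 1)                                   # grid spacing
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# illustrative displacement field u(x, y), shape (2, n, n)
u = np.stack([0.01 * np.sin(np.pi * X) * Y**2,
              0.01 * X * np.cos(np.pi * Y)])

# grad_u[i, j] = d u_i / d x_j
grad_u = np.stack([np.stack(np.gradient(u[i], h, h)) for i in range(2)])
E = 0.5 * (grad_u + grad_u.transpose(1, 0, 2, 3))   # linear strain tensor

# Ygrad[i, j, k] = d E_ij / d x_k  (third-order strain gradient)
Ygrad = np.stack([np.stack(np.gradient(E[i, j], h, h))
                  for i in range(2) for j in range(2)]).reshape(2, 2, 2, n, n)

assert np.allclose(E, E.transpose(1, 0, 2, 3))      # E is symmetric by construction
```

By construction $\boldsymbol{E}$ is symmetric in its first two indices, which carries over exactly to $\boldsymbol{\mathcal{Y}}$.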
### 3.1 Laws of state We choose the following ansatz for the specific free energy and thermodynamic forces $\displaystyle\psi=\psi\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right)\,,$ $\displaystyle\boldsymbol{S}=\boldsymbol{S}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right)\,,$ $\displaystyle\boldsymbol{\mathcal{T}}=\boldsymbol{\mathcal{T}}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right)\,,$ $\displaystyle\pi_{\beta}=\pi_{\beta}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\,,\,{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}\right)\,,$ $\displaystyle\boldsymbol{\xi}_{\beta}=\boldsymbol{\xi}_{\beta}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right)\,.$ The second law of thermodynamics in the form of the Clausius-Duhem inequality, given for the isothermal case by $\left(\boldsymbol{S}-\rho\partial_{\boldsymbol{E}}\psi\right):{\dot{\boldsymbol{E}\mkern 5.0mu}\mkern-5.0mu}{}+\left(\boldsymbol{\mathcal{T}}-\rho\partial_{\boldsymbol{\mathcal{Y}}}\psi\right)\,\smash{\vdots}\,{\dot{\boldsymbol{\mathcal{Y}}\mkern 5.0mu}\mkern-5.0mu}{}-\left(\pi_{\beta}+\rho\partial_{\phi_{\beta}}\psi\right){\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}+\left(\boldsymbol{\xi}_{\beta}-\rho\partial_{\operatorname*{grad}\phi_{\beta}}\psi\right)\cdot\operatorname*{grad}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}\geqslant 0$ (19) can be exploited using the classical Coleman-Noll procedure to arrive at the laws of state $\displaystyle\boldsymbol{S}=\rho\partial_{\boldsymbol{E}}\psi\,,$ $\displaystyle\boldsymbol{\mathcal{T}}=\rho\partial_{\boldsymbol{\mathcal{Y}}}\psi\,,$
$\displaystyle\boldsymbol{\xi}_{\beta}=\rho\partial_{\operatorname*{grad}\phi_{\beta}}\psi$ (20) and the residual dissipation inequality $\displaystyle-\pi^{\text{d}}_{\beta}\,{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}\geqslant 0\,,\quad\text{with}\quad\pi^{\text{d}}_{\beta}:=\pi_{\beta}+\rho\partial_{\phi_{\beta}}\psi\,.$ (21) ### 3.2 Free energy and dissipation potential As customary in phase field models for solid-solid transformations, the specific free energy can be split into an elastic, a bulk chemical and an interface contribution $\psi=\psi_{\text{e}}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\theta\right)+\psi_{\text{b}}\left(\phi_{\beta}\,,\,\theta\right)+\psi_{\text{i}}\left(\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right)\,,$ (22) as indicated by the subscripts “e” (elastic), “b” (bulk chemical) and “i” (interface). In our formulation, the elastic free energy is of Helmholtz-type, i.e., $\displaystyle\rho\psi_{\text{e}}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\theta\right)=\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)+\frac{1}{2}\bigl{(}\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\bigr{)}\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}\,,\text{ or}$ (23) $\displaystyle\rho\psi_{\text{e}}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\theta\right)=\frac{1}{2}\mathbb{C}^{ijkl}(\phi_{\beta})E_{ij}^{\text{e}}(\boldsymbol{E}\,,\,\phi_{\beta})E_{kl}^{\text{e}}(\boldsymbol{E}\,,\,\phi_{\beta})+\frac{1}{2}\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})\boldsymbol{\mathcal{Y}}_{ij}^{n}\boldsymbol{\mathcal{Y}}_{kl}^{m}\,$ (24) where $\boldsymbol{E}^{\text{in}}(\phi_{\beta})$ is the inelastic strain, 
$\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):=\boldsymbol{E}-\boldsymbol{E}^{\text{in}}(\phi_{\beta})$ is the elastic strain, $\mathbb{C}(\phi_{\beta})$ the stiffness tensor and $\boldsymbol{\Lambda{}}(\phi_{\beta})$ a gradient length scale tensor (cf. Po et al., 2018). The specific choice of functional dependence of $\boldsymbol{E}^{\text{in}}(\phi_{\beta})$, $\psi_{\text{b}}\left(\phi_{\beta}\,,\,\theta\right)$ and $\psi_{\text{i}}\left(\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right)$ on the order parameter $\phi_{\beta}$ is of no relevance at this point; however, we will assume that the interface energy is of the form $\displaystyle\rho\psi_{\text{i}}\left(\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right):=\frac{\alpha}{2}\,||\operatorname*{grad}\phi_{\beta}||^{2}+g(\phi_{\beta}\,,\,\theta)\,,$ $\displaystyle\rho\psi_{\text{i}}\left(\phi_{\beta}\,,\,\operatorname*{grad}\phi_{\beta}\,,\,\theta\right):=\frac{\alpha}{2}\,\phi_{\beta}^{,i}\phi_{\beta}^{,i}+g(\phi_{\beta}\,,\,\theta)\,.$ (25) Using the laws of state (20) we immediately find $\displaystyle\boldsymbol{S}=\mathbb{C}(\phi_{\beta}):\bigl{(}\boldsymbol{E}-\boldsymbol{E}^{\text{in}}(\phi_{\beta})\bigr{)}\,,$ $\displaystyle S^{ij}=\mathbb{C}^{ijkl}(\phi_{\beta})\left(E_{kl}-E_{kl}^{\text{in}}(\phi_{\beta})\right)\,,$ (26a) $\displaystyle\boldsymbol{\mathcal{T}}=\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\,,$ $\displaystyle\boldsymbol{\mathcal{T}}^{ij}_{n}=\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})\boldsymbol{\mathcal{Y}}_{kl}^{m}\,,$ (26b) $\displaystyle\boldsymbol{\xi}_{\beta}=\alpha\operatorname*{grad}\phi_{\beta}\,,$ $\displaystyle\xi_{\beta}^{i}=\alpha\phi_{\beta}^{,i}\,,$ (26c) and combining the first two equations 
$\displaystyle\boldsymbol{\mathcal{T}}=\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\left(\mathbb{C}^{-1}(\phi_{\beta}):\boldsymbol{S}\right)}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})+\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\boldsymbol{E}^{\text{in}}(\phi_{\beta})}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\,,\text{ or}$ (27) $\displaystyle\boldsymbol{\mathcal{T}}^{ij}_{n}=\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})\left(\mathbb{C}^{-1}_{klpq}(\phi_{\beta})S^{pq}\right)^{,m}+\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})E_{kl}^{\text{in},m}(\phi_{\beta})\,.$ (28) Equation (17) can now be used in two ways: In conjunction with the laws of state (26a) and (26b) it is a constitutive equation for the total stress $\boldsymbol{S}_{\text{t}}$, which enters the balance of linear momentum (18) $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\operatorname*{div}{\left[\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right]}\,,\text{ or}$ (29) $\displaystyle S_{\text{t}}^{ij}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}^{ijkl}(\phi_{\beta})E^{\text{e}}_{kl}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\left(\mathbb{C}^{ijkl}(\phi_{\beta})\boldsymbol{\mathcal{Y}}_{kl}^{m}\Lambda_{mn}(\phi_{\beta})\right)^{,n}\,.$ (30) When combined with Eq. (27), Eq.
(17) can be used to determine the true stress $\boldsymbol{S}$ from the total stress $\boldsymbol{S}_{\text{t}}$ $\displaystyle\boldsymbol{S}-\operatorname*{div}{\left[\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\left(\mathbb{C}^{-1}(\phi_{\beta}):\boldsymbol{S}\right)}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right]}=\boldsymbol{S}_{\text{t}}+\operatorname*{div}{\bigl{(}\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\boldsymbol{E}^{\text{in}}(\phi_{\beta})}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\bigr{)}}\,,\text{ or}$ (31) $\displaystyle S^{ij}-\left[\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})\left(\mathbb{C}^{-1}_{klpq}(\phi_{\beta})S^{pq}\right)^{,m}\right]^{,n}=S_{\text{t}}^{ij}+\left[\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})E_{kl}^{\text{in},m}(\phi_{\beta})\right]^{,n}\,.$ (32) In order to complete the phase field formulation we require a constitutive equation for $\pi^{\text{d}}_{\beta}$, which is obtained in the spirit of classical irreversible thermodynamics as ${\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=-\partial_{\pi^{\text{d}}_{\beta}}\Omega\left(\pi^{\text{d}}_{\beta}\right)$ (33) from a dissipation potential $\Omega\left(\pi^{\text{d}}_{\beta}\right)$ that is homogeneous of degree two $\Omega\left(\pi^{\text{d}}_{\beta}\right):=\frac{1}{2}M\left(\pi^{\text{d}}_{\beta}\right)^{2}\,,$ (34) where $M$ is the so-called mobility constant.
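The quadratic potential (34) yields linear kinetics, ${\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=-M\pi^{\text{d}}_{\beta}$. A minimal one-dimensional relaxation of this type recovers the familiar tanh interface profile; the double well below is a generic stand-in for $\rho\partial_{\phi}\psi$, and all values are illustrative, not the parameters of Sec. 4.

```python
# Sketch of the linear kinetics phi_dot = -M * pi_d implied by a dissipation
# potential that is homogeneous of degree two.  1D Allen-Cahn relaxation
# M^{-1} phi_dot = alpha * phi'' - f'(phi) with the generic double well
# f(phi) = (1 - phi^2)^2 / 4 relaxes a step towards a tanh interface profile.
import numpy as np

n, L = 200, 20.0
h = L / (n - 1)
x = np.linspace(-L / 2, L / 2, n)
alpha, M, dt = 1.0, 1.0, 0.4 * h**2        # explicit time step, stable: dt < h^2/(2 alpha M)

phi = np.sign(x)                           # sharp initial interface at x = 0
for _ in range(20000):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
    pi_d = -(alpha * lap - phi * (phi**2 - 1.0))   # dissipative driving force
    phi[1:-1] += -dt * M * pi_d[1:-1]              # phi_dot = -M * pi_d

# for this well the stationary profile is tanh(x / sqrt(2 alpha))
assert np.max(np.abs(phi - np.tanh(x / np.sqrt(2 * alpha)))) < 0.05
```

For the quartic well used here the stationary solution of $\alpha\phi''=\partial_\phi f$ is $\phi=\tanh\bigl(x/\sqrt{2\alpha}\bigr)$, which the final assertion checks against the relaxed numerical profile.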
Combining equations (16b), (21), (26c), (33) and (34) we find the classical Allen-Cahn equation $\displaystyle M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}-\rho\partial_{\phi_{\beta}}\psi\,,$ $\displaystyle M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\phi_{\beta}^{,ii}-\rho\partial_{\phi_{\beta}}\psi\,,$ (35) or, explicitly writing down the partial derivatives of $\psi$, $M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}+\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\frac{1}{2}\left(\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\cdot\partial_{\phi_{\beta}}\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\\\ -\frac{1}{2}\left(\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\rho\partial_{\phi_{\beta}}\psi_{\text{b}}(\phi_{\beta}\,,\,\theta)-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,,$ (36) or $M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\phi_{\beta}^{,ii}+S^{ij}\partial_{\phi_{\beta}}E^{\text{in}}_{ij}(\phi_{\beta})-\frac{1}{2}\partial_{\phi_{\beta}}\mathbb{C}^{ijkl}(\phi_{\beta})E^{\text{e}}_{ij}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)E^{\text{e}}_{kl}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\frac{1}{2}\partial_{\phi_{\beta}}\mathbb{C}^{ijkl}(\phi_{\beta})\Lambda_{mn}(\phi_{\beta})\boldsymbol{\mathcal{Y}}_{ij}^{n}\boldsymbol{\mathcal{Y}}_{kl}^{m}-\\\
-\frac{1}{2}\mathbb{C}^{ijkl}(\phi_{\beta})\partial_{\phi_{\beta}}\Lambda_{mn}(\phi_{\beta})\boldsymbol{\mathcal{Y}}_{ij}^{n}\boldsymbol{\mathcal{Y}}_{kl}^{m}-\rho\partial_{\phi_{\beta}}\psi_{\text{b}}(\phi_{\beta}\,,\,\theta)-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,.$ Note that all terms that appear in the driving force, and as per Lazar et al. (2005) the Cauchy stress $\boldsymbol{S}$ in particular, are non-singular even in the presence of dislocations. Interestingly, this is not true for an elastic specific free energy that is quadratic in $\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right):=\operatorname*{grad}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)$ rather than $\boldsymbol{\mathcal{Y}}$ (cf. Appendix A). ### 3.3 Formulation for specific cases For phase transformations the crystal lattice on both sides of the interface will, in general, be different, leading to different elastic properties and a different shape of the dislocation core. In this case the equations (18), (29), (31) and (36) retain their full complexity. However, the strength of the general formulation is that it also covers simplified special cases. In the following, we consider scenarios for which these equations can be strongly reduced and which therefore elucidate the structure of the whole formalism. #### 3.3.1 Homogeneous bulk material In the bulk phase the order parameter does not vary in space, i.e., $\operatorname*{grad}\phi_{\beta}=\boldsymbol{0}$, $\mathbb{C}(\phi_{\beta})=\mathbb{C}$, $\boldsymbol{\Lambda{}}(\phi_{\beta})=\boldsymbol{\Lambda{}}$, $\boldsymbol{E}^{\text{in}}(\phi_{\beta})=\boldsymbol{0}$. The Allen-Cahn equation is fulfilled automatically and Eqs. (31) and (29) recover the form derived by Po et al.
(2018) $\displaystyle\boldsymbol{S}-\operatorname*{div}{\bigl{(}(\operatorname*{grad}{\boldsymbol{S}})\cdot\boldsymbol{\Lambda{}}}\bigr{)}=\boldsymbol{S}_{\text{t}}\,,$ with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\bigr{)}=\mathbb{C}:\left[\boldsymbol{E}-\operatorname*{div}{\bigl{(}\boldsymbol{\mathcal{Y}}\cdot\boldsymbol{\Lambda{}}}\bigr{)}\right]\,.$ (37a) For materials with cubic symmetry the gradient length scale tensor $\boldsymbol{\Lambda{}}$ is isotropic, i.e., $\boldsymbol{\Lambda{}}=\mathcal{l}^{2}\boldsymbol{I}$, and the above expressions can be further simplified to the form derived by Lazar et al. (2005) $\displaystyle\boldsymbol{S}-\mathcal{l}^{2}\Delta\boldsymbol{S}=\boldsymbol{S}_{\text{t}}\,,$ with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\bigr{)}=\mathbb{C}:\bigl{(}\boldsymbol{E}-\mathcal{l}^{2}\operatorname*{div}{\boldsymbol{\mathcal{Y}}}\bigr{)}=\mathbb{C}:\bigl{(}\boldsymbol{E}-\mathcal{l}^{2}\Delta\boldsymbol{E}\bigr{)}\,.$ (37b) #### 3.3.2 Boundaries between grains without inelastic strain The crystal lattices on both sides of a grain boundary differ only by a rotation $\boldsymbol{Q}(\phi_{\beta})$. Hence, we assume that the chemical bulk energy is independent of the order parameter, i.e., $\psi_{\text{b}}\left(\phi_{\beta}\,,\,\theta\right)=\psi_{\text{b}}\left(\theta\right)$. Then the elastic stiffness $\mathbb{C}(\phi_{\beta})$ and the gradient length scale tensor $\boldsymbol{\Lambda{}}(\phi_{\beta})$ can be expressed as $\mathbb{C}(\phi_{\beta})=\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}$ and $\boldsymbol{\Lambda{}}(\phi_{\beta})=\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}$, respectively. In the absence of inelastic strain, we have $\boldsymbol{E}^{\text{in}}(\phi_{\beta})=\boldsymbol{0}$. For this case Eqs. 
(31), (29) and (36) take the form $\displaystyle\boldsymbol{S}-\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\operatorname*{grad}{\Bigl{(}\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}^{-1}\bigr{)}:\boldsymbol{S}\Bigr{)}}\cdot\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right]}=\boldsymbol{S}_{\text{t}}\,,$ (38a) with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}-\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\cdot\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right]}\,,$ (38b) and $M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}-\frac{1}{2}\boldsymbol{E}:\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{E}-\frac{1}{2}\left(\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\cdot\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}*\boldsymbol{\Lambda{}}\bigr{)}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\\\ -\frac{1}{2}\left(\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\cdot\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,.$ (38c) The isotropy of the gradient length scale tensor $\boldsymbol{\Lambda{}}$ for cubic crystals implies that $\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}=\boldsymbol{\Lambda{}}=\mathcal{l}^{2}\boldsymbol{I}$, which simplifies Eqs. 
(38) to the following form $\displaystyle\boldsymbol{S}-\mathcal{l}^{2}\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\operatorname*{grad}{\Bigl{(}\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}^{-1}\bigr{)}:\boldsymbol{S}\Bigr{)}}\right]}=\boldsymbol{S}_{\text{t}}\,,$ (39a) with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}-\mathcal{l}^{2}\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\right]}\,,$ (39b) and $\displaystyle M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}-\frac{1}{2}\boldsymbol{E}:\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{E}-\frac{1}{2}\mathcal{l}^{2}\left(\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,.$ (39c) #### 3.3.3 Twin boundaries and boundaries between grains with inelastic strain Since the twin variants on both sides of the boundary are related by mirror and/or rotational symmetry transformations between the unit cells, we can - as in the case of grain boundaries - assume that the bulk chemical energy remains unchanged, i.e., $\psi_{\text{b}}\left(\phi_{\beta}\,,\,\theta\right)=\psi_{\text{b}}\left(\theta\right)$, and the elastic stiffness $\mathbb{C}(\phi_{\beta})$ and the gradient length scale tensor $\boldsymbol{\Lambda{}}(\phi_{\beta})$ can be expressed using an orthogonal tensor $\boldsymbol{Q}(\phi_{\beta})$ as $\mathbb{C}(\phi_{\beta})=\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}$ and $\boldsymbol{\Lambda{}}(\phi_{\beta})=\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}$, respectively. 
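In index notation the Rayleigh product acts as $C'_{ijkl}=Q_{ia}Q_{jb}Q_{kc}Q_{ld}C_{abcd}$. The short sketch below (placeholder stiffness values, two dimensions for brevity) illustrates the operation with `numpy.einsum` and checks the invariance $\boldsymbol{Q}*\boldsymbol{\Lambda{}}=\boldsymbol{\Lambda{}}$ for an isotropic $\boldsymbol{\Lambda{}}=\mathcal{l}^{2}\boldsymbol{I}$.

```python
# Sketch of the Rayleigh product Q * C, i.e. C'_{ijkl} = Q_ia Q_jb Q_kc Q_ld C_abcd,
# and a check that an isotropic gradient length scale tensor Lambda = l^2 I is
# invariant under it (placeholder numbers, 2D for brevity).
import numpy as np

theta = 0.3                                        # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
C = rng.standard_normal((2, 2, 2, 2))              # placeholder stiffness values

C_rot = np.einsum("ia,jb,kc,ld,abcd->ijkl", Q, Q, Q, Q, C)   # Q * C

ell = 1.9e-10                                      # regularization length (m)
Lam = ell**2 * np.eye(2)                           # isotropic Lambda = l^2 I
Lam_rot = Q @ Lam @ Q.T                            # Q * Lambda (second-order case)

assert np.allclose(Lam_rot, Lam)                   # Q * Lambda = Lambda
```

Since $\boldsymbol{Q}$ is orthogonal, applying the transposed rotation to `C_rot` recovers `C`, which is a convenient sanity check on the index ordering.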
Under these assumptions we find $\displaystyle\boldsymbol{S}-\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\operatorname*{grad}{\Bigl{(}\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}^{-1}\bigr{)}:\boldsymbol{S}\Bigr{)}}\cdot\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right]}=\boldsymbol{S}_{\text{t}}+\operatorname*{div}{\left[\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\bigl{(}\boldsymbol{E}^{\text{in}}(\phi_{\beta})\bigr{)}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})}\right]}\,,$ (40a) with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\cdot\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right]}\,,$ (40b) and $M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}+\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\\\ -\frac{1}{2}\left(\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\cdot\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\frac{1}{2}\left(\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\cdot\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\boldsymbol{\Lambda{}}\bigr{)}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,.$ (40c) For cubic lattices these expressions simplify to
$\displaystyle\boldsymbol{S}-\mathcal{l}^{2}\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\operatorname*{grad}{\Bigl{(}\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}^{-1}\bigr{)}:\boldsymbol{S}\Bigr{)}}\right]}=\boldsymbol{S}_{\text{t}}+\mathcal{l}^{2}\operatorname*{div}{\left[\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\boldsymbol{E}^{\text{in}}(\phi_{\beta})}\right]}\,,$ (41a) with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\mathcal{l}^{2}\operatorname*{div}{\left[\bigl{(}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\right]}\,,$ (41b) and $M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}+\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\\\ -\frac{1}{2}\mathcal{l}^{2}\left(\bigl{(}\partial_{\phi_{\beta}}\boldsymbol{Q}(\phi_{\beta})*\mathbb{C}\bigr{)}:\boldsymbol{\mathcal{Y}}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,.$ (41c) #### 3.3.4 Phase boundaries between cubic phases In the case of phase boundaries between different cubic phases the gradient length scale tensor $\boldsymbol{\Lambda{}}$ is isotropic on both sides of the interface, even though not necessarily constant across the interface, i.e., $\boldsymbol{\Lambda{}}=\mathcal{l}(\phi_{\beta})^{2}\boldsymbol{I}$. This allows us to reduce Eqs. 
(31), (29) and (36) to the following form $\displaystyle\boldsymbol{S}-\operatorname*{div}{\left[\mathcal{l}(\phi_{\beta})^{2}\,\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\bigl{(}\mathbb{C}^{-1}(\phi_{\beta}):\boldsymbol{S}\bigr{)}}\right]}=\boldsymbol{S}_{\text{t}}+\operatorname*{div}{\bigl{(}\mathcal{l}(\phi_{\beta})^{2}\,\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\boldsymbol{E}^{\text{in}}(\phi_{\beta})}\bigr{)}}\,,$ (42a) with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\operatorname*{div}{\left(\mathcal{l}(\phi_{\beta})^{2}\,\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\right)}\,,$ (42b) and $M^{-1}{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}+\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\mathcal{l}(\phi_{\beta})\partial_{\phi_{\beta}}\mathcal{l}(\phi_{\beta})\left(\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\\\ -\frac{\mathcal{l}(\phi_{\beta})^{2}}{2}\left(\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}-\rho\partial_{\phi_{\beta}}\psi_{\text{b}}(\phi_{\beta}\,,\,\theta)-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,.$ (42c) ## 4 Examples To demonstrate the key properties of the above model, numerical simulations using the finite element method are performed with the commercial software “COMSOL Multiphysics”222https://www.comsol.com/. A uniform mesh with quadratic333Independent of the chosen shape functions, Comsol does not provide third spatial derivatives of the degrees of freedom.
Therefore, in order to obtain the second derivative of strain (third spatial derivative of the displacement), the “Distributed ODE” feature is used to introduce additional degrees of freedom, corresponding to the second spatial derivatives of the displacement. For this “Distributed ODE” linear shape functions are employed., quadrilateral elements is used for the domain discretization. The element size is 0.2 nm. Time stepping is performed using the BDF method. Based on the assumptions of the small perturbation hypothesis444Both the displacement $\boldsymbol{u}$ as well as the displacement gradient are considered to be small, i.e., $|\boldsymbol{u}|\ll L$ and $||\operatorname*{grad}{\boldsymbol{u}}||\ll 1$. (Maugin, 1992), we apply traction boundary conditions to the undeformed geometry whenever required. We assume elastostatics with an isotropic stiffness tensor $\mathbb{C}$. Material parameters have been chosen to represent $\alpha$-iron, with elastic constants $E=200$ GPa and $\nu=0.29$, and Burgers vector $b=0.285$ nm.

parameter name | symbol | value
---|---|---
Young’s modulus | $E$ | 200 GPa
Poisson’s ratio | $\nu$ | 0.29
Burgers vector | $b$ | 0.285 nm
coefficient | $a$ | 2.98
coefficient | $A$ | $1.155\times 10^{8}\text{ J/m}^{3}$
coefficient | $B$ | $-3.43\times 10^{7}\text{ J/m}^{3}$
coefficient | $C$ | $-2.78\times 10^{8}\text{ J/m}^{3}$
mobility | $M$ | $2\text{ m}^{3}/\text{Js}$
gradient coefficient | $\alpha$ | $5\times 10^{-11}\text{ N}$

Table 1: Model parameters used for the numerical example in Sec. 4.2. ### 4.1 Regularization in the dislocation core As shown in Sec. 3.3.1, the present model reduces to the set of equations proposed by Po et al. (2018) in the homogeneous bulk phase. Here, we apply this formulation to a single edge dislocation in an infinite elastic medium: Fig.
1 shows the shear stress component $S_{12}$ in the plane perpendicular to this dislocation with and without regularization ($\mathcal{l}=2$ Å). In the “classical” case without regularization, the stress in the dislocation core is singular, whereas it is well defined and finite for the regularized solution, in analogy to what one would expect from a real atomistic configuration. (a) Shear stress component $S_{12}$ in the glide plane. (b) Density plot of the stress component $S_{12}$. Figure 1: Shear stress component $S_{12}$ for a single edge dislocation. The inset in (a) shows the simulation setup. ### 4.2 Effect of the regularization on the interaction of dislocations with a moving interface This example demonstrates the interaction of dislocations with a moving interface between phase variant 1 (indicated by the superscript “V1”) and variant 2 (indicated by the superscript “V2”). The phase mesostructure is described by a single order parameter $\phi$. The only difference between the two variants is with respect to the eigenstrain induced by the phase transformation.
This inelastic strain is given as a function of the order parameter by $\boldsymbol{E}^{\text{in}}(\phi)=\begin{cases}\boldsymbol{E}^{\text{in,V1}}\,\varphi(\phi)&\textrm{if}\;\;\phi\geqslant 0\,,\\\ \boldsymbol{E}^{\text{in,V2}}\,\varphi(\phi)&\textrm{if}\;\;\phi<0\\\ \end{cases}\,,$ (43) where $\boldsymbol{E}^{\text{in,V1}}$ and $\boldsymbol{E}^{\text{in,V2}}$ are the eigenstrains of the phases V1 and V2, respectively, $\displaystyle\boldsymbol{E}^{\text{in,V1}}=\left(\begin{matrix}0&0.076\\\ 0.076&0\end{matrix}\right)\,,$ $\displaystyle\boldsymbol{E}^{\text{in,V2}}=\left(\begin{matrix}0&-0.076\\\ -0.076&0\end{matrix}\right)\,.$ (44) Here, $\varphi(\phi)$ is a polynomial chosen in accordance with Levitas and Preston (2002) $\varphi(\phi)=\frac{a}{2}\phi^{2}+(3-a)\phi^{4}+\frac{1}{2}(a-4)\phi^{6}\,.$ (45) The symmetric bulk chemical free energy takes the following form $\rho\psi_{\text{b}}(\phi)=A\phi^{6}+B\phi^{4}+C\phi^{2}\,,$ (46) and the interface energy density is assumed as $\rho\psi_{\text{i}}\left(\operatorname*{grad}\phi\right)=\frac{\alpha}{2}\,||\operatorname*{grad}\phi||^{2}.$ (47) For this specific case the resulting set of partial differential equations (18) and (42a) can be further simplified to $\displaystyle\operatorname*{div}{\boldsymbol{S}_{\text{t}}}=\boldsymbol{0}\,,$ (48a) $\displaystyle\boldsymbol{S}-\mathcal{l}^{2}\Delta\boldsymbol{S}=\boldsymbol{S}_{\text{t}}+\mathcal{l}^{2}\,\mathbb{C}:\Delta\boldsymbol{E}^{\text{in}}(\phi)\,,$ (48b) with $\displaystyle\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi\bigr{)}=\mathbb{C}:\left(\boldsymbol{E}-\boldsymbol{E}^{\text{in}}(\phi)\right)-\mathcal{l}^{2}\,\mathbb{C}:\operatorname*{div}{\boldsymbol{\mathcal{Y}}}\,,$ (48c) and $\displaystyle M^{-1}{\dot{\phi\mkern
5.0mu}\mkern-5.0mu}{}=\alpha\Delta\phi+\boldsymbol{S}:\partial_{\phi}\boldsymbol{E}^{\text{in}}(\phi)-\rho\partial_{\phi}\psi_{\text{b}}(\phi)\,.$ (48d) These equations are solved for the displacement field $\boldsymbol{u}$, the order parameter $\phi$ and the true stress $\boldsymbol{S}$. All parameters and coefficients occurring in the above equations are summarized in Tab. 1. The resulting interface energy, computed for a stationary flat interface, is $\gamma=0.22\text{ J/m}^{2}$. The timescale in the simulation is controlled by the mobility constant $M$. Since the simulation time can be arbitrarily rescaled using the mobility, in our simulations we treat it as dimensionless pseudo time. (a) Schematic representation of interface and dislocation arrangement. The system is assumed to be periodic in the vertical direction. Only the domain indicated by the dashed box is used in the simulations. The false-color plot indicates domains with positive (red) and negative (blue) in-plane shear stress $S_{12}$. (b) In-plane shear stress $S_{12}$ due to the dislocation for the two different regularization lengths $\mathcal{l}$ used in this example. Figure 2: Model problem with initially flat phase boundary driven by pure shear loading towards a periodic arrangement of dislocations. Figure 3: Propagation of the interface. The labeled lines correspond to the center of the interface at the pseudo-times (in $\upmu$s) denoted by the corresponding labels. a) $\mathcal{l}=0.6$ Å: The interface is arrested at the dislocation array. b) $\mathcal{l}=1.9$ Å: The interface sweeps over the dislocation array. (a) Evolution of the V1 phase content for different regularization lengths $\mathcal{l}$.
For $\mathcal{l}=0.6$ Å the interface is arrested, whereas for $\mathcal{l}=1.9$ Å it moves past the dislocation array. (b) Rate of the evolution of the V1 phase content for different regularization lengths $\mathcal{l}$. The large rate at time 5.2 is an artifact of approaching the boundary of the simulation domain. (c) Phase-boundary positions at equidistant time intervals for regularization lengths $\mathcal{l}=0.6$ Å (left) and $\mathcal{l}=1.9$ Å (right). An increasing distance between the contours indicates an acceleration of the interface, while a decreasing distance indicates deceleration. The interface positions at times 1, 2, 3, 4, 5, 6 $\upmu$s are shown in red (from left to right). (d) Evolution of the V1 phase content for $\mathcal{l}=0.6$ Å. A comparison between the overall phase content, the phase content along the centerline of the simulation box and the top of the simulation box. (e) Evolution of the V1 phase content for $\mathcal{l}=1.9$ Å. A comparison between the overall phase content, the phase content along the centerline of the simulation box and the top of the simulation box. Figure 4: Evolution of the phase content of variant 1 over pseudo-time. The following scenario considers an initially flat interface between variants V1 and V2, and a periodic, immobile dislocation structure with a dislocation spacing of 10 nm within variant 2. For a pictorial representation, see Fig. 2a. The structure is assumed to be infinite in the vertical direction, allowing us to reduce the simulation domain to the dashed 40 nm wide and 10 nm high box in Fig. 2a with periodic boundary conditions in the vertical direction. The domain is loaded under pure shear conditions with an in-plane shear stress of 85 MPa, under which V1 is energetically more favorable, i.e., the interface will move to the right.
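The dependence of the peak core stress on the regularization length can be isolated in a one-dimensional analogue of Eq. (37b), $S-\mathcal{l}^{2}S''=S_{\text{t}}$, solved by FFT. The $1/x$-type field below is only a stand-in for the singular glide-plane shear stress, and all numbers are illustrative.

```python
# 1D analogue of the Helmholtz regularization of Eq. (37b): solve
# S - l^2 S'' = S_t by FFT for a 1/x-type singular "total stress"
# (stand-in for the glide-plane shear stress; all numbers illustrative).
import numpy as np

n, L = 4096, 80.0                                   # grid points, domain (nm)
dx = L / n
x = (np.arange(n) - n / 2 + 0.5) * dx               # staggered grid avoids x = 0
S_t = 1.0 / x                                       # singular field (arb. units)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)           # angular wavenumbers

def regularize(field, ell):
    """Solve (1 + ell^2 k^2) S_hat = field_hat on the periodic grid."""
    return np.real(np.fft.ifft(np.fft.fft(field) / (1.0 + ell**2 * k**2)))

peak_small = np.max(np.abs(regularize(S_t, 0.06)))  # l = 0.6 Angstrom, in nm
peak_large = np.max(np.abs(regularize(S_t, 0.19)))  # l = 1.9 Angstrom, in nm

# a larger regularization length gives a lower, finite peak core stress
assert peak_large < peak_small < np.max(np.abs(S_t))
```

The ordering mirrors the 2D simulations: the smaller $\mathcal{l}$ leaves a higher stress concentration at the core, while the larger $\mathcal{l}$ smooths it out.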
Simulations are carried out using two different regularization lengths $\mathcal{l}$ ($\mathcal{l}=0.6$ Å and $\mathcal{l}=1.9$ Å), resulting in different peak stresses in the dislocation core (see Fig. 2b). Fig. 3 shows the positions of the V1-V2 interface at different points in simulation time. As the interface approaches the dislocations, it bows out due to the interaction with the stress field of the dislocation core. The smaller regularization length results in a larger stress magnitude in the dislocation core region, leading to an arrest of the interface (see Fig. 3a). For the larger regularization length, the stress in the vicinity of the dislocation is low enough to allow the interface to pass over the dislocation, as shown in Fig. 3b. To analyze the temporal evolution in more detail, Fig. 4 visualizes a number of different aspects of the investigated system. The overall V1 phase content, i.e., the area containing phase variant V1 divided by the whole area, is shown as a function of (pseudo) time in Fig. 4a for the two regularization lengths. The most obvious characteristic there is the arrest of the interface, which happens for both values of $\mathcal{l}$ almost simultaneously shortly after $t=2$ µs. While the system with the smaller regularization length has already reached a stationary state, in the other system the interface “detaches” from the dislocation and sweeps the same area per unit time as before, as reflected in the unchanged slope of the respective line in Fig. 4a. How does the _rate_ of the V1 phase content evolution change shortly before and after the arrest of the interface? In Fig. 4b the peaks at $t\approx 2$ and $\approx 3$ µs indicate that the phase boundary accelerates towards the dislocation until its velocity is significantly reduced in the vicinity of the dislocation. The second peak shows that the interface effectively accelerates again after passing the dislocation. 
This stage is followed by another dip (between $\approx 3$ and $\approx 3.5$ µs) where the interface motion to the right of the dislocation is again decelerated. This behavior is also visualized in Fig. 4c, which shows the interface position at equidistant points in time. Two different phenomena operate here, which can be understood from the change of sign of the shear stress field of an edge dislocation as shown in Fig. 2a. Recall that the inelastic strain of the interface is governed only by the shear components of the strain tensor. Once the phase boundary gets close enough to interact with the dislocation, the upper and lower sections of phase V2 lie in regions 3/6 of the dislocation (compare Fig. 2a). The driving force there is effectively directed in the positive x-direction and causes the acceleration. The central region of the phase boundary is located in region 4 of the dislocation stress field and therefore experiences a net driving force that is directed in the _opposite_ direction. This interplay between the directions of the two driving forces is also responsible for the curvature of the interface. Once the interface has passed the dislocation, the central region of the interface experiences a very large driving force in the positive direction (region 1 of the dislocation). The top and bottom sections of the interface, however, are located in regions 2/6, where the driving force is negative. When the interface moves further towards the right, the magnitude of the driving force from the dislocation acting in region 1 decreases as $1/r$ and this part of the interface decelerates. At the same time, the driving force acting on the top and bottom of the interface increases only slightly, explaining the second dip in Fig. 4b around $t=3\ldots 3.75$ µs. At a sufficient distance from the dislocation the top and bottom parts of the interface accelerate, which leads to a decrease in the curvature of the interface. This is further shown in Figs. 
4d and 4e, which relate the motion of the curved interface to the motion of a flat interface in the case without a dislocation structure (indicated by the dotted line). ## 5 Summary In this paper we developed a framework for coupling a phase-field description of planar defects such as phase or twin boundaries with a discrete representation of dislocations within (anisotropic) first-strain-gradient elasticity. Its main features and advantages in contrast to phase-field within classical elasticity are: * 1. Non-singular stresses at the dislocation core that can be easily calibrated to match molecular statics predictions using the approach of Admal et al. (2017) * 2. Non-singular driving forces for the phase-field evolution in the presence of dislocations. This ensures a mesh-independent numerical solution and is a necessary condition for modeling the _interaction_ of dislocations with interfaces such as phase-, grain- or twin-boundaries. We have shown that in order to ensure regularized driving forces in the dislocation core, a Helmholtz-type elastic free energy that is quadratic in the gradient of the total rather than the elastic strain must be used. We implemented the proposed framework in the Comsol Multiphysics Modeling Software and demonstrated its feasibility and basic properties based on a number of examples. Coupled to a dislocation-dynamics code, we expect this phase-field framework to be a valuable tool for understanding microstructure evolution at small scales. Acknowledgements The authors gratefully acknowledge the Deutsche Forschungsgemeinschaft (DFG) for supporting this work carried out within the framework of Collaborative Research Center SFB 799. SS acknowledges financial support from the European Research Council through the ERC Grant Agreement No. 759419 MuDiLingo (“A Multiscale Dislocation Language for Data-Driven Materials Science”). 
## Appendix A Energy that is quadratic in $\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)$ Starting with a setup identical to Sec. 3.2 but for the Helmholtz-type elastic free energy $\rho\psi_{\text{e}}\left(\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\,,\,\theta\right)=\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)+\frac{1}{2}\left(\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\,,$ (49) we find $\displaystyle\boldsymbol{S}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)=\mathbb{C}(\phi_{\beta}):\left(\boldsymbol{E}-\boldsymbol{E}^{\text{in}}(\phi_{\beta})\right)\,,$ (50) $\displaystyle\boldsymbol{\mathcal{T}}=\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\,,$ (51) $\displaystyle\boldsymbol{\xi}_{\beta}=\alpha\operatorname*{grad}\phi_{\beta}-\boldsymbol{\mathcal{T}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})\,,$ (52) and once again combining the first two equations $\displaystyle\boldsymbol{\mathcal{T}}=\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\left(\mathbb{C}^{-1}(\phi_{\beta}):\boldsymbol{S}\right)}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\,.$ (53) From Eq. 
(17) we find the constitutive equation for the total stress $\boldsymbol{S}_{\text{t}}$ $\boldsymbol{S}_{\text{t}}\bigl{(}\boldsymbol{E}\,,\,\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\bigr{)}=\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\operatorname*{div}{\left[\mathbb{C}(\phi_{\beta}):\bigl{(}\boldsymbol{\mathcal{Y}}-\operatorname*{grad}{\boldsymbol{E}^{\text{in}}(\phi_{\beta})}\bigr{)}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right]}\,,$ (54) and the equation to determine the true stress $\boldsymbol{S}$ from the total stress $\boldsymbol{S}_{\text{t}}$ reads $\boldsymbol{S}-\operatorname*{div}{\left[\mathbb{C}(\phi_{\beta}):\operatorname*{grad}{\left(\mathbb{C}^{-1}(\phi_{\beta}):\boldsymbol{S}\right)}\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right]}=\boldsymbol{S}_{\text{t}}\,.$ (55) The evolution equation for the order parameter obtained using the same procedure as in Sec. 3.2 is $M{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}+\operatorname*{div}{\left[\boldsymbol{\mathcal{T}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})\right]}-\rho\partial_{\phi_{\beta}}\psi\,.$ (56) The divergence on the right-hand side of (56) is easily evaluated: $\displaystyle\operatorname*{div}{\left[\boldsymbol{\mathcal{T}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})\right]}$ $\displaystyle=-\operatorname*{div}{\left[\boldsymbol{\mathcal{T}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)\right]}$ $\displaystyle=-\operatorname*{div}{\boldsymbol{\mathcal{T}}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\boldsymbol{\mathcal{T}}\,\smash{\vdots}\,\partial_{\phi_{\beta}}\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)$ 
$\displaystyle=\boldsymbol{S}_{\text{t}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\boldsymbol{S}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\boldsymbol{\mathcal{T}}\,\smash{\vdots}\,\partial_{\phi_{\beta}}\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)$ $\displaystyle=\boldsymbol{S}_{\text{t}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})+\rho\partial_{\phi_{\beta}}\psi-\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-$ $\displaystyle\hskip 71.13188pt\frac{1}{2}\left(\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\cdot\partial_{\phi_{\beta}}\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)-$ $\displaystyle\hskip 99.58464pt\frac{1}{2}\left(\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)$ Finally, we find the expression $M{\dot{\phi\mkern 5.0mu}\mkern-5.0mu}{}_{\beta}=\alpha\Delta\phi_{\beta}+\boldsymbol{S}_{\text{t}}:\partial_{\phi_{\beta}}\boldsymbol{E}^{\text{in}}(\phi_{\beta})-\frac{1}{2}\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right):\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{E}^{\text{e}}\left(\boldsymbol{E}\,,\,\phi_{\beta}\right)-\\\ 
\quad\frac{1}{2}\left(\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\cdot\partial_{\phi_{\beta}}\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)-\frac{1}{2}\left(\partial_{\phi_{\beta}}\mathbb{C}(\phi_{\beta}):\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)\cdot\boldsymbol{\Lambda{}}(\phi_{\beta})\right)\,\smash{\vdots}\,\boldsymbol{\mathcal{Y}}^{\text{e}}\left(\boldsymbol{\mathcal{Y}}\,,\,\phi_{\beta}\right)-\\\ \rho\partial_{\phi_{\beta}}\psi_{\text{b}}(\phi_{\beta}\,,\,\theta)-\partial_{\phi_{\beta}}g(\phi_{\beta}\,,\,\theta)\,,$ (57) where the total stress $\boldsymbol{S}_{\text{t}}$ appears in the driving force. In general, this stress cannot be assumed to be bounded in the dislocation-core. This is illustrated in Fig. 5, which shows the maximum shear stress in the dislocation core for different “thicknesses” of the dislocation, i.e., different discretizations. While the true stress $S_{12}$ does not change noticeably once the discretization is sufficiently fine, the total stress $S_{\text{t}12}$ keeps increasing with decreasing thickness of the dislocation. Figure 5: The maximum shear stress in the dislocation core as a function of the dislocation “thickness”. ## References * Admal et al. (2017) Admal, N.C., Marian, J., Po, G., 2017. The atomistic representation of first strain-gradient elastic tensors. Journal of the Mechanics and Physics of Solids 99, 93–115. doi:10.1016/j.jmps.2016.11.005. * Cai et al. (2006) Cai, W., Arsenlis, A., Weinberger, C.R., Bulatov, V.V., 2006. A non-singular continuum theory of dislocations. Journal of the Mechanics and Physics of Solids 54, 561–587. doi:10.1016/j.jmps.2005.09.005. * Del Piero (2009) Del Piero, G., 2009. On the method of virtual power in continuum mechanics. 
Journal of Mechanics of Materials and Structures 4, 281–292. doi:10.2140/jomms.2009.4.281. * Germain (1973) Germain, P., 1973. The Method of Virtual Power in Continuum Mechanics. Part 2: Microstructure. SIAM Journal on Applied Mathematics 25, 556–575. doi:10.1137/0125053. * Gurtin (1996) Gurtin, M., 1996. Generalized Ginzburg-Landau and Cahn-Hilliard equations based on a microforce balance. Physica D: Nonlinear Phenomena 92, 178–192. * Lazar et al. (2005) Lazar, M., Maugin, G.A., Aifantis, E.C., 2005. On dislocations in a special class of generalized elasticity. physica status solidi (b) 242, 2365–2390. doi:10.1002/pssb.200540078. * Lazar et al. (2006) Lazar, M., Maugin, G.A., Aifantis, E.C., 2006. Dislocations in second strain gradient elasticity. International Journal of Solids and Structures 43, 1787–1817. doi:10.1016/j.ijsolstr.2005.07.005. * Lazar and Po (2015) Lazar, M., Po, G., 2015. The non-singular Green tensor of Mindlin’s anisotropic gradient elasticity with separable weak non-locality. Physics Letters A 379, 1538–1543. doi:10.1016/j.physleta.2015.03.027. * Levitas and Preston (2002) Levitas, V.I., Preston, D.L., 2002. Three-dimensional Landau theory for multivariant stress-induced martensitic phase transformations. II. Multivariant phase transformations and stress space analysis. Physical Review B 66, 134207. doi:10.1103/PhysRevB.66.134207. * Lothe (1992) Lothe, J., 1992. Dislocations in Continuous Elastic Media, in: Indenbom, V.L., Lothe, J. (Eds.), Elastic Strain Fields and Dislocation Mobility. Elsevier. volume 31 of Modern Problems in Condensed Matter Sciences, pp. 175–235. doi:10.1016/B978-0-444-88773-3.50008-X. * Maugin (1980) Maugin, G., 1980. The method of virtual power in continuum mechanics: Application to coupled fields. Acta Mechanica 35, 1–70. * Maugin (1992) Maugin, G., 1992. The Thermomechanics of Plasticity and Fracture. Cambridge University Press. * Mindlin (1964) Mindlin, R., 1964. Micro-structure in linear elasticity. 
Archive for Rational Mechanics and Analysis 16, 51–78. * Nabarro (1947) Nabarro, F.R.N., 1947. Dislocations in a simple cubic lattice. Proceedings of the Physical Society 59, 256. doi:10.1088/0959-5309/59/2/309. * Peierls (1940) Peierls, R., 1940. The size of a dislocation. Proceedings of the Physical Society 52, 34. doi:10.1088/0959-5309/52/1/305. * Po et al. (2018) Po, G., Lazar, M., Admal, N.C., Ghoniem, N., 2018. A non-singular theory of dislocations in anisotropic crystals. International Journal of Plasticity 103, 1–22. doi:10.1016/j.ijplas.2017.10.003.
# Generator coordinate method for transition-state dynamics in nuclear fission G.F. Bertsch Department of Physics and Institute for Nuclear Theory, Box 351560, University of Washington, Seattle, Washington 98195, USA K. Hagino Department of Physics, Kyoto University, Kyoto 606-8502, Japan ###### Abstract The existence of transition channels across the fission barrier has been central to the theory of induced fission, but there has been no microscopic theory applicable to energies at the barrier top. We propose a microscopic treatment motivated by the Generator Coordinate Method (GCM) and Gaussian Overlap Approximation (GOA) to parameterize both the dynamics within the channels and their incoherent couplings to states outside the barrier. The physical characteristics of the channels (often called “transition states”) examined here are their effective bandwidths for crossing the barrier and the quality of the coupling to compound-nucleus states as measured by the transmission factor $T$. We also investigate the spacing of GCM states with respect to their degree of overlap. We find that a rather coarse mesh provides an acceptable accuracy for estimating the bandwidths and transmission factors. The common numerical stability problem in using the GCM is avoided due to the choice of meshes and the finite bandwidths of the channels. ## I Introduction Transition-state theory111 The term “channel” describes its role in reaction theory better than “state” and we shall use that designation for the models presented here. has been at the foundation of the theory of induced fission since the original paper by Bohr and Wheeler in 1939 bo39 and continuing up to the present era bo13 ; capote2009 ; chadwick2006 ; chadwick2011 ; cap2011 ; lu2016 ; schmidt1991 . 
It is encapsulated in the formula for the decay rate $\Gamma$ $\Gamma_{\rm BW}=\frac{1}{2\pi\rho}\sum_{i}T_{i}$ (1) where $i$ labels channels, $\rho$ is the level density of the compound nucleus, and $T_{i}$ is a transmission coefficient or conductance. It is also identical to the penetration factor in subbarrier conductance. It satisfies the bounds $0\leq T\leq 1$. Typically $T$ is assumed to depend on energy as a particle traversing an inverted parabolic barrier, but that is a pure guess absent a microscopic understanding of the Hamiltonian dynamics. The goal of this paper is to carry out the first steps of building a microscopic theory of the barrier-crossing dynamics applicable to heavy nuclei. In the theory of large nuclei, the starting point is the wave functions of self-consistent mean-field theory, such as those given by the energy density functionals of Skyrme, Gogny, or relativistic formulations bender03 . Besides the self-consistent solutions of the Hartree-Fock (HF) or Hartree-Fock-Bogoliubov (HFB) equations, an adequate basis of states for studying transport properties can be constructed using the Generator Coordinate Method (GCM). This requires the calculation of mean-field configurations that are constrained by one or more single-particle fields. The GCM has been used previously for modeling fission dynamics near the barrier top go05 ; ta17 . In those works, the authors used GCM with two constraining fields and the Gaussian Overlap Approximation (GOA) to map the Hamiltonian onto a two-dimensional Schrödinger equation. However, the steps needed to arrive at a Schrödinger equation ignore the statistical aspects of the decay and give no hint of a connection to Eq. (1). In an earlier paper be21 we showed how one could derive the transition-state formula in a highly simplified configuration-interaction approach. 
Here we shall use the same reaction theory formalism to calculate transmission coefficients, but with a much more realistic description of the channels. An important advantage of the reaction theory is that statistical aspects of the theory can be easily included in the formalism ha21 . A technical obstacle in the GCM approach is the nonorthogonality of the basis configurations. As will be shown, any formal difficulties are avoided in present-day theory based on the many-body Green’s function in Eq. (4). A related problem is the danger of numerical instabilities when overlaps between configurations are large. We will show that for the reaction theory that problem does not arise, because one can use coarse bases without much loss of accuracy. For investigating transition channels in fermionic systems, the general characteristics can be derived independently of the details of the constraining field. A configuration is labeled by the expectation value of the field; we shall call the expectation value $q$ and further specify it as $q_{i}$ for a configuration $i$ in a finite-dimensional basis. We will start the analysis with a model that assumes the validity of the Gaussian Overlap Approximation. Besides the internal properties of the channel, one needs specific information about the coupling of the reservoirs of states on either side of the channel. The situation is very similar to treating couplings to electrical cables. The cable has a characteristic impedance, and conductance depends on impedance matching. An optimally matched coupling yields a transition conductance $T=1$. Mismatches on either side decrease it, except for resonances within the cable. To model the complete conductance we define reservoirs of states outside the channel and some details of the interaction connecting them to channel states. In this paper we treat the outside states and their couplings schematically. 
## II GCM methodology for transmission channels The usual procedure for applying the GCM to nuclear spectroscopy consists of the following steps. 1) Define a set of configurations calculated in mean-field theory and constrained by some physical one-body field such as the mass quadrupole moment $Q$. The set of expectation values of the field $(q_{1},q_{2},...q_{N})$ defines an $N$-dimensional basis for the configuration space. 2) Calculate the matrix $\boldsymbol{N}$ of overlaps between configurations and the matrix $\boldsymbol{H}$ of the Hamiltonian or the energy functional that plays the role of the Hamiltonian in the mean-field theory. Here and below we use boldface symbols for matrices. 3) Solve the non-Hermitian eigenvalue problem (i.e., the Hill-Wheeler equation) $\boldsymbol{H}\psi=E\boldsymbol{N}\psi$ (2) for energies $E$ and corresponding $N$-dimensional wave functions $\psi$. 4) Check for convergence by varying the number of configurations $N$ in the calculation. The effect on the properties in the low-energy part of the spectrum should be small. Steps 1) and 2) are the same for calculating reaction rates in the GCM, but the remaining steps are completely different. Namely, the new steps are: 3’.) The Hamiltonian is made complex by adding imaginary terms $-\boldsymbol{\Gamma}_{j}/2$ $\boldsymbol{H}^{\prime}=\boldsymbol{H}-i\sum_{j}\boldsymbol{\Gamma}_{j}/2.$ (3) Each $\boldsymbol{\Gamma}_{j}$ is a matrix of decay rates to states $j$ outside of the model space. For the present problem, there are two decay modes, one corresponding to the set of compound nucleus states and the other to states in the second well. We label them $a$ and $b$, respectively, in the equation below. 4’.) 
The next step is to calculate the Green’s function for the Hamiltonian $\boldsymbol{H}^{\prime}$ by the matrix inversion, $\boldsymbol{G}(E)=(\boldsymbol{H}-i\boldsymbol{\Gamma}_{a}/2-i\boldsymbol{\Gamma}_{b}/2-\boldsymbol{N}E)^{-1}.$ (4) This replaces the matrix diagonalization in the old step 3). 5’.) As a final step, the transmission factor $T_{ab}$ between reservoirs $a$ and $b$ is computed in $S$-matrix theory as encapsulated by the Datta formula da95 ; al21 $T_{ab}=\sum_{ijkl}(\boldsymbol{\Gamma}_{a})_{ij}\boldsymbol{G}_{jk}(\boldsymbol{\Gamma}_{b})_{kl}\boldsymbol{G}^{*}_{li}={\rm Tr}\,\left(\boldsymbol{\Gamma}_{a}\boldsymbol{G}\boldsymbol{\Gamma}_{b}\boldsymbol{G}^{*}\right).$ (5) Calculated this way, $T_{ab}$ is a continuous real function of $E$ in the range $[0,1]$. As in the procedure for spectroscopic studies, one gains confidence by varying the dimension of the configuration spaces. ## III Decay widths Constructing the matrices $\boldsymbol{H}$ and $\boldsymbol{N}$ is straightforward with available tools for calculating properties of large nuclei in the HF or HFB approximations. The calculation of the internal widths $\boldsymbol{\Gamma}_{j}$ is more subtle. The guiding principle is Fermi’s Golden Rule for the decay of configuration $i$ into a set of states $j$: $\Gamma_{j}=2\pi\overline{\langle i|H|j\rangle^{2}}\rho_{j}.$ (6) Here $\rho_{j}$ is the density of final states, and the overline indicates an average over them. The state $i$ is part of the basis for the transmission channel, while the states $j$ might include quasiparticle excitations of the GCM configurations beyond the channel entry and exit points. The Golden Rule assumes that the states are defined in an orthogonal basis, even though it can be extended to a non-orthogonal basis bertsch19 . It is easy to keep $i$ nearly orthogonal to the set $j$ by demanding that the orbital fillings be different. 
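The reaction-theory steps 3’)-5’) above can be illustrated with a hypothetical two-configuration toy model in an orthonormal basis ($\boldsymbol{N}=\boldsymbol{1}$): configuration 1 decays only into reservoir $a$, configuration 2 only into reservoir $b$, and a hopping matrix element $t$ connects them. All numbers are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical two-configuration toy model illustrating steps 3')-5'):
# configuration 1 couples to reservoir a, configuration 2 to reservoir b,
# with a hopping matrix element t between them. Orthonormal basis, N = 1.
t, gamma = 1.0, 0.5
H = np.array([[0.0, t], [t, 0.0]])
N = np.eye(2)
Gam_a = np.diag([gamma, 0.0])            # decay of configuration 1 into a
Gam_b = np.diag([0.0, gamma])            # decay of configuration 2 into b

def T(E):
    # Green's function of Eq. (4), then the Datta trace formula, Eq. (5)
    G = np.linalg.inv(H - 0.5j*(Gam_a + Gam_b) - N*E)
    return np.trace(Gam_a @ G @ Gam_b @ G.conj().T).real

print(T(1.0))    # on resonance (E = t): T is close to 1
print(T(10.0))   # far off resonance: the channel barely conducts
```

For weak end couplings ($\gamma\ll t$) the transmission is resonant and approaches the unitarity limit $T=1$ at the eigenenergies $E=\pm t$, which is the same mechanism behind the peaked structures discussed in Sec. IV.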
The transmission channel includes multiple configurations $i$ and $i^{\prime}$ and the coherence between them needs to be taken into account. This requires a matrix form for $\boldsymbol{\Gamma}$, $(\boldsymbol{\Gamma}_{j})_{ii^{\prime}}=2\pi\overline{\langle i|H|j\rangle\langle i^{\prime}|H|j\rangle}\rho_{j}.$ (7) We follow this structure and use the separable form $(\boldsymbol{\Gamma}_{j})_{ii^{\prime}}=\gamma_{j}g_{i_{j}}(i)g_{i_{j}}(i^{\prime})$ (8) to parameterize them. Here $i_{j}$ labels, e.g., the cm positions of the final-state configurations. We assume that the decays take place near the positions of the end states, so that $i_{e}=1$ or $N$. ## IV Examples of GOA Hamiltonians Before specializing to the internal transmission channels that are the subject of this paper, we show how the GCM/GOA methodology can be applied to calculate the free propagation of a composite particle. A basic assumption is that the mean-field wave function $\Psi_{q_{i}}$ of the many-body system can be factorized into a product of an internal wave function $\Psi(\xi)$ and a Gaussian wave packet in the center-of-mass coordinate $x$ be19c , $\Psi_{z}(x)\sim\Psi(\xi)\exp\left(-(x-z)^{2}/2s^{2}\right).$ (9) Here $z$ locates the center of the wave packet and $s$ is its r.m.s. width. The factorization is exact for the ground states in the harmonic oscillator model of the single-particle Hamiltonian, but is an uncontrolled approximation for more realistic mean-field potentials. On the other hand, the Gaussian form of the overlap function in Eq. (10) below can be checked and is quite well satisfied in practice be19c . 
With this approximation the overlap matrix $\boldsymbol{N}=\langle\Psi_{z}|\Psi_{z^{\prime}}\rangle$ is given by $\boldsymbol{N}_{ij}=\exp(-(z-z^{\prime})^{2}/4s^{2}).$ (10) Under the GOA the Hamiltonian matrix $\boldsymbol{H}$ can be approximated as a quadratic polynomial times the overlap function, $\boldsymbol{H}_{ij}=E_{K}(1-(z-z^{\prime})^{2}/2s^{2})\boldsymbol{N}_{ij}.$ (11) The prefactor $E_{K}$ is the kinetic energy associated with the center-of-mass wave function, $E_{K}=\hbar^{2}/4Ms^{2}$ with $M$ the mass of the composite particle. When the formula is applied to other GCM collective coordinates, $E_{K}$ is related to the zero-point energy associated with that coordinate. Eq. (11) depends only on the coordinate difference $z-z^{\prime}$ when the Hamiltonian is translationally invariant. Translational invariance also implies that the eigenfunctions of the center-of-mass coordinate are plane waves. A plane wave of momentum $k$ is constructed by the integral $\Psi_{k}=\int^{\infty}_{-\infty}dz\,\Psi_{z}e^{ikz}.$ (12) The corresponding eigenvalues can be computed as the ratio of matrix elements $E_{k}=\frac{\langle\Psi_{z}|H|\Psi_{k}\rangle}{\langle\Psi_{z}|\Psi_{k}\rangle}=\frac{\hbar^{2}}{2M}k^{2}.$ (13) Here $z$ is arbitrary. Turning now to the computational treatment of internal channels, we consider a chain of $N$ states spanning an interval $[q_{1},q_{N}]$ with spacing $\Delta q=(q_{N}-q_{1})/(N-1)$ between configurations. As discussed above, imaginary decay matrices centered at end points $q_{1}$ and $q_{N}$ are added to $\boldsymbol{H}$. If the cm wave functions of the reservoir states have the same Gaussian form, $g$ in Eq. (8) may be taken as $g_{i_{e}}(i)=\boldsymbol{N}_{i_{e}i}$. 
The resulting Hamiltonian is then $\boldsymbol{H}^{\prime}_{ij}=\boldsymbol{H}_{ij}-i\gamma\boldsymbol{N}_{i1}\boldsymbol{N}_{j1}-i\gamma\boldsymbol{N}_{iN}\boldsymbol{N}_{jN}.$ (14) ### IV.1 A single flat channel The first model we investigate is a flat chain composed of $N=4$ configurations with overlaps between them determined by the parameter $s=1/\sqrt{5}$. This choice of $s$ was shown in Ref. be19c to give a good compromise between accuracy and computational effort. The channel is the topmost one depicted in Fig. 1. The states indicated by black circles are the ones included in the $N=4$ model. We will also examine the same model with 7 states; the added states are shown as the red circles. Figure 1: Relationship between states in the models described in Sections IV.A, B, and C. The states in the 4-state and 7-state channels are shown as black and black+red circles respectively. The real part of the Hamiltonian couples the states in the channel or channels; the couplings to the reservoirs are parameterized by the imaginary part of the Hamiltonian. The diagonal energies of the GCM states $E_{K}$ are taken to be $E_{K}=5/4$ and the strengths of the absorption at the ends are $\gamma=1$. With these parameters the overlap between neighboring states is fairly small, $\boldsymbol{N}_{i,i+1}=0.28$. The resulting transmission factor $T(E)$ calculated by Eq. (5) is shown in Fig. 2 as the black solid line. Figure 2: Transmission factor for a chain of length $q=3\sqrt{5}/2$ comparing GCM calculations for 4 and 7 states in the chain (solid black and dashed red lines, respectively). The parameters of the Hamiltonian are $(s,E_{K},\gamma)=(1/\sqrt{5},5/4,1)$. See the Supplementary Material for the computer scripts used to calculate the data presented here and later in the Figures. One sees a structure of 4 peaks, each close to an eigenvalue of the Hill-Wheeler Eq. (2). Physically, the peaked structure arises from the wave reflection at the ends of the channel. 
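A calculation of this kind can be sketched in a few lines. The unit mesh spacing below is an assumption chosen to reproduce the quoted neighbor overlap $\boldsymbol{N}_{i,i+1}\approx 0.28$; the other parameters follow the text, and the script is an illustration rather than the authors' Supplementary Material:

```python
import numpy as np

# Sketch of a 4-state flat channel, assuming the GOA matrices of Eqs. (10)
# and (11) and the end-point absorption of Eq. (14). The unit mesh spacing
# is an assumption; (s, E_K, gamma) follow the text.
s, E_K, gamma = 1/np.sqrt(5), 5/4, 1.0
q = np.arange(4.0)                       # 4 equally spaced configurations

D = q[:, None] - q[None, :]
N = np.exp(-D**2/(4*s**2))               # overlap matrix, Eq. (10)
H = E_K*(1 - D**2/(2*s**2))*N            # GOA Hamiltonian, Eq. (11)
print(N[0, 1])                           # neighbor overlap, close to the quoted 0.28

Gam_a = gamma*np.outer(N[0], N[0])       # absorption at the left end, Eq. (14)
Gam_b = gamma*np.outer(N[-1], N[-1])     # absorption at the right end

def T(E):
    # Green's function, Eq. (4), and the Datta trace formula, Eq. (5)
    G = np.linalg.inv(H - 0.5j*(Gam_a + Gam_b) - N*E)
    return np.trace(Gam_a @ G @ Gam_b @ G.conj().T).real

E_grid = np.linspace(-1.0, 3.0, 400)
T_vals = np.array([T(E) for E in E_grid])
print(T_vals.min(), T_vals.max())        # stays within the unitarity bound 0 <= T <= 1
```

Because the end couplings of Eq. (14) are rank-one matrices, each reservoir acts as a single channel and $T(E)$ is automatically bounded by unity, which the scan over the energy grid confirms.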
Note also that the range of $T(E)$ satisfies $0<T<1$ as required by the unitarity of the $S$ matrix. Note further that the channel starts conducting near $E\approx 0$, as would be the case for a classical channel. The adequacy of the mesh spacing can be assessed by shrinking it. Decreasing it by a factor of 2, the same interval contains 7 states instead of 4. The resulting transmission factor is shown as the dashed red line in Fig. 2. One sees that in the low-energy region it is quite similar to the 4-state approximation. However, it has 3 additional peaks at higher energy, corresponding to the high-energy eigenfunctions of the 7-d model. These peaks are much narrower than the lower ones and can be neglected in calculating integrated transmission rates. The same behavior would continue with finer mesh spacings; there would remain 4 peaks in the energy region $[0,2]$ and the additional narrow peaks would appear at higher and higher energies. The qualitative aspects of this behavior can be easily understood. With a finite mesh spacing of Gaussian wave packets one can approximate plane waves with good fidelity at low momentum, but there is a momentum cutoff controlled by the mesh spacing. In the transmission channel as parameterized, the momentum at the injection and exit points is controlled by the Gaussian width parameter $s$. The required momentum matching to the channel parameters suppresses transmission into the high-momentum modes of the channel. We conclude that fairly sparse meshes are adequate for representing the overall conductivity of flat transmission channels. As mentioned earlier, very fine mesh spacings often lead to numerical instabilities in the spectroscopic applications of the GCM. The usual fix is to make a singular value decomposition of the overlap matrix, throwing out eigenfunctions that have small norms. It is instructive to see what happens when the same procedure is applied here. 
Figure 3: Transmission factor for a chain of length $q=4$ comparing GCM calculations for 4 and 7 states in the chain (solid black and dashed red lines, respectively). The difference with Fig. 2 is that the 7-d space was truncated to 4 dimensions by the singular value decomposition of the overlap matrix. The Hamiltonians are the same as in Fig. 2. Fig. 3 compares the $4$-state model with the 7-state model truncated to 4 states by eliminating the small-norm vectors. One sees that the resonance positions are rather close and the widths are also very similar. There is no obvious benefit from starting out with a larger space. Since there is no need to truncate the space for reasons of numerical stability, this aspect of the usual methodology can be dropped. We next examine how $T(E)$ depends on the strength of the absorption at the ends of the channel. Fig. 4 shows $T(E)$ for a range of absorption strengths $\gamma$. Figure 4: Transmission factors in the 4-d model for several values of absorption strength: $\gamma=0.5$ (solid black line); $\gamma=1.0$ (red dashed line); $\gamma=2.0$ (blue dotted line). Obviously, for small $\gamma$ the channel acts as a resonant cavity with sharply defined resonances and the overall conductance is low. For the intermediate value the reflection amplitude is small and the individual peak structure disappears. Very large $\gamma$’s are probably unphysical, as the couplings are strong enough to concentrate the decay strength into a small energy region. ### IV.2 A parabolic channel In this section we extend the model to include a peaked barrier. We take the shape of the barrier as an inverted parabola, as is often assumed in phenomenological treatments. Under the factorization Ansatz Eq. 
(9) the GCM matrix elements of a potential depending only on the cm position are given by $\langle i|V|j\rangle=\frac{1}{s\pi^{1/2}}\int^{\infty}_{-\infty}dx\,V(x)e^{-(x-z_{i})^{2}/2s^{2}-(x-z_{j})^{2}/2s^{2}}.$ (15) Here $V(x)$ is taken as the parabolic form $V(x)=V_{2}(x-x_{b})^{2}$ where $x_{b}$ is at the center of the barrier. The resulting GCM matrix elements are $\langle i|V|j\rangle=V_{2}\left[\left(\frac{z_{i}+z_{j}}{2}-x_{b}\right)^{2}+\frac{s^{2}}{2}\right]\boldsymbol{N}_{ij}.$ (16) The matrix $\boldsymbol{V}$ of these elements is added to the Hamiltonian defined in Eqs. (11) and (14). Note that the diagonal potential matrix elements are slightly below the defining potential due to the second term in Eq. (16). The diagonal energies are indicated in the channel marked “B” in Fig. 1. For a numerical example we take $V_{2}=-1/2$. The channel Hamiltonian then has eigenenergies ranging from $-0.4$ to $3.3$. Fig. 5 shows the transmission factor as a function of energy taking $\gamma=1.0$. Figure 5: Transmission factor in the 4-d and 7-d models with a parabolic barrier. Solid black line: 4x4 model; red dashed line: 7x7 model. It may be seen that the coupling to the outside is strongest at the energy of the end channel states. However, at that energy the barrier suppresses the conductance and $T$ is a fraction of the maximum. At higher energies $T$ can approach its maximum value of one, but the coupling is weaker and the peaks are narrow. We believe this behavior is generic for channels that follow the topography of the potential energy surface. This is the case when they are constructed using an adiabatic approximation. To see that the results are not an artifact of the GCM mesh spacing, we also show the transmission factor taking a finer mesh with 7 GCM configurations instead of 4. One sees that the low-energy conductance is almost the same. 
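The closed form Eq. (16) can be checked against the defining integral Eq. (15) by direct quadrature. Below is a small numerical verification; the mesh points $z_i,z_j$ and barrier position $x_b$ are arbitrary illustrative values, with $V_2=-1/2$ as in the text.

```python
import numpy as np

def overlap(zi, zj, s):
    # N_ij for Gaussians proportional to exp(-(x - z)^2 / (2 s^2))
    return np.exp(-(zi - zj) ** 2 / (4 * s ** 2))

def V_closed(zi, zj, s, V2, xb):
    # Eq. (16): V2 * [((z_i + z_j)/2 - x_b)^2 + s^2/2] * N_ij
    return V2 * (((zi + zj) / 2 - xb) ** 2 + s ** 2 / 2) * overlap(zi, zj, s)

def V_quadrature(zi, zj, s, V2, xb, n=200001, L=25.0):
    # Eq. (15): integrate V(x) against the product of the two Gaussians (trapezoid rule)
    x = np.linspace(-L, L, n)
    f = V2 * (x - xb) ** 2 * np.exp(-(x - zi) ** 2 / (2 * s ** 2)
                                    - (x - zj) ** 2 / (2 * s ** 2))
    dx = x[1] - x[0]
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dx / (s * np.sqrt(np.pi))

s, V2, xb = 1 / np.sqrt(5), -0.5, 1.5   # V2 = -1/2 as in the text; x_b illustrative
print(V_closed(0.7, 1.9, s, V2, xb), V_quadrature(0.7, 1.9, s, V2, xb))
```

The two evaluations agree to quadrature accuracy; the $s^2/2$ shift in Eq. (16) is what pulls the diagonal elements slightly below the defining potential, as noted above.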
At higher energies, the narrowing of the peaks is also similar, although the peaks are somewhat shifted in energy. ### IV.3 Two crossing channels To understand better the adiabatic treatment, we consider a model in which the adiabatic channels arise from coupling between diabatic ones. We start with two diabatic channels that cross as depicted in the lowest diagram in Fig. 1. The dashed black lines link configurations that have large matrix elements in HF mean field theory; the resulting chains are the diabatic paths of the dynamics. Adiabatic dynamics arise when one first diagonalizes the Hamiltonian within the subspace at fixed $z_{i}$. These are indicated by the curved red dotted lines in the Figure. The picture of adiabatic channels peaking at the barrier top is unavoidable in transition-channel theory as implemented in Eq. (1). For the Hamiltonian model we add linear potentials to generate the diabatic paths together with a constant interaction between intrinsic states at the same positions $q$. The matrix elements for a potential having a constant slope $V_{a}(x)=v_{a}x$ are given by $\boldsymbol{V}_{ai;a^{\prime}j}=v_{a}\left(\frac{z_{i}+z_{j}}{2}-x_{b}\right)\boldsymbol{N}_{ij}\delta_{aa^{\prime}}.$ (17) Here $a$ and $a^{\prime}$ label the two diabatic channels. The other term to be added to the Hamiltonian is the coupling $h_{c}$ between intrinsic states of the two diabatic channels. We take the coupling as a constant independent of $z$. Again invoking the factorization hypothesis, the matrix elements are $\boldsymbol{H}_{ai,a^{\prime}j}=h_{c}(1-\delta_{a,a^{\prime}})\boldsymbol{N}_{ij}.$ (18) For the numerical example, we take $v_{a}=\pm 0.5$ in the channels and $h_{c}=0.8$ for the coupling. As depicted in Fig. 1 there are now 4 decay matrices to be added to the Hamiltonian. We may assume that the final states are all orthogonal, so we can apply the transmission formula Eq. (5) with an incoherent sum over all four combinations $(a,a^{\prime})\times(b,b^{\prime})$. 
In the adiabatic approximation, only the transmission factor from the two lowest states at the ends is included, $T_{\rm adiabatic}\approx T_{ab^{\prime}}.$ (19) It is shown as the red dashed line in Fig. 6. The dotted blue line shows the combined transmission factor that includes the upper adiabatic channel as well, $T^{\prime}_{\rm adiabatic}\approx T_{ab^{\prime}}+T_{a^{\prime}b}.$ (20) These are to be compared to the full transmission factor (solid black line) including all contributions, $T=T_{ab}+T_{a^{\prime}b^{\prime}}+T_{ab^{\prime}}+T_{a^{\prime}b}.$ (21) Figure 6: Transmission factor for the Hamiltonian “C8” depicted in Fig. 1. Solid black line: all contributions to $T$; red line: lowest adiabatic contribution; blue line: both adiabatic contributions One sees that the adiabatic approximation works well overall when both channels are included. The second channel adds hardly anything where the lower channel is open, but fills out the higher region $1.0<E<2.5$. Another interesting finding, not very visible in the figure, is that the adiabatic approximation significantly underpredicts the transmission factor at the lowest energies. This inadequacy of the approximation was noted earlier in Refs. ha20; gi14. We have also calculated $T$ without any coupling between the diabatic channels. As expected, that treatment seriously underpredicts the transmission coefficient. ## V General conclusions A few tentative conclusions may be drawn from the simple models presented here. First of all, one does not need fine collective-coordinate meshes in the GCM configuration space. A mesh spacing giving overlaps of 0.3 between a configuration and its diabatic neighbor seems adequate; smaller mesh spacings will place the resonances at different positions but the coarse properties of the channel will remain the same. The second conclusion is that momentum matching is an important consideration in the channel coupling to the reservoir states. 
It produces an effective energy cutoff in the conductance of the channel. The energy scale for this effect is given by the zero-point energy of the collective coordinate in the mean-field wave function. To give a sense of that, we present in Table I some characteristics of the transmission function $T(E)$ for the models discussed in the previous section. The first characteristic is the integrated transmission factor. This is reported in the Table in units of $E_{K}$, $I_{T}=\int_{-\infty}^{\infty}dE\,T(E)/E_{K}.$ (22) Another important characteristic is the energy difference between the nominal barrier height and the energies of the states in the reservoirs that are strongly coupled by the channel. A measure of that is the median energy $E_{m}$ of the integrated $T(E)$, $\int_{-\infty}^{E_{m}}dE\,T(E)=\frac{1}{2}\int_{-\infty}^{\infty}dE\,T(E),$ (23) measured with respect to the GCM energy at the barrier, $\langle x_{b}|H|x_{b}\rangle$ in the single-channel examples and the lower eigenvalue of the Hamiltonian for the case of two coupled channels.

Model | $I_{T}$ | $E_{m}/E_{K}$
---|---|---
A4 | 1.7 | 0.0
A7 | 1.7 | 0.0
B4 | 0.6 | -0.1
B7 | 0.6 | -0.3
C8 | 3.1 | 0.9
C${}^{a}_{8}$ | 2.5 | 0.9

Table 1: Integrated channel properties of the models discussed in the text. The model labels refer to the subsection in Sect. IV where they were discussed. The subscript refers to the dimension of the GCM space. $E_{m}$ is computed with respect to the diagonal GCM energy at the barrier peak. In case C, the energy is the adiabatic one computed by diagonalizing the 2x2 matrix mixing the two GCM states. The row marked C${}^{a}_{8}$ ignores the coupling between the adiabatic channels. The energy difference scaled to $E_{K}$ is given in the third column of Table I. Both quantities exhibited in the Table support our conclusion that one can safely use coarse meshes to define the channels. 
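Eqs. (22) and (23) are straightforward to evaluate on a tabulated $T(E)$. The sketch below computes both for a toy Lorentzian transmission curve, not one of the models in Table I; the resonance height, width, and position are arbitrary.

```python
import numpy as np

def integrated_and_median(E, T, EK=1.25):
    """I_T (Eq. 22) and median energy E_m (Eq. 23) of a tabulated T(E)."""
    dE = E[1] - E[0]
    cum = np.cumsum((T[1:] + T[:-1]) / 2) * dE     # running trapezoid integral of T(E)
    total = cum[-1]
    Em = E[1:][np.searchsorted(cum, total / 2)]    # first energy holding half the strength
    return total / EK, Em

# toy T(E): a single Lorentzian resonance of unit height, HWHM 0.2, centered at E = 1
E = np.linspace(-5.0, 7.0, 24001)
T = 1.0 / (1.0 + ((E - 1.0) / 0.2) ** 2)
I_T, E_m = integrated_and_median(E, T)
print(I_T, E_m)
```

For this symmetric toy curve the median sits at the resonance energy; for the Table I models, $E_m$ would then be referenced to the GCM energy at the barrier as described above.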
Comparing the 4-state models with the 7-state models one sees little or no change in the integrated transmission factor and in its median energy. Model C with two interacting diabatic channels has about twice the integrated transmission strength of model A, which is hardly surprising. Another point to note about the phenomenological transition-state theory is that the coupling between adiabatic channels is completely neglected. We can test this assumption with the entry marked “C${}^{a}_{8}$”. That model ignores the coupling between the adiabatic channels; its transmission factor is lower by 20%. This is not beyond the expected uncertainties in a fully microscopic theory, but a computational framework taking all interactions into account is obviously to be preferred. ## References * (1) N. Bohr and J.A. Wheeler, Phys. Rev. 56, 426 (1939). * (2) O. Bouland, J.E. Lynn, and P. Talou, Phys. Rev. C 88, 054612 (2013). * (3) R. Capote et al., Nucl. Data Sheets 110, 3107 (2009). * (4) M.B. Chadwick et al., Nucl. Data Sheets 107, 2931 (2006). * (5) M.B. Chadwick et al., Nucl. Data Sheets 112, 2887 (2011). * (6) T. Cap, K. Siwek-Wilczynska, and J. Wilczynski, Phys. Rev. C 83, 054602 (2011). * (7) H. Lu, A. Marchix, Y. Abe and D. Boilley, Comp. Phys. Comm. 200, 381 (2016). * (8) K.-H. Schmidt and W. Morawek, Rep. Prog. Phys. 54, 949 (1991). * (9) M. Bender, P.H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. 75, 121 (2003). * (10) H. Goutte, et al., Phys. Rev. C 71, 024316 (2005). * (11) H. Tao, et al., Phys. Rev. C 96, 024319 (2017). * (12) G.F. Bertsch and K. Hagino, J. Phys. Soc. Jpn. 90, 114005 (2021). * (13) K. Hagino and G.F. Bertsch, Phys. Rev. E 104, L052104 (2021). * (14) S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, 1995), Eq. (3.5.20). * (15) Y. Alhassid, G.F. Bertsch, and P. Fanto, Ann. Phys. 424, 168381 (2021). * (16) G.F. Bertsch and L.M. Robledo, Phys. Rev. C 100, 044606 (2019). * (17) G.F. Bertsch and W. Younes, Ann. Phys. 
403, 68 (2019). * (18) K. Hagino and G.F. Bertsch, Phys. Rev. C 101, 064317 (2020). * (19) S.A. Giuliani, L.M. Robledo, and R. Rodríguez-Guzmán, Phys. Rev. C 90, 054311 (2014).
∗First author; work done during an internship at NAVER Cloud. †Co-corresponding authors. # Noise Map Guidance: Inversion with Spatial Context for Real Image Editing Hansam Cho1,2∗, Jonghyun Lee1,2, Seoung Bum Kim1, Tae-Hyun Oh3,4†, Yonghyun Jeong2† 1School of Industrial and Management Engineering, Korea University, 2NAVER Cloud 3Dept. of Electrical Engineering and Grad. School of Artificial Intelligence, POSTECH 4Institute for Convergence Research and Education in Advanced Technology, Yonsei University {chosam95, tomtom1103<EMAIL_ADDRESS> <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Text-guided diffusion models have become a popular tool in image synthesis, known for producing high-quality and diverse images. However, their application to editing real images often encounters hurdles primarily due to the text condition deteriorating the reconstruction quality and subsequently affecting editing fidelity. Null-text Inversion (NTI) has made strides in this area, but it fails to capture spatial context and requires computationally intensive per-timestep optimization. Addressing these challenges, we present Noise Map Guidance (NMG), an inversion method rich in a spatial context, tailored for real-image editing. Significantly, NMG achieves this without necessitating optimization, yet preserves the editing quality. Our empirical investigations highlight NMG’s adaptability across various editing techniques and its robustness to variants of DDIM inversion. Figure 1: Compared to other inversion methods, NMG (a) demonstrates high fidelity editing when paired with Prompt-to-Prompt, (b) successfully conducts viewpoint alteration via MasaCtrl, and (c) preserves the spatial context of the input image while performing zero-shot image-to-image translation with pix2pix-zero. Text prompt corresponding to each input image is presented beneath each sample, with words introduced for image editing distinctly highlighted in green. 
## 1 Introduction Text-guided diffusion models (Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; Chang et al., 2023; Podell et al., 2023) have recently emerged as a powerful tool in image synthesis, widely adapted for their ability to generate images of exceptional visual quality and diversity. Owing to their impressive performance, numerous studies have leveraged these models for image editing (Hertz et al., 2022; Cao et al., 2023; Parmar et al., 2023; Epstein et al., 2023). While these models excel in editing synthesized images, they often produce sub-par results when editing real images. Hertz et al. (2022) point out that this challenge mostly originates from the reliance on classifier-free guidance (CFG) (Ho & Salimans, 2021), a method ubiquitously adopted to increase fidelity. Editing real images within a diffusion framework follows a twofold approach. Firstly, an image is inverted into a latent variable via DDIM inversion. This latent variable is then channeled into two paths: reconstruction and editing. While the editing path modifies the latent towards the desired outcome, it also integrates information from the reconstruction path to retain the original identity of the image. As a result, the quality of the edit is highly dependent on the reconstruction path. However, extrapolating towards the condition via CFG introduces errors in each timestep. This deviation pushes the reconstruction path away from the DDIM inversion trajectory, hindering accurate reconstruction and ultimately diminishing the editing fidelity. Addressing this limitation, Mokady et al. (2023) propose Null-text Inversion (NTI) leading to notable achievements in real image editing. This approach optimizes the null-text embedding used in CFG on a per-timestep basis, correcting the reconstruction path to the desired DDIM inversion trajectory. 
By leveraging the optimized null-text embedding and integrating it with Prompt-to-Prompt (Hertz et al., 2022) editing, NTI conducts real image editing. While NTI offers certain benefits, its use of per-timestep optimization can lead to increased computational demands in practical applications. As a solution to this time-intensive optimization method, NPI (Miyake et al., 2023) and ProxNPI (Han et al., 2023) attempt to approximate the optimized null-text embedding without optimization. Although these methods succeed in correcting the reconstruction path, they often struggle to capture the spatial context in the input images. To address these challenges, we introduce Noise Map Guidance (NMG), an inversion methodology enriched with spatial context for real image editing. To capture spatial context, we employ the latent variables from DDIM inversion, which we refer to as noise maps. These noise maps, essentially noisy representations of images, inherently encapsulate the spatial context. To eliminate the need for an optimization phase, we condition the noise maps to the reverse process. Rather than solely depending on text embeddings for image editing, our methodology harnesses both noise maps and text embeddings, drawing from their spatial and semantic guidance to perform faithful editing. To accommodate our dual-conditioning strategy, we reformulate the reverse process by leveraging the guidance technique proposed by Zhao et al. (2022). Our experimental results highlight NMG’s capacity to preserve the spatial context of the input image during real image editing. Figure 1 (a) and (c) reveal that NTI, NPI, and ProxNPI often struggle to capture the spatial context of the input image. Furthermore, Figure 1 (b) highlights a scenario where spatial context is essential for effective editing, and in such context, NMG consistently outperforms other methods. 
By utilizing guidance techniques of diffusion models, our optimization-free method achieves speeds substantially faster than NTI without compromising editing quality. The versatility of NMG is further emphasized by its integration with various editing techniques, e.g., Prompt-to-Prompt (Hertz et al., 2022), MasaCtrl (Cao et al., 2023) and pix2pix-zero (Parmar et al., 2023), each grounded in an inversion methodology. Moreover, we demonstrate NMG’s resilience to variations of DDIM inversion. Our main contributions are summarized as follows: * We present Noise Map Guidance (NMG), an inversion method rich in spatial context, specifically tailored for real-image editing. * Although formulated as an optimization-free method, we show that NMG maintains editing quality without compromise, even achieving superior performance over competing methods. * We demonstrate NMG’s adaptability by combining it with different editing methodologies and by highlighting its consistent robustness across variations of DDIM inversion. ## 2 Related Work #### Inversion with Diffusion Models The initial stage of image editing typically consists of encoding the input image into a latent space, referred to as inversion. DDIM (Song et al., 2020a) first introduces the concept of inversion within diffusion models by formulating the sampling process as an ordinary differential equation (ODE) and inverting it to recover a latent noise as a starting point for the sampling process. However, the precision of DDIM’s reconstruction deteriorates when being integrated with a text-guided diffusion model (Hertz et al., 2022). In response, recent studies propose the addition of an auxiliary diffusion state (Wallace et al., 2023) or a different inversion framework (Huberman-Spiegelglas et al., 2023). Recently, Mokady et al. (2023) proposes Null-text Inversion (NTI), which tailors inversion techniques specifically for text-guided diffusion models. 
NTI refines the null-text embedding used in text-guided diffusion models, achieving accurate image reconstruction. However, the intensive computation required for optimization constrains its broader applicability. To mitigate this drawback, Negative-prompt Inversion (NPI) (Miyake et al., 2023) approximates the optimized null-text embedding to achieve optimization-free inversion, albeit with compromised performance relative to NTI. Subsequently, Han et al. (2023) incorporate proximal guidance into NPI, enhancing its quality. Like NPI, we introduce an optimization-free inversion technique, but uniquely, our approach robustly maintains the spatial context of the original image. #### Editing with Inversion methods Despite the emergence of numerous studies in the field of editing via diffusion models, they often encounter challenges including the need for extra training (Kawar et al., 2023; Valevski et al., 2023; Bar-Tal et al., 2022), or the incorporation of additional conditional signals (Avrahami et al., 2022; 2023; Nichol et al., 2022; Rombach et al., 2022). Recently, inversion-based editing methods (Hertz et al., 2022; Cao et al., 2023; Parmar et al., 2023) have shown promising results that do not require additional training or conditional signals. These methods typically involve two parallel phases: a reconstruction sequence and an editing sequence. In the reconstruction sequence, critical information from the input image is extracted and fed into the editing sequence for manipulation. Notably, Prompt-to-Prompt (Hertz et al., 2022) leverages cross-attention maps to guide the editing sequence, while MasaCtrl (Cao et al., 2023) introduces mutual self-attention to facilitate non-rigid editing. pix2pix-zero (Parmar et al., 2023) utilizes cross-attention map guidance to perform zero-shot image-to-image translation. 
Because obtaining precise information from the reconstruction path is crucial for reliable image editing, the efficiency of these methods heavily relies on the performance of the inversion methods they use. By integrating our inversion method with existing inversion-based editing methods, we demonstrate significant improvement in preserving the spatial context of input images. ## 3 Method ### 3.1 Background #### Text-guided Diffusion Model Text-guided diffusion models (Rombach et al., 2022) are designed to map a random Gaussian noise vector ${\bm{z}}_{T}$ into an image ${\bm{z}}_{0}$ while aligning with the given text condition $c_{T}$, typically text embeddings derived from text encoders like CLIP (Radford et al., 2021). This is achieved through a sequential denoising operation, commonly termed as the reverse process. This process is driven by a noise prediction network $\epsilon_{\theta}$, which is optimized by the loss: $L_{simple}=E_{{\bm{z}}_{0},\epsilon\sim N(0,I),t\sim U(1,T)}\|{\epsilon-\epsilon_{\theta}({\bm{z}}_{t},t,c_{T})}\|_{2}^{2}.$ (1) Although $\epsilon_{\theta}$ is conditioned on the timestep $t$, denoted as $\epsilon_{\theta}({\bm{z}}_{t},t,c_{T})$, we omit the timestep condition for the output of the network as $\epsilon_{\theta}({\bm{z}}_{t},c_{T})$ for brevity. Text-guided diffusion models typically utilize classifier-free guidance (CFG) (Ho & Salimans, 2021) to incorporate text conditions during image generation. CFG is represented as follows: $\tilde{\epsilon}_{\theta}({\bm{z}}_{t},c_{T})=\epsilon_{\theta}\left({\bm{z}}_{t},\emptyset\right)+w\cdot(\epsilon_{\theta}\left({\bm{z}}_{t},c_{T}\right)-\epsilon_{\theta}\left({\bm{z}}_{t},\emptyset\right)).$ (2) Here, $w$ signifies the text guidance scale (controlling the impact of the text condition), and $\emptyset$ denotes the null text embedding (the embedding vector for a null-text “”). 
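Eq. 2 amounts to a single linear extrapolation per denoising step. A minimal numpy sketch follows; the array shapes and the guidance scale are illustrative placeholders, and the two inputs stand in for the U-Net's unconditional and text-conditional outputs:

```python
import numpy as np

def cfg(eps_uncond, eps_cond, w):
    # Classifier-free guidance, Eq. 2: extrapolate from the unconditional
    # prediction toward the text-conditional one with guidance scale w.
    return eps_uncond + w * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_u = rng.normal(size=(4, 64, 64))   # stand-in for eps_theta(z_t, null)
eps_c = rng.normal(size=(4, 64, 64))   # stand-in for eps_theta(z_t, c_T)
eps_tilde = cfg(eps_u, eps_c, w=7.5)   # w > 1 extrapolates past the conditional prediction
```

Setting $w=1$ recovers the conditional prediction and $w=0$ the unconditional one; it is the $w>1$ regime that introduces the per-timestep errors discussed in the introduction.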
Figure 2: As seen in (a), naive reconstruction often fails due to the reconstruction path diverging from the original inversion path. Achieving reliable reconstruction necessitates realigning the reconstruction path with the inversion path. As depicted in (b), NTI achieves this alignment by optimizing the null-text embedding, thereby reducing the error between the inversion and reconstruction paths. Conversely, NMG, as shown in (c), conditions the reconstruction process based on the divergence between the two paths, leveraging this variance to refine the reconstruction path. #### DDIM Inversion To edit a real image in the framework of diffusion models, an image firstly has to be converted into a latent variable ${\bm{z}}^{*}_{T}$, a process known as inversion. Fundamentally, the latent variable ${\bm{z}}^{*}_{T}$ is the original starting noise that reconstructs into the image when denoised. To invert an image into its latent variable, DDIM inversion (Song et al., 2020a) is predominantly utilized. DDIM inversion is derived from the reverse process of DDIM that is formulated as: ${\bm{z}}_{t-1}=\sqrt{\tfrac{\alpha_{t-1}}{\alpha_{t}}}{\bm{z}}_{t}+\sqrt{\alpha_{t-1}}\left(\sqrt{\tfrac{1}{\alpha_{t-1}}-1}-\sqrt{\tfrac{1}{\alpha_{t}}-1}\right)\epsilon_{\theta}({\bm{z}}_{t},c_{T})$ (3) with $\{\alpha_{t}\}_{t=0}^{T}$ as a predefined noise schedule. Because this DDIM reverse process can be formulated as an ordinary differential equation (ODE), the DDIM inversion process can be obtained by reversing said ODE as: ${\bm{z}}_{t+1}=\sqrt{\tfrac{\alpha_{t+1}}{\alpha_{t}}}{\bm{z}}_{t}+\sqrt{\alpha_{t+1}}\left(\sqrt{\tfrac{1}{\alpha_{t+1}}-1}-\sqrt{\tfrac{1}{\alpha_{t}}-1}\right)\epsilon_{\theta}({\bm{z}}_{t},c_{T}).$ (4) #### Null-text Inversion Inversion-based editing methods follow a twofold approach. First, an image is converted into its latent variable through inversion. This is followed by a simultaneous two-phase procedure: reconstruction and editing. 
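Eqs. 3 and 4 are exact inverses of each other when the same noise prediction is reused in both directions. The toy numpy check below works under that simplifying assumption; in practice $\epsilon_{\theta}$ is re-evaluated at each new latent, which is precisely why DDIM inversion is only approximate. The schedule values and shapes are illustrative.

```python
import numpy as np

def ddim_step(z_t, eps, a_t, a_prev):
    # One DDIM reverse step, Eq. 3 (timestep t -> t-1)
    return (np.sqrt(a_prev / a_t) * z_t
            + np.sqrt(a_prev) * (np.sqrt(1 / a_prev - 1) - np.sqrt(1 / a_t - 1)) * eps)

def ddim_invert_step(z_t, eps, a_t, a_next):
    # One DDIM inversion step, Eq. 4 (timestep t -> t+1)
    return (np.sqrt(a_next / a_t) * z_t
            + np.sqrt(a_next) * (np.sqrt(1 / a_next - 1) - np.sqrt(1 / a_t - 1)) * eps)

rng = np.random.default_rng(1)
z = rng.normal(size=(4, 8, 8))
eps = rng.normal(size=(4, 8, 8))       # frozen noise prediction, for illustration only
a_t, a_next = 0.8, 0.6                 # toy schedule values; alpha decreases as t grows
z_up = ddim_invert_step(z, eps, a_t, a_next)
z_back = ddim_step(z_up, eps, a_next, a_t)   # step back down with the same eps
print(np.abs(z_back - z).max())
```

With the frozen `eps`, the round trip recovers `z` up to floating-point error.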
While the editing phase alters the latent variable towards the desired modification, it concurrently draws on information from the reconstruction phase to maintain the image’s characteristics. During editing, a substantial CFG guidance value of $w>1$ is crucial for generating high-fidelity images. However, the extrapolation toward the condition introduces errors at every timestep. As illustrated in Figure 2 (a), a large CFG guidance scale misguides the reconstruction path away from the inversion path. This divergence leads to an imprecise reconstructed image, subsequently degrading the final quality of the edited image. To address this, Null-text Inversion (NTI) (Mokady et al., 2023) optimizes null-text embeddings used in Eq. 2 to align with the DDIM inversion trajectory. Initially, DDIM inversion is performed with $w=1$ to compute the latent variables $\{{\bm{z}}_{t}^{*}\}_{t=1}^{T}$, termed the noise maps. As the reverse process progresses from $t$ to $t-1$, ${\bm{z}}_{t-1}$ is derived from ${\bm{z}}_{t}$ with Eq. 2 and Eq. 3: ${\bm{z}}_{t-1}=\sqrt{\tfrac{\alpha_{t-1}}{\alpha_{t}}}{\bm{z}}_{t}+\sqrt{\alpha_{t-1}}\left(\sqrt{\tfrac{1}{\alpha_{t-1}}-1}-\sqrt{\tfrac{1}{\alpha_{t}}-1}\right)\tilde{\epsilon}_{\theta}({\bm{z}}_{t},c_{T}).$ (5) With both ${\bm{z}}_{t-1}$ and a single noise map ${\bm{z}}^{*}_{t-1}$ available, the null-text embedding $\emptyset_{t}$ is optimized at every step using the loss function: $\min_{\emptyset_{t}}\|{\bm{z}}_{t-1}-{\bm{z}}_{t-1}^{*}\|_{2}^{2}.$ (6) By optimizing the null-text embeddings, NTI reduces the error of the reconstruction path. Overall process of NTI is depicted in Figure 2 (b). However, since the null-text embedding is represented as a single-dimensional vector in $\mathbb{R}^{d}$, it struggles to capture the image’s spatial context. ### 3.2 Noise Map Guidance Unlike null-text embeddings, noise maps $\{{\bm{z}}_{t}^{*}\}_{t=1}^{T}$ naturally capture the spatial context of the input image. 
This is due to the fact that noise maps originate from infusing a small amount of noise into the input image and have the same spatial dimensions as the input image. Leveraging this attractive trait, we directly employ noise maps to preserve the spatial context of an image. As a part of this approach, we condition the reverse process on noise maps. Given that current text-guided diffusion models are conditioned solely on text embeddings, our initial step is to reformulate the reverse process to account for both text and noise maps. To reformulate the reverse process to a conditional form, we introduce the score function $\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})$ with the relation $\epsilon_{\theta}({\bm{z}}_{t})\approx-\sqrt{1-\alpha_{t}}\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})$. By replacing $\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})$ with $\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t}|c)$, we enable the network to produce outputs based on specific conditions, leading to the equation $\epsilon_{\theta}({\bm{z}}_{t},c)\approx-\sqrt{1-\alpha_{t}}\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t}|c)$. Applying Bayes’s rule, $\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t}|c)=\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})+\nabla_{{\bm{z}}_{t}}\log p(c|{\bm{z}}_{t})$, the network’s conditional output is then formulated as: $\epsilon_{\theta}({\bm{z}}_{t},c)=-\sqrt{1-\alpha_{t}}(\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})+\nabla_{{\bm{z}}_{t}}\log p(c|{\bm{z}}_{t}))$ (7) In this context, $\nabla_{{\bm{z}}_{t}}\log p(c|{\bm{z}}_{t})$ is a crucial component to condition the diffusion model. We introduce energy guidance (Zhao et al., 2022), which serves as a flexible conditioning format in our formulation. With energy guidance, Eq. 
7 is revised as follows: $\epsilon_{\theta}({\bm{z}}_{t},c)=-\sqrt{1-\alpha_{t}}(\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})-\nabla_{{\bm{z}}_{t}}\mathcal{E}({\bm{z}}_{t},c,t)).$ (8) NTI (Mokady et al., 2023) performs one-step denoising, transitioning from $z_{t}$ to $z_{t-1}$, and compares the noise map $z^{*}_{t-1}$ with latent variable $z_{t-1}$ for optimization. To align with this process, we define the energy function $\mathcal{E}({\bm{z}}_{t},c,t)=\|{z}_{t-1}-z_{t-1}^{*}\|_{1}$ to condition the noise map for each timestep. Note that unlike NTI, we employ the L1 distance to compute the distance between the noise map and the current latent variable. Based on the definition of the energy function, Eq. 8 is revised as: $\displaystyle\epsilon_{\theta}(z_{t},c_{N})=-\sqrt{1-\alpha_{t}}(\nabla_{{\bm{z}}_{t}}\log p({\bm{z}}_{t})-s_{g}\cdot\nabla_{z_{t}}\|{\bm{z}}^{\prime}_{t-1}-{\bm{z}}_{t-1}^{*}\|_{1})$ (9) $\displaystyle\text{where}\quad{\bm{z}}^{\prime}_{t-1}=\sqrt{\tfrac{\alpha_{t-1}}{\alpha_{t}}}{\bm{z}}_{t}+\sqrt{\alpha_{t-1}}\left(\sqrt{\tfrac{1}{\alpha_{t-1}}-1}-\sqrt{\tfrac{1}{\alpha_{t}}-1}\right)\epsilon_{\theta}({\bm{z}}_{t},\emptyset).$ We term the noise map $c_{N}$ conditioned to produce the network output as $\epsilon_{\theta}(z_{t},c_{N})$ in Eq. 9. Additionally, we introduce a hyperparameter $s_{g}$ termed the gradient scale in Eq. 9 to compensate for the scale difference between the gradient and the network’s output. By adjusting the magnitude of $s_{g}$, we modulate the degree of the edit from the original reverse process to the DDIM inversion trajectory. Building on the findings that a substantial guidance scale can yield high-quality results (Dhariwal & Nichol, 2021; Ho & Salimans, 2021), we also introduce a guidance technique for noise map condition similar to Eq. 
2 as follows: $\tilde{\epsilon_{\theta}}({\bm{z}}_{t},c_{N})=\epsilon_{\theta}({\bm{z}}_{t},\emptyset)+s_{N}\cdot(\epsilon_{\theta}({\bm{z}}_{t},c_{N})-\epsilon_{\theta}({\bm{z}}_{t},\emptyset)),$ (10) where $s_{N}$ is the guidance scale of the noise map. To derive a latent variable conditioned on the noise map, we execute a one-step reverse process with $\tilde{\epsilon_{\theta}}({\bm{z}}_{t},c_{N})$ as follows: ${\bm{z}}^{NM}_{t-1}=\sqrt{\tfrac{\alpha_{t-1}}{\alpha_{t}}}{\bm{z}}_{t}+\sqrt{\alpha_{t-1}}\left(\sqrt{\tfrac{1}{\alpha_{t-1}}-1}-\sqrt{\tfrac{1}{\alpha_{t}}-1}\right)\tilde{\epsilon_{\theta}}({\bm{z}}_{t},c_{N}).$ (11) By performing a text-conditioned reverse process with ${\bm{z}}^{NM}_{t-1}$, a latent variable conditioned on a noise map, we derive our final latent variable that is conditioned on both the noise map and the text condition. Empirically, we find that we can approximate ${\bm{z}}^{NM}_{t}\approx{\bm{z}}^{NM}_{t-1}$. Thus, a latent ${\bm{z}}_{t-1}$ both conditioned on noise map and text embedding is determined as follows: $\displaystyle\tilde{\epsilon_{\theta}}({\bm{z}}^{NM}_{t},c_{T})=\epsilon_{\theta}({\bm{z}}^{NM}_{t},\emptyset)+s_{T}\cdot(\epsilon_{\theta}({\bm{z}}^{NM}_{t},c_{T})-\epsilon_{\theta}({\bm{z}}^{NM}_{t},\emptyset))$ (12) $\displaystyle{\bm{z}}_{t-1}=\sqrt{\tfrac{\alpha_{t-1}}{\alpha_{t}}}{\bm{z}}^{NM}_{t}+\sqrt{\alpha_{t-1}}\left(\sqrt{\tfrac{1}{\alpha_{t-1}}-1}-\sqrt{\tfrac{1}{\alpha_{t}}-1}\right)\tilde{\epsilon_{\theta}}({\bm{z}}^{NM}_{t},c_{T}).$ (13) To maintain consistent notation conventions with $s_{N}$ in Eq. 10, we designate $s_{T}$ to represent the text guidance scale, instead of $w$ in Eq. 2. In Figure 2 (c), we display the overall process of NMG. We note that NMG is a sequential process of first conditioning the noise map and conditioning the text embedding on the outcome of the step before. 
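Putting Eqs. 9-11 together, one noise-map-conditioned step can be sketched as follows. This is a simplified illustration, not the authors' implementation: it treats the network output as constant with respect to ${\bm{z}}_t$ when differentiating the L1 energy, so the gradient reduces to a scaled sign term, and the gradient and guidance scales are placeholder values.

```python
import numpy as np

def ddim_coeffs(a_t, a_prev):
    c1 = np.sqrt(a_prev / a_t)
    c2 = np.sqrt(a_prev) * (np.sqrt(1 / a_prev - 1) - np.sqrt(1 / a_t - 1))
    return c1, c2

def nmg_step(z_t, z_star_prev, eps_uncond, a_t, a_prev, s_g=10.0, s_N=1.0):
    """One noise-map-conditioned step (Eqs. 9-11), with eps_theta treated as
    frozen w.r.t. z_t so the L1 energy gradient is a scaled sign term."""
    c1, c2 = ddim_coeffs(a_t, a_prev)
    z_prime = c1 * z_t + c2 * eps_uncond                  # unconditional one-step estimate
    grad = c1 * np.sign(z_prime - z_star_prev)            # grad of ||z' - z*||_1 w.r.t. z_t
    eps_N = eps_uncond + np.sqrt(1 - a_t) * s_g * grad    # Eq. (9)
    eps_tilde = eps_uncond + s_N * (eps_N - eps_uncond)   # Eq. (10)
    return c1 * z_t + c2 * eps_tilde                      # Eq. (11): z_{t-1}^{NM}

rng = np.random.default_rng(0)
z_t = rng.normal(size=(4, 8, 8))
eps = rng.normal(size=(4, 8, 8))
z_nm = nmg_step(z_t, z_star_prev=rng.normal(size=(4, 8, 8)),
                eps_uncond=eps, a_t=0.6, a_prev=0.8)
```

When the unconditional estimate already coincides with the noise map, the energy gradient vanishes and the step reduces to a plain DDIM step; otherwise the correction pushes the latent toward the inversion trajectory. The text-conditioned update of Eqs. 12-13 would then be applied to the result.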
## 4 Experiments

Figure 3: Image editing results using Prompt-to-Prompt are shown in (a) for local editing and (b) for global editing. Results show that DDIM fails to preserve details of the input image, both NTI and NPI face challenges in maintaining spatial context, and ProxNPI exhibits limited editing capabilities. In contrast, NMG consistently produces robust results for both local and global edits.

Our method is tailored to ensure that the reconstruction path remains closely aligned with the inversion trajectory throughout the image editing process. In the sections below, we compare NMG with several methods, including DDIM (Song et al., 2020a), NTI (Mokady et al., 2023), NPI (Miyake et al., 2023), and ProxNPI (Han et al., 2023). While DDIM is not inherently formulated to prevent reconstruction divergence, we include it in our comparisons to highlight how image quality deteriorates when the reconstruction path strays from the desired trajectory.

### 4.1 Qualitative Comparison

Given that the primary goal of inversion methods is image editing, we first integrate NMG into the widely adopted Prompt-to-Prompt (Hertz et al., 2022) editing method to empirically validate our approach. For the integration with Prompt-to-Prompt, we incorporate NMG within the reconstruction path rather than the editing path. This choice stems from NMG’s characteristic of aligning a pathway with the inversion trajectory, an alignment that can be detrimental to editing. Prompt-to-Prompt performs diverse tasks by controlling attention maps from the reconstruction and editing paths. Our experiments encompass four local editing tasks: object swapping, contextual alteration, facial attribute editing, and color change, as well as four global style transfer tasks targeting oil painting, van Gogh, pointillism, and watercolor styles. Figure 3 depicts our results for both local and global editing using Prompt-to-Prompt.
Due to its limited reconstruction capability, DDIM struggles to integrate details of the input image into the edited image. Both NTI and NPI leverage null-text embeddings to retain input image details, but since the null-text embedding is a one-dimensional vector, it inherently struggles to preserve spatial context. ProxNPI, through its inversion guidance, utilizes the noise maps to follow the inversion trajectory and excels in retaining spatial context. However, this guidance is also applied to the editing path, producing constrained results. For instance, the first row of Figure 3 (b) illustrates an attempt to transition the style of the image toward “van Gogh style”. ProxNPI tends to adhere closely to the original context, whereas our proposed NMG effectively transforms the style.

Figure 4: Image editing outcomes are presented using (a) MasaCtrl and (b) pix2pix-zero. NMG’s proficiency in retaining spatial context is highlighted in (a), while its resilience to variations of DDIM inversion is showcased in (b).

### 4.2 Evaluating Property with Additional Editing Methods

#### Leveraging spatial context

The distinct advantage of NMG lies in its direct utilization of the noise maps, allowing it to preserve spatial context better than other inversion methods. To demonstrate this capability, we integrate NMG with MasaCtrl (Cao et al., 2023), an editing approach that utilizes mutual self-attention for non-rigid editing tasks. Because MasaCtrl relies on DDIM to reconstruct an input image during real image editing, we substitute DDIM with NMG and the other comparison methods to demonstrate the effectiveness of our approach. For spatially demanding edits, we undertake pose modification and viewpoint alteration. Figure 4 (a) showcases the results achieved with MasaCtrl. Due to NMG’s capability to access spatial context directly through noise maps, it yields unparalleled editing results.
The effectiveness of our method in preserving spatial context is further highlighted in Section 4.1. Although ProxNPI employs inversion guidance to capture spatial context, its reliance on a single gradient descent step to align with the inversion trajectory occasionally results in inconsistent outputs. We also compare results with ProxMasaCtrl, a method proposed by Han et al. (2023) that integrates ProxNPI with MasaCtrl using a different integration approach than our main comparison. For details and experiments, see Appendix A.2.

#### Robustness to variations of DDIM inversion

Most inversion techniques leverage DDIM inversion as their foundation to encode the image into the latent space. However, DDIM inversion can be modified to meet certain goals. To explore the robustness of our method to variations of DDIM inversion, we incorporate NMG with pix2pix-zero (Parmar et al., 2023). Pix2pix-zero, designed for zero-shot image-to-image translation, modifies DDIM inversion by adding a regularization term that calibrates the initial noise to more closely resemble Gaussian noise. In the inversion phase, we utilize the modified DDIM inversion proposed in pix2pix-zero. For the reconstruction phase, as in our experiments with MasaCtrl, we integrate NMG and the other comparison methods. Figure 4 (b) presents the editing outcomes achieved using pix2pix-zero on tasks such as cat-to-dog and dog-to-cat conversion. The results indicate that our method produces robust, spatially coherent samples even under variations of DDIM inversion.
### 4.3 Quantitative Comparison

| Models | P2P (Local) CLIP$\uparrow$ | P2P (Local) TIFA$\uparrow$ | P2P (Global) CLIP$\uparrow$ | P2P (Global) TIFA$\uparrow$ | MasaCtrl CLIP$\uparrow$ | MasaCtrl TIFA$\uparrow$ | User Study |
|---|---|---|---|---|---|---|---|
| DDIM | 0.2977 | 0.8436 | 0.3066 | 0.8349 | 0.2825 | 0.8253 | - |
| NTI | 0.2983 | 0.9125 | 0.3202 | 0.8302 | 0.2903 | 0.8277 | 10.0% |
| NPI | 0.2982 | 0.9076 | 0.3157 | 0.8117 | 0.2921 | 0.8188 | 5.0% |
| ProxNPI | 0.2951 | 0.8947 | 0.3006 | 0.8463 | 0.2922 | 0.8188 | 12.5% |
| NMG (Ours) | 0.3007 | 0.8955 | 0.3221 | 0.8991 | 0.2955 | 0.8548 | 72.5% |

Table 1: Quantitative evaluation of image editing using local and global editing with Prompt-to-Prompt, MasaCtrl, and a user study reveals that NMG consistently surpasses the other baseline methods in editing performance.

To quantitatively measure the quality of image editing paired with our method, we utilize two metrics: CLIPScore (Hessel et al., 2021) and TIFA (Hu et al., 2023). CLIPScore gauges how closely the edited image aligns with the target text prompt in the CLIP (Radford et al., 2021) embedding space. TIFA, on the other hand, evaluates the semantic alignment of given image and text prompt pairs based on visual question-answering accuracy. Our evaluation, covering all the tasks described in Section 4.1, is conducted on four local and four global editing tasks via Prompt-to-Prompt and two non-rigid image editing tasks via MasaCtrl. We edit 20 images for each task and report the average scores in Table 1. NMG consistently surpasses competing methods, an efficacy that stems from its capability to retain the spatial context of the input image without imposing constraints on the editing path. While the metrics previously discussed are commonly employed, their reliance on specific models often causes them to misalign with human perception. To address this and evaluate visual quality from a human-centric perspective, we additionally conduct a user study.
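The CLIPScore metric used above reduces to a rescaled cosine similarity in CLIP embedding space (Hessel et al., 2021). A minimal sketch, assuming the image and caption have already been embedded by a CLIP model (the embedding step itself is omitted here):

```python
import numpy as np

def clip_score(image_emb, text_emb, w=2.5):
    # CLIPScore (Hessel et al., 2021): w * max(cos(image, text), 0).
    img = np.asarray(image_emb, dtype=float)
    txt = np.asarray(text_emb, dtype=float)
    cos = float(img @ txt) / (np.linalg.norm(img) * np.linalg.norm(txt))
    return w * max(cos, 0.0)
```

The clamp at zero means unrelated (or anti-aligned) image-text pairs score 0, while a perfectly aligned pair scores 2.5.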
Fifty participants were recruited through Prolific (Prolific, 2023). For the study, 40 sets, each containing four images edited using NTI, NPI, ProxNPI, and NMG, were presented to the participants. Each set was paired with a specific editing instruction and an input image. Participants chose the image with the highest fidelity that best met the editing instructions. We note that images within each set were displayed in a randomized order. In our analysis, we identify the method most often selected in each comparison and report the selection ratio in the final column of Table 1. Notably, our NMG method was the most preferred by participants. These findings underscore the efficacy of our method in aligning closely with human perception of image quality. A screenshot of the user study is provided in Figure 12.

| Method | optimized | conditional | MSE$\downarrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | Time$\downarrow$ |
|---|---|---|---|---|---|---|
| NTI | ✓ | | 0.0127 | 0.7292 | 0.1588 | 104.85 |
| NMG | | ✓ | 0.0124 | 0.7296 | 0.1673 | 5.97 |
| NTI + NMG | ✓ | ✓ | 0.0124 | 0.7302 | 0.1668 | 107.72 |

Table 2: A quantitative evaluation of reconstruction assesses how closely each method aligns with the inversion trajectory. While combining both strategies yields the best results, NMG stands out as an efficient approach when considering both time and performance.

Figure 5: Ablation results of (a) guidance scales and (b) gradient scales. In (a), we demonstrate that the noise map guidance scale governs the influence of input image nuances, while the text guidance scale steers the extent of edits in the desired direction. In (b), we demonstrate that the gradient scale regulates the degree of alignment with the inversion trajectory.

### 4.4 Ablation Study

#### Reconstruction

For reconstruction of real images, NMG adheres to the trajectory of DDIM inversion, conditioned by noise maps. To demonstrate the effectiveness of our approach, we compare it with the optimization-based method, NTI.
Our evaluation utilizes the MS-COCO validation set (Lin et al., 2014), from which we randomly select 100 images. We assess the reconstruction quality of images using metrics such as MSE, SSIM (Wang et al., 2004), and LPIPS (Zhang et al., 2018). The results in Table 2 indicate that our method performs comparably to the optimization-based approach. Notably, the best results are achieved when both methods are used together. However, regarding reconstruction speed, our strategy outpaces its optimization counterpart by nearly a factor of 20. Taking both speed and performance into account, NMG emerges as the efficient choice for aligning with the inversion trajectory during reconstruction.

#### Guidance scale

NMG utilizes a dual-conditioning strategy comprising noise maps and text prompts. To discern the effects of each condition, we modulate their respective scales $s_{N}$ and $s_{T}$ in Eq. 10 and Eq. 12. Figure 5 (a) illustrates the influence of each guidance scale. Prioritizing text conditioning aligns outcomes more closely with the intended edits. In contrast, a prominent noise map conditioning scale retains the image’s context. However, achieving a balance between the two scales is essential for a desirable outcome. Over-relying on text conditioning, evident in the grid image’s lower left triangle, erodes the nuance of the input image. Conversely, an excessive emphasis on noise map conditioning, seen in the grid image’s upper right triangle, may hinder successful editing. The grid image’s diagonal showcases results from balanced scaling, indicating that appropriate scaling yields the best outcomes. Importantly, our experiments maintained a consistent guidance scale, underscoring its robustness across varied samples.

#### Gradient scale

To regulate adherence to the trajectory of DDIM inversion, we employ a gradient scale $s_{g}$ in Eq. 9. The effects of this scale are demonstrated by varying its magnitude and presenting the samples in Figure 5 (b).
A smaller gradient scale offers limited alignment with the inversion trajectory, leading to edited outputs that closely mirror the result of using DDIM in the reconstruction path. Conversely, an overly large gradient scale results in pronounced trajectory alignment, degrading the editing quality. Therefore, proper selection of the gradient scale is vital. Analogous to the guidance scale, we maintain a consistent gradient scale across all experiments, ensuring it remains universally effective rather than overly sensitive to individual samples.

## 5 Conclusion

In the evolving field of image editing, Noise Map Guidance (NMG) offers a notable advancement. By addressing the challenges present in current real-image editing methods using text-guided diffusion models, NMG introduces an inversion method rich in spatial context. NMG directly conditions the reverse process on noise maps, which capture the spatial nuances of the input image and ensure the preservation of its spatial context. Experimental results demonstrate NMG’s capability to preserve spatial context, a property further highlighted in spatially intensive edits. NMG is also designed as an optimization-free approach that prioritizes speed without compromising quality. NMG represents a significant step forward, suggesting further investigation and refinement in real-image editing techniques.

## Acknowledgements

We thank the ImageVision team of NAVER Cloud for their thoughtful advice and discussions. Training and experiments were done on the Naver Smart Machine Learning (NSML) platform (Kim et al., 2018). This study was supported by BK21 FOUR. T.-H. Oh was partially supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub; No.RS-2023-00225630, Development of Artificial Intelligence for Text-based 3D Movie Generation).
## Ethics statement Generative models for synthesizing images carry with them several ethical concerns, and these concerns are shared by (or perhaps exacerbated in) any generative models such as ours. Generative models, in the hands of bad actors, could be abused to generate disinformation. Generative models such as ours may have the potential to displace creative workers via automation. That said, these tools may also enable growth and improve accessibility for the creative industry. ## Reproducibility statement The source code can be found at https://github.com/hansam95/NMG. ## References * Avrahami et al. (2022) Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 18208–18218, 2022. * Avrahami et al. (2023) Omri Avrahami, Ohad Fried, and Dani Lischinski. Blended latent diffusion. _ACM Transactions on Graphics (TOG)_ , 42(4):1–11, 2023. * Bansal et al. (2023) Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 843–852, 2023. * Bar-Tal et al. (2022) Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, and Tali Dekel. Text2live: Text-driven layered image and video editing. In _European conference on computer vision_ , pp. 707–723. Springer, 2022. * Cao et al. (2023) Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. _arXiv preprint arXiv:2304.08465_ , 2023. * Chang et al. (2023) Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. 
Muse: Text-to-image generation via masked generative transformers. _arXiv preprint arXiv:2301.00704_ , 2023. * Chung et al. (2022) Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. _Advances in Neural Information Processing Systems_ , 35:25683–25696, 2022. * Dhariwal & Nichol (2021) Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. _Advances in neural information processing systems_ , 34:8780–8794, 2021. * Epstein et al. (2023) Dave Epstein, Allan Jabri, Ben Poole, Alexei A Efros, and Aleksander Holynski. Diffusion self-guidance for controllable image generation. _arXiv preprint arXiv:2306.00986_ , 2023. * Han et al. (2023) Ligong Han, Song Wen, Qi Chen, Zhixing Zhang, Kunpeng Song, Mengwei Ren, Ruijiang Gao, Yuxiao Chen, Di Liu, Qilong Zhangli, et al. Improving negative-prompt inversion via proximal guidance. _arXiv preprint arXiv:2306.05414_ , 2023. * Hertz et al. (2022) Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-or. Prompt-to-prompt image editing with cross-attention control. In _The Eleventh International Conference on Learning Representations_ , 2022. * Hessel et al. (2021) Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pp. 7514–7528, 2021. * Ho & Salimans (2021) Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In _NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications_ , 2021. * Hu et al. (2023) Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. _arXiv preprint arXiv:2303.11897_ , 2023. * Huberman-Spiegelglas et al. 
(2023) Inbar Huberman-Spiegelglas, Vladimir Kulikov, and Tomer Michaeli. An edit friendly ddpm noise space: Inversion and manipulations. _arXiv preprint arXiv:2304.06140_ , 2023. * Isola et al. (2017) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 1125–1134, 2017. * Kawar et al. (2023) Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 6007–6017, 2023. * Kim et al. (2018) Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, et al. Nsml: Meet the mlaas platform with a real-world case study. _arXiv preprint arXiv:1810.09957_ , 2018. * Kwon & Ye (2022) Gihyun Kwon and Jong Chul Ye. Diffusion-based image translation using disentangled style and content representation. In _The Eleventh International Conference on Learning Representations_ , 2022. * Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13_ , pp. 740–755. Springer, 2014. * Miyake et al. (2023) Daiki Miyake, Akihiro Iohara, Yu Saito, and Toshiyuki Tanaka. Negative-prompt inversion: Fast image inversion for editing with text-guided diffusion models. _arXiv preprint arXiv:2305.16807_ , 2023. * Mokady et al. (2023) Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. 
In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 6038–6047, 2023. * Nichol et al. (2022) Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In _International Conference on Machine Learning_ , pp. 16784–16804. PMLR, 2022. * Parmar et al. (2023) Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In _ACM SIGGRAPH 2023 Conference Proceedings_ , pp. 1–11, 2023. * Pathak et al. (2016) Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 2536–2544, 2016. * Podell et al. (2023) Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: improving latent diffusion models for high-resolution image synthesis. _arXiv preprint arXiv:2307.01952_ , 2023. * Prolific (2023) Prolific, 2023. URL https://www.prolific.co. * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pp. 8748–8763. PMLR, 2021. * Ramesh et al. (2022) Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_ , 1(2):3, 2022. * Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models.
In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pp. 10684–10695, 2022. * Saharia et al. (2022) Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. _Advances in Neural Information Processing Systems_ , 35:36479–36494, 2022. * Song et al. (2020a) Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In _International Conference on Learning Representations_ , 2020a. * Song et al. (2020b) Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _International Conference on Learning Representations_ , 2020b. * Valevski et al. (2023) Dani Valevski, Matan Kalman, Eyal Molad, Eyal Segalis, Yossi Matias, and Yaniv Leviathan. Unitune: Text-driven image editing by fine tuning a diffusion model on a single image. _ACM Transactions on Graphics (TOG)_ , 42(4):1–10, 2023. * Wallace et al. (2023) Bram Wallace, Akash Gokul, and Nikhil Naik. Edict: Exact diffusion inversion via coupled transformations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 22532–22541, 2023. * Wang et al. (2004) Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_ , 13(4):600–612, 2004. * Yu et al. (2023) Jiwen Yu, Yinhuai Wang, Chen Zhao, Bernard Ghanem, and Jian Zhang. Freedom: Training-free energy-guided conditional diffusion model. _arXiv preprint arXiv:2303.09833_ , 2023. * Zhang et al. (2018) Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. 
In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 586–595, 2018. * Zhang et al. (2022) Zhongping Zhang, Huiwen He, Bryan A. Plummer, Z. Liao, and Huayan Wang. Complex scene image editing by scene graph comprehension. 2022. URL https://api.semanticscholar.org/CorpusID:247627800. * Zhao et al. (2022) Min Zhao, Fan Bao, Chongxuan Li, and Jun Zhu. Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. _Advances in Neural Information Processing Systems_ , 35:3609–3623, 2022.

Noise Map Guidance: Inversion with Spatial Context for Real Image Editing – Supplementary Material –

## Appendix A Additional Experiments

Figure 6: Ablation results of (a) conditioning order and (b) energy function

### A.1 Additional Ablation Study

#### Conditioning Order

As detailed in Section 3.2, NMG employs a dual-conditioning approach that leverages both the noise map and text conditions sequentially. While our primary strategy conditions on the noise map before the text embeddings, it is feasible to reverse this order. Figure 6 (a) presents the outcomes of local editing under varying conditioning sequences: the second column depicts the canonical NMG sequence (noise map first, then text); the third column illustrates the reverse order. Both configurations yield promising results, underscoring NMG’s robustness to conditioning sequence variations.

#### Energy Function

In Section 3.2, we formulate the energy function using the L1 distance, diverging from the L2 distance employed by NTI (Mokady et al., 2023). Figure 6 (b) showcases editing outcomes derived from these distinct energy functions: the second column illustrates results using the L1 distance, while the third column represents those from the L2 distance. Both methodologies adeptly guide the editing direction. However, the L2-based results manifest a noticeable blurring in the image background.
These observations align with the findings of prior research (Isola et al., 2017; Pathak et al., 2016) that L2 distance often produces blurred results. Hence, for NMG, we intentionally select the L1 distance for the energy function formulation.

Figure 7: Comparison with ProxMasaCtrl

### A.2 Comparisons with ProxMasaCtrl

ProxNPI (Han et al., 2023) introduces an integration with MasaCtrl (Cao et al., 2023), termed ProxMasaCtrl. This approach incorporates NPI (Miyake et al., 2023) in the reconstruction path, while leveraging proximal guidance in the editing path to enhance reliability. Figure 7 compares the results from ProxMasaCtrl with our integration, labeled as “NMG + MasaCtrl”. Despite ProxMasaCtrl’s mitigation of unintended changes through proximal guidance, its use of NPI in reconstruction means it often misses spatial context. This lack of spatial context is evident in Figure 7 (a): the first row highlights ProxMasaCtrl’s inability to alter poses accurately, and the second row reveals missing spatial components, like a kitten’s tail, during editing. Similarly, when observing the viewpoint alteration results in Figure 7 (b), ProxMasaCtrl struggles to deliver convincing outcomes, whereas our NMG-integrated solution, drawing on its rich spatial context, consistently achieves superior results.

### A.3 Experiments Details

Within our experimental framework, we employ Stable Diffusion (Rombach et al., 2022), standardizing the diffusion steps to $T=50$ across all experiments. For the editing tasks, the parameters are set as follows: noise map guidance scale $s_{N}=10$, text guidance scale $s_{T}=10$, and gradient scale $s_{g}=5000$. For the reconstruction tasks, the configurations are set to $s_{N}=10$, $s_{T}=7.5$, and $s_{g}=10000$.
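For reference, the settings above can be collected into a small preset table; the helper below is purely illustrative (the key names and function are our own, not part of the released code):

```python
# Hyperparameter presets reported above; keys and helper are hypothetical.
NMG_PRESETS = {
    "editing":        {"T": 50, "s_N": 10.0, "s_T": 10.0, "s_g": 5000.0},
    "reconstruction": {"T": 50, "s_N": 10.0, "s_T": 7.5,  "s_g": 10000.0},
}

def nmg_preset(task):
    # Return a copy so callers can tweak scales without mutating the preset.
    return dict(NMG_PRESETS[task])
```

Returning a copy keeps per-run tweaks (e.g., sweeping $s_{g}$ for the ablation) from silently changing the shared defaults.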
Figure 8: Additional editing results of (a) global editing, (b) non-rigid editing, (c) combine NMG with NTI, and (d) additional comparison with ProxNPI ### A.4 Additional Results #### Local and Global Editing We present supplementary comparison results in Figure 10 and Figure 11, showcasing local and global editing outcomes achieved with Prompt-to-Prompt (Hertz et al., 2022). These additional results further underscore NMG’s proficiency in preserving spatial context without diminishing the quality of the editing output. Furthermore, we present the results of diverse stylization tasks in Figure 8(a), demonstrating NMG’s versatility. This evidence suggests that NMG’s capabilities extend beyond painting stylizations, encompassing a broad spectrum of styles, including anime and mosaic. #### Non-rigid Editing In conjunction with MasaCtrl (Cao et al., 2023), NMG demonstrates its capability for non-rigid editing. Figure 8(b) exhibits additional experimental results of pose modifications. It is evident that NMG effectively conducts edits with pronounced spatial information changes. #### Editing with NTI + NMG In image reconstruction, the combined use of NTI and NMG has been shown to yield superior quality. To ascertain whether this synergy also extends to image editing, we have undertaken a series of editing experiments utilizing both NTI and NMG. Figure 8(c) demonstrates that while NMG alone provides dependable editing, it occasionally lacks specific spatial details. Integrating additional information from NTI effectively compensates for this shortfall, restoring the spatial context of the input image. This integration potentially represents a significant advancement in augmenting the capabilities of NMG for image editing tasks. #### Comparisons with ProxNPI For a further comparative analysis with ProxNPI, we replicated the experiment using the same image featured in the ProxNPI study. Figure 8(d) presents these additional comparative results. 
The outcomes demonstrate that NMG yields results comparable to those of ProxNPI. However, it is essential to note that ProxNPI primarily concentrates on object swapping and does not extensively explore various editing scenarios, such as contextual alterations or stylization. As discussed in Section 4.1, ProxNPI’s capabilities are somewhat limited due to its reliance on inversion guidance. In contrast, NMG is not subject to these constraints, demonstrating its more extensive utility in various image editing tasks.

#### Reconstruction

We offer an additional quantitative comparison of reconstruction results. The left side of Table 3 presents a quantitative comparison of the reconstruction capabilities of NMG and other methods. While NMG demonstrates reconstruction results quantitatively comparable to these methods, it is important to emphasize that NMG’s primary focus is on image editing. It should be noted that high-quality reconstruction does not invariably equate to superior editing performance. ProxNPI exhibits a commendable ability to reconstruct images in Table 3. However, as illustrated in the sixth row of Figure 11, its editing capabilities are somewhat limited, underscoring the distinction between reconstruction proficiency and editing versatility.

#### Additional User Study

Evaluating edited images based on their alignment with editing instructions is crucial, yet it is equally important to examine more detailed aspects of quality. For example, in local editing tasks, it is essential to maintain the integrity of unedited regions, while in global editing, preserving the overall structure of the image is key. To explore these aspects, we conducted an additional user study similar to the approach in Section 4.3. Thirty participants, recruited via Prolific (Prolific, 2023), were presented with ten sets of images.
Participants were instructed to assess the preservation of unselected regions in local editing and the retention of the overall structure in global editing. The results of the study, displayed on the right side of Table 3, indicate that NMG performs well both in preserving unselected regions during local editing and in maintaining the overall structure in global editing scenarios.

| Method | MSE$\downarrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | User Study |
|---|---|---|---|---|
| DDIM | 0.159 | 0.368 | 0.521 | - |
| NTI | 0.013 | 0.729 | 0.159 | 0.0% |
| NPI | 0.021 | 0.661 | 0.230 | 0.0% |
| ProxNPI | 0.012 | 0.737 | 0.153 | 10.0% |
| NMG (Ours) | 0.012 | 0.723 | 0.167 | 90.0% |

Table 3: Quantitative comparison of reconstruction and additional user study

Figure 9: Illustrations of limitations

## Appendix B Limitations

Our research endeavors to enhance inversion-based editing methods, focusing on improving spatial context during the editing process. Nevertheless, our current methodology is compatible only with methods that conform to the inversion-based framework. For instance, SGC-Net (Zhang et al., 2022), which operates on a text-guided diffusion model for relationship change tasks, proves challenging to integrate with NMG due to its deviation from the inversion-based editing paradigm. Consequently, applying NMG in conjunction with SGC-Net for tasks involving relationship changes faces significant hurdles. As an alternative, we attempted relationship change tasks using MasaCtrl (Cao et al., 2023). However, Figure 9(a) depicts the ineffectiveness of this approach, as MasaCtrl is not inherently designed for relationship change tasks. Additionally, the editing capabilities of NMG are intrinsically linked to the limitations of existing inversion-based methods.
For example, Prompt-to-Prompt (Hertz et al., 2022) edits by swapping cross-attention maps between source and target texts, which constrains any structural change of the edited object to the structure of the object in the original image. This limitation is evident in Figure 9(b), which illustrates an object swap between a car and a bicycle: the disparate structures of these objects underscore the challenges NMG faces in ensuring reliable editing under such conditions. Moreover, NMG's dependence on text for image editing hinders its ability to perform precise spatial changes. As shown on the left side of Figure 9(c), NMG can effectively remove an object, such as a man, from an image, but it lacks the precision to identify and remove a specific individual, making targeted removals unfeasible. Similarly, as seen on the right side of Figure 9(c), NMG struggles to follow exact location directives when adding new objects: despite the instruction to place a boat on the right side of the beach, the edited image fails to place it there. Addressing these challenges to enable more accurate spatial control in real image editing with NMG is a vital direction for future research and development.

Figure 10: Local editing with Prompt-to-Prompt

Figure 11: Global editing with Prompt-to-Prompt

## Appendix C Extended Related Work

#### Guidance in Diffusion Models

To generate samples that abide by a certain condition, Score-SDE (Song et al., 2020b) initially introduces a formulation for conditional diffusion models. Following this formulation, Dhariwal & Nichol (2021) propose classifier guidance, which leverages an additional classifier to enhance the quality of class-conditional image synthesis. Building upon classifier guidance, Ho & Salimans (2021) propose classifier-free guidance, which obviates the need for an additional classifier. However, these guidance methods are often restricted to class or text conditions.
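The classifier-free guidance combination described above reduces to a single linear interpolation of two noise predictions. A minimal sketch (the noise arrays below are random stand-ins, not outputs of an actual diffusion model):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    """Combine unconditional and conditional noise predictions.

    eps_uncond, eps_cond: predictions from the same network run without
    and with the condition. guidance_scale w > 1 pushes the sample
    toward the condition; w = 1 recovers the conditional prediction.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy example with random stand-in predictions.
rng = np.random.default_rng(0)
eps_u = rng.standard_normal((1, 4, 64, 64))
eps_c = rng.standard_normal((1, 4, 64, 64))

guided = classifier_free_guidance(eps_u, eps_c, guidance_scale=7.5)
assert guided.shape == eps_u.shape
# w = 1 reduces to the purely conditional prediction.
assert np.allclose(classifier_free_guidance(eps_u, eps_c, 1.0), eps_c)
```

The guidance scale trades sample diversity for condition fidelity; the energy-guidance methods discussed next generalize the additive correction term beyond class or text conditions.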
EGSDE (Zhao et al., 2022) advances the field by introducing energy guidance techniques. Although EGSDE primarily focuses on image-to-image translation tasks, it notably expands the flexibility of conditioning formats for diffusion models. Similarly, DiffuseIT (Kwon & Ye, 2022), inspired by MCG (Chung et al., 2022), focuses on image translation. Successive methodologies, such as FreeDoM (Yu et al., 2023) and universal guidance (Bansal et al., 2023), broaden the scope of conditions to encompass a diverse range of generative tasks. These methodologies are adaptable to various conditional signals, including segmentation maps, style images, and facial identities. Aligning with these advancements, our approach also leverages the versatile framework of energy guidance, targeting the accurate reconstruction of input images.

## Appendix D User Study

A screenshot of our user study conducted through Prolific (Prolific, 2023) is depicted in Figure 12. Note that Prolific receives a pre-built Google form and distributes it to the participants of the study.

Figure 12: User study screenshot
* Jayasinghe et al. (2018) Jayasinghe T., et al., 2018, MNRAS, 477, 3145
* Jayasinghe et al. (2019a) Jayasinghe T., et al., 2019a, MNRAS, 485, 961
* Jayasinghe et al. (2019b) Jayasinghe T., et al., 2019b, MNRAS, 486, 1907
* Jean-Baptiste et al. (2017) Jean-Baptiste I., Di Matteo P., Haywood M., Gómez A., Montuori M., Combes F., Semelin B., 2017, A&A, 604, A106
* Jurcsik & Kovacs (1996) Jurcsik J., Kovacs G., 1996, A&A, 312, 111
* Jurić et al. (2008) Jurić M., et al., 2008, ApJ, 673, 864
* Karczmarek et al. (2017) Karczmarek P., Wiktorowicz G., Iłkiewicz K., Smolec R., Stępień K., Pietrzyński G., Gieren W., Belczynski K., 2017, MNRAS, 466, 2842
* Kervella et al. (2019) Kervella P., et al., 2019, A&A, 623, A117
* Kinman et al. (1966) Kinman T. D., Wirtanen C. A., Janes K. A., 1966, ApJS, 13, 379
* Koposov et al. (2017) Koposov S. E., Belokurov V., Torrealba G., 2017, MNRAS, 470, 2702
* Koposov et al. (2019) Koposov S. E., et al., 2019, MNRAS, 485, 4726
* Koppelman et al. (2018) Koppelman H., Helmi A., Veljanoski J., 2018, ApJ, 860, L11
* Koppelman et al. (2020) Koppelman H. H., Bos R. O. Y., Helmi A., 2020, arXiv e-prints, p. arXiv:2006.07620
* Kormendy & Kennicutt (2004) Kormendy J., Kennicutt Robert C. J., 2004, ARA&A, 42, 603
* Kukarkin (1949) Kukarkin B. V., 1949, The study of the structure and evolution of stellar systems
* Kunder et al. (2016) Kunder A., et al., 2016, ApJ, 821, L25
* Kunder et al. (2017) Kunder A., et al., 2017, AJ, 153, 75
* Kunder et al. (2020) Kunder A., et al., 2020, AJ, 159, 270
* Lacey (1984) Lacey C. G., 1984, MNRAS, 208, 687
* Lancaster et al. (2019) Lancaster L., Koposov S. E., Belokurov V., Evans N. W., Deason A. J., 2019, MNRAS, 486, 378
* Laporte et al. (2018) Laporte C. F. P., Johnston K. V., Gómez F. A., Garavito-Camargo N., Besla G., 2018, MNRAS, 481, 286
* Laporte et al. (2019) Laporte C. F. P., Minchev I., Johnston K. V., Gómez F. A., 2019, MNRAS, 485, 3134
* Layden (1994) Layden A. C., 1994, AJ, 108, 1016
* Layden (1995a) Layden A. C., 1995a, AJ, 110, 2288
* Layden (1995b) Layden A. C., 1995b, AJ, 110, 2312
* Lee et al. (1994) Lee Y.-W., Demarque P., Zinn R., 1994, ApJ, 423, 248
* Lindegren et al. (2018) Lindegren L., et al., 2018, A&A, 616, A2
* Liu (1991) Liu T., 1991, PASP, 103, 205
* Liu et al. (2020) Liu G. C., et al., 2020, ApJS, 247, 68
* López-Corredoira & Molgó (2014) López-Corredoira M., Molgó J., 2014, A&A, 567, A106
* Mackereth et al. (2019a) Mackereth J. T., et al., 2019a, MNRAS, 482, 3426
* Mackereth et al. (2019b) Mackereth J. T., et al., 2019b, MNRAS, 489, 176
* Magurno et al. (2018) Magurno D., et al., 2018, ApJ, 864, 57
* Maiolino et al. (2017) Maiolino R., et al., 2017, Nature, 544, 202
* Majewski et al. (2017) Majewski S. R., et al., 2017, AJ, 154, 94
* Marsakov et al. (2018) Marsakov V. A., Gozha M. L., Koval V. V., 2018, Astronomy Reports, 62, 50
* Marsakov et al. (2019) Marsakov V. A., Gozha M. L., Koval’ V. V., 2019, Astronomy Reports, 63, 203
* Martig et al. (2014) Martig M., Minchev I., Flynn C., 2014, MNRAS, 443, 2452
* Mateu & Vivas (2018) Mateu C., Vivas A. K., 2018, MNRAS, 479, 211
* Mateu et al. (2018) Mateu C., Read J. I., Kawata D., 2018, MNRAS, 474, 4112
* McWilliam & Zoccali (2010) McWilliam A., Zoccali M., 2010, ApJ, 724, 1491
* Miceli et al. (2008) Miceli A., et al., 2008, ApJ, 678, 865
* Michel-Dansac et al. (2011) Michel-Dansac L., Abadi M. G., Navarro J. F., Steinmetz M., 2011, MNRAS, 414, L1
* Minchev et al. (2009) Minchev I., Quillen A. C., Williams M., Freeman K. C., Nordhaus J., Siebert A., Bienaymé O., 2009, MNRAS, 396, L56
* Moetazedian & Just (2016) Moetazedian R., Just A., 2016, MNRAS, 459, 2905
* Morrison et al. (2009) Morrison H. L., et al., 2009, ApJ, 694, 130
* Muraveva et al. (2018) Muraveva T., Delgado H. E., Clementini G., Sarro L. M., Garofalo A., 2018, MNRAS, 481, 1195
* Myeong et al. (2018a) Myeong G. C., Evans N. W., Belokurov V., Sanders J. L., Koposov S. E., 2018a, ApJ, 856, L26
* Myeong et al. (2018b) Myeong G. C., Evans N. W., Belokurov V., Sanders J. L., Koposov S. E., 2018b, ApJ, 863, L28
* Naidu et al. (2020) Naidu R. P., Conroy C., Bonaca A., Johnson B. D., Ting Y.-S., Caldwell N., Zaritsky D., Cargile P. A., 2020, arXiv e-prints, p. arXiv:2006.08625
* Necib et al. (2019) Necib L., Lisanti M., Belokurov V., 2019, ApJ, 874, 3
* Nemec et al. (1994) Nemec J. M., Nemec A. F. L., Lutz T. E., 1994, AJ, 108, 222
* Nemec et al. (2011) Nemec J. M., et al., 2011, MNRAS, 417, 1022
* Nemec et al. (2013) Nemec J. M., Cohen J. G., Ripepi V., Derekas A., Moskalik P., Sesar B., Chadid M., Bruntt H., 2013, ApJ, 773, 181
* Ness et al. (2013) Ness M., et al., 2013, MNRAS, 430, 836
* Nissen & Schuster (2010) Nissen P. E., Schuster W. J., 2010, A&A, 511, L10
* Oort & Plaut (1975) Oort J. H., Plaut L., 1975, A&A, 41, 71
* Oosterhoff (1939) Oosterhoff P. T., 1939, The Observatory, 62, 104
* Oosterhoff (1944) Oosterhoff P. T., 1944, Bull. Astron. Inst. Netherlands, 10, 55
* Pedregosa et al. (2011) Pedregosa F., et al., 2011, Journal of Machine Learning Research, 12, 2825
* Pietrukowicz et al. (2015) Pietrukowicz P., et al., 2015, ApJ, 811, 113
* Pietrzyński et al. (2012) Pietrzyński G., et al., 2012, Nature, 484, 75
* Preston (1959) Preston G. W., 1959, ApJ, 130, 507
* Price-Whelan et al. (2015) Price-Whelan A. M., Johnston K. V., Sheffield A. A., Laporte C. F. P., Sesar B., 2015, MNRAS, 452, 676
* Pritzl et al. (2000) Pritzl B., Smith H. A., Catelan M., Sweigart A. V., 2000, ApJ, 530, L41
* Prudil et al. (2019a) Prudil Z., Dékány I., Catelan M., Smolec R., Grebel E. K., Skarka M., 2019a, MNRAS, 484, 4833
* Prudil et al. (2019b) Prudil Z., Skarka M., Liška J., Grebel E. K., Lee C. U., 2019b, MNRAS, 487, L1
* Prudil et al. (2019c) Prudil Z., Dékány I., Grebel E. K., Catelan M., Skarka M., Smolec R., 2019c, MNRAS, 487, 3270
* Prudil et al. (2020) Prudil Z., Dékány I., Grebel E. K., Kunder A., 2020, MNRAS, 492, 3408
* Ramos et al. (2020) Ramos P., Mateu C., Antoja T., Helmi A., Castro-Ginard A., Balbinot E., Carrasco J. M., 2020, A&A, 638, A104
* Renaud et al. (2020) Renaud F., Agertz O., Read J. I., Ryde N., Andersson E. P., Bensby T., Rey M. P., Feuillet D. K., 2020, arXiv e-prints, p. arXiv:2006.06011
* Rimoldini et al. (2019) Rimoldini L., et al., 2019, A&A, 625, A97
* Robin et al. (2012) Robin A. C., Marshall D. J., Schultheis M., Reylé C., 2012, A&A, 538, A106
* Saha (1985) Saha A., 1985, ApJ, 289, 310
* Salvatier et al. (2016) Salvatier J., Wiecki T., Fonnesbeck C., 2016, PeerJ Computer Science, 2, e55
* Sandage (1982) Sandage A., 1982, ApJ, 252, 553
* Sanders & Das (2018) Sanders J. L., Das P., 2018, MNRAS, 481, 4093
* Savino et al. (2020) Savino A., Koch A., Prudil Z., Kunder A., Smolec R., 2020, arXiv e-prints, p. arXiv:2006.12507
* Schlegel et al. (1998) Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525
* Schönrich (2012) Schönrich R., 2012, MNRAS, 427, 274
* Schönrich & Dehnen (2018) Schönrich R., Dehnen W., 2018, MNRAS, 478, 3809
* Schönrich et al. (2010) Schönrich R., Binney J., Dehnen W., 2010, MNRAS, 403, 1829
* Schönrich et al. (2011) Schönrich R., Asplund M., Casagrande L., 2011, MNRAS, 415, 3807
* Schönrich et al. (2012) Schönrich R., Binney J., Asplund M., 2012, MNRAS, 420, 1281
* Searle & Zinn (1978) Searle L., Zinn R., 1978, ApJ, 225, 357
* Sellwood & Carlberg (1984) Sellwood J. A., Carlberg R. G., 1984, ApJ, 282, 61
* Sesar et al. (2007) Sesar B., et al., 2007, AJ, 134, 2236
* Sesar et al. (2013) Sesar B., et al., 2013, ApJ, 776, 26
* Sesar et al. (2017) Sesar B., et al., 2017, AJ, 153, 204
* Sharma et al. (2020) Sharma S., et al., 2020, arXiv e-prints, p. arXiv:2004.06556
* Simion et al. (2014) Simion I. T., Belokurov V., Irwin M., Koposov S. E., 2014, MNRAS, 440, 161
* Simion et al. (2019) Simion I. T., Belokurov V., Koposov S. E., 2019, MNRAS, 482, 921
* Sit & Ness (2020) Sit T., Ness M., 2020, arXiv e-prints, p. arXiv:2006.01158
* Skowron et al. (2019) Skowron D. M., et al., 2019, Science, 365, 478
* Smith (1984) Smith H. A., 1984, PASP, 96, 505
* Smith et al. (2009) Smith M. C., et al., 2009, MNRAS, 399, 1223
* Smolec (2005) Smolec R., 2005, Acta Astron., 55, 59
* Soszyński et al. (2009) Soszyński I., et al., 2009, Acta Astron., 59, 1
* Soszyński et al. (2014) Soszyński I., et al., 2014, Acta Astron., 64, 177
* Spitzer & Schwarzschild (1951) Spitzer Lyman J., Schwarzschild M., 1951, ApJ, 114, 385
* Stetson et al. (2014) Stetson P. B., Fiorentino G., Bono G., Bernard E. J., Monelli M., Iannicola G., Gallart C., Ferraro I., 2014, PASP, 126, 616
* Strömberg (1946) Strömberg G., 1946, ApJ, 104, 12
* Suntzeff et al. (1991) Suntzeff N. B., Kinman T. D., Kraft R. P., 1991, ApJ, 367, 528
* Taam et al. (1976) Taam R. E., Kraft R. P., Suntzeff N., 1976, ApJ, 207, 201
* Thomas et al. (2019) Thomas G. F., et al., 2019, MNRAS, 483, 3119
* Tian et al. (2019) Tian H., Liu C., Xu Y., Xue X., 2019, ApJ, 871, 184
* Ting & Rix (2019) Ting Y.-S., Rix H.-W., 2019, ApJ, 878, 21
* Torrealba et al. (2015) Torrealba G., et al., 2015, MNRAS, 446, 2251
* Torrealba et al. (2019) Torrealba G., et al., 2019, MNRAS, 488, 2743
* Veilleux et al. (2020) Veilleux S., Maiolino R., Bolatto A. D., Aalto S., 2020, A&ARv, 28, 2
* Velazquez & White (1999) Velazquez H., White S. D. M., 1999, MNRAS, 304, 254
* Venn et al. (2004) Venn K. A., Irwin M., Shetrone M. D., Tout C. A., Hill V., Tolstoy E., 2004, AJ, 128, 1177
* Vivas & Zinn (2006) Vivas A. K., Zinn R., 2006, AJ, 132, 714
* Vivas et al. (2001) Vivas A. K., et al., 2001, ApJ, 554, L33
* Walker & Terndrup (1991) Walker A. R., Terndrup D. M., 1991, ApJ, 378, 119
* Watkins et al. (2009) Watkins L. L., et al., 2009, MNRAS, 398, 1757
* Wegg & Gerhard (2013) Wegg C., Gerhard O., 2013, MNRAS, 435, 1874
* Wegg et al. (2019) Wegg C., Gerhard O., Bieth M., 2019, MNRAS, 485, 3296
* Wenger et al. (2000) Wenger M., et al., 2000, A&AS, 143, 9
* Wetzel et al. (2016) Wetzel A. R., Hopkins P. F., Kim J.-h., Faucher-Giguère C.-A., Kereš D., Quataert E., 2016, ApJ, 827, L23
* Widrow et al. (2012) Widrow L. M., Gardner S., Yanny B., Dodelson S., Chen H.-Y., 2012, ApJ, 750, L41
* Wielen (1977) Wielen R., 1977, A&A, 60, 263
* Xu et al. (2015) Xu Y., Newberg H. J., Carlin J. L., Liu C., Deng L., Li J., Schönrich R., Yanny B., 2015, ApJ, 801, 105
* Xue et al. (2015) Xue X.-X., Rix H.-W., Ma Z., Morrison H., Bovy J., Sesar B., Janesh W., 2015, ApJ, 809, 144
* Yu et al. (2020) Yu S., et al., 2020, MNRAS, 494, 1539
* Zinn & West (1984) Zinn R., West M. J., 1984, ApJS, 55, 45
* Zinn et al. (2014) Zinn R., Horowitz B., Vivas A. K., Baltay C., Ellman N., Hadjiyska E., Rabinowitz D., Miller L., 2014, ApJ, 781, 22
* Zinn et al. (2020) Zinn R., Chen X., Layden A. C., Casetti-Dinescu D. I., 2020, MNRAS, 492, 2161
* Zoccali et al. (2003) Zoccali M., et al., 2003, A&A, 399, 931
* de Boer et al. (2018) de Boer T. J. L., Belokurov V., Koposov S. E., 2018, MNRAS, 473, 647

## Appendix A Photometric metallicity estimate

Most of the stars in the SOS Gaia catalogue have photometric metallicities (Clementini et al., 2019) estimated through the non-linear relation by Nemec et al. (2013). The Nemec et al. (2013) relation has been fitted to a small sample of stars and does not seem to generalise well to larger samples. In particular, it assigns high metallicities ($[Fe/H]\gtrsim-0.5$) to a group of RRL with intermediate-to-large periods and large $\Phi_{31}$; these are likely artefacts (see e.g. Fig. 17). Moreover, the relation is based on the Kepler magnitude band, so a number of auxiliary relations have to be used to translate $\Phi_{31}$ from the original band to the Gaia one (Clementini et al., 2019); additionally, the value of $\Phi_{31}$ can change if a different number of harmonics is used to decompose the light curve. For all these reasons, we decide to find a relation based solely on the light curve properties reported in the Gaia SOS catalogue.
For the purpose of our analysis we cross-matched the subsample of RRab stars with complete SOS light curve information in our Gclean catalogue (see Sec. 2) with different spectroscopic samples of RRab stars with spectroscopic metallicity estimates: Layden (1994) (84 stars), Marsakov et al. (2018) (76 stars), Nemec et al. (2013) (21 stars), and Zinn et al. (2020) (149 stars, mostly based on the sample by Dambis et al. 2013, which also contains the 84 stars in Layden 1994). Concerning the RRc stars, we follow Nemec et al. (2013) in considering the RRL in globular clusters (50 stars), assigning them the metallicity of the cluster they belong to. We use the catalogue of Gaia objects associated with globular clusters in Gaia Collaboration et al. (2018d), while the globular cluster metallicities are taken from Harris (1996). We consider the old Harris (1996) compilation because its metallicities are reported on the Zinn & West (1984) metallicity scale instead of the Carretta et al. (2009) scale used in the more recent Harris (2010) catalogue. The Zinn & West (1984) scale is the same metallicity scale as the spectroscopic catalogues, and the absolute magnitude-metallicity relation used in this work has been calibrated on this same scale (Muraveva et al., 2018).

Figure 16: Best-fit linear relation $[Fe/H]\propto a\,P+b\,\Phi_{31}$ for RRab (top panel) and RRc stars (bottom panel). The spectroscopic metallicities are from Layden (1994) and Harris (1996) for RRab and RRc stars, respectively. Periods and phase difference $\Phi_{31}$ values are from the SOS Gaia catalogue. The solid black lines show the median of the posterior distributions of the relations, while the gray lines are randomly sampled from the same distributions. The black dashed lines indicate the intrinsic scatter. The best-fit relations are given in Equations 3 and 4.

We perform a large number of tests using both linear (e.g. Jurcsik & Kovacs 1996; Smolec 2005) and non-linear relations (e.g. Nemec et al.
2013), investigating different combinations of light curve and stellar properties. Initially, we evaluate feature relevance through a random forest regression of the metallicity using the scikit-learn python module (Pedregosa et al., 2011). In practice, we consider as features: the period $P$ (fundamental period for RRab and first-overtone period for RRc), the phase difference between the third or second light curve harmonic and the fundamental one, the amplitude, the ratio between the amplitude of the third or second light curve harmonic and the fundamental one, and the stellar colour. In order to check for possible biases and artefacts we also add the number of Gaia observations, the mean $G$ magnitude, and the $RUWE$ to the group of features. For both the RRab and RRc samples, the most relevant feature is by far the period $P$, followed by the phase difference $\Phi_{31}$. We do not use the random forest method to estimate the metallicity, since our training sample is relatively small and, considering the large number of parameters involved, it would very likely produce significant variance or overfitting. Instead, we fit the relations using a Bayesian approach that takes into account the uncertainties of all the used features. In each tested relation we also allow for an intrinsic scatter. We sample the posterior of the relation parameters with the Hamiltonian MCMC technique, making use of the python module PYMC3 (Salvatier et al., 2016). The performance of the various relations is analysed considering: $i$) fit residuals; $ii$) comparison with metallicities of RRL stars in globular clusters (association with GCs from Gaia Collaboration et al. 2018d, metallicity estimates from Harris 1996); $iii$) comparison with the spectroscopic metallicities of the RRL stars in the solar neighbourhood, the halo and the bulge, taken from the cross-match with the Magurno et al. (2018), Liu et al. (2020) and Savino et al. (2020) samples (see Fig.
17); $iv$) comparison of the distance moduli derived using the $M_{G}-[Fe/H]$ relation by Muraveva et al. (2018) with the distance moduli of the Magellanic Clouds (we used the median of the distance moduli estimates taken from NED, the NASA/IPAC Extragalactic Database, http://ned.ipac.caltech.edu). We conclude that the optimal fit, both for RRab and RRc stars, is obtained with a linear relation in $P$ and $\Phi_{31}$; very little improvement can be obtained using non-linearity or adding parameters to the relation. As already noted by Jurcsik & Kovacs (1996), Smolec (2005) and Nemec et al. (2011), the major issue is a moderate systematic trend of the residuals as a function of the spectroscopic metallicities: the relation tends to overestimate (underestimate) the metallicity at the metal-poor (metal-rich) end. However, this problem is present with the same significance in more complex models, and is likely due to the lack of calibrators at both ends of the metallicity distribution. Among the various samples of RRab, the results of the fit are very similar except for the Nemec sample, which however contains a small number of stars covering a narrower range of metallicities than the other samples. Therefore, we adopt as final relations (Equations 3 and 4) the linear relation in $P$ and $\Phi_{31}$ obtained with the Layden (1994) sample (for RRab stars). This choice is motivated by the fact that it is not a collection of different catalogues and it reports a metallicity uncertainty for each star. Fig. 16 shows the best-fit relations. The metallicity interval of the fit training set ranges from -2.51 to 0.08 for the RRab stars and from -2.37 to -0.55 for the RRc stars. Only a very small portion (mostly RRc stars) of our Gclean sample (see Sec.
2.2) has metallicities extrapolated outside these ranges: 396 at the metal-poor tail (93 RRab, 303 RRc; 295 in the halo subsample, 6 in the disc subsample) and 105 at the metal-rich end (26 RRab, 79 RRc; 15 in the halo subsample, 42 in the disc subsample). These numbers are small enough to have negligible effects on our outcomes, as confirmed by the results obtained with the SA sample (see e.g. Fig. 12 and Fig. 7), which contains only $0.3\%$ of stars with extrapolated metallicities. Moreover, the fit procedure “naturally” assigns larger errors to extrapolated metallicities, and the implemented linear function limits uncontrolled behaviour outside the range of calibrators. Compared to the photometric metallicities reported in the Gaia SOS catalogue, our estimates perform better both in estimating the absolute magnitude of the stars in the Magellanic Clouds (using the $M_{G}-[Fe/H]$ relation by Muraveva et al. 2018) and in comparison with the RRL samples with spectroscopic metallicities obtained by Savino et al. (2020), Liu et al. (2020) and Magurno et al. (2018). Fig. 17 shows that the distribution of SOS photometric metallicities significantly differs from the spectroscopic one in both shape and centroid position (see also Hajdu 2019). In particular, considering the bulge sample, the SOS distribution peaks at a very metal-rich value of $[Fe/H]\approx-0.5$, while the peak of the spectroscopic metallicities is at $[Fe/H]\approx-1.5$. The photometric metallicity estimated with our relation shows a more similar distribution, with a coincident but narrower peak. The narrow distribution of the photometric metallicities is due to the already discussed problem of overestimating/underestimating the metallicities at the edges of the distribution. Considering the Liu et al. (2020) sample, our metallicity distribution is slightly offset from the spectroscopic distribution, but overall the distribution widths are very similar.
On the contrary, the SOS distribution is much broader, containing a significant number of metal-rich stars ($[Fe/H]>-1$). The peak of the distribution of our photometric metallicities is consistent with the peak of the high-resolution spectroscopic metallicities in Magurno et al. (2018), but in this case the differences in the tails are more significant. For the same sample, the SOS photometric metallicities cover the same range as the spectroscopic metallicities, but their distribution is much flatter, without a clear peak and with an over-abundance of very metal-rich stars.

Figure 17: Comparison between the distributions of photometric (this work, blue; Gaia SOS, orange) and spectroscopic (dashed black) metallicity values for three samples of RRL. Top panel: cross-match between the bulge RRL sample in Savino et al. (2020) and Gaia SOS with light curve information (212 stars). Middle panel: cross-match between the RRL sample (mostly in the halo) from Liu et al. (2020) and Gaia SOS with light curve information (3153 stars). Bottom panel: cross-match between the RRL sample (local field) from Magurno et al. (2018) and Gaia SOS with light curve information (64 stars). Vertical lines indicate the median of each distribution.

Finally, we test that the use of the constant absolute magnitude $M_{G}=0.64\pm 0.25$ for both RRab and RRc stars (see e.g. Iorio & Belokurov 2019) is a good approximation when light curve properties are not available. The associated error $\delta M_{G}=0.25$ is a robust and conservative estimate that can absorb both random and systematic uncertainties (e.g. RRL type, metallicity), giving an error on the heliocentric distance of about $13\%$.
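The pipeline implied above — a linear photometric-metallicity relation feeding an $M_{G}-[Fe/H]$ absolute-magnitude calibration, and from there a heliocentric distance via the distance modulus — can be sketched as follows. The coefficients below are placeholders, not the fitted values of Equations 3–4 or of the Muraveva et al. (2018) relation:

```python
import numpy as np

# Placeholder coefficients: the actual fitted values are given by
# Equations 3 and 4 ([Fe/H]) and Muraveva et al. (2018) (M_G).
A_P, B_PHI, C0 = -5.0, 1.0, -1.0   # hypothetical [Fe/H] = A_P*P + B_PHI*Phi31 + C0
A_FEH, B_MG = 0.3, 1.1             # hypothetical M_G = A_FEH*[Fe/H] + B_MG

def photometric_feh(period, phi31):
    """Linear photometric metallicity from period and phase difference."""
    return A_P * period + B_PHI * phi31 + C0

def absolute_mag(feh):
    """Absolute G magnitude from a linear M_G-[Fe/H] relation."""
    return A_FEH * feh + B_MG

def heliocentric_distance_kpc(g_mag, m_abs):
    """Distance from the distance modulus mu = G - M_G (extinction ignored)."""
    mu = g_mag - m_abs
    return 10.0 ** (0.2 * mu - 2.0)  # kpc

# mu = 15 corresponds to exactly 10 kpc.
assert abs(heliocentric_distance_kpc(15.64, 0.64) - 10.0) < 1e-9

# Fractional distance error driven by the absolute-magnitude uncertainty:
# delta_d / d = (ln 10 / 5) * delta_M_G ~= 0.46 * 0.25 ~= 12%,
# i.e. the ~13% level quoted above once other terms are folded in.
delta_mg = 0.25
frac_err = (np.log(10.0) / 5.0) * delta_mg
print(f"fractional distance error ~ {100 * frac_err:.0f}%")
```

The error-propagation line makes explicit why a conservative $\delta M_{G}=0.25$ translates into a roughly constant fractional distance uncertainty, independent of the star's apparent magnitude.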
## Appendix B Rotation Matrix

The rotation matrix $\mathbf{R}$ to pass from velocities in Spherical $\bm{V}_{\mathrm{sph}}=(V_{\mathrm{r}},V_{\mathrm{\theta}},V_{\mathrm{\phi}})$ or Cylindrical $\bm{V}_{\mathrm{cyl}}=(V_{\mathrm{R}},V_{\mathrm{z}},V_{\mathrm{\phi}})$ Galactocentric coordinates to the velocities in the observed frame of reference $\bm{V}_{\mathrm{sky}}=(V_{\mathrm{los}},V_{\ell},V_{b})$ can be obtained with the matrix product $\mathbf{R}=\mathbf{R}_{\mathrm{c}}\cdot\mathbf{R}_{\mathrm{s,sph/cyl}}$ (12) where $\mathbf{R}_{\mathrm{c}}$ is the rotation matrix to pass from the Galactic cartesian velocities $\bm{V}_{\mathrm{car}}=(V_{\mathrm{x}},V_{\mathrm{y}},V_{\mathrm{z}})$ to the observed velocities, while $\mathbf{R}_{\mathrm{s,sph}}$ and $\mathbf{R}_{\mathrm{s,cyl}}$ are the rotation matrices to pass from Galactic spherical and cylindrical velocities, respectively, to Galactic cartesian velocities. The matrix $\mathbf{R}_{\mathrm{c}}$ is defined as $\mathbf{R}_{\mathrm{c}}=\begin{bmatrix}\cos b\cos\ell&\cos b\sin\ell&\sin b\\\ -\sin\ell&\cos\ell&0\\\ -\sin b\cos\ell&-\sin b\sin\ell&\cos b\end{bmatrix},$ (13) while the matrices $\mathbf{R}_{\mathrm{s}}$ are defined as $\mathbf{R}_{\mathrm{s,sph}}=\begin{bmatrix}\Gamma\cos\theta\cos\phi&-\Gamma\sin\theta\cos\phi&-\Gamma\sin\phi\\\ \cos\theta\sin\phi&-\sin\theta\sin\phi&-\cos\theta\\\ \sin\theta&\cos\theta&0\end{bmatrix}$ (14) and $\mathbf{R}_{\mathrm{s,cyl}}=\begin{bmatrix}\Gamma\cos\phi&0&-\Gamma\sin\phi\\\ \sin\phi&0&-\cos\phi\\\ 0&1&0\end{bmatrix}.$ (15) The factor $\Gamma$ is equal to 1 for a right-handed Galactocentric frame of reference or to -1 for a left-handed Galactocentric frame of reference (as the one used in this work). The angular coordinates $\theta$ and $\phi$ are the zenithal and azimuthal angles respectively, while $b$ and $\ell$ are the Galactic sky coordinates (see Sec. 2.1).
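A direct NumPy transcription of Equations 12, 13 and 15 for the cylindrical case (angles in radians; $\Gamma=-1$ for the left-handed frame used in the text):

```python
import numpy as np

def R_c(l, b):
    """Eq. 13: Galactic cartesian velocities -> (V_los, V_l, V_b)."""
    return np.array([
        [np.cos(b) * np.cos(l),   np.cos(b) * np.sin(l),  np.sin(b)],
        [-np.sin(l),              np.cos(l),              0.0],
        [-np.sin(b) * np.cos(l), -np.sin(b) * np.sin(l),  np.cos(b)],
    ])

def R_s_cyl(phi, gamma=-1.0):
    """Eq. 15, as printed, with handedness factor Gamma."""
    return np.array([
        [gamma * np.cos(phi), 0.0, -gamma * np.sin(phi)],
        [np.sin(phi),         0.0, -np.cos(phi)],
        [0.0,                 1.0,  0.0],
    ])

def R_total(l, b, phi, gamma=-1.0):
    """Eq. 12: R = R_c . R_s,cyl, cylindrical -> observed velocities."""
    return R_c(l, b) @ R_s_cyl(phi, gamma)

# Sanity check: R_c is a proper rotation (orthogonal, determinant +1).
l, b = np.radians(120.0), np.radians(-35.0)
Rc = R_c(l, b)
assert np.allclose(Rc @ Rc.T, np.eye(3))
assert np.isclose(np.linalg.det(Rc), 1.0)

V_cyl = np.array([30.0, -10.0, 220.0])   # (V_R, V_z, V_phi) in km/s
V_sky = R_total(l, b, np.radians(40.0)) @ V_cyl
print(V_sky)                              # (V_los, V_l, V_b)
```

In a real application $\ell$, $b$ and $\phi$ must of course refer to the same star, i.e. the sky position and the Galactocentric azimuth are computed consistently from the star's position and an assumed solar position.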
List of acronyms:

* GAN: Generative Adversarial Network
* DCGAN: Deep Convolutional Generative Adversarial Network
* SNGAN: Spectral Normalization Generative Adversarial Network
* SA-GAN: Self-Attention Generative Adversarial Network
* BigGAN: Big Generative Adversarial Network
* WGAN: Wasserstein Generative Adversarial Network
* WGAN-GP: Wasserstein Generative Adversarial Network with Gradient Penalty
* DGMR: Deep Generative Model of Rainfall
* VAE: Variational Autoencoder
* ASL: Arterial Spin Labeling
* MRI: Magnetic Resonance Imaging
* PET: Positron Emission Tomography
* ADNI: Alzheimer’s Disease Neuroimaging Initiative
* MSL: Multiple Scale and Location
* SVC: Singular Value Clipping
* LIDC: The Lung Image Database Consortium image collection
* TCIA: The Cancer Imaging Archive
* BRATS: The Multimodal Brain Tumor Image Segmentation Benchmark
* CT: Computerized Tomography
* HU: Hounsfield Units
* DICOM: Digital Imaging and Communications in Medicine
* FID: Fréchet Inception Distance
* EM: Earth Mover
* TWR: Tournament Win Rate
* MS-SSIM: Multiple Scale Structural Similarity Index
* bMMD2: batch-wise Maximum Mean Discrepancy (squared)
* CAD: Computer-Aided Diagnostics
* TTUR: Two Time-scale Update Rule

Affiliations:

1. Department of Computer Science, University of Copenhagen, Denmark (<EMAIL_ADDRESS>)
2. Department of Oncology, Rigshospitalet, Denmark
3. Department of Neuroscience, University of Copenhagen, Denmark
4. Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark

# Explicit Temporal Embedding in Deep Generative Latent Models for Longitudinal Medical Image Synthesis

Julian Schön (1,2), Raghavendra Selvan (1,3), Lotte Nygård (2), Ivan Richter Vogelius (2,4), Jens Petersen (1,2)

###### Abstract

Medical imaging plays a vital role in modern diagnostics and treatment. The temporal nature of disease or treatment progression often results in longitudinal data. Due to the cost and potential harm, acquiring the large medical datasets necessary for deep learning can be difficult.
Medical image synthesis could help mitigate this problem. However, until now, the availability of GANs capable of synthesizing longitudinal volumetric data has been limited. To address this, we use recent advances in latent space-based image editing to propose a novel joint learning scheme that explicitly embeds temporal dependencies in the latent space of GANs. This, in contrast to previous methods, allows us to synthesize continuous, smooth, and high-quality longitudinal volumetric data with limited supervision. We show the effectiveness of our approach on three datasets containing different longitudinal dependencies: modeling a simple image transformation, breathing motion, and tumor regression, all while showing minimal disentanglement. The implementation is made available online (https://github.com/julschoen/Temp-GAN).

###### Keywords: Generative Adversarial Networks $\cdot$ Temporal Generation $\cdot$ Semantic Editing

## 1 Introduction

The use of deep learning in the medical domain has increased recently but is limited by the need for large and well-labeled datasets [14]. A potential mitigation of this problem is the use of synthetic data obtained from generative models such as Generative Adversarial Networks [7], which has been shown to enhance medical deep learning algorithms [18]. Due to the natural temporal development of, e.g., disease progression or treatment monitoring, temporal data is gathered frequently in the medical domain. Prominent cases are neurodegenerative diseases such as Alzheimer’s, and cancer-related longitudinal data collected during radiotherapy. Combining longitudinal medical data and deep learning can allow for earlier and more accurate prognosis [21], as well as disease modeling, such as tumor progression or regression [24]. GANs have successfully been used to generate temporal data, showing promising results in video generation [17, 20], precipitation forecasting [16], and medical temporal data generation [5, 1].
However, all previous approaches have either operated on 2D data [17, 20, 16, 1] or considered image-to-sequence or sequence-to-sequence generation tasks [20, 16, 5]. While temporal data generation can be done with image-to-sequence or sequence-to-sequence models, these do not allow the generation of new sequences but rely on input data that the generated data expands upon. In recent years, a line of work has focused on the interpretability of GANs by investigating linear directions in their latent spaces that result in meaningful and interpretable image transformations [6, 10, 19]. These works show that simple shifts of latent codes along a linear direction can result in powerful image transformations such as increasing memorability [6], rotating the image subject [10], or even background removal [19]. However, these approaches operate on pre-trained GANs and can only discover what is already captured by the learned representation. The following summarises the main contributions of our work:

* We propose a novel model, jointly training a GAN and a direction in the latent space corresponding to any desired image transformation for which ordered data is available. To the best of our knowledge, our approach is the first to explicitly embed image transformations, in the form of linear directions, into the GAN latent space during training. Furthermore, the proposed joint training procedure is model agnostic and works with any latent variable-based GAN.
* We use the proposed framework to embed a linear direction corresponding to temporal changes in volumetric medical data. This allows the generation of longitudinal volumetric data for the first time without requiring input to the generator in the form of images or sequences of images.
Furthermore, as the temporal sequence generation is based on a simple shift in the latent space, we can generate smooth and continuous sequences while processing each time point individually, thereby lessening memory requirements compared to processing full sequences.

## 2 Related Work

Our work considers concepts from different lines of prior research in generative modeling, which we summarise in the following.

### 2.0.1 Volumetric Data Synthesis

Despite the advances in natural image synthesis, general-purpose state-of-the-art volumetric GAN architectures remain rarely used or implemented. Existing volumetric GANs either do not utilize current advances in GAN architectures [22], are task-specific [12], trade off image resolution to allow for state-of-the-art model architectures [8], or focus on image-to-image generation tasks [13, 11]. More advanced architectures, such as the Self-Attention Generative Adversarial Network (SA-GAN) [23], which can be made more memory efficient by using residual blocks with bottlenecks as suggested with the Big Generative Adversarial Network (BigGAN) [3], are well suited to volumetric data synthesis. However, while their use is common in the natural image domain, they are generally not used for volumetric data.

### 2.0.2 Latent Based Image Editing

Latent-based image edits have been possible in GAN latent spaces since the introduction of the Deep Convolutional Generative Adversarial Network (DCGAN) [15]. Currently, most approaches use linear directions, i.e., latent walks, in the latent space of pre-trained generators corresponding to interpretable image edits [6, 19, 10]. The learned representation, however, does not necessarily contain any particular desired image transformation. InfoGAN [4] mitigates this by jointly training the GAN and an additional latent vector input to disentangle the learned representations. However, a desired image transformation cannot be explicitly enforced.
In contrast, by jointly training the generator and the desired embedding, we ensure that the desired edit is encoded in the latent space. ### 2.0.3 Temporal Synthesis Prior works on temporal synthesis have focused on video generation with GANs [17, 20, 16]. Our work was inspired by the demonstrated viability of temporal latent embeddings [17] and the use of two discriminators [17, 20, 16]. However, most approaches use generators conditioned on input images [20, 16], limiting their ability to generate new sequences. The exception is Saito et al. [17], who show the viability of temporal generation based on latent models; however, they do so on natural images and expand the latent space for the temporal modeling. In contrast, our approach is the first to operate on volumetric data, and we make this possible by embedding the temporal component into the latent space directly rather than expanding it. Approaches more similar to ours have been introduced with TR-GAN [5] and the work by Abdullah et al. [1]. TR-GAN explores many-to-many predictions of Magnetic Resonance Imaging (MRI) using a single generator. Like the previous methods, TR-GAN utilizes a generator conditioned on input sequences to predict temporal sequences; future time steps are generated directly from the input, so TR-GAN relies on a sequential generator and cannot generate new sequences. In contrast, our approach embeds temporal dependencies in the latent space. Finally, Abdullah et al. [1] propose subdividing the latent space to embed temporal dependencies of medical images. However, they rely on 2D data, crops around the region of interest, and a priori information on the time-dependent variable (e.g., accelerometer data). In contrast, our proposed method only requires the natural ordering of the temporal sequence, operates on entire volumes, and uses linear latent embeddings. ## 3 Methods In this section, we introduce the proposed framework. 
Figure 1 shows a schematic overview of our proposed model architecture. Figure 1: Schematic overview of our proposed architecture for explicit embedding in the GAN latent space. $z$ is a latent vector, $\boldsymbol{\alpha}$ is a set of shift magnitudes, $z_{1,...,t}$ are the shifted latent vectors, $x_{1,...,t}$ are shifted images corresponding to synthesized or real data, $x_{y}$ is the time step corresponding to the original latent vector $z$ if generated, or one real image otherwise, and $x_{i,j,k}$ are three time steps, where $i,j,k\in\\{1,...,t\\}$. Our base architecture takes inspiration from video GANs and uses two discriminators. $D_{im}$ takes individual volumes, i.e., time steps, and discriminates between real and synthesized data. Thus, given an underlying GAN architecture, its discriminator can be used as $D_{im}$ without further changes. Next, the architecture has a temporal discriminator $D_{temp}$, which, given three volumes, discriminates whether they are temporally consistent. Again, this is implemented using the underlying GAN architecture, tripling the input channels to allow for the input of temporal sequences. Further, we use two generators. $G_{im}$ is a traditional GAN generator taking some latent code $z\in\mathbb{R}^{L}$, where $L$ is the latent space size, and mapping it to an image $x$ without any further changes. Finally, the temporal generator $G_{temp}$ takes a latent code $z$ and shift magnitudes $\boldsymbol{\alpha}$ and shifts $z$ by magnitudes $\boldsymbol{\alpha}$ along a learned linear direction $d\in\mathbb{R}^{L}$. These shifted latent codes are individually used as input to $G_{im}$ to generate the sequence of volumes. Thus, rather than directly generating a sequence, we embed a direction in the latent space corresponding to the desired change, and by shifting with increasing $\alpha$, we can create a set of latent codes $\vec{z}$ corresponding to consecutive time steps of variable length. 
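The latent-shift mechanism of $G_{temp}$ just described can be sketched in a few lines of PyTorch. This is our own minimal illustration under stated assumptions, not the authors' released code; the class and variable names are ours.

```python
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Shift a latent code z along a learned unit-length direction d by
    magnitudes alpha (a sketch of G_temp; initialization details are assumptions)."""

    def __init__(self, latent_dim: int):
        super().__init__()
        # Learned linear direction in latent space, renormalized to unit length on use.
        self.direction = nn.Parameter(torch.randn(latent_dim))

    def forward(self, z: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
        # z: (B, L) latent codes; alphas: (B, T) shift magnitudes.
        d = self.direction / self.direction.norm()
        # Broadcast to (B, T, L): one shifted latent code per time step.
        return z.unsqueeze(1) + alphas.unsqueeze(-1) * d

# Each shifted code is then fed individually to the image generator G_im.
G_temp = TemporalGenerator(latent_dim=256)
z = torch.randn(4, 256)
alphas = torch.empty(4, 5).uniform_(-6.0, 6.0).sort(dim=1).values  # ordered shifts
z_seq = G_temp(z, alphas)  # (4, 5, 256): five consecutive time steps per sample
```

Because the sequence length is set only by how many shift magnitudes are sampled, sequences of variable length fall out of the same mechanism.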
These latent codes are individually used as input to $G_{im}$ to create the desired number of time steps in data space. Given the design of the proposed model, any GAN architecture consisting of discriminator $D$ and generator $G$ can be used by adding the temporal generator $G_{temp}$ as detailed above and using the discriminator architecture twice as $D_{im}$ and $D_{temp}$ to construct an explicitly embedding GAN. We suggest the following implementation details: Based on the work of Voynov and Babenko [19], we use a direction $d$ of unit length and $\alpha\in\mathcal{U}[-6,6]$. As the image discriminator follows standard GAN training, we suggest using the same loss for the image discriminator and generator that the base architecture uses. For temporal consistency, we optimize using adversarial learning. Let $p_{true}$ be the distribution of real data correctly ordered w.r.t. transformation magnitude and $p_{false}$ be incorrectly ordered, and $p_{z}$ the latent distribution. Further, let $\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\alpha_{3})$ be any $\alpha_{i}\in\mathbb{R}$ for which $\alpha_{1}\leq\alpha_{2}\leq\alpha_{3}$, then we define the adversarial loss objective for the temporal discriminator using the hinge loss as: $\begin{split}\underset{D_{temp}}{\min}\>\mathcal{L}_{D_{temp}}&=\underset{x\sim p_{true}}{\mathbb{E}}[\min(0,1-D_{temp}(x))]\\\ &+\underset{x\sim p_{false}}{\mathbb{E}}[\min(0,1+D_{temp}(x))]\\\ &+\underset{z\sim p_{z}}{\mathbb{E}}[\min(0,1+D_{temp}(G_{im}(G_{temp}(z,\boldsymbol{\alpha}))))]\end{split}$ (1) Given that we want to force the embedding in the latent space, we add a loss term for both $G_{im}$ and $G_{temp}$ so that $G_{temp}$ learns the direction and $G_{im}$ learns the latent representation corresponding to the desired transformation. 
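The temporal-discriminator objective in Eq. (1) can be sketched as follows. We write it in the $\mathrm{relu}(1\mp x)$ form used by standard hinge-GAN implementations; the function name and batching are our assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def d_temp_hinge_loss(d_real_ordered: torch.Tensor,
                      d_real_misordered: torch.Tensor,
                      d_fake: torch.Tensor) -> torch.Tensor:
    """Hinge loss for the temporal discriminator D_temp (sketch of Eq. 1).

    d_real_ordered:    scores for correctly ordered real triplets (p_true).
    d_real_misordered: scores for incorrectly ordered real triplets (p_false).
    d_fake:            scores for generated triplets G_im(G_temp(z, alpha)).
    """
    loss_real = F.relu(1.0 - d_real_ordered).mean()           # ordered reals above +1
    loss_misordered = F.relu(1.0 + d_real_misordered).mean()  # misordered reals below -1
    loss_fake = F.relu(1.0 + d_fake).mean()                   # generated triplets below -1
    return loss_real + loss_misordered + loss_fake
```

Misordered real triplets act as a second class of negatives, which is what forces $D_{temp}$ to judge ordering rather than realism alone.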
Thus, we get: $\underset{G_{im},G_{temp}}{\min}\>\mathcal{L}_{G}=\underset{z,z^{\prime}\sim p_{z}}{\mathbb{E}}\big{[}-D_{temp}(G_{im}(G_{temp}(z,\boldsymbol{\alpha})))+L_{GAN}(D_{im}(G_{im}(z^{\prime})))\big{]}$ (2) where $L_{GAN}$ is the applicable GAN loss of the base architecture, and $\boldsymbol{\alpha}$ and $p_{z}$ are defined as above. Intuitively, the temporal discriminator learns to discriminate based on the transformation we aim to embed. Therefore, the generators trying to fool the temporal discriminator must generate data that exhibits the correct transformation and does not change the scene (e.g., patient) markedly. We evaluate image quality using visual inspection and slice-wise Fréchet Inception Distance (FID), and temporal consistency using visual inspection. The shifted images should not be of worse image quality than images resulting from directly sampled latent codes. Thus, the basic image quality measures are also used to assess the shifted images. ## 4 Experiments ### 4.0.1 Datasets We evaluate the proposed architecture on three volumetric thoracic Computerized Tomography (CT) datasets. * • The Lung Image Database Consortium image collection (LIDC) [2]. We preprocess LIDC by limiting the intensity range to $[-1000,2000]$ Hounsfield Units (HU) and normalizing to a range of $[-1,1]$ using min-max scaling. We resize the data to $128\times 128\times 128$ voxels to limit computational demands. To test the approach, we introduce a simple image transformation corresponding to shifts along the $x$-axis, with the shift magnitude randomly selected from $[-32,32]$ voxels. * • Breathing Motion. The dataset, collected at Rigshospitalet, Copenhagen, Denmark, consists of longitudinal thoracic CT scans showing the breathing phases of $499$ non-small cell lung cancer patients treated with radiotherapy between $2009$ and $2017$. Each data point generally consists of $10$ scans, where the first five correspond to exhaling and the following five to inhaling. 
We only consider the scans corresponding to exhaling. We limit the range to $[-1000,500]$ HU and normalize to a range of $[-1,1]$ using min-max scaling. We resize the data to $128\times 128\times 64$ voxels to limit computational demands. * • Tumor Regression. The final dataset consists of $256$ patients, a subset of the patients from the Breathing Motion dataset, for which at least $10$ daily treatment thoracic cone beam CT scans were available. It shows tumor regression during radiotherapy. We apply the same preprocessing as for the breathing motion dataset. For all three datasets, we use $90\%$ of the patients for training and $10\%$ for testing. Figure 2 shows an example of the breathing motion and tumor regression datasets. Figure 2: Examples of the breathing motion and tumor regression datasets. The presented examples are after preprocessing. For breathing motion, the center volume shows the most exhaled state, while those to the left correspond to exhaling and those to the right to inhaling. For tumor regression, we estimate the slice corresponding to the center of the tumor manually. The tumor is marked with the red bounding box. ### 4.0.2 Implementation Details We run all experiments on two Nvidia RTX A6000 GPUs. We use Python $3.9$ and PyTorch $1.11.0$ for the implementation. The first author performed all visual inspections without formal education in medical image evaluation. We adapt SA-GAN [23] to volumetric data and use it as our base GAN architecture, following the parameters (e.g., learning rate and optimizer) suggested by the authors unless otherwise specified. Throughout all experiments, we sample $\alpha\sim\mathcal{U}[-6,6]$ with a linear direction $d$ of unit length based on Voynov and Babenko [19], and we arbitrarily choose to train for $5000$ iterations. We use a batch size of $8$ for the LIDC dataset and a batch size of $16$ for the other two to fit the memory of the used GPUs. 
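The preprocessing applied to all three datasets (HU windowing, min-max scaling to $[-1,1]$, resizing) can be sketched as below. The interpolation order is our assumption, as the text does not specify the resampling method.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu: np.ndarray,
                  hu_range=(-1000.0, 500.0),
                  target_shape=(128, 128, 64)) -> np.ndarray:
    """Clip a CT volume to an HU window, min-max normalize to [-1, 1],
    and resize to the target resolution (linear interpolation assumed)."""
    lo, hi = hu_range
    v = np.clip(volume_hu.astype(np.float32), lo, hi)
    v = 2.0 * (v - lo) / (hi - lo) - 1.0              # min-max scaling to [-1, 1]
    factors = [t / s for t, s in zip(target_shape, v.shape)]
    return zoom(v, factors, order=1)                  # resample to target shape
```

For the LIDC configuration, the window `(-1000, 2000)` and target shape `(128, 128, 128)` would be passed instead.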
For the LIDC dataset, we use a latent space size of $L=512$ and reduce it to $L=256$ for the other two, as the resolution of those datasets is halved as well. ## 5 Results We present the final, unbiased estimate of the image quality of the proposed model on all three datasets in Table 1. Note that some image transformations are more readily visible in video format; the generated volumes are therefore provided as videos in the accompanying GitHub repository.

| | FID ax. | FID cor. | FID sag. |
|---|---|---|---|
| LIDC | $93.8\pm 1.0$ | $54.3\pm 0.5$ | $30.0\pm 1.2$ |
| Breathing Motion | $139.2\pm 2.7$ | $79.8\pm 1.0$ | $99.6\pm 1.4$ |
| Tumor Regression | $82.2\pm 2.7$ | $42.3\pm 1.4$ | $53.4\pm 0.8$ |

Table 1: Image quality of the temporal GAN trained on all three datasets. All models are trained on $90\%$ of the scans split patient-wise and evaluated on $10\%$. The FID scores are calculated using random time steps of real and synthesized data. Examples of generated volumes of the model trained on the full LIDC dataset are given in Figure 3. Figure 3: Four examples of generated volumes with embedded shift for the proposed model on the LIDC dataset. The center volume corresponds to the original latent vector. All images show the center slice for the sagittal, coronal, and axial views. The resulting volumes are of high quality, and we observe the desired image transformation embedded as a shift in the latent space. The transition between shifted images is smooth, with minimal entanglement. Next, we consider the breathing motion dataset. Figure 4 presents generated volumes for the breathing motion dataset. Figure 4: Four examples of generated volumes with embedded shift for the proposed model on the breathing motion dataset. The center volume corresponds to the original latent vector. All images show the center slice for the sagittal, coronal, and axial views. We observe good image quality with sufficient detail and realistic anatomy. 
Considering the embedding, we observe that breathing motion is captured well, and the temporal dependencies of breathing are embedded in the latent space. The clearest change when moving along the embedded direction is the diaphragm moving upward while exhaling. This is also the most obvious change observable in the real data. Additionally, we observe the stomach or rib cage contracting while exhaling. Lastly, we observe very high scene consistency: the generated scan of the patient does not change markedly while moving the latent code along the embedded direction. Thus, the patient’s anatomy is preserved, and the changes induced by moving along the embedded direction are restricted to breathing-related changes. Finally, we consider the model trained on the tumor regression dataset. We present examples of generated volumes in Figure 5. Figure 5: Four examples of generated volumes with embedded shift for the proposed model on the tumor regression dataset. The center volume corresponds to the original latent vector. For all images, we try to locate the center of the tumor for the sagittal, coronal, and axial views. The image quality of the generated volumes is good, showing details in the vessel, tissue, and bone structure. We observe volumes both with and without tumors. Generated volumes containing tumors show them in varied locations, shapes, and sizes. If tumors are present in the generated volumes, the temporal generation results in tumors shrinking in size; that is, the model successfully embeds temporal tumor regression in image space as a linear direction in the latent space. When traversing the embedded direction, there is minimal change to the volumes other than the reduction in tumor size. Further, no clear change occurs if no tumor is present in the volume. Thus, the direction models tumor regression in a disentangled manner. 
## 6 Discussion ### 6.0.1 Temporal Generation We can use a simple learned direction to generate temporal sequences of data using only a single non-temporal generator, which, to the best of our knowledge, has not been shown previously. Our proposed method of jointly training such a direction and the GAN shows distinct benefits over discovering directions in pretrained generators. From visual inspection, our method shows almost no entanglement and ensures that even complex transformations, such as tumor regression, are enforced to be present as a linear direction in the latent space. Our model produces high-quality synthetic data with controllable, enforced image transformations and smooth continuous generation on three datasets. In particular, on the breathing motion and tumor regression datasets, we observe clearer changes than in the real data (e.g., movement of the diaphragm in the breathing motion dataset). This indicates that the proposed method isolates the signal corresponding to temporal development well. Compared to other temporal generation approaches, our model does not require conditioning of the generator or a sequential generator. As a consequence, we can easily vary the architecture and benefit from future advances in GAN architectures. The results we observe on the tumor regression dataset deserve the most attention. While traditional tumor regression models might be more patient- and therapy-specific [9], the scale we operate on is novel. Furthermore, unlike previous methods, we generate entire volumes and show that we can model tumor regression as part of the image generation process. ### 6.0.2 Limitations We use the parameters suggested for SA-GAN. While this is likely a good choice for the image generation component of our proposed model, the temporal aspects could perform better with different parameter and architecture choices. 
Moreover, as there is little prior investigation into evaluating temporal GANs, we needed to devise our own evaluation strategy; further development in this area would likely be beneficial. ### 6.0.3 Impact & Future Work As our method is trained based only on the order of the transformation magnitude, it is reasonable to assume that it can be applied to any transformation where such an ordering exists. Further, our method offers many practical applications for downstream machine learning tasks by providing a way to synthesize controllable data, e.g., with or without tumor, in a domain where annotations are costly and difficult to obtain. We provide a proof of concept of unsupervised tumor segmentation using our method in Figure 6. Figure 6: Proof of concept of unsupervised tumor segmentation using our method. We generate a future time point for a given volume, take the difference image, and threshold it with a threshold of $0.2$. We then apply two erosion operations followed by two dilation operations to remove noise and are left with the segmentation mask. Beyond the direct application to tumor segmentation, the difference image might also visualize treatment effects, such as weight loss, which is a common side effect. Further practical applications directly benefiting medical image analysis will likely arise with further investigation of our method. Given the clear and isolated signal we observe, natural applications of our work could be in the visualization and understanding of changes happening in temporal sequences. Additionally, investigating how much the sampled temporal development reflects patient-specific as opposed to therapy-specific aspects would offer valuable insights. Finally, we see the investigation of embedding non-temporal dependencies as one of the most promising future directions. Embedding transformations across patients, e.g., disease severity, would allow for further fine-grained control over data synthesis in the medical domain. 
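The segmentation proof of concept from Figure 6 (difference image, threshold of $0.2$, two erosions followed by two dilations) can be sketched as follows. The default 6-connected structuring element and the use of an absolute difference are our assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def tumor_mask_from_difference(vol_t0: np.ndarray,
                               vol_t1: np.ndarray,
                               threshold: float = 0.2,
                               n_ops: int = 2) -> np.ndarray:
    """Sketch of the unsupervised segmentation in Figure 6.

    vol_t0 is a generated volume in [-1, 1]; vol_t1 is a generated future
    time point of the same latent code shifted along the embedded direction.
    """
    diff = np.abs(vol_t0 - vol_t1)      # tumor regression shows up as change
    mask = diff > threshold
    for _ in range(n_ops):              # erosions remove speckle noise
        mask = binary_erosion(mask)
    for _ in range(n_ops):              # dilations restore the region's extent
        mask = binary_dilation(mask)
    return mask
```

Since only the tumor region changes appreciably along the embedded direction, the thresholded difference localizes it without any annotations.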
## 7 Conclusion In this work, we investigate the possibility of explicitly embedding temporal dependencies in the latent space of generative models to generate longitudinal volumetric data. We generate controllable longitudinal data with minimal entanglement and show that linear directions in the latent space are sufficient to generate temporal sequences from a non-temporal generator. Due to the simplicity of the linear latent walk, we can generate continuous and smooth sequences of varying lengths, unlike other proposed temporal GANs. We show that our framework can generate complex temporal dependencies, e.g., breathing motion or tumor regression, as part of the image synthesis task on medical data. The method could potentially improve unsupervised tumor segmentation, disease-aware image augmentation, and radiotherapy planning. Further, our method can embed any temporal dependency with limited supervision and thus provides utility beyond what we explore in this work. ### 7.0.1 Acknowledgements The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. The authors would like to thank Anna Kirchner for help in the preparation of the manuscript. Jens Petersen is partly funded by research grants from the Danish Cancer Society (grant no. R231-A13976) and Varian Medical Systems. ## References * [1] Abdullah, Holler, M., Kunisch, K., Landman, M.S.: Nonlinear motion separation via untrained generator networks with disentangled latent space variables and applications to cardiac MRI. 
arXiv abs/2205.10367 (2022) * [2] Armato III, S.G., McLennan, G., Bidaut, L., McNitt-Gray, M.F., Meyer, C.R., Reeves, A.P., Zhao, B., Aberle, D.R., Henschke, C.I., Hoffman, E.A., et al.: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans. Medical Physics 38(2), 915–931 (2011) * [3] Brock, A., Donahue, J., Simonyan, K.: Large scale GAN training for high fidelity natural image synthesis. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net (2019) * [4] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. pp. 2172–2180 (2016) * [5] Fan, C.C., Peng, L., Wang, T., Yang, H., Zhou, X.H., Ni, Z.L., Wang, G., Chen, S., Zhou, Y.J., Hou, Z.G.: TR-GAN: Multi-session future MRI prediction with temporal recurrent generative adversarial network. IEEE Transactions on Medical Imaging pp. 1–1 (2022) * [6] Goetschalckx, L., Andonian, A., Oliva, A., Isola, P.: GANalyze: Toward visual definitions of cognitive image properties. In: 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. pp. 5743–5752. IEEE (2019) * [7] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems. vol. 27. Curran Associates, Inc. (2014) * [8] Hong, S., Marinescu, R.V., Dalca, A.V., Bonkhoff, A.K., Bretzner, M., Rost, N.S., Golland, P.: 3D-StyleGAN: A style-based generative adversarial network for generative modeling of three-dimensional medical images. 
In: Deep Generative Models, and Data Augmentation, Labelling, and Imperfections - First Workshop, DGM4MICCAI 2021, and First Workshop, DALI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings. Lecture Notes in Computer Science, vol. 13003, pp. 24–34. Springer (2021) * [9] Huang, Z., Mayr, N.A., Yuh, W.T., Lo, S.S., Montebello, J.F., Grecula, J.C., Lu, L., Li, K., Zhang, H., Gupta, N., Wang, J.Z.: Predicting outcomes in cervical cancer: A kinetic model of tumor regression during radiation therapy. Cancer Research 70(2), 463–470 (2010) * [10] Jahanian, A., Chai, L., Isola, P.: On the "steerability" of generative adversarial networks. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net (2020) * [11] Lan, H., Initiative, A.D.N., Toga, A.W., Sepehrband, F.: Three-dimensional self-attention conditional GAN with spectral normalization for multimodal neuroimaging synthesis. Magnetic Resonance in Medicine 86(3), 1718–1733 (2021) * [12] Li, F., Huang, W., Luo, M., Zhang, P., Zha, Y.: A new VAE-GAN model to synthesize arterial spin labeling images from structural MRI. Displays 70, 102079 (2021) * [13] Lin, W.: Synthesizing missing data using 3D reversible GAN for Alzheimer’s disease. In: ISAIMS 2020: International Symposium on Artificial Intelligence in Medical Sciences, Beijing, China, September, 2020. pp. 208–213. ACM (2020) * [14] Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., van der Laak, J.A.W.M., van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Medical Image Analysis 42, 60–88 (2017) * [15] Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. 
In: 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings (2016) * [16] Ravuri, S., Lenc, K., Willson, M., Kangin, D., Lam, R., Mirowski, P., Fitzsimons, M., Athanassiadou, M., Kashem, S., Madge, S., et al.: Skilful precipitation nowcasting using deep generative models of radar. Nature 597(7878), 672–677 (2021) * [17] Saito, M., Matsumoto, E., Saito, S.: Temporal generative adversarial nets with singular value clipping. In: IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. pp. 2849–2858. IEEE Computer Society (2017) * [18] Sandfort, V., Yan, K., Pickhardt, P.J., Summers, R.M.: Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks. Scientific Reports 9, 16884 (2019) * [19] Voynov, A., Babenko, A.: Unsupervised discovery of interpretable directions in the GAN latent space. In: Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event. Proceedings of Machine Learning Research, vol. 119, pp. 9786–9796. PMLR (2020) * [20] Wang, Y., Bilinski, P., Brémond, F., Dantcheva, A.: ImaGINator: Conditional spatio-temporal GAN for video generation. In: IEEE Winter Conference on Applications of Computer Vision, WACV 2020, Snowmass Village, CO, USA, March 1-5, 2020. pp. 1149–1158. IEEE (2020) * [21] Wen, J., Thibeau-Sutre, E., Diaz-Melo, M., Samper-González, J., Routier, A., Bottani, S., Dormont, D., Durrleman, S., Burgos, N., Colliot, O.: Convolutional neural networks for classification of Alzheimer’s disease: Overview and reproducible evaluation. Medical Image Analysis 63, 101694 (2020) * [22] Wu, J., Zhang, C., Xue, T., Freeman, B., Tenenbaum, J.: Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. 
In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. pp. 82–90 (2016) * [23] Zhang, H., Goodfellow, I.J., Metaxas, D.N., Odena, A.: Self-attention generative adversarial networks. In: Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA. Proceedings of Machine Learning Research, vol. 97, pp. 7354–7363. PMLR (2019) * [24] Zhu, H.B., Xu, D., Ye, M., Sun, L., Zhang, X.Y., Li, X.T., Nie, P., Xing, B.C., Sun, Y.S.: Deep learning-assisted magnetic resonance imaging prediction of tumor response to chemotherapy in patients with colorectal liver metastases. International Journal of Cancer 148(7), 1717–1730 (2021)
# Van der Waals heterostructure configuration effect on exciton and thermoelectric characteristics T.E. Ada K.N. Nigussa<EMAIL_ADDRESS>Department of Physics, Dilla University, P.O. Box 419, Dilla, Ethiopia Department of Physics, Addis Ababa University, P.O. Box 1176, Addis Ababa, Ethiopia ###### Abstract A GW calculation based on a truncated Coulomb interaction with an added small-q limit correction was applied to 2D van der Waals hetero-layered structures, and the Kane dispersion model was used to determine the accurate band gap edge. All ab initio calculations were performed with the gpaw package. Our findings show that layering the same or different types of atoms with a vacuum in between has an enormous impact on band alignment, effective mass of holes and electrons, exciton binding energy, thermoelectric characteristics, density of hot electrons, and electronic band gaps. Thus, layered interactions of the same kind constrain the configuration to have a direct band gap, whereas different kinds allow for an indirect gap, resulting in an indirect exciton with a greater binding energy due to the band confinement effect. Also, the most heterogeneous configurations allow hot electrons to relax by creating unoccupied states; as a result, a high density of hot electrons distribute themselves over unoccupied states, making the system less sensitive to temperature; in other words, electron-hole pairing persists longer as the temperature rises. Therefore, among several possible configurations, $\mathrm{MoSe_{2}-MoS_{2}}$ and $\mathrm{MoSSe-MoSSe}$ have an enhanced optical gap plus multiple excitation peaks, similar to the experimental photoluminescence spectrum, as well as an improved Seebeck coefficient due to low thermal conductivity and the ability to generate a higher density of states and empty states. ###### keywords: 2D van der Waals heterostructure $\cdot$ Effective mass $\cdot$ Thermoelectric materials $\cdot$ Exciton $\cdot$ Seebeck coefficient $\cdot$ Parabolic constant $\cdot$ Kane dispersion equation. 
## 1 Introduction 2D van der Waals heterostructures are a preferred choice for high-efficiency solar devices. A 2D material can exhibit remarkable features, particularly in optics and electron transport, due to strong covalent bonding that provides in-plane stability, van der Waals forces that bind the stack together, and electron-hole pairing in adjacent layers. Interlayer electron-hole pairing occurs in a 2D van der Waals heterostructure when there is free space between the bottom of the conduction band and the top of the valence band, and forces act between atoms or electron density sections. Thus, interlayer electron-hole pairing in varied configurations is expected to profoundly impact photon absorption and optical characteristics in solar cells. Hot electrons released during solar cell heating, on the other hand, have the ability to affect optical characteristics across multiple layers. However, a thorough understanding of interlayer electron-hole pairing and hot electrons in relation to the configuration of a 2D van der Waals heterostructure is still lacking, due to inaccurate estimates of the band gap and band alignment at interfaces [1]. Density functional theory calculations are notoriously challenging for determining band gaps and band alignment at interfaces of 2D semiconductors. To get reliable band energies, many-body perturbation theory, such as the GW approximation, and the Bethe-Salpeter Equation (BSE) should be used, with exciton effects incorporated [2]. However, the computational expense of such approaches renders them inappropriate for van der Waals heterostructures containing more than a few stacked layers. 
Here we carried out a $\mathrm{G_{0}W_{0}}$ calculation with a 2D truncated Coulomb interaction and an added analytic correction for the small $\mathrm{q}$ limit, which improves convergence for less dense k-point grids [3]. To circumvent band alignment problems, we employed the parabolic-band approximation, which breaks down at high photon (electron) energies but can be corrected in an approximate way similar to Ridley’s method for hot-electron transport [4, 5]. We explore the interlayer electron-hole pairing and hot electrons of a van der Waals heterostructure of a 2H-bilayer ($\mathrm{MoS_{2}|MoSe_{2}}$) (see Fig 1) in various configurations at the highest symmetry point of the first Brillouin zone, which is where the lowest bound exciton is situated (see Fig 2). (a) $\mathrm{MoS_{2}-MoS_{2}}~{}(a=3.17~{}\AA)$ (b) $\mathrm{MoSeS-MoS_{2}}~{}(a=3.21~{}\AA)$ (c) $\mathrm{MoSe_{2}-MoS_{2}}~{}(a=3.25~{}\AA)$ (d) $\mathrm{MoSe_{2}-MoSe_{2}}~{}(a=3.32~{}\AA)$ (e) $\mathrm{MoSSe-MoSe_{2}}~{}(a=3.28~{}\AA)$ (f) $\mathrm{MoSSe-MoSeS}~{}(a=3.25~{}\AA)$ (g) $\mathrm{MoSeS-MoSSe}~{}(a=3.25~{}\AA)$ (h) $\mathrm{MoSSe-MoSSe}~{}(a=3.25~{}\AA)$ Figure 1: Possible heterostructured stacking combinations of $\mathrm{MoS_{2}}$ and $\mathrm{MoSe_{2}}$, colored by constituent elements: yellow for sulfur, gray for molybdenum, and green for selenium. For interpretation of the color code references, refer to the web version of this article. ## 2 Computational method All ab initio calculations were performed using the gpaw code [6], and the projector augmented wave method is used for the Kohn-Sham self-consistent calculations [7, 8]. The wave functions are expanded in plane waves with a cut-off energy below 600 eV, with a double-zeta polarized atomic orbital basis set [9]. The $k$-points within the Brillouin zone (BZ) are chosen according to the Monkhorst-Pack scheme [10], with a $\mathbf{k}$-point mesh of 18$\times$18$\times$2. 
The interactions of the valence electrons with the core electrons and nuclei are treated using an approximation within projector augmented wave (PAW) data sets [9, 11]. Geometry optimizations of the atomic coordinates and the unit cell degrees of freedom are done using the implemented stress tensor [12, 13, 14, 15]. The convergence criterion for the forces was set at 0.005 eV/Å. The exchange-correlation energies are approximated within the generalized gradient approximation of PBE [16], and a k-mesh of 2$\times$2$\times$1 is used in the geometry relaxation calculations. These are applied to a 2H-heterostructure consisting of two monolayers with a 2.0 $\AA$ vacuum in between; the surface is isolated from external interactions by a 10 $\AA$ vacuum along the c-axis direction. In the calculations of the band gaps, we have implemented the exchange-correlation functionals GLLB-SC, $\mathrm{vdW-DF2}$, and PBE within the GW approximation [17, 18]. Calculations of the density of states (DOS) and the group velocity [19], $\mathrm{v(\varepsilon)}$, are done from a graph of $\varepsilon\rm_{k}$ versus $k$, and the DOS at a particular energy $\varepsilon$ is given as $\mathrm{N({\varepsilon})=\sum\limits_{N}w\rm_{N}{\delta}({\varepsilon}-{\varepsilon\rm_{N}})},$ (1) and $\mathrm{v({\varepsilon})=\frac{k^{2}}{2\pi^{2}}\frac{1}{N({\varepsilon})\hbar}}$ (2) where $\mathrm{N=(k,s)}$ is an occupation state corresponding to a $\mathrm{k}$ point and a spin $s$, and $\mathrm{w\rm_{N}}$ is a weight factor. For a better match to the experimental results, we used $\mathrm{vdW-DF2}$ [20, 21, 22] for the layer interaction and GLLB-SC for London force exclusion, to distinguish the effect of the van der Waals interaction force. 
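Eq. (1) can be evaluated numerically by broadening each delta function into a normalized Gaussian, a common practice in DOS post-processing; the width $\sigma$ and the example eigenvalues below are our own illustrative choices, not values from this work.

```python
import numpy as np

def dos(energies: np.ndarray, weights: np.ndarray,
        grid: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """N(e) = sum_N w_N * delta(e - e_N) (Eq. 1), with the delta function
    replaced by a normalized Gaussian of width sigma (in eV)."""
    diff = grid[:, None] - energies[None, :]
    gauss = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return gauss @ weights

# Sanity check: integrating the broadened DOS recovers the total weight.
grid = np.linspace(-2.0, 2.0, 2001)
N = dos(np.array([-0.5, 0.3]), np.array([1.0, 2.0]), grid)
dx = grid[1] - grid[0]
total = N.sum() * dx  # recovers the total weight, here 1.0 + 2.0
```

In practice the `energies` and `weights` arrays would come from the converged Kohn-Sham eigenvalues $\varepsilon_{N}$ and k-point weights $w_{N}$.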
We used the parabolic-band approximation to study the influence of band alignment at interfaces; the $\mathrm{E(k)}$ relation may be stated directly from Kane’s theory [23], as is often done in the literature, by $\mathrm{\frac{\hbar^{2}k^{2}}{2m^{*}}=E(1+\alpha E),\text{~{}~{}where~{}}\alpha\approx\frac{1}{E_{g}}}$ (3) where $\mathrm{m^{*}}$ is the effective mass at the band edge, $\mathrm{E_{g}}$ is the band gap, and the energy $\mathrm{E}$ is measured from the band edge. ## 3 Results and discussion The optimized lattice constants for $\mathrm{2H-MoS_{2}}$ and $\mathrm{2H-MoSe_{2}}$ are $3.17~{}\AA$ and $3.32~{}\AA$, respectively, as shown in Fig. 1, in excellent agreement with the experimental values of $3.18~{}\AA$ and $3.33~{}\AA$ reported in Ref. [24]. The lattice constants of the remaining structures fall between these two values (3.17–3.32 $\AA$, cf. Fig. 1). Fig. 2 illustrates that the quasi-particle band gap estimated using the Bethe-Salpeter equation (BSE) equals the sum of the optical gap and the exciton binding energy; the BSE is a popular choice because it yields an accurate band gap. The $\mathrm{G_{0}W_{0}}$ method [25], on the other hand, yielded similar band-gap values. One can note that $\mathrm{E_{B}+E_{g}^{opt}=E_{g}^{GW}-E_{g}^{KS}}$; the exciton binding energy and the optical gap are often difficult to distinguish because there is no sharp line between photoabsorption and the resonant excitation energy. The estimated exciton binding energies for $\mathrm{2H-MoS_{2}}$ and $\mathrm{2H-MoSe_{2}}$ are $\mathrm{1.99~{}eV}$ and $\mathrm{1.68~{}eV}$, respectively, in good agreement with the experimental ranges of $\mathrm{1.96-1.99~{}eV}$ and $\mathrm{1.682-1.712~{}eV}$ reported in Refs. [26, 27, 28, 29]. We therefore expect $\mathrm{E_{B}+E_{g}^{opt}}$ for the remaining structural configurations to lie between the two ranges specified above.
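The Kane relation in Eq. (3) is quadratic in $E$ and can be inverted in closed form, which makes its deviation from the parabolic limit easy to check numerically. The sketch below uses illustrative values of $m^*$ and $E_g$, not entries from the tables:

```python
import numpy as np

HBAR = 1.0  # illustrative units

def kane_energy(k, m_star, alpha):
    """Invert Eq. (3), hbar^2 k^2 / (2 m*) = E (1 + alpha E), taking the root E >= 0."""
    x = HBAR ** 2 * k ** 2 / (2.0 * m_star)
    return (-1.0 + np.sqrt(1.0 + 4.0 * alpha * x)) / (2.0 * alpha)

k = np.linspace(0.0, 1.0, 101)
m_star, E_g = 0.5, 2.0       # illustrative values, not fitted to MoS2/MoSe2
alpha = 1.0 / E_g            # alpha ~ 1/Eg, as in the text

E_kane = kane_energy(k, m_star, alpha)
E_parab = HBAR ** 2 * k ** 2 / (2.0 * m_star)   # parabolic limit (alpha -> 0)
```

Because $\alpha>0$, the nonparabolic branch always lies at or below the parabolic one, reflecting the band flattening away from the edge.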
Figure 2: The fundamental gap $\mathrm{E_{g}^{BSE}}$ comprises the optical gap $\mathrm{E_{g}^{opt}}$ and the electron-hole (excitonic) binding energy $\mathrm{E_{B}}$, while the GW HOMO-LUMO gaps $\mathrm{E_{g}^{GW}}$ and $\mathrm{E_{g}^{GLLB-SC}}$ consist of the electronic gap $\mathrm{E_{g}^{KS}}$ and the band discontinuity $\mathrm{\Delta_{xc}}$. For interpretation of the color references, refer to the web version of this article. Figure 3: Depending on the band-edge energy, $\mathrm{E=\frac{1}{\alpha}}$, the band alignment of a heterostructural configuration gives rise to either direct or indirect electron-hole binding. In two-dimensional electronic systems, the Coulomb interaction between electrons and holes is much stronger. Band-structure effects are incorporated through an excitonic effective mass and dielectric screening. Exciton binding energies are directly proportional to band gaps. Fig. 3 depicts how excitons bound at different or identical k-points lead to changes in the exciton gap size. Mono- and bilayers have larger binding energies owing to their lower dielectric constants [30]. Figs. 4(a) and 4(b) highlight that the MoSSe-MoSSe and $\mathrm{MoSe_{2}-MoS_{2}}$ heterostructure configurations host larger hot-electron populations, giving rise to a gradual increase in hot-electron mobility. As the temperature rises, however, the hot-electron population gradually declines, except in the aforementioned structures, which retain relatively fast hot-electron mobility. Therefore, $\mathrm{MoSe_{2}-MoS_{2}}$ and $\mathrm{MoSSe-MoSSe}$ exhibit the highest achievable heterogeneity, resulting in increased hot-electron density and a significant contribution to the steady increase in hot-electron mobility. (a) The effect of layer configuration on the hot-electron density as temperature changes. For interpretation of the color references in this plot legend, the reader is referred to the web version.
(b) Hot-electron velocity rises in response to temperature changes for the respective layer configurations. For interpretation of the color references in this plot legend, the reader is referred to the web version. Figure 5: The equation-of-state calculation for $\mathrm{MoS_{2}-MoS_{2}}$ (left) using the parabolic fit (green broken line) and the Murnaghan fit (solid red line), compared to $\mathrm{MoSe_{2}-MoSe_{2}}$ on the right. A greater bulk modulus indicates a stronger resistance to high pressure and compression. Consequently, $\mathrm{MoSe_{2}-MoSe_{2}}$ structures are slightly less resistant to high pressure and compression than $\mathrm{MoS_{2}-MoS_{2}}$. The bulk modulus values in Tables 2 and 3 are calculated from curve fits using the Murnaghan equation, Eq. (4), as shown in Fig. 5. Some findings are shown in Table 2, such as $\mathrm{MoS_{2}-MoS_{2}}$ being 20.99 GPa, which is consistent with the experimental value of 20.55 GPa for $\mathrm{MoS_{2}}$ [31]. $E(\eta)=E_{0}+\frac{9B_{0}V_{0}}{16}\left[(\eta^{2}-1)^{2}\left(6+B^{{}^{\prime}}_{0}(\eta^{2}-1)-4\eta^{2}\right)\right]$ (4) where $\eta=\left(\frac{V}{V_{0}}\right)^{\frac{1}{3}}$, and $B_{0}$ and $B^{{}^{\prime}}_{0}$ are the bulk modulus and its pressure derivative at the equilibrium volume $V_{0}$ [32]. Table 1: Calculated effective masses of electrons and holes, $\mathrm{m^{*}}$, reduced mass $\mathrm{\mu}$, parabolic constant $\mathrm{\alpha~{}[1/eV]}$, and electron transitions on the same or different k-points for $\mathrm{MoS_{2}-MoS_{2}}$, $\mathrm{MoSeS-MoS_{2}}$, $\mathrm{MoSe_{2}-MoS_{2}}$, $\mathrm{MoSe_{2}-MoSe_{2}}$, and $\mathrm{MoSSe-MoSe_{2}}$ with vdW-DF2 and GLLB-SC exchange correlations.
| | $\mathrm{\underline{vdW-DF2}}$ | | | | $\mathrm{\underline{GLLB-SC}}$ | | ---|---|---|---|---|---|---|---|--- System | $\mathrm{~{}m^{*}}$ | $\mu$ | $\mathrm{\alpha~{}[1/E_{g}]}$ | $\mathrm{Transition,~{}V~{}\rightarrow~{}C}$ | $\mathrm{m^{*}}$ | $\mu$ | $\mathrm{\alpha~{}[1/E_{g}]}$ | $\mathrm{Transition,~{}V~{}\rightarrow~{}C}$ | $\mathrm{m_{h}=-0.29}$ | | $\mathrm{CB=-2.41}$ | | $\mathrm{m_{h}=-0.62}$ | | $\mathrm{CB=-0.25}$ | $\mathrm{MoS_{2}-MoS_{2}}$ | | 0.20 | | $\mathrm{H~{}\rightarrow~{}K}$ | | 0.13 | | $\mathrm{K~{}\rightarrow~{}K}$ | $\mathrm{m_{e}=0.69}$ | | $\mathrm{VB=0.75}$ | | $\mathrm{m_{e}=0.28}$ | | $\mathrm{VB=0.92}$ | | $\mathrm{m_{h}=-0.26}$ | | $\mathrm{CB=-3.34}$ | | $\mathrm{m_{h}=-0.47}$ | | $\mathrm{CB=-2.03}$ | $\mathrm{MoSeS-MoS_{2}}$ | | 0.15 | | $\mathrm{K~{}\rightarrow~{}K}$ | | 0.19 | | $\mathrm{K~{}\rightarrow~{}H}$ | $\mathrm{m_{e}=0.37}$ | | $\mathrm{VB=0.87}$ | | $\mathrm{m_{e}=0.33}$ | | $\mathrm{VB=1.01}$ | | $\mathrm{m_{h}=-0.24}$ | | $\mathrm{CB=-2.81}$ | | $\mathrm{m_{h}=-0.21}$ | | $\mathrm{CB=-4.43}$ | $\mathrm{MoSe_{2}-MoS_{2}}$ | | 0.19 | | $\mathrm{H~{}\rightarrow~{}K}$ | | 0.16 | | $\mathrm{H~{}\rightarrow~{}K}$ | $\mathrm{m_{e}=0.81}$ | | $\mathrm{VB=3.34}$ | | $\mathrm{m_{e}=0.73}$ | | $\mathrm{VB=1.84}$ | | $\mathrm{m_{h}=-0.32}$ | | $\mathrm{CB=-1.69}$ | | $\mathrm{m_{h}=-0.27}$ | | $\mathrm{CB=-1.13}$ | $\mathrm{MoSe_{2}-MoSe_{2}}$ | | 0.22 | | $\mathrm{H~{}\rightarrow~{}H}$ | | 0.19 | | $\mathrm{H~{}\rightarrow~{}K}$ | $\mathrm{m_{e}=0.75}$ | | $\mathrm{VB=1.17}$ | | $\mathrm{m_{e}=0.66}$ | | $\mathrm{VB=1.72}$ | | $\mathrm{m_{h}=-0.60}$ | | $\mathrm{CB=-2.96}$ | | $\mathrm{m_{h}=-0.24}$ | | $\mathrm{CB=-1.31}$ | $\mathrm{MoSSe-MoSe_{2}}$ | | 0.23 | | $\mathrm{K~{}\rightarrow~{}H}$ | | 0.14 | | $\mathrm{K~{}\rightarrow~{}H}$ | $\mathrm{m_{e}=0.37}$ | | $\mathrm{VB=1.02}$ | | $\mathrm{m_{e}=0.33}$ | | $\mathrm{VB=1.72}$ | | $\mathrm{m_{h}=-0.65}$ | | $\mathrm{CB=-2.79}$ | | $\mathrm{m_{h}=-0.27}$ | | 
$\mathrm{CB=-4.47}$ | $\mathrm{MoSSe-MoSeS}$ | | 0.35 | | $\mathrm{H~{}\rightarrow~{}H}$ | | 0.19 | | $\mathrm{H~{}\rightarrow~{}H}$ | $\mathrm{m_{e}=0.76}$ | | $\mathrm{VB=0.82}$ | | $\mathrm{m_{e}=0.66}$ | | $\mathrm{VB=0.73}$ | | $\mathrm{m_{h}=-0.30}$ | | $\mathrm{CB=-3.03}$ | | $\mathrm{m_{h}=-0.26}$ | | $\mathrm{CB=-1.14}$ | $\mathrm{MoSeS-MoSSe}$ | | 0.16 | | $\mathrm{K~{}\rightarrow~{}K}$ | | 0.14 | | $\mathrm{K~{}\rightarrow~{}K}$ | $\mathrm{m_{e}=0.36}$ | | $\mathrm{VB=0.80}$ | | $\mathrm{m_{e}=0.31}$ | | $\mathrm{VB=1.39}$ | | $\mathrm{m_{h}=-0.30}$ | | $\mathrm{CB=-2.38}$ | | $\mathrm{m_{h}=-0.26}$ | | $\mathrm{CB=-2.56}$ | $\mathrm{MoSSe-MoSSe}$ | | 0.23 | | $\mathrm{H~{}\rightarrow~{}K}$ | | 0.19 | | $\mathrm{H~{}\rightarrow~{}K}$ | $\mathrm{m_{e}=0.92}$ | | $\mathrm{VB=1.49}$ | | $\mathrm{m_{e}=0.66}$ | | $\mathrm{VB=1.24}$ | Table 2: Calculations of effective mass of hole of parabolic constant, $\mathrm{\alpha~{}[1/eV]}$ to corresponding energy of band index, $\mathrm{\varepsilon_{N}}$ $\&$ Bulk modulus, B for $\mathrm{MoS_{2}-MoS_{2}}$, $\mathrm{MoSeS-MoS_{2}}$ , $\mathrm{MoSe_{2}-MoS_{2}}$ , $\mathrm{MoSe_{2}-MoSe_{2}}$ , $\&$ $\mathrm{MoSSe-MoSe_{2}}$ with vdW-DF2 and GLLB-SC exchange correlations plus PBE within GW approximation. 
| | | $\mathrm{\underline{vdW-DF2}}$ | | | | $\mathrm{\underline{GLLB-SC}}$ | | | | $\underline{\mathrm{G_{0}W_{0}@PBE}}$ ---|---|---|---|---|---|---|---|---|---|---|--- System | $\mathrm{Band}$ | $\mathrm{\varepsilon_{N}~{}[eV]}$ | $\mathrm{m_{h}}$ | $\mathrm{~{}\alpha}$ | B[GPa] | $\mathrm{\varepsilon_{N}~{}[eV]}$ | $\mathrm{m_{h}}$ | $\mathrm{\alpha}$ | B[GPa] | $\mathrm{E_{g}^{KS}}$ | $\mathrm{E_{g}^{GW}}$ | $\mathrm{N=26}$ | 0.42 | -0.29 | -2.41 | | 0.68 | -0.25 | -1.47 | | | | $\mathrm{N=27}$ | 0.42 | -0.28 | -2.41 | | 0.68 | -0.24 | -1.47 | | | | $\mathrm{N=28}$ | 1.56 | -0.64 | -0.64 | | 1.96 | -0.62 | -0.51 | | | | $\mathrm{N=29}$ | 1.59 | -0.65${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cap}$ | -0.63 | | 1.97 | -0.63 | -0.51 | | | $\mathrm{MoS_{2}-MoS_{2}}$ | $\mathrm{N=30}$ | 2.02 | 0.28${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cup}$ | -0.50 | 20.99 | 2.41 | 0.28 | -0.42 | 38.62 | 0.91 | 2.90 | $\mathrm{N=31}$ | 2.05 | 0.28 | -0.49 | | 2.43 | 0.28 | -0.41 | | | | $\mathrm{N=32}$ | 3.37 | 0.41 | -0.30 | | 3.88 | 0.39 | -0.26 | | | | $\mathrm{N=33}$ | 3.41 | 0.44 | -0.29 | | 3.90 | 0.39 | -0.26 | | | | $\mathrm{N=26}$ | 0.30 | -0.26 | -3.34 | | 0.49 | -0.47 | -2.03 | | | | $\mathrm{N=27}$ | 0.56 | -0.35 | -1.80 | | 0.75 | -0.64 | -1.34 | | | | $\mathrm{N=28}$ | 1.48 | -0.59 | -0.67 | | 1.76 | -1.22 | -0.57 | | | | $\mathrm{N=29}$ | 1.68 | -0.77${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cap}$ | -0.60 | | 2.01 | -1.62 | -0.50 | | | $\mathrm{MoSeS-MoS_{2}}$ | $\mathrm{N=30}$ | 1.99 | 0.26${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cup}$ | -0.50 | 21.65 | 2.33 | 0.62 | -0.43 | 37.26 | 0.85 | 2.78 | $\mathrm{N=31}$ | 2.04 | 0.29 | -0.49 | | 2.33 | 0.62 | -0.43 | | | | $\mathrm{N=32}$ | 3.27 | 0.48 | -0.31 | | 3.69 | 0.95 | -0.27 | | | | $\mathrm{N=33}$ | 3.40 | 0.39 | -0.29 | | 3.80 | 0.36 | -0.26 | | | | $\mathrm{N=26}$ | 0.36 | -0.24 | -2.81 | | 0.23 
| -0.21 | -4.43 | | | | $\mathrm{N=27}$ | 1.36 | -0.40 | -0.74 | | 1.16 | -0.35 | -0.86 | | | | $\mathrm{N=28}$ | 1.92 | -0.94 | -0.52 | | 1.91 | -0.49 | -0.53 | | | | $\mathrm{N=29}$ | 2.06 | -0.49${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cap}$ | -0.49 | | 1.95 | -0.88 | -0.51 | | | $\mathrm{MoSe_{2}-MoS_{2}}$ | $\mathrm{N=30}$ | 2.10 | 0.25${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cup}$ | -0.48 | 20.21 | 2.11 | 0.24 | -0.48 | 36.08 | 0.06 | 2.14 | $\mathrm{N=31}$ | 2.70 | 0.28 | -0.37 | | 2.58 | 0.29 | -0.39 | | | | $\mathrm{N=32}$ | 3.30 | 0.57 | -0.30 | | 3.39 | 0.54 | -0.30 | | | | $\mathrm{N=33}$ | 4.08 | 0.38 | -0.25 | | 4.06 | 0.37 | -0.25 | | | | $\mathrm{N=26}$ | 0.59 | -0.32 | -1.69 | | 0.88 | -0.27 | -1.13 | | | | $\mathrm{N=27}$ | 0.60 | -0.30 | -1.68 | | 0.88 | -0.27 | -1.13 | | | | $\mathrm{N=28}$ | 1.61 | -0.58 | -0.62 | | 2.01 | -0.57 | -0.50 | | | | $\mathrm{N=29}$ | 1.66 | -0.59${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cap}$ | -0.60 | | 2.04 | -0.58 | -0.49 | | | $\mathrm{MoSe_{2}-MoSe_{2}}$ | $\mathrm{N=30}$ | 2.01 | 0.24${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cup}$ | -0.50 | 17.17 | 2.41 | 0.24 | -0.42 | 33.43 | 0.27 | 1.95 | $\mathrm{N=31}$ | 2.07 | 0.24 | -0.48 | | 2.45 | 0.24 | -0.41 | | | | $\mathrm{N=32}$ | 3.27 | 0.44 | -0.31 | | 3.77 | 0.44 | -0.27 | | | | $\mathrm{N=33}$ | 3.36 | 0.49 | -0.30 | | 3.83 | 0.46 | -0.26 | | | | $\mathrm{N=26}$ | 0.34 | -0.60 | -2.96 | | 0.77 | -0.24 | -1.31 | | | | $\mathrm{N=27}$ | 0.59 | -0.74 | -1.71 | | 1.02 | -0.31 | -0.98 | | | | $\mathrm{N=28}$ | 1.43 | -1.10 | -0.70 | | 1.94 | -0.50 | -0.52 | | | | $\mathrm{N=29}$ | 1.61 | -1.93${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cap}$ | -0.62 | | 2.17 | -0.95 | -0.46 | | | $\mathrm{MoSSe-MoSe_{2}}$ | $\mathrm{N=30}$ | 1.93 | 0.57${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}~{}\cup}$ | -0.52 | 
18.50 | 2.47 | 0.27 | -0.40 | 34.63 | 0.64 | 2.71 | $\mathrm{N=31}$ | 1.99 | 0.54 | -0.50 | | 2.50 | 0.26 | -0.40 | | | | $\mathrm{N=32}$ | 3.16 | 0.98 | -0.32 | | 3.81 | 0.44 | -0.26 | | | | $\mathrm{N=33}$ | 3.31 | 0.89 | -0.30 | | 3.93 | 0.40 | -0.26 | | | Table 3: Calculations of effective mass of hole of parabolic constant, $\mathrm{\alpha~{}[1/eV]}$ to corresponding energy of band index, $\mathrm{\varepsilon_{N}}$ $\&$ Bulk modulus, B for $\mathrm{MoSSe-MoSeS}$ , $\mathrm{MoSeS-MoSSe}$ , $\&$ $\mathrm{MoSSe-MoSSe}$ with vdW-DF2 and GLLB-SC exchange correlations plus PBE within GW approximation | | | $\mathrm{\underline{vdW-DF2}}$ | | | | $\mathrm{\underline{GLLB-SC}}$ | | | | $\underline{\mathrm{G_{0}W_{0}@PBE}}$ ---|---|---|---|---|---|---|---|---|---|---|--- System | $\mathrm{Band}$ | $\mathrm{\varepsilon_{N}~{}[eV]}$ | $\mathrm{m_{h}}$ | $\mathrm{~{}\alpha}$ | B[GPa] | $\mathrm{\varepsilon_{N}~{}[eV]}$ | $\mathrm{m_{h}}$ | $\mathrm{\alpha}$ | B[GPa] | $\mathrm{E_{g}^{KS}}$ | $\mathrm{E_{g}^{GW}}$ | $\mathrm{N=26}$ | 0.36 | -0.65 | -2.79 | | 0.22 | -0.27 | -4.47 | | | | $\mathrm{N=27}$ | 0.36 | -0.63 | -2.77 | | 0.23 | -0.26 | -4.44 | | | | $\mathrm{N=28}$ | 1.47 | -1.61 | -0.68 | | 1.45 | -0.75 | -0.69 | | | | $\mathrm{N=29}$ | 1.52 | -1.49${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cap}$ | -0.66 | | 1.48 | -0.71 | -0.67 | | | $\mathrm{MoSSe-MoSeS}$ | $\mathrm{N=30}$ | 1.88 | 0.60${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cup}$ | -0.53 | 20.14 | 1.87 | 0.29 | -0.54 | 35.97 | 0.87 | 2.78 | $\mathrm{N=31}$ | 1.94 | 0.57 | -0.52 | | 1.91 | 0.27 | -0.53 | | | | $\mathrm{N=32}$ | 3.15 | 0.85 | -0.32 | | 3.25 | 0.39 | -0.31 | | | | $\mathrm{N=33}$ | 3.24 | 0.98 | -0.31 | | 3.30 | 0.42 | -0.30 | | | | $\mathrm{N=26}$ | 0.33 | -0.33 | -3.03 | | 0.88 | -0.26 | -1.14 | | | | $\mathrm{N=27}$ | 0.33 | -0.30 | -3.03 | | 0.88 | -0.26 | -1.14 | | | | $\mathrm{N=28}$ | 1.46 | -0.71 | -0.69 | | 2.12 | -0.73 | -0.47 | | 
| | $\mathrm{N=29}$ | 1.47 | -0.73${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cap}$ | -0.68 | | 2.12 | -0.73 | -0.47 | | | $\mathrm{MoSeS-MoSSe}$ | $\mathrm{N=30}$ | 1.88 | 0.27${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cup}$ | -0.53 | 20.23 | 2.54 | 0.28 | -0.39 | 35.96 | 0.92 | 2.80 | $\mathrm{N=31}$ | 1.89 | -0.28${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cap}$ | -0.53 | | 2.54 | 0.28 | -0.39 | | | | $\mathrm{N=32}$ | 3.16 | 0.43${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cup}$ | -0.32 | | 3.93 | 0.40 | -0.26 | | | | $\mathrm{N=33}$ | 3.17 | 0.44 | -0.32 | | 3.94 | 0.40 | -0.25 | | | | $\mathrm{N=26}$ | 0.42 | -0.30 | -2.38 | | 0.39 | -0.26 | -2.56 | | | | $\mathrm{N=27}$ | 0.91 | -0.30 | -1.10 | | 0.79 | -0.26 | -1.27 | | | | $\mathrm{N=28}$ | 1.55 | -0.72${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cap}$ | -0.64 | | 1.63 | -0.72 | -0.62 | | | | $\mathrm{N=29}$ | 1.97 | 0.27${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cup}$ | -0.51 | | 2.03 | -0.74 | -0.49 | | | $\mathrm{MoSSe-MoSSe}$ | $\mathrm{N=30}$ | 2.04 | -0.71${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cap}$ | -0.49 | 20.20 | 2.04 | 0.28 | -0.49 | 35.99 | 0.25 | 2.29 | $\mathrm{N=31}$ | 2.46 | 0.27${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}~{}\cup}$ | -0.41 | | 2.45 | 0.28 | -0.41 | | | | $\mathrm{N=32}$ | 3.26 | 0.43 | -0.31 | | 3.43 | 0.41 | -0.29 | | | | $\mathrm{N=33}$ | 3.75 | 0.43 | -0.27 | | 3.84 | 0.40 | -0.26 | | | Higher temperatures enhance the likelihood of discovering energetic electrons and holes, hence the generation rate rises with temperature. For small electron-hole densities, increasing temperature spreads carrier distributions to higher energies where Auger recombination [33] is less efficient, reducing the recombination rate. 
Large electron-hole concentrations require vacant final scattering states for electron-hole recombination near the band edges, and an increase in temperature results in more vacant states. As a result of the two factors mentioned above, recombination rates at high electron-hole densities are less temperature sensitive. Thus, improving the Seebeck coefficient, either by engineering the density of states or by altering the nanostructure, is a requirement for producing effective thermoelectric materials, and our findings support this argument, in line with Refs. [34, 35, 36]. Eq. (3) can be rearranged into a parabolic-band model for analyzing the band-alignment effect at interfaces, $\mathrm{E(k)=\frac{\hbar^{2}k^{2}}{2m^{*}}-\alpha E(k)^{2}}$ (5) where $\alpha$ quantifies the flattening of the parabolic curve; a smaller negative value indicates a flatter parabola. Thus, accurately estimating the effective mass from the $\mathrm{E(k)}$ curve is crucial for correctly modeling the optical and transport properties of the various structural configurations. A Taylor expansion gives the $\mathrm{E(k)}$ dispersion relation as follows: $\mathrm{E(k)=E_{0}+\frac{\partial E}{\partial k}k+\frac{1}{2!}\frac{\partial^{2}E}{\partial k^{2}}k^{2}+O(k^{3})}$ (6) where $\mathrm{E_{0}}$ is the energy at the $\Gamma$-point. Discarding the first-order term of Eq. (6), we get $\mathrm{E(k)=E_{0}+\alpha|\Delta k|^{2}}$ (7) Thus, the change in the $\mathrm{E(k)}$ curve along a symmetric k-point path, divided by the change in the square of the k-point path, equals half the band curvature: $\mathrm{\frac{\hbar^{2}}{2m^{*}}=\frac{1}{2!}\frac{\partial^{2}E}{\partial k^{2}}=\alpha}$ (8) The relation $\mathrm{\frac{\hbar^{2}}{2m^{*}}=\alpha}$ shows that materials with small effective masses have high mobility and long diffusion lengths [37].
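The curvature-based mass extraction of Eqs. (6)-(8) can be sketched with a central finite difference of a sampled band. The cosine test band below is an assumption used only to supply a known analytic answer; it is not a band from this work:

```python
import numpy as np

HBAR = 1.0  # illustrative units

def effective_mass(E, k, i):
    """m* = hbar^2 / (d^2 E / d k^2), second derivative by central differences (cf. Eq. 8)."""
    dk = k[1] - k[0]
    curvature = (E[i + 1] - 2.0 * E[i] + E[i - 1]) / dk ** 2
    return HBAR ** 2 / curvature

# synthetic tight-binding-like band E(k) = 2t(1 - cos(k a)); near k = 0, E ~ t a^2 k^2,
# so the analytic band-edge mass is m* = hbar^2 / (2 t a^2)
t, a = 1.0, 1.0
k = np.linspace(-0.5, 0.5, 2001)
E = 2.0 * t * (1.0 - np.cos(k * a))

i0 = np.argmin(E)                         # band edge at k = 0
m_star = effective_mass(E, k, i0)
m_exact = HBAR ** 2 / (2.0 * t * a ** 2)  # analytic reference value
```

The finite-difference estimate converges quadratically in the grid spacing, so a dense k-path near the band edge is what controls the accuracy of extracted masses.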
In this work, we investigated the energy dispersion, E(k), along the k-point path from $\mathrm{\Gamma}$ to $\mathrm{K}$, where $\mathrm{\Gamma~{}[0.0,0.0,0.0]}$ and $\mathrm{K~{}[1/2,1/2,0.0]}$ were used to calculate the effective masses ($\mathrm{m_{e}~{}\&~{}m_{h}}$) and the parabolic constant $\mathrm{\alpha}$, taking into account the chosen electron transition path. As Table 1 illustrates, the computed electron effective masses, $\mathrm{m_{e}}$, of bilayer $\mathrm{2H-MoS_{2}}$ and $\mathrm{2H-MoSe_{2}}$ are $\mathrm{0.69m_{e}}$ and $\mathrm{0.75m_{e}}$, respectively, which are consistent with the literature values of $\mathrm{0.7m_{e}}$ and $\mathrm{0.8m_{e}}$ given in Refs. [38, 39, 40]. Van der Waals interactions between layers have a significant impact on a material’s effective mass and band curvature. We also analyzed the effective mass using the GLLB-SC exchange correlation and found that it underestimates the value, since the GLLB-SC correlation ignores the London force. Therefore, an exchange-correlation functional with an improved treatment of van der Waals interactions yields the best results. Table 2 shows that electrons may be excited either directly or indirectly depending on the band orientation. Excitation occurs where the effective hole mass changes sign, indicating a transition that requires a particular amount of energy; this implies that band alignment has a substantial influence on the excitation energy. Consequently, direct-transition excitation requires less energy, whereas indirect-transition excitation requires more. Thus, $\mathrm{MoSeS-MoS_{2}}$ and $\mathrm{MoSe_{2}-MoSe_{2}}$ have a relatively smaller band-confinement effect and hence require less energy, owing to their direct band alignment. Table 3 shows that the relative sizes of the constituent atoms, when arranged in various configurations, can result in different band-alignment effects. The interaction of adjacent layers of the same type can enhance optical properties such as multiple excitation peaks.
Therefore, layer contact of various types with adjacent sheets has the potential to improve the optical properties of the heterostructure. One can see that both $\mathrm{MoSeS-MoSSe}$ and $\mathrm{MoSSe-MoSSe}$ have double excitation peaks. ## 4 Conclusion The tuning of 2D structures in various configurations is critical to their optical and thermoelectric properties. In this paper, we used the Kane dispersion relation, $\mathrm{E(k)}$, to calculate the density of hot electrons with their corresponding group velocity, as well as the electronic gap, $\mathrm{E_{g}^{KS}}$, which is the difference between the conduction-band edge energy, $\mathrm{~{}E_{c}=\frac{1}{\alpha_{c}}}$, and the valence-band edge energy, $\mathrm{~{}E_{v}=\frac{1}{\alpha_{v}}}$; this allowed us to accurately determine $\mathrm{E_{g}^{opt}+E_{B}=E_{g}^{GW}-E_{g}^{KS}\approx E_{g}^{BSE}}$. We also showed how different configurations alter the exciton binding energy and thermoelectric properties of a 2D material. In all respects, the most heterogeneous configuration offers the best thermoelectric properties. Finally, we estimated the bulk modulus, a measure of material stability, using the Murnaghan curve-fitting method, and found that heterostructures stabilize as the layer interaction increases, owing to surface exposure to adjacent layers. ## Disclosure statement The authors declare that there is no conflict of interest. ## 5 Data Availability Statement The data that support the findings of this study are available upon reasonable request from the authors. ## 6 Acknowledgements We are grateful to Dilla University for financial support. ORCID iD: T.E. Ada, https://orcid.org/0000-0002-4417-0058. ## 7 References * [1] S. Latini, K. T. Winther, T. Olsen, K. S. Thygesen, Interlayer excitons and band alignment in MoS2/hBN/WSe2 van der Waals heterostructures, Nano Letters 17. URL https://api.semanticscholar.org/CorpusID:206737138 * [2] X. Blase, I. Duchemin, D.
Jacquemin, P.-F. Loos, The bethe–salpeter equation formalism: From physics to chemistry, The Journal of Physical Chemistry Letters 11. URL https://doi.org/10.1021/acs.jpclett.0c01875 * [3] F. A. Rasmussen, P. S. Schmidt, K. T. Winther, K. S. Thygesen, Efficient many-body calculations for two-dimensional materials using exact limits for the screened potential: Band gaps of ${\mathbf{mos}}_{2},h$-bn, and phosphorene, Phys. Rev. B 94. URL https://link.aps.org/doi/10.1103/PhysRevB.94.155406 * [4] B. Ridley, The diffusion of hot electrons across a semiconductor base, Solid-State Electronics 24. URL https://doi.org/10.1016/0038-1101(81)90010-1 * [5] S. Latini, T. Olsen, K. S. Thygesen, Excitons in van der waals heterostructures: The important role of dielectric screening, Phys. Rev. B 92. URL https://link.aps.org/doi/10.1103/PhysRevB.92.245123 * [6] J. Enkovaara, C. Rostgaard, J. Mortensen, J. Chen, M. Dułak, L. Ferrighi, J. Gavnholt, C. Glinsvad, V. Haikola, H. Hansen, H. Kristoffersen, M. Kuisma, A. Larsen, L. Lehtovaara, M. Ljungberg, O. Lopez-Acevedo, P. Moses, J. Ojanen, T. Olsen, V. Petzold, N. Romero, J. Stausholm-M${\rm\o}$ller, M. Strange, G. Tritsaris, M. Vanin, M. Walter, B. Hammer, H. H$\rm\ddot{a}$kkinen, G. Madsen, R. Nieminen, J. N${\rm\o}$rskov, M. Puska, T. Rantala, J. Schi${\rm\o}$tz, K. Thygesen, K. Jacobsen, Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method, J. Phys.: Condens. Matter 22 (2010) 253202. URL https://doi.org/10.1088/0953-8984/22/25/253202 * [7] J. Mortensen, L. Hansen, K. Jacobsen, Real-space grid implementation of the projector augmented wave method, Phys. Rev. B 71 (2005) 035109. URL https://doi.org/10.1103/PhysRevB.71.035109 * [8] W. Kohn, L. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. 140 (1965) A1133–A1138. URL https://doi.org/10.1103/PhysRev.140.A1133 * [9] P. Bl$\rm\ddot{o}$chl, Projector augmented-wave method, Phys. Rev. 
B 50 (1994) 17953. URL https://doi.org/10.1103/PhysRevB.50.17953 * [10] H. Monkhorst, J. Pack, Special points for Brillouin-zone integrations, Phys. Rev. B 13 (1976) 5188. URL https://doi.org/10.1103/PhysRevB.13.5188 * [11] G. Kresse, D. Joubert, From Ultrasoft Pseudopotentials to the Projector Augmented-Wave Method, Phys. Rev. B 59 (1999) 1758. URL http://dx.doi.org/10.1103/PhysRevB.59.1758 * [12] H. Schlegel, Optimization of equilibrium geometries and transition structures, J. Comp. Chem. 3 (1982) 214\. URL https://doi.org/10.1002/jcc.540030212 * [13] P. Feynman, Forces in Molecules, Phys. Rev. 56 (1939) 340. URL https://doi.org/10.1103/PhysRev.56.340 * [14] O. Nielsen, R. Martin, Quantum-mechanical theory of stress and force, Phys. Rev. B. 32 (1985) 3780. URL https://doi.org/10.1103/PhysRevB.32.3780 * [15] R. Wentzcovitch, J. Martins, First principles molecular dynamics of Li: Test of a new algorithm, Solid State Commun. 78 (1991) 831. URL https://doi.org/10.1016/0038-1098(91)90629-A * [16] J. Perdew, K. Burke, M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77 (1996) 3865. URL https://doi.org/10.1103/PhysRevLett.77.3865 * [17] M. Kuisma, J. Ojanen, J. Enkovaara, T. Rantala, Kohn-Sham potential with discontinuity for band gap materials, Phys. Rev. B 82 (2010) 115106. URL https://doi.org/10.1103/PhysRevB.82.115106 * [18] J. Perdew, A. Ruzsinszky, G. Csonka, O. Vydrov, G. Scuseria, L. Constantin, X. Zhou, K. Burke, Restoring the Density-Gradient Expansion for Exchange in Solids and Surfaces, Erratum Phys. Rev. Lett. 102 (2009) 039902. URL https://doi.org/10.1103/PhysRevLett.100.136406 * [19] G. N. Koskowich, M. Soma, R. B. Darling, Near-infrared free-carrier optical absorption in silicon: Effect of first-order phonon-assisted scattering in a nonparabolic conduction band, Phys. Rev. B 41\. URL https://link.aps.org/doi/10.1103/PhysRevB.41.2944 * [20] B. Hammer, L. B. Hansen, J. K. 
Nørskov, Improved adsorption energetics within density-functional theory using revised perdew-burke-ernzerhof functionals, Phys. Rev. B 59. URL https://link.aps.org/doi/10.1103/PhysRevB.59.7413 * [21] J. Klimes, D. R. Bowler, A. Michaelides, Chemical accuracy for the van der waals density functional, Journal of Physics: Condensed Matter 22 (2) (2009) 022201. URL https://dx.doi.org/10.1088/0953-8984/22/2/022201 * [22] D. Chakraborty, K. Berland, T. Thonhauser, Next-generation nonlocal van der waals density functional, Journal of Chemical Theory and Computation 16. URL https://doi.org/10.1021/acs.jctc.0c00471 * [23] B. R. Nag, A. N. Chakravarti, On a simplified form of kane’s dispersion relation for semiconductors, physica status solidi (b) 71. URL https://doi.org/10.1002/pssb.2220710153 * [24] J. Kang, S. Tongay, J. Zhou, J. Li, J. Wu, Band offsets and heterostructures of two-dimensional semiconductors, Applied Physics Letters 102. URL https://doi.org/10.1063/1.4774090 * [25] F. Huser, T. Olsen, K. S. Thygesen, Quasiparticle gw calculations for solids, molecules, and two-dimensional materials, Phys. Rev. B 87. URL https://link.aps.org/doi/10.1103/PhysRevB.87.235132 * [26] V. Jindal, D. Jana, T. Deilmann, S. Ghosh, Interlayer and excited-state exciton transitions in bulk $2h\text{$-$}{\mathrm{mos}}_{2}$, Phys. Rev. B 102. URL https://link.aps.org/doi/10.1103/PhysRevB.102.235204 * [27] M. Bhatnagar, T. Woźniak, Ł. Kipczak, N. Zawadzka, K. Olkowska-Pucko, M. Grzeszczyk, J. Pawłowski, K. Watanabe, T. Taniguchi, A. Babiński, M. R. Molas, Temperature induced modulation of resonant raman scattering in bilayer 2h-mos2, Scientific Reports 12. URL https://doi.org/10.1038/s41598-022-18439-7 * [28] P. Tonndorf, R. Schmidt, P. Böttger, X. Zhang, J. Börner, A. Liebig, M. Albrecht, C. Kloc, O. Gordan, D. R. T. Zahn, S. M. de Vasconcellos, R. Bratschitsch, Photoluminescence emission and raman response of monolayer mos2, mose2, and wse2, Opt. Express 21\. 
URL https://opg.optica.org/oe/abstract.cfm?URI=oe-21-4-4908 * [29] H. J. Liu, L. Jiao, L. Xie, F. Yang, J. L. Chen, W. K. Ho, C. L. Gao, J. F. Jia, X. D. Cui, M. H. Xie, Molecular-beam epitaxy of monolayer and bilayer wse2: a scanning tunneling microscopy/spectroscopy study and deduction of exciton binding energy, 2D Materials 2. URL https://dx.doi.org/10.1088/2053-1583/2/3/034004 * [30] T. Olsen, S. Latini, F. Rasmussen, K. S. Thygesen, Simple screened hydrogen model of excitons in two-dimensional materials, Phys. Rev. Lett. 116. URL https://link.aps.org/doi/10.1103/PhysRevLett.116.056401 * [31] R. Aksoy, Y. Ma, E. Selvi, M. C. Chyu, A. Ertas, A. White, X-ray diffraction study of molybdenum disulfide to 38.8gpa, Journal of Physics and Chemistry of Solids 67. URL https://www.sciencedirect.com/science/article/pii/S0022369706002769 * [32] M. Hebbache, M. Zemzemi, Ab initio study of high-pressure behavior of a low compressibility metal and a hard material: Osmium and diamond, Phys. Rev. B 70. URL https://link.aps.org/doi/10.1103/PhysRevB.70.224107 * [33] Y. Jiang, M. Cui, S. Li, C. Sun, Y. Huang, J. Wei, L. Zhang, M. Lv, C. Qin, Y. Liu, M. Yuan, Reducing the impact of auger recombination in quasi-2d perovskite light-emitting diodes, Nature Communications 12. URL https://doi.org/10.1038/s41467-020-20555-9 * [34] Y. Pei, A. D. LaLonde, H. Wang, G. J. Snyder, Low effective mass leading to high thermoelectric performance, Energy Environ. Sci. 5. URL http://dx.doi.org/10.1039/C2EE21536E * [35] D. K. Ferry, First-order optical and intervalley scattering in semiconductors, Phys. Rev. B 14. URL https://link.aps.org/doi/10.1103/PhysRevB.14.1605 * [36] F. Rana, Electron-hole generation and recombination rates for coulomb scattering in graphene, Phys. Rev. B 76. URL https://link.aps.org/doi/10.1103/PhysRevB.76.155431 * [37] F. Brivio, K. T. Butler, A. Walsh, M. 
van Schilfgaarde, Relativistic quasiparticle self-consistent electronic structure of hybrid halide perovskite photovoltaic absorbers, Phys. Rev. B 89. URL https://link.aps.org/doi/10.1103/PhysRevB.89.155204 * [38] T. Cheiwchanchamnangij, W. R. L. Lambrecht, Quasiparticle band structure calculation of monolayer, bilayer, and bulk $\mathrm{MoS_{2}}$, Phys. Rev. B 85. URL https://link.aps.org/doi/10.1103/PhysRevB.85.205302 * [39] R. Pisoni, T. Davatz, K. Watanabe, T. Taniguchi, T. Ihn, K. Ensslin, Absence of interlayer tunnel coupling of $k$-valley electrons in bilayer ${\mathrm{mos}}_{2}$, Phys. Rev. Lett. 123 (2019) 117702. doi:10.1103/PhysRevLett.123.117702. URL https://link.aps.org/doi/10.1103/PhysRevLett.123.117702 * [40] S. Larentis, H. C. P. Movva, B. Fallahazad, K. Kim, A. Behroozi, T. Taniguchi, K. Watanabe, S. K. Banerjee, E. Tutuc, Large effective mass and interaction-enhanced zeeman splitting of k-valley electrons in $\mathrm{MoSe_{2}}$, Phys. Rev. B 97. URL https://link.aps.org/doi/10.1103/PhysRevB.97.201407
# On Delay-Doppler Plane Orthogonal Pulse Hai Lin Osaka Metropolitan University Sakai, Osaka, 599-8531, Japan Email<EMAIL_ADDRESS>Jinhong Yuan The University of New South Wales Sydney, NSW, 2052, Australia Email<EMAIL_ADDRESS> ###### Abstract In this paper, we analyze the recently discovered delay-Doppler plane orthogonal pulse (DDOP), which is essential for the delay-Doppler plane multi-carrier modulation waveform. In particular, we introduce a _local orthogonality_ property of pulses corresponding to a Weyl-Heisenberg (WH) _subset_ and justify the DDOP’s existence, in contrast to the _global orthogonality_ corresponding to a WH _set_ governed by the WH frame theory. Then, sufficient conditions for locally-orthogonal pulses are presented and discussed. Based on the analysis, we propose a general DDOP design. We also derive the frequency-domain representation of the DDOP, and compare the DDOP-based orthogonal delay-Doppler division multiplexing (ODDM) modulation with other modulation schemes in terms of TF signal localization. Interestingly, we show the perfect local orthogonality property of the DDOP with respect to delay-Doppler resolutions using its ambiguity function. ## I Introduction In digital communications, a modulation scheme usually requires a set of (bi)orthogonal _analog_ pulses or continuous-time functions, each of which carries an information-bearing _digital_ symbol, to synthesize the signal waveform [1]. Therefore, the modulation process can be intuitively thought of as placing these pulses in the time-frequency (TF) plane, and the (bi)orthogonality can be achieved by placing them with proper TF distance. Such modulation schemes include single-carrier (SC) modulation with temporally spaced pulses, and multi-carrier (MC) modulation whose pulses are spaced both temporally and spectrally.
Meanwhile, for a communication system, a transmit signal always consists of a finite number of pulses and occupies a finite TF region in the TF plane, determining the signal’s duration and bandwidth. In the context of MC modulation, the pulses are typically generated by TF shifting a _prototype pulse_ in accordance with a frequency resolution $\mathcal{F}$ and a time resolution $\mathcal{T}$. The minimum TF distance among these pulses can be quantified by $\mathcal{R}=\mathcal{T}\mathcal{F}$, called the joint TF resolution (JTFR) in this paper. The fundamental issue of designing an MC modulation scheme is to find the prototype pulse that can form (bi)orthogonal pulses with respect to $\mathcal{T}$ and $\mathcal{F}$. Conventionally, these TF-shifted pulses are considered as a _Weyl-Heisenberg_ (WH) or _Gabor_ function set[2, 3, 4]. According to the WH frame theory, (bi)orthogonal WH function sets only exist for $\mathcal{R}\geq 1$[5, 6], and therefore most orthogonal MC modulation schemes are designed with $\mathcal{R}\geq 1$ [7, 8]. Recently, a delay-Doppler plane MC (DDMC) modulation named the orthogonal delay-Doppler division multiplexing (ODDM) modulation was proposed in [9, 10]. Considering that a linear time-varying (LTV) channel in a stationary region can be modelled as a delay-Doppler (DD) channel with a deterministic spreading function, the ODDM modulation employs a newly discovered DD plane orthogonal pulse (DDOP) to couple the modulated MC signal with the DD channel. It achieves superior performance by harvesting both time and frequency diversity, and it is shown in [9, 10] that the DDOP can form an orthogonal function set with respect to the DD plane resolutions. Because the DD plane’s TF resolutions result in a JTFR $\mathcal{R}_{\textrm{DD}}<1$, the DDOP seems inconsistent with current (bi)orthogonal pulse design principles. 
Although its orthogonality has been proved, a rational explanation for the DDOP’s unique properties is still missing. In this paper, we take an in-depth look into the DDOP and justify its existence. We introduce a _local orthogonality_ property and clarify that the DDOP only needs to satisfy local orthogonality, in contrast to _global orthogonality_ governed by the WH frame theory. Then, sufficient conditions for a pulse to achieve local orthogonality are analyzed. Based on the analysis, we propose a general DDOP design. Our contributions can be summarized as follows: * • We point out that only local (bi)orthogonality in the finite TF region, rather than global (bi)orthogonality in the whole TF plane, is required by a modulation scheme. Accordingly, we show that a WH _subset_ rather than a WH set is required in the pulse design. * • We reformulate the (bi)orthogonal pulse design problem, based on the local (bi)orthogonality. We show that the DDOP forms a WH subset that satisfies the local orthogonality. * • We analyze the local orthogonality with respect to TF resolutions, and discuss the corresponding sufficient conditions. We reveal that for a limited number of subcarriers, surprisingly, there are _infinitely many_ pulses orthogonal with respect to $\mathcal{F}$, as long as they are periodic functions with a specified period related to the number of subcarriers. * • By introducing a cyclic prefix (CP) and cyclic suffix (CS) to achieve the specified periodicity, we propose a general DDOP design, which relaxes the duration constraint of the square-root Nyquist (SRN) sub-pulses in our previously designed DDOP. * • We derive the frequency domain representation of the DDOP. Together with the DDOP’s time domain representation, we illustrate the DDOP-based ODDM’s TF signal localization, and schematically compare it with those of other modulation schemes. The ambiguity function shows the perfect local orthogonality property of the DDOP with respect to delay-Doppler resolutions. 
Notations: In this paper, $\operatorname{\Pi}_{\mathfrak{T}}(t)$ stands for the rectangular pulse with unit energy and support $[0,\mathfrak{T}]$. Given the number of subcarriers $N$, $a(t)$ denotes the SRN pulse for interval $\mathcal{T}$, with energy $\frac{1}{N}$ and support $[-T_{a}/2,T_{a}/2]$. $\mathcal{A}_{g,\gamma}(\cdot)$ is the (cross-)ambiguity function of $g(t)$ and $\gamma(t)$. ## II WH set based pulse design principles Let us first introduce the main parameters and their notations for an MC modulation in Table I. The transmit pulses in an MC modulation can be represented by the function set $\left(g,\mathcal{T},\mathcal{F}\right)=\left\\{g_{m,n}\right\\}_{m,n\in\mathbb{Z}},$ (1) where $g_{m,n}\coloneqq g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})}$ and $g(t)$ is the prototype pulse. Similarly, we can form the receive pulses $\left(\gamma,\mathcal{T},\mathcal{F}\right)$ using another prototype pulse $\gamma(t)$ with the same TF resolutions. Note that because a time-limited signal cannot be strictly band-limited, the bandwidth of $g(t)$, $B_{g}$, is defined in an essential sense [11]. 
TABLE I: MC Modulation Parameters

Notation | Parameter
---|---
$\mathcal{F}$ | frequency resolution, frequency spacing
$\mathfrak{T}$ | symbol period, $\mathfrak{T}=1/\mathcal{F}$
$\mathcal{T}$ | time resolution, symbol interval
$\mathcal{R}$ | JTFR, $\mathcal{R}=\mathcal{T}\mathcal{F}$
$N$ | number of subcarriers
$M$ | number of symbols
$g(t)$ | transmit (prototype) pulse
$T_{g}$ | duration of $g(t)$, symbol duration
$B_{g}$ | bandwidth of $g(t)$

Given $\mathcal{T}$ and $\mathcal{F}$, the fundamental issue of an MC modulation is to find $g(t)$ and $\gamma(t)$ satisfying the orthogonal condition of $\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n}),$ (2) or the biorthogonal condition of $\langle g_{m,n},\gamma_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n}).$ (3) By considering the TF plane as a 2D phase space, the function set in (1) forms a discrete lattice “sampling” the phase space[12, 2], where the “sampling” resolution is the JTFR $\mathcal{R}$. Then, the function set in (1) can be treated as a WH set. According to the WH frame theory, the existence of a (bi)orthogonal WH set depends on the “sampling” resolution and can be summarized as [13, 2, 4, 3, 5, 12, 14, 7, 15]: * • Critical sampling ($\mathcal{R}=1$): Orthogonal WH sets exist. However, they have either infinite time or frequency energy spread according to the Balian-Low theorem [16], and therefore are not TF well-localized. * • Undercritical sampling ($\mathcal{R}>1$): TF well-localized orthogonal or biorthogonal WH sets exist, if $\mathcal{R}$ is sufficiently larger than $1$. * • Overcritical sampling ($\mathcal{R}<1$): Neither orthogonal nor biorthogonal WH sets exist. With the transmit pulses in (1), the transmit waveform of an MC modulation can be represented as $\displaystyle x(t)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}X_{m,n}g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})},$ (4) where the $X_{m,n}$’s are the information-bearing digital symbols. 
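The orthogonal condition (2) is easy to probe numerically on a sampled grid. The sketch below is a minimal illustration (the grid size $K$ and the values of $M$ and $N$ are arbitrary assumptions, not values from the paper): it builds the critically sampled ($\mathcal{R}=\mathcal{T}\mathcal{F}=1$) WH set with the rectangular prototype and confirms $\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n})$ over a finite grid.

```python
import cmath

# Illustrative parameters (assumptions):
K = 32           # samples per symbol interval 𝒯
M, N = 3, 4      # number of symbols and subcarriers considered
L = M * K        # total samples

def g_mn(m, n):
    """Rectangular prototype Π_𝒯(t) shifted by m𝒯 and modulated by n𝓕,
    with 𝓕 = 1/𝒯, i.e. critical sampling R = 𝒯𝓕 = 1."""
    x = [0j] * L
    for k in range(K):
        x[m * K + k] = cmath.exp(2j * cmath.pi * n * k / K) / K ** 0.5
    return x

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

# Verify the orthogonal condition (2) over the finite grid:
# disjoint time supports handle m != m', DFT orthogonality handles n != n'.
for m in range(M):
    for n in range(N):
        for md in range(M):
            for nd in range(N):
                val = inner(g_mn(m, n), g_mn(md, nd))
                want = 1.0 if (m, n) == (md, nd) else 0.0
                assert abs(val - want) < 1e-9
```

Consistent with the Balian-Low theorem noted above, the rectangular prototype achieves orthogonality at $\mathcal{R}=1$ only at the cost of poor frequency localization.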
## III ODDM modulation In the design of modulation schemes, the primary concern is the dispersive effect of the channel. A doubly-selective wireless channel with both time and frequency dispersion is usually considered as an LTV system, and represented by its time-varying channel impulse response (TV-CIR) or DD spread function[17]. ### III-A DD channel model Since the transmit signal is band- and time-limited, we always apply appropriate bandpass filtering and subsequent sampling at the receiver. As a result, we observe an equivalent channel that is the band- and time-limited version of the physical channel. Let the sampling rate and duration be $W_{0}$ and $T_{0}$, respectively. The equivalent DD channel can be written as [17] $h(\tau,\nu)=\sum_{p=1}^{P}h_{p}\delta(\tau-\tau_{p})\delta(\nu-\nu_{p}),$ (5) with $\tau_{p}=\frac{l_{p}}{W_{0}}$, $\nu_{p}=\frac{k_{p}}{T_{0}}$, $l_{p},k_{p}\in\mathbb{Z}$, where $\frac{1}{W_{0}}$ and $\frac{1}{T_{0}}$ are the delay and Doppler resolutions, respectively. Figure 1: $u(t)$, the transmit pulse of ODDM modulation. ### III-B ODDM modulation and DDOP To couple the MC signal with the DD channel in (5), the ODDM matches its signal resolutions to the delay and Doppler resolutions, namely setting $\mathcal{T}=\frac{1}{W_{0}}$ and $\mathcal{F}=\frac{1}{T_{0}}$, respectively. Note that for an ODDM signal, we have $W_{0}=\frac{M}{T}$ and $T_{0}=NT$. Then, an ODDM frame without the frame-wise CP can be written as[10] $\displaystyle x(t)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}X_{m,n}u\left(t-m\frac{T}{M}\right)e^{j2\pi n\frac{1}{NT}(t-m\frac{T}{M})},$ (6) where $u(t)$ is the DDOP given by $u(t)=\sum_{\dot{n}=0}^{N-1}a(t-\dot{n}T).$ (7) As shown in Fig. 1, the duration of $a(t)$ in $u(t)$ is $T_{a}=2Q\frac{T}{M}$. 
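As a numerical sanity check of the DDOP in (7), the sketch below assembles $u(t)$ from truncated root-raised-cosine (RRC) sub-pulses, used here as a stand-in for a generic SRN pulse, and evaluates its ambiguity function on the DD grid by Riemann summation. All parameter values ($M$, $N$, $Q$, roll-off, oversampling) are illustrative assumptions chosen so that the sub-pulses do not overlap ($T_{a}<T$), not values from the paper.

```python
import math, cmath

# Illustrative parameters (assumptions, not values from the paper):
T, M, N, Q, rho, os = 1.0, 32, 4, 4, 0.3, 8
Ts = T / M                # delay resolution T/M
dt = Ts / os              # Riemann-sum step
Ta = 2 * Q * Ts           # sub-pulse duration, here T_a = T/4 < T

def rrc(t):
    """Root-raised-cosine pulse for interval Ts, truncated to [-Ta/2, Ta/2]."""
    if abs(t) > Ta / 2:
        return 0.0
    if abs(t) < 1e-12:
        return (1 + rho * (4 / math.pi - 1)) / math.sqrt(Ts)
    x = t / Ts
    if abs(abs(x) - 1 / (4 * rho)) < 1e-9:   # removable singularity of the formula
        return rho / math.sqrt(2 * Ts) * ((1 + 2 / math.pi) * math.sin(math.pi / (4 * rho))
                                          + (1 - 2 / math.pi) * math.cos(math.pi / (4 * rho)))
    num = math.sin(math.pi * x * (1 - rho)) + 4 * rho * x * math.cos(math.pi * x * (1 + rho))
    return num / (math.pi * x * (1 - (4 * rho * x) ** 2)) / math.sqrt(Ts)

# scale a(t) to energy 1/N, matching the paper's notation for the SRN pulse
e = sum(rrc(-Ta / 2 + k * dt) ** 2 for k in range(int(Ta / dt) + 1)) * dt
a = lambda t, s=math.sqrt(1 / (N * e)): s * rrc(t)

u = lambda t: sum(a(t - n * T) for n in range(N))   # the DDOP, eq. (7)

def ambiguity(mb, nb):
    """A_{u,u}(mb*T/M, nb/(N*T)) via Riemann sum."""
    tau, nu = mb * Ts, nb / (N * T)
    t, t1, s = -Ta / 2 + min(0.0, tau), (N - 1) * T + Ta / 2 + max(0.0, tau), 0j
    while t <= t1:
        s += u(t) * u(t - tau) * cmath.exp(-2j * cmath.pi * nu * (t - tau)) * dt
        t += dt
    return s

assert abs(ambiguity(0, 0) - 1) < 0.05               # unit peak at the origin
for mb, nb in [(1, 0), (M - 1, 0), (0, 1), (0, N - 1), (5, 2)]:
    assert abs(ambiguity(mb, nb)) < 0.05             # ~ delta(mb)delta(nb)
```

Within truncation and discretization error, the ambiguity function vanishes on the nonzero DD grid points, consistent with the orthogonality of the DDOP with respect to the DD resolutions.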
When $2Q\ll M$ and therefore $T_{a}\ll T$, it has been proved in [10] that $u(t)$ satisfies the orthogonal property $\mathcal{A}_{u,u}\left(\bar{m}\frac{T}{M},\bar{n}\frac{1}{NT}\right)=\delta(\bar{m})\delta(\bar{n}),$ (8) for $|\bar{m}|\leq M-1$ and $|\bar{n}|\leq N-1$. Because the corresponding JTFR of $\mathcal{R}_{\textrm{DD}}=\frac{T}{M}\times\frac{1}{NT}=\frac{1}{MN}\ll 1$ does not allow the existence of a (bi)orthogonal WH set, a natural question arises: how can the existing DDOP in [10] be explained, and is there any general DDOP design principle? ## IV Global and Local (Bi)Orthogonality From (8), one can see that this orthogonality concerns $M$ symbols with $N$ subcarriers, and therefore it only applies to a part of the TF plane. Since an MC modulation has a limited number of symbols and subcarriers, the orthogonality within this signal bandwidth and duration is sufficient. As a result, we can reformulate its pulse design problem, and introduce a concept of local orthogonality. ### IV-A Global and local (bi)orthogonality Analogous to (2) and (3), the (bi)orthogonal pulse design problem taking the limited number of symbols and subcarriers into account is to find WH subsets $\left(g,\mathcal{T},\mathcal{F},M,N\right)$ and $\left(\gamma,\mathcal{T},\mathcal{F},M,N\right)$ that satisfy the orthogonal condition of $\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n}),\,\,m,\dot{m}\in\mathbb{Z}_{M},n,\dot{n}\in\mathbb{Z}_{N},$ (9) or the biorthogonal condition of $\langle g_{m,n},\gamma_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n}),\,\,m,\dot{m}\in\mathbb{Z}_{M},n,\dot{n}\in\mathbb{Z}_{N},$ (10) where $\displaystyle\mathbb{Z}_{M}=\\{0,1,\cdots,M-1\\},\,\,\mathbb{Z}_{N}=\\{0,1,\cdots,N-1\\}.$ (11) We call (9) and (10) the local orthogonal condition and local biorthogonal condition, respectively. 
Because of $\displaystyle\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\mathcal{A}_{g,g}(\bar{m}\mathcal{T},\bar{n}\mathcal{F})e^{j2\pi n\bar{m}\mathcal{F}\mathcal{T}},$ (12) where $\bar{m}=\dot{m}-m$ and $\bar{n}=\dot{n}-n$, the local orthogonal condition in (9) is equivalent to $\displaystyle\mathcal{A}_{g,g}(\bar{m}\mathcal{T},\bar{n}\mathcal{F})=\delta(\bar{m})\delta(\bar{n}),$ (13) for $|\bar{m}|\leq M-1,|\bar{n}|\leq N-1$. A similar result can be obtained for the local biorthogonal condition in (10). It is noteworthy that the WH frame theory-based results regarding (bi)orthogonal WH sets are rigorously correct. Since the WH set is a time-frequency analysis tool for functions in $L^{2}(\mathbb{R})$, it considers the whole TF plane where $m,n\in\mathbb{Z}$, and corresponds to a signal without the limitation of bandwidth and duration. To make this possible, given $\mathcal{T}$ and $\mathcal{F}$, $g(t)$ is independent of the number of symbols $M$ and the number of subcarriers $N$, so that it can be shifted over the whole TF plane. In other words, to achieve the global (bi)orthogonality in (2) and (3), $g(t)$ is parameterized only by $\mathcal{T}$ and/or $\mathcal{F}$. On the other hand, for MC modulation, a WH subset that satisfies the local (bi)orthogonality in (9) and (10) is sufficient. Obviously, $g(t)$ that achieves the global (bi)orthogonality can form such a WH subset. However, what we really need is just a WH subset, and it is not necessarily bounded by the WH frame theory for the WH set. In fact, pulses parameterized not only by $\mathcal{T}$ and $\mathcal{F}$ but also by $M$ and $N$ can achieve the local orthogonality. An example is the DDOP in (7). ### IV-B Orthogonality with respect to $\mathcal{F}$ Let us consider a fixed $m$ in $g_{m,n}$, and investigate the orthogonality with respect to the frequency resolution $\mathcal{F}$. 
We want to find $g(t)$ that can achieve the orthogonality among $g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})}$ with a given $m$ but variable $n$, where $0\leq t\leq T_{g}$ and $T_{g}=\mathfrak{T}=1/\mathcal{F}$. Without loss of generality, let $m=0$. We can obtain the following results: 1. (F1) Unbounded $n$ ($n\in\mathbb{Z}$): $g(t)$ is the rectangular pulse $\Pi_{\mathfrak{T}}(t)$, which is independent of $N$. 2. (F2) Bounded $n$ ($|n|\leq N-1$): We have the following lemma: ###### Lemma 1. When $g(t)$ is a periodic function with period $\frac{\mathfrak{T}}{N}$ for $0\leq t\leq T_{g}$ and $T_{g}=\mathfrak{T}$, it satisfies the orthogonal property that $\mathcal{A}_{g,g}\left(0,n\mathcal{F}\right)=\delta(n),$ (14) for $|n|\leq N-1$. ###### Proof: Since the period of $g(t)$ is $\frac{\mathfrak{T}}{N}$, $g(t)$ can be written as $\displaystyle g(t)=g\left(t+\dot{n}\frac{\mathfrak{T}}{N}\right),\,\,\,0\leq\dot{n}\leq N-1,$ (15) for $0\leq t<\frac{\mathfrak{T}}{N}$. Then, bearing in mind that $\mathfrak{T}=1/\mathcal{F}$, we have $\displaystyle\mathcal{A}_{g,g}(0,n\mathcal{F})$ $\displaystyle=\int_{0}^{T_{g}}g(t)g^{*}(t)e^{-j2\pi n\mathcal{F}t}dt,$ $\displaystyle=\sum_{\dot{n}=0}^{N-1}\int_{\dot{n}\frac{\mathfrak{T}}{N}}^{(\dot{n}+1)\frac{\mathfrak{T}}{N}}g(t)g^{*}(t)e^{-j2\pi n\mathcal{F}t}dt,$ $\displaystyle=\sum_{\dot{n}=0}^{N-1}e^{-j2\pi\frac{n\dot{n}}{N}}\int_{0}^{\frac{\mathfrak{T}}{N}}g(t)g^{*}(t)e^{-j2\pi n\mathcal{F}t}dt,$ $\displaystyle=\delta(n),$ (16) for $|n|\leq N-1$, which completes the proof. ∎ Lemma 1 indicates that once there is a constraint imposed on the number of subcarriers, there are _infinitely many_ pulses that can satisfy the orthogonality with respect to $\mathcal{F}$. In particular, _regardless of $B_{g}$_, $g(t)$ can achieve the orthogonality among $N$ subcarriers with a subcarrier spacing $\mathcal{F}$, as long as it is an aforementioned periodic function. An example of such $g(t)$ for $N=4$ is shown in Fig. 2. 
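Lemma 1 is easy to confirm numerically: any discretized $g(t)$ built by tiling an arbitrary base segment $N$ times over $[0,\mathfrak{T}]$ yields $\mathcal{A}_{g,g}(0,n\mathcal{F})=0$ for $1\leq|n|\leq N-1$, while orthogonality generally fails at $n=N$. The sample count and random base values below are arbitrary illustrative assumptions.

```python
import cmath, random

random.seed(1)
N = 4                     # subcarriers
K = 64                    # samples over one symbol period 𝔗 (multiple of N)

# arbitrary base segment of length K/N, tiled N times -> period 𝔗/N as in Lemma 1
base = [random.uniform(0.5, 1.5) for _ in range(K // N)]
g = base * N

def amb_freq(n):
    """Discrete A_{g,g}(0, n𝓕) = Σ_k |g[k]|^2 e^{-j2πnk/K} (up to the step dt)."""
    return sum(abs(gk) ** 2 * cmath.exp(-2j * cmath.pi * n * k / K)
               for k, gk in enumerate(g))

e = amb_freq(0).real                      # signal energy
for n in range(1, N):                     # orthogonal for 1 <= |n| <= N-1
    assert abs(amb_freq(n)) < 1e-9 * e
    assert abs(amb_freq(-n)) < 1e-9 * e
assert abs(amb_freq(N)) > 1e-6 * e        # but generally not at n = N
```

Orthogonality fails at $n=N$ precisely because that frequency bin picks up the fundamental harmonic of the period-$\frac{\mathfrak{T}}{N}$ envelope, consistent with the bound $|n|\leq N-1$ in (14).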
Figure 2: $g(t)$ orthogonal w.r.t. $\mathcal{F}=\frac{1}{\mathfrak{T}}$ for $|n|\leq N-1$ and fixed $m$. It is noteworthy that in contrast to (F1) where $B_{g}$ is proportional to $\mathcal{F}$, (F2) _decouples_ $B_{g}$ and $\mathcal{F}$, and consequently allows pulses with much wider bandwidth to achieve orthogonality among $N$ subcarriers. On the other hand, to avoid intersymbol interference (ISI) and achieve the orthogonality among MC symbols time-multiplexed by $\mathcal{T}$, we need $B_{g}$ to be comparable to $\frac{1}{\mathcal{T}}$. The decoupling of $\mathcal{F}$ and $B_{g}$ in (F2) actually paves the way to design orthogonal pulses with respect to _independent TF resolutions_. ### IV-C Orthogonality with respect to $\mathcal{T}$ Similarly, we can consider a fixed $n$ in $g_{m,n}$, and investigate the orthogonality with respect to the time resolution $\mathcal{T}$. Our target now is to find $g(t)$ that can achieve the orthogonality among $g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})}$ with a fixed $n$ but different $m$. When $n\neq 0$, we have the following straightforward answer with _isolated_ pulses/sub-pulses: 1. (T1) Unbounded $m$ ($m\in\mathbb{Z}$): $g(t)$ can be any function with duration $T_{g}\leq\mathcal{T}$, which is independent of $M$. 2. (T2) Bounded $m$ ($|m|\leq M-1$): $g(t)$ consists of $\dot{N}>1$ sub-pulses $b_{\dot{n}}(t),0\leq\dot{n}\leq\dot{N}-1$, where these sub-pulses are temporally spaced by $M\mathcal{T}$ and each sub-pulse has a duration of $T_{b_{\dot{n}}}\leq\mathcal{T}$. Meanwhile, when $n=0$, we have another answer with _overlapped_ pulses/sub-pulses: 1. (T3) Unbounded $m$ ($m\in\mathbb{Z}$): an SRN pulse for symbol interval $\mathcal{T}$, which is also independent of $M$. 2. (T4) Bounded $m$ ($|m|\leq M-1$): $g(t)$ consists of $\dot{N}>1$ SRN sub-pulses for symbol interval $\mathcal{T}$, where these sub-pulses are temporally spaced by $M\mathcal{T}$. The SRN sub-pulse can have any duration. 
It is interesting to note that $g(t)$ in (T4) actually can form a periodic function that satisfies (F2), when $\dot{N}$ is large enough. Figure 3: $u_{c}(t)$ for $D=1$. ## V General DDOP design Recall that the orthogonal property of the DDOP in (8) is subject to a duration constraint on the SRN sub-pulse, given by $T_{a}\ll T$. In practice, it is desirable to relax such a constraint to enable flexible design. In this section, we propose a general DDOP design, where the SRN sub-pulse’s duration constraint is relaxed. Letting $\dot{N}=N$ and $\mathcal{T}=\frac{T}{M}$, $g(t)$ in (T4) becomes the DDOP $u(t)$ in (7), except for the unbounded $T_{a}$. From (F2), we know that for the frequency resolution $\mathcal{F}=\frac{1}{NT}$, the key to achieving the orthogonality among $N$ subcarriers is to form a periodic function with period $\frac{1}{N\mathcal{F}}=T$. This observation inspires us to use $u_{c}(t)$, a cyclically extended version of $u(t)$, as the transmit pulse, while the receive pulse is still $u(t)$. Furthermore, because $\mathcal{A}_{u_{c},u}(m\frac{T}{M},n\frac{1}{NT})$ is calculated between $u_{c}(t)$ and $u(t-m\frac{T}{M})e^{j2\pi\frac{n}{NT}(t-m\frac{T}{M})}$, the problem becomes how to let $u_{c}(t)$ have the specified periodicity within the range of $u(t-m\frac{T}{M})e^{j2\pi\frac{\bar{n}}{NT}(t-m\frac{T}{M})}$ for $|m|\leq M-1$. We have the following lemma: ###### Lemma 2. Let $u(t)$ consist of $N$ SRN pulses $a_{T/M,N}(t)$ temporally spaced by $T$. Then, the orthogonal property $\mathcal{A}_{u_{c},u}\left(m\frac{T}{M},n\frac{1}{NT}\right)=\delta(m)\delta(n),$ (17) holds for $|m|\leq M-1$ and $|n|\leq N-1$, where $u_{c}(t)$ is a cyclically extended version of $u(t)$ that is a periodic function with period $T$ during $-(M-1)\frac{T}{M}\leq t\leq(MN-1)\frac{T}{M}+T_{a}$. 
###### Proof: Let us first check the periodicity of $u_{c}(t)$ within the range of $-(M-1)\frac{T}{M}\leq t\leq(MN-1)\frac{T}{M}+T_{a}$, whose endpoints correspond to the start of the first sub-pulse of $u(t+(M-1)\frac{T}{M})$ and the end of the last sub-pulse of $u(t-(M-1)\frac{T}{M})$, respectively. From (7), we can divide $u(t)$ into $N$ segments, where $u(t)=\sum_{n=0}^{N-1}u_{n}(t)$ and the $n$th segment is given by $u_{n}(t)=u(t)$ for $nT\leq t<(n+1)T$. Figure 4: $u_{c}(t)$ for $D=2$. Let $D=\lceil T_{a}/T\rceil$. If $D=1$, we have $u_{n}(t)=a(t-nT),$ which implies that the periodicity within $-(M-1)\frac{T}{M}\leq t\leq(MN-1)\frac{T}{M}+T_{a}$ can be obtained by cyclically extending $u(t)$ to $u_{c}(t)=\sum_{n=-1}^{N}a(t-nT)$. Similarly, when $D>1$, the periodicity can be obtained by further extending to $\displaystyle u_{c}(t)=\sum_{n=-D}^{N-1+D}a(t-nT).$ (18) Two examples of $u_{c}(t)$ with $D=1,2$ are shown in Fig. 3 and Fig. 4, respectively, where the first sub-pulse of $u(t+(M-1)\frac{T}{M})$ and the last sub-pulse of $u(t-(M-1)\frac{T}{M})$ are also plotted with dashed lines. Next, let us verify the ambiguity functions. Due to the aforementioned periodicity of $u_{c}(t)$, we have $\displaystyle u_{c}(t)=u_{c}(t+\dot{n}T),\,\,\,0\leq\dot{n}\leq N-1,$ (19) for $m\frac{T}{M}\leq t\leq m\frac{T}{M}+T_{u}$, where $|m|\leq M-1$ and $T_{u}=(N-1)T+T_{a}$. Then, using (19), the ambiguity function between $u_{c}(t)$ and $u(t)$ for $|n|\leq N-1$ and $|m|\leq M-1$ can be calculated similarly to (16), and is given by $\displaystyle\mathcal{A}_{u_{c},u}(m\frac{T}{M},n\frac{1}{NT})$ $\displaystyle=\int_{m\frac{T}{M}}^{m\frac{T}{M}+T_{u}}u_{c}(t)u^{*}(t-m\frac{T}{M})e^{-j2\pi n\frac{1}{NT}(t-m\frac{T}{M})}dt,$ $\displaystyle=\delta(n)\delta(m).$ (20) (20) completes the proof. ∎ Lemma 2 indicates that the constraint on $T_{a}$ in $u(t)$ can be removed. Once the appropriate CP and CS are added in accordance with (18), the desired local orthogonality can be achieved as well. 
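The cyclic extension (18) and Lemma 2 can likewise be checked numerically. The sketch below reuses a truncated root-raised-cosine (RRC) sub-pulse, as a stand-in SRN pulse, whose duration deliberately exceeds $T$ (so $D=\lceil T_{a}/T\rceil>0$), builds $u_{c}(t)$ per (18), and confirms $\mathcal{A}_{u_{c},u}(m\frac{T}{M},n\frac{1}{NT})\approx\delta(m)\delta(n)$; all parameter values are illustrative assumptions.

```python
import math, cmath

# Illustrative parameters (assumptions): 2Q/M = 2, so T_a = 2T and D = ceil(T_a/T) = 2.
T, M, N, Q, rho, os = 1.0, 8, 4, 8, 0.3, 8
Ts = T / M
dt = Ts / os
Ta = 2 * Q * Ts
D = math.ceil(Ta / T)

def rrc(t):   # truncated root-raised-cosine for interval Ts
    if abs(t) > Ta / 2:
        return 0.0
    if abs(t) < 1e-12:
        return (1 + rho * (4 / math.pi - 1)) / math.sqrt(Ts)
    x = t / Ts
    if abs(abs(x) - 1 / (4 * rho)) < 1e-9:   # removable singularity
        return rho / math.sqrt(2 * Ts) * ((1 + 2 / math.pi) * math.sin(math.pi / (4 * rho))
                                          + (1 - 2 / math.pi) * math.cos(math.pi / (4 * rho)))
    num = math.sin(math.pi * x * (1 - rho)) + 4 * rho * x * math.cos(math.pi * x * (1 + rho))
    return num / (math.pi * x * (1 - (4 * rho * x) ** 2)) / math.sqrt(Ts)

e = sum(rrc(-Ta / 2 + k * dt) ** 2 for k in range(int(Ta / dt) + 1)) * dt
a = lambda t, s=math.sqrt(1 / (N * e)): s * rrc(t)          # energy 1/N

u = lambda t: sum(a(t - n * T) for n in range(N))           # receive pulse, eq. (7)
uc = lambda t: sum(a(t - n * T) for n in range(-D, N + D))  # transmit pulse, eq. (18)

def amb(m, n):
    """A_{u_c,u}(m*T/M, n/(N*T)), the integral form used in (20)."""
    tau, nu = m * Ts, n / (N * T)
    t, t1, s = tau - Ta / 2, tau + (N - 1) * T + Ta / 2, 0j
    while t <= t1:
        s += uc(t) * u(t - tau) * cmath.exp(-2j * cmath.pi * nu * (t - tau)) * dt
        t += dt
    return s

assert abs(amb(0, 0) - 1) < 0.05
for m, n in [(1, 0), (M - 1, 0), (-(M - 1), 0), (0, 1), (0, N - 1), (3, 2)]:
    assert abs(amb(m, n)) < 0.05     # local orthogonality of eq. (17)
```

Here the sub-pulses span $2T$, well outside the $T_{a}\ll T$ regime of (8), yet with the CP and CS of (18) the local orthogonality is restored, as Lemma 2 asserts.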
As a result, the transmit pulse of ODDM modulation is in general $u_{c}(t)$, where the extension parameter $D=\lceil T_{a}/T\rceil=\lceil 2Q/M\rceil$. When $M\gg 2Q$, we have $2Q/M\approx 0$. Then, as proved in [10], the ODDM can just employ the DDOP $u(t)$ without cyclic extension ($D=0$). ## VI TF signal localization and numerical results ### VI-A Frequency domain representation of DDOP The frequency domain representation plays an important role in the analysis of pulses. In the following, we will derive $U(f)$, the frequency domain representation of $u(t)$. It is well-known that the frequency domain representation of an impulse train $\dot{u}(t)=\sum_{n=-\infty}^{\infty}\delta(t-nT),$ is a Fourier series and can also be written as an impulse train in the frequency domain $\dot{U}(f)=\frac{1}{T}\sum_{n=-\infty}^{\infty}\delta(f-\frac{n}{T}).$ It is interesting to observe that the DDOP can be obtained from $\dot{u}(t)$ by applying a rectangular window $\operatorname{\Pi}_{NT}\left(t+\frac{T}{2}\right)$ followed by an $a(t)$-based filtering. Then, we have $\displaystyle u\left(t+\frac{T_{a}}{2}\right)=\left(\dot{u}(t)\times\operatorname{\Pi}_{NT}\left(t+\frac{T}{2}\right)\right)\star a(t),$ (21) where $\star$ denotes convolution. Since multiplication and convolution in the time domain correspond to convolution and multiplication in the frequency domain, respectively, we have $\displaystyle U(f)$ $\displaystyle=e^{-j2\pi f\frac{T_{a}}{2}}A(f)\left(\dot{U}(f)\star e^{-j2\pi f\frac{(N-1)T}{2}}\operatorname{Sinc}(fNT)\right),$ $\displaystyle=\frac{e^{-j2\pi f\tilde{T}}}{T}A(f)\sum_{n=-\infty}^{\infty}e^{j2\pi\frac{n(N-1)}{2}}\operatorname{Sinc}(fNT- nN),$ (22) where $\tilde{T}=(T_{a}+(N-1)T)/2$ and $A(f)$ is the Fourier transform of $a(t)$. Without loss of generality, let $M$ be an even number. Then, the shape of $|U(f)|$ is plotted in Fig. 5, where the shape of $|\operatorname{Sinc}(fNT-nN)|$ is truncated for the purpose of display. 
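The structure behind (21)–(22) — the DDOP spectrum being the sub-pulse spectrum $A(f)$ multiplied by a periodized-sinc factor — follows directly from $u(t)=\sum_{n=0}^{N-1}a(t-nT)$, since time shifts map to phase ramps: $U(f)=A(f)\sum_{n=0}^{N-1}e^{-j2\pi fnT}$. The sketch below checks this identity numerically with an arbitrary compact sub-pulse (an illustrative assumption); the exact normalization in (22) depends on the convention for $\operatorname{\Pi}_{NT}$, so only the product structure is tested.

```python
import math, cmath

T, N = 1.0, 4
dt = 1.0 / 64                                    # sample step (illustrative)
Ta = 0.5                                         # sub-pulse duration (illustrative)

def a(t):                                        # arbitrary smooth compact sub-pulse
    return math.cos(math.pi * t / Ta) ** 2 if abs(t) <= Ta / 2 else 0.0

def u(t):                                        # DDOP-style pulse train, eq. (7)
    return sum(a(t - n * T) for n in range(N))

def ft(x, f, t0, t1):
    """Riemann-sum Fourier transform of x over [t0, t1]."""
    s, t = 0j, t0
    while t <= t1:
        s += x(t) * cmath.exp(-2j * cmath.pi * f * t) * dt
        t += dt
    return s

for f in [0.0, 0.3, 1.0, 2.7, 1 / (N * T)]:
    U = ft(u, f, -Ta / 2, (N - 1) * T + Ta / 2)
    A = ft(a, f, -Ta / 2, Ta / 2)
    Dirichlet = sum(cmath.exp(-2j * cmath.pi * f * n * T) for n in range(N))
    assert abs(U - A * Dirichlet) < 1e-9         # U(f) = A(f) * Dirichlet factor
```

The check confirms the comb structure in (22): the spectrum is $A(f)$ shaped by a Dirichlet kernel whose main lobes sit at multiples of $\frac{1}{T}$.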
Now, it becomes clear that $\operatorname{Sinc}(fNT-nN)$ and $A(f)$ correspond to the orthogonality with respect to $\mathcal{F}=\frac{1}{NT}$ and $\mathcal{T}=\frac{T}{M}$, respectively. Figure 5: $|U(f)|$. ### VI-B TF signal localization comparison Figure 6: TF signal localization comparison of modulation waveforms. For the TF region bounded by the sampling rate and duration of $W_{0}=\frac{M}{T}$ and $T_{0}=NT$, the corresponding degrees of freedom (DoF) of the signal is around $W_{0}T_{0}=MN$. Then, an MC modulation scheme employs $MN$ orthogonal pulses corresponding to its TF resolutions to transmit $MN$ digital symbols, resulting in its own TF localization structure. With $u(t)$ in (7) and $U(f)$ in (22), like that of OFDM in [7], the TF signal localization structure of ODDM modulation can be schematically illustrated in Fig. 6, where those of other modulation waveforms are also given for comparison. It can be observed that: 1. 1) For SC modulation, which is a time-division multiplexing (TDM) scheme, the $MN$ digital symbols are conveyed by $MN$ SRN pulses for symbol interval $\frac{T}{M}$. The pulses are overlapped only in the time domain. 2. 2) For a frequency-division multiplexing (FDM) scheme, an example is the OFDM modulation with frequency resolution $\frac{1}{NT}$, where $MN$ digital symbols are conveyed by $MN$ rectangular pulses $\operatorname{\Pi}_{NT}(t)$ modulated by $MN$ subcarriers, respectively. The pulses are overlapped only in the frequency domain. 3. 3) For the conventional OFDM modulation with frequency resolution $\frac{1}{T}$ and time resolution $T$, $MN$ digital symbols are conveyed by $N$ OFDM symbols, where each OFDM symbol has $M$ rectangular pulses $\operatorname{\Pi}_{T}(t)$ modulated by $M$ subcarriers, respectively. Since the $N$ OFDM symbols are isolated in the time domain, these pulses are also overlapped only in the frequency domain. 4. 
4) For ODDM modulation with frequency resolution $\frac{1}{NT}$ and time resolution $\frac{T}{M}$, $MN$ digital symbols are conveyed by $M$ pulse trains $u(t)$ modulated by $N$ subcarriers, respectively. These pulses are overlapped in both time and frequency domains to achieve the local orthogonality with respect to $\frac{T}{M}$ and $\frac{1}{NT}$. ### VI-C Numerical results Now, we present the numerical results for the ambiguity function of the DDOP. A three-dimensional plot of the ambiguity function in (17) is shown in Fig. 7, where $\mathcal{F}=\frac{1}{NT}$, $\mathcal{T}=\frac{T}{M}$ with $M=32$, $N=8$. $a(t)$ is a root raised cosine pulse with roll-off factor $\rho=0.1$ and $Q=20$. Because $D=2$ for this parameter setting, we adopt the general DDOP design. The corresponding 2D plot of $\left|\mathcal{A}_{u_{c},u}\left(m\frac{T}{M},n\frac{1}{NT}\right)\right|$ with $n=0$ is also given in Fig. 8. One can see that with appropriate CP and CS, the DDOP can achieve the local orthogonality within $|m|\leq M-1$ and $|n|\leq N-1$. For $|m|\geq M$ or $|n|\geq N$, the ambiguity function repeats with time period $T$ and frequency period $\frac{1}{T}$, if we further extend the CP and CS. The elegant TF localization of the ODDM scheme shown in Fig. 6 demonstrates that every information symbol is evenly distributed over its TF region. Thus, it is flexible for allocating TF resources in multi-user communication system design. In addition, the perfect local orthogonality of the DDOP’s ambiguity function with respect to DD resolutions, shown in Figs. 7 and 8, can be exploited for designing integrated sensing and communication (ISAC) systems. We will investigate these topics in our future work. Figure 7: $\left|\mathcal{A}_{u_{c},u}\left(m\frac{T}{M},n\frac{1}{NT}\right)\right|$, $M=32$ and $N=8$. Figure 8: $\left|\mathcal{A}_{u_{c},u}\left(m\frac{T}{M},n\frac{1}{NT}\right)\right|$, $M=32$ and $N=8$, $n=0$. 
## VII Conclusion In this paper, the recently discovered DDOP is analyzed in terms of local orthogonality, frequency-domain representation and ambiguity function. We clarified the DDOP’s local orthogonality and justified its existence as a WH subset, without violating the WH frame theory which governs the global orthogonality corresponding to the WH set. Several sufficient conditions for locally-orthogonal pulses were presented, and a general DDOP design was proposed by introducing CP and CS to the DDOP. We derived the DDOP’s frequency domain representation, and compared the DDOP-based ODDM modulation with other modulation schemes in terms of TF signal localization. We demonstrated the perfect local orthogonality of the DDOP with respect to DD resolutions by its ambiguity function. ## References * [1] J. M. Wozencraft and I. M. Jacobs, _Principles of Communication Engineering_. Wiley, 1965. * [2] I. Daubechies, “The wavelet transform, time-frequency localization and signal analysis,” _IEEE Trans. Inf. Theory_ , vol. 36, no. 5, pp. 961–1005, 1990. * [3] H. G. Feichtinger and T. Strohmer, Eds., _Gabor Analysis and Algorithms: Theory and Applications_. Birkhäuser, Boston, MA, 1998. * [4] K. Gröchenig, _Foundations of Time-Frequency Analysis_. Birkhäuser, Boston, MA, 2001. * [5] J. Wexler and S. Raz, “Discrete Gabor expansions,” _Signal Process._ , vol. 21, no. 3, pp. 207–220, 1990. * [6] A. Janssen, “Duality and biorthogonality for Weyl-Heisenberg frames,” _J. Fourier Anal. Applicat._ , vol. 1, no. 4, pp. 403–436, 1995. * [7] G. Matz, H. Bolcskei, and F. Hlawatsch, “Time-frequency foundations of communications: Concepts and tools,” _IEEE Signal Process. Mag._ , vol. 30, no. 6, pp. 87–96, 2013. * [8] A. Sahin, I. Guvenc, and H. Arslan, “A survey on multicarrier communications: Prototype filters, lattice structures, and implementation aspects,” _IEEE Commun. Surveys Tuts._ , vol. 16, no. 3, pp. 1312–1338, 2014. * [9] H. Lin and J. 
Yuan, “Multicarrier modulation on delay-Doppler plane: Achieving orthogonality with fine resolutions,” in _Proc. of IEEE ICC_ , 2022, pp. 1–6. * [10] H. Lin and J. Yuan, “Orthogonal delay-Doppler division multiplexing modulation,” _IEEE Trans. Wireless Commun._ , 2022, to appear. * [11] D. Slepian, “On bandwidth,” _Proc. IEEE_ , vol. 64, no. 3, pp. 292–300, 1976. * [12] R. Haas and J.-C. Belfiore, “A time-frequency well-localized pulse for multiple carrier transmission,” _Wireless Personal Commun._ , vol. 5, no. 1, pp. 1–18, 1997. * [13] B. Le Floch, M. Alard, and C. Berrou, “Coded orthogonal frequency division multiplex,” _Proc. IEEE_ , vol. 83, no. 6, pp. 982–996, 1995. * [14] W. Kozek and A. Molisch, “Nonorthogonal pulseshapes for multicarrier communications in doubly dispersive channels,” _IEEE J. Sel. Areas Commun._ , vol. 16, no. 8, pp. 1579–1589, 1998. * [15] T. Strohmer and S. Beaver, “Optimal OFDM design for time-frequency dispersive channels,” _IEEE Trans. Commun._ , vol. 51, no. 7, pp. 1111–1122, 2003. * [16] I. Daubechies, _Ten Lectures on Wavelets_. SIAM, 1992. * [17] P. Bello, “Characterization of randomly time-variant linear channels,” _IEEE Trans. Commun. Syst._ , vol. 11, no. 4, pp. 360–393, 1963.
# An Efficient Temporary Deepfake Location Approach Based Embeddings for Partially Spoofed Audio Detection ###### Abstract Partially spoofed audio detection is a challenging task, lying in the need to accurately locate the authenticity of audio at the frame level. To address this issue, we propose a fine-grained partially spoofed audio detection method, namely Temporal Deepfake Location (TDL), which can effectively capture information of both features and locations. Specifically, our approach involves two novel parts: an embedding similarity module and a temporal convolution operation. To enhance the identification between real and fake features, the embedding similarity module is designed to generate an embedding space that can separate the real frames from fake frames. To effectively concentrate on the position information, the temporal convolution operation is proposed to calculate the frame-specific similarities among neighboring frames, and dynamically select informative neighbors for convolution. Extensive experiments show that our method outperforms baseline models on the ASVspoof2019 Partial Spoof dataset and demonstrates superior performance even in the cross-dataset scenario. The code is released online111https://github.com/xieyuankun/TDL-ADD. Index Terms— partially spoofed audio detection, temporal deepfake location, embedding learning. ## 1 Introduction AI generated content (AIGC) technology has witnessed swift progress in recent years, particularly in speech-related applications like text-to-speech (TTS) [1, 2, 3] and voice conversion (VC) [4, 5, 6]. Although these technologies have brought about convenience, they have also posed significant security threats. Thus, various initiatives and challenges, such as ASVspoof [7, 8], have been established to foster research on countermeasure solutions that safeguard speech applications and human listeners against spoofing attacks. 
Nevertheless, a significant scenario has been overlooked in most datasets and challenges: one where a bonafide speech utterance is contaminated by synthesized speech segments, leading to partial spoofing (PS). Attackers can use PS to alter sentence semantics, and such modifications can be easily accomplished at low cost. For instance, attackers can easily modify single words such as times, places, and characters in a sentence to dramatically change its semantics. Furthermore, if attackers have knowledge of phonology, they can manipulate vowels and even consonants, such as “pan,” “pin,” “pen,” at a granularity smaller than the word level. Therefore, defending against such fine-grained PS scenarios poses significant challenges for defenders. In recent years, there have been several studies on PS scenarios for Audio Deepfake Detection (ADD). Yi et al. [9] create a dataset that focuses on changing a few words in an utterance for half-truth audio detection. At the same time, Zhang et al. [10] construct a speech database called ‘PartialSpoof’ designed for PS scenarios. These two datasets mark the beginning of research on the PS scenario in the ADD task. Afterward, Zhang et al. [11] propose the SELCNN network to enhance utterance-level detection accuracy. Lv et al. [12] use Wav2Vec2 (W2V2) [13] as the front-end and ECAPA-TDNN [14] as the back-end, achieving first rank in ADD 2022 Track 2 [15]. Although the above studies show effectiveness at utterance-level detection in PS, they do not pinpoint specific segments with precision. Thus, Zhang et al. [16] extended the previous utterance-level PS dataset labels to the frame level and proposed corresponding W2V2-based countermeasures to enhance frame-level detection capability. Fig. 1: The entire structure of our proposed Temporal Deepfake Location (TDL) method. 
The aforementioned methods solely utilize existing ADD models such as LCNN; approaches tailored to the PS scenario, particularly precise frame-level localization, are currently lacking. To address this challenge, we propose a novel Temporal Deepfake Location (TDL) method. For the front-end, we take advantage of W2V2 [17]. Trained on a vast corpus of genuine speech from diverse source domains, W2V2 can effectively discriminate between real and fake speech in complex acoustic scenarios. For the back-end, our primary focus is fine-grained localization of genuine and spoofed speech segments. To clearly distinguish real from fake at the feature level, we first design the embedding similarity module to separate the real and fake frames in an embedding space and obtain a high-quality embedding similarity vector. Then, we propose the temporal convolution operation to locate the spoofed region from the embedding vector. The local similarity for each temporal position is calculated from the embedding. By this means, we obtain a frame-specific weight to guide the convolution, yielding a temporally sensitive calculation. Our main contributions can be summarized as follows: * • We propose the TDL method, an efficient and effective ADD method for PS scenarios that combines an embedding similarity module and a temporal convolution operation to effectively capture both feature and positional information. * • The proposed method outperforms baseline models on the ASVspoof2019 PS dataset and demonstrates superior performance even in cross-dataset experiments. ## 2 Proposed Method ### 2.1 Problem statement and overview In PS scenarios, fake audio segments are inserted within genuine speech. Our target is to detect the real and fake segments at the frame level. Given the large-scale self-supervised audio feature $f=(f_{1},f_{2},...f_{T})\in R^{D\times T}$, where $D$ and $T$ denote the dimension of the audio feature and the number of frames respectively. 
The whole task is defined as taking the input feature $f$ and outputting the frame-level label $y=(y_{1},y_{2},...y_{T})\in\{0,1\}^{T}$, where 1 represents real frames and 0 represents fake frames. The framework of our proposed TDL is depicted in Figure 1. First, we utilize Wav2Vec-XLS-R to extract frame-level features from the raw audio. Then, for enhanced identification of genuine and fake distinctions at the embedding level, we devise an embedding similarity module to segregate authentic and synthetic frames within the embedding space. Next, to capture positional information, we adopt the temporal convolution operation, which focuses on frame-specific similarities among neighboring frames. Finally, we employ 1D convolutional layers and fully connected layers for downsampling to the frame-level label to compute the binary cross-entropy (BCE) loss. ### 2.2 W2V2 front-end The W2V2-based front-end is trained by solving a contrastive task over a masked feature encoder. First, speech signals of various lengths are passed through a feature extractor consisting of seven convolutional neural network (CNN) layers. Subsequently, context representations are obtained using a Transformer network [18] comprising 24 layers, 16 attention heads, and an embedding size of 1024. In practice, we utilize the Hugging Face version of wav2vec2-XLS-R-300M222https://huggingface.co/facebook/wav2vec2-xls-r-300m and freeze the weights of the front-end. The front-end model is pre-trained on 436k hours of unannotated genuine speech data in 128 languages. Consequently, the last hidden states from the Transformer can effectively represent the contextualized information of genuine speech, which differs from that of partially fake speech. ### 2.3 Embedding similarity module To better capture feature-level information, we first distinguish the real and fake frames in the embedding space. 
Specifically, the W2V2 features are fed into a CONV module, consisting of two sequential 1D-CNNs, which downsamples the embedding dimension from 1024 to 32. The embedding vector is $L2$-normalized. We thus obtain an embedding vector $e=(e_{1},e_{2},...e_{T})\in R^{D\times T}$. In the embedding similarity module, we utilize cosine similarity to measure the similarity of two embedding vectors $e_{u}$ and $e_{v}$ as follows: $\mathcal{S}\left(\mathbf{e}_{u},\mathbf{e}_{v}\right)=\frac{\mathbf{e}_{u}^{T}\cdot\mathbf{e}_{v}}{\left\|\mathbf{e}_{u}\right\|_{2}\cdot\left\|\mathbf{e}_{v}\right\|_{2}}.$ (1) To increase the distance between genuine and fake frames in the embedding space and improve generalizability, we compute the cosine similarities between genuine frames, between fake frames, and between genuine and fake frames. Specifically, we ensure that genuine frames from different positions are similar to each other, fake frames from different positions are similar to each other, and genuine and fake frames are dissimilar to each other. Thus, $\mathcal{L}_{ESM}^{Real}$ and $\mathcal{L}_{ESM}^{Fake}$ are proposed to make the real frames and the fake frames at different positions similar: $\mathcal{L}_{\mathrm{ESM}}^{\mathrm{Real}}=\max_{\forall e_{x},e_{y},x\neq y}\left\lfloor\tau_{\text{same }}-\mathcal{S}\left(\mathbf{e}_{x},\mathbf{e}_{y}\right)\right\rfloor_{+},$ (2) $\mathcal{L}_{\mathrm{ESM}}^{\mathrm{Fake}}=\max_{\forall e_{m},e_{n},m\neq n}\left\lfloor\tau_{\text{same }}-\mathcal{S}\left(\mathbf{e}_{m},\mathbf{e}_{n}\right)\right\rfloor_{+},$ (3) where $e_{x}$ and $e_{y}$ refer to distinct positions of real frames, while $e_{m}$ and $e_{n}$ refer to those of fake frames. $\tau_{\text{same}}$ is the similarity threshold between frames from the same category, and $\left\lfloor\dots\right\rfloor_{+}$ denotes clipping below at zero. 
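The same-class hinge terms in Eqs. (2)-(3) can be sketched in NumPy as follows. The threshold value $\tau_{\text{same}}=0.9$ is an assumption for illustration (the excerpt does not state the value used), and the $O(T^{2})$ pair loop is written for clarity rather than efficiency:

```python
import numpy as np

def cos_sim(eu, ev):
    """Cosine similarity between two embedding vectors (Eq. 1)."""
    return float(eu @ ev / (np.linalg.norm(eu) * np.linalg.norm(ev)))

def same_class_loss(frames, tau_same=0.9):
    """Hinge loss pulling same-class frames together (Eqs. 2-3): the
    worst-case clipped margin [tau_same - S(e_x, e_y)]_+ over all distinct
    frame pairs. `tau_same` is an assumed, illustrative threshold."""
    T = len(frames)
    worst = 0.0
    for x in range(T):
        for y in range(T):
            if x != y:
                worst = max(worst, max(0.0, tau_same - cos_sim(frames[x], frames[y])))
    return worst
```

For identical (collinear) embeddings the loss vanishes, while for orthogonal embeddings it saturates at $\tau_{\text{same}}$, so minimizing it drives same-class frames toward a common direction.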
It is noteworthy that although we know the positions of the frame-level authenticity labels, the temporal dimension of the W2V2-XLS-R features does not inherently align with these frame-level labels. To tackle this issue, we ascertain the temporal authenticity along the time dimension of the embedding vector by calculating the ratio between the temporal dimensions of the label and the embedding vector. $\mathcal{L}_{ESM}^{Diff}$ is proposed to separate the real and fake frames, which can be formulated as: $\mathcal{L}_{\mathrm{ESM}}^{\mathrm{Diff}}=\max_{\forall e_{r},e_{f}}\left\lfloor\mathcal{S}\left(\mathbf{e}_{r},\mathbf{e}_{f}\right)-\tau_{\text{Diff }}\right\rfloor_{+},$ (4) where $e_{r}$ and $e_{f}$ refer to the embedding vectors of real frames and fake frames. $\tau_{\text{Diff}}$ is the similarity threshold that constrains the distance between real and fake frames. Finally, the embedding similarity module is optimized by $\mathcal{L}_{ESM}$, which takes the three aforementioned losses into account jointly. The $\mathcal{L}_{ESM}$ is calculated as follows: $\mathcal{L}_{ESM}=\mathcal{L}_{ESM}^{Real}+\mathcal{L}_{ESM}^{Fake}+\mathcal{L}_{ESM}^{Diff}.$ (5) ### 2.4 Temporal convolution operation To effectively capture positional information, we use the embedding vector as a local attention mask to perform temporal convolution operations. Consider an audio feature ${\mathbf{X}}\in R^{D_{in}\times T}$, where $D_{in}$ and $T$ represent the dimension of the vector and the number of frames respectively. The temporal convolution layer learns a dynamic convolution kernel $\Bbbk\in R^{k\times D_{in}\times D_{out}}$, where $k$ is the size of the temporal kernel and $D_{out}$ is the dimension of the output feature. For convenience, we only describe the dynamic kernel $\Bbbk^{m}\in R^{k\times D_{in}}$ used to compute the $m^{th}$ channel of the output. 
Thus, the temporal convolution operation for the $t^{th}$ feature can be expressed as: $f_{t}^{m}=\sum_{i=0}^{k-1}\mathcal{\Bbbk}^{m}[i,:]\cdot\overline{\mathbf{X}}\left[:,t-\frac{k}{2}+i\right],$ (6) where $f_{t}^{m}$ is the value in the $m^{th}$ channel of the output feature vector, $[\cdots]$ denotes a slice of a matrix, and $(\cdot)$ denotes the inner product. $\overline{\mathbf{X}}$ is the modulated feature produced by the neighbor similarity calculation: $\displaystyle\overline{\mathbf{X}}\left[:,t-\frac{k}{2}+i\right]=\mathbf{X}\left[:,t-\frac{k}{2}+i\right]\times\mathbf{a}[i,t],$ (7) $\displaystyle i\in[0,\ldots,k-1],$ where the matrix $\mathbf{a}\in R^{k\times T}$ is a similarity matrix that calculates the local similarity for each temporal position, and $\mathbf{a}[i,t]$ indicates the similarity between the $t^{th}$ feature vector and its $k$ neighbors. In practice, we determine the dynamic kernel weights from the embedding vector generated by the ESM module. We apply the temporal convolution operation to the W2V2 features in two sequential 1D-CNNs, where both the input and output channels remain unchanged to maintain consistency in the temporal dimension. ### 2.5 Total loss Following the two consecutive temporal convolution layers, to capture additional temporal information and align with the label dimensions, we subsequently employ a 1D-CNN, fully connected (FC) layers, and a sigmoid activation function to calculate the BCE loss. The architecture details of TDL are shown in Table 1. The total loss is defined as follows: $\mathcal{L}_{all}=\mathcal{L}_{BCE}+\lambda\mathcal{L}_{ESM},$ (8) where $\lambda$ is set to 0.1 to balance the two losses. Table 1: Architecture of TDL network. 
module | kernel/stride | output shape
---|---|---
W2V2 | - | (batch,1024,1050)
CONV | 3/1 | (batch,512,1050)
CONV | 3/1 | (batch,32,1050)
TCONV | 3/1 | (batch,1024,1050)
TCONV | 3/1 | (batch,1024,1050)
CONV | 1/1 | (batch,2,1050)
Flatten/FC | - | (batch,132)

## 3 Experiments ### 3.1 Database Our experiments for the PS scenario include two public datasets: ASVspoof2019PS (19PS) [10] and LAV-DF [19]. 19PS is constructed based on the ASVspoof2019 LA database [20]. All experiments on the 19PS dataset are conducted using 160 ms resolution labels. The training, validation, and testing sets follow the original dataset allocation, consisting of 25,380, 24,844, and 71,237 utterances respectively. To evaluate the model’s generalizability, we additionally test the 19PS-trained model on the LAV-DF test set. LAV-DF is a multi-modal temporal forgery dataset containing a total of 26,100 videos in its test set. We extract the audio track of each video and create 160 ms frame-level genuine and fake labels. We calculate the percentage of samples belonging to the fake class at both the frame and utterance levels, as shown in Table 2. We observe that the frame-level labels in 19PS are balanced, facilitating model training. However, the LAV-DF dataset exhibits a lower proportion of spoofed segments, making it unbalanced and presenting greater challenges for detection. Table 2: Percentages (%) of the fake class in each dataset.

dataset | subset | frame-level | utterance-level
---|---|---|---
19PS | train | 53.00 | 89.83
19PS | dev | 52.31 | 89.74
19PS | test | 48.03 | 89.68
LAV-DF | test | 10.01 | 48.82

### 3.2 Implementation details To address the issue of variable-length audio inputs, we zero-pad all utterances to the maximum length in the training set. Frames of genuine speech are labeled one, while spoofed frames are labeled zero. 
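The 160 ms frame labeling above (genuine = 1, spoofed = 0, with padding beyond the utterance also labeled 0, as revisited in Sec. 4.2) can be sketched as follows. The helper name and the segment format, a list of (start_s, end_s) spoofed intervals, are illustrative assumptions rather than the authors' code:

```python
def frame_labels(duration_s, fake_segments, max_frames, resolution_s=0.16):
    """Build frame-level labels at 160 ms resolution: genuine frames are 1,
    spoofed frames are 0, and zero-padding beyond the utterance is also 0.
    `fake_segments` is an assumed list of (start_s, end_s) spoofed intervals."""
    n = int(round(duration_s / resolution_s))  # frames actually covered by audio
    labels = [0] * max_frames                  # padding defaults to 0
    for t in range(min(n, max_frames)):
        mid = (t + 0.5) * resolution_s         # frame-center time
        spoofed = any(s <= mid < e for s, e in fake_segments)
        labels[t] = 0 if spoofed else 1
    return labels
```

For example, a 1.6 s utterance with a spoofed region over 0.32-0.64 s and max_frames=12 yields ten audio frames (two of them 0) followed by two padded zeros.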
In the case of 19PS, the maximum duration of speech in the training set is 21.03 seconds, with a W2V2 feature dimension of (1050, 1024); the number of frames at a resolution of 160 ms is 132. For LFCC, we extract 60-dimensional LFCCs with a combination of static, delta, and delta-delta coefficients. For the training strategy, the Adam optimizer is adopted with $\beta_{1}=0.9$, $\beta_{2}=0.999$, $\varepsilon$ = $10^{-9}$, and a weight decay of $10^{-4}$. We train all of the models for 100 epochs. The learning rate is initialized to $10^{-5}$ and halved every 5 epochs. It is worth mentioning that no data augmentation is used in the experiments. ### 3.3 Evaluation metrics In our experiments, we employ four evaluation metrics to assess model performance: equal error rate (EER), precision, recall, and $F_{1}$ score. All metrics are computed on the frame-level authenticity labels of the partially spoofed audio. Precision, recall, and $F_{1}$ score are defined as follows: $Precision=\frac{TP}{TP+FP},$ (9) $Recall=\frac{TP}{TP+FN},$ (10) ${F_{1}score}=\frac{2\cdot Precision\cdot Recall}{Precision+Recall},$ (11) where $TP$, $TN$, $FP$, $FN$ represent the numbers of true positive, true negative, false positive, and false negative samples, respectively. ## 4 Results and discussions ### 4.1 Results Results on 19PS. We compare the performance of several baseline models in terms of the EER metric, as presented in Table 3. All models are trained on the 19PS training dataset. TDL (w/o ESM) denotes our model without the ESM module. As shown in Table 3, our model achieves the lowest EER of 1.92% on the partially spoofed audio detection task. Based on the experimental results, we first observe that the impact of the feature is greater than that of the backbone. For instance, in the first and fourth rows of Table 3, where the backbone is LCNN-BLSTM, the utilization of W2V2 features results in a 12.60% EER decrease compared to LFCC. 
Conversely, when the feature is held fixed, as in the first and second rows of Table 3 (both using LFCC), SELCNN-BLSTM exhibits only a marginal EER reduction of 0.28% compared to LCNN-BLSTM. Furthermore, we find that the architecture of the TDL network aligns well with partially spoofed detection. Specifically, when W2V2-XLS-R features are used, TDL (without the ESM module) still exhibits a 1.02% reduction in EER compared to LCNN-BLSTM. Results on LAV-DF. To validate the generalizability of our proposed model, we train on 19PS and evaluate on the LAV-DF test set using the four evaluation metrics. The results are presented in Table 4. Although LAV-DF is an unbalanced dataset, our proposed model achieves the best EER of 5.17% compared to the baseline models. Table 3: EER results (%) on the ASVspoof2019 PS dataset.

Model | Feature | EER
---|---|---
LCNN-BLSTM [10] | LFCC | 16.21
SELCNN-BLSTM [11] | LFCC | 15.93
5gMLP [16] | W2V2-Large | 9.24
LCNN-BLSTM [10] | W2V2-XLS-R | 3.61
TDL (w/o ESM) | W2V2-XLS-R | 2.59
TDL | W2V2-XLS-R | 1.92

Table 4: The four evaluation metric results (%) for training on 19PS and testing on the LAV-DF test set.

Model | Feature | EER$\downarrow$ | Precision$\uparrow$ | Recall$\uparrow$ | $F_{1}$ score$\uparrow$
---|---|---|---|---|---
LCNN-BLSTM | LFCC | 10.94 | 95.93 | 73.73 | 83.38
LCNN-BLSTM | W2V2-XLS-R | 8.26 | 99.05 | 62.32 | 76.50
TDL | W2V2-XLS-R | 5.17 | 98.73 | 75.42 | 85.51

Table 5: Results (%) of different label configurations on 19PS.

Label Setting | EER$\downarrow$ | Precision$\uparrow$ | Recall$\uparrow$ | $F_{1}$ score$\uparrow$
---|---|---|---|---
Boundary 1 | 3.85 | 79.72 | 82.01 | 80.85
real 0 fake 1 | 3.01 | 81.87 | 84.52 | 83.17
real 1 fake 0 | 1.92 | 87.62 | 94.34 | 90.86

Table 6: Parameter counts (in thousands).

Model | Parameters
---|---
TDL | 8,718
LCNN-BLSTM | 21,511

### 4.2 Discussion Label Setting. 
As mentioned in Section 3.2, we set real frames to 1 and fake frames to 0. To the best of our knowledge, there has been no prior research discussing which label configuration is most beneficial to the final prediction. Therefore, we experiment with three different label settings on our proposed TDL model, as shown in Table 5. “Boundary 1” indicates that we set the boundary frames between genuine and fake segments to 1, while all other positions are set to 0. In practice, due to the sparsity of boundary frames, we set 4 boundary frames at each transition between genuine and fake segments. Additionally, we employ a weighted BCE loss, assigning a weight of 100 to the boundary values, as a replacement for the standard BCE. Experimental results demonstrate that this method is less effective than directly predicting the authenticity of individual frames. Additionally, since predicting boundaries often requires further verification of the genuineness of the segments on both sides, we did not adopt the boundary setting. For frame-level direct prediction of authenticity, we conducted experiments setting real frames to 0 and fake frames to 1, and alternatively real frames to 1 and fake frames to 0, shown as “real 0 fake 1” and “real 1 fake 0” in Table 5 respectively. Experimental results show that “real 1 fake 0” outperforms “real 0 fake 1” on all four evaluation metrics, especially recall, which indicates that TDL can accurately identify genuine speech. When setting real frames to “1” and fake frames along with padding frames to “0”, we can better concentrate on the real segments. This is similar to previous works [21, 22], which also focus on the real speech distribution in the fully spoofed ADD task. Through our experiments, we have demonstrated that this focus is also significant in the partially spoofed ADD task. This also explains why W2V2 features, extracted solely from rich, genuine source domains, are effective in the field of ADD. Complexity Comparison. 
Apart from evaluating performance, we measured the complexity of the models. For frame-level detection tasks, particularly fine-grained prediction, the large final output dimension can result in excessive parameterization and low efficiency. Unlike LCNN, which convolves over all values, our proposed TDL model uses the temporal convolution operation to selectively focus only on high-weight regions. As shown in Table 6, the parameter count of TDL is only 40.53% of that of LCNN-BLSTM. ## 5 Conclusion In this paper, we propose an efficient temporal deepfake location approach based on embeddings for partially spoofed audio detection. TDL achieves outstanding performance thanks to two core modules, the embedding similarity module and the temporal convolution operation, which effectively capture both feature and positional information. The experimental results demonstrate that TDL achieves the best performance on the 19PS dataset and also performs well in cross-dataset scenarios. ## References * [1] Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, and Yi Ren, “Prodiff: Progressive fast diffusion model for high-quality text-to-speech,” in Proceedings of ACM MM, 2022, pp. 2595–2605. * [2] X. Tan, J. Chen, H. Liu, J. Cong, C. Zhang, Y. Liu, X. Wang, Y. Leng, Y. Yi, L. He, et al., “Naturalspeech: End-to-end text to speech synthesis with human-level quality,” arXiv preprint arXiv:2205.04421, 2022. * [3] Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al., “Neural codec language models are zero-shot text to speech synthesizers,” arXiv preprint arXiv:2301.02111, 2023. * [4] Chak Ho Chan, Kaizhi Qian, Yang Zhang, and Mark Hasegawa-Johnson, “Speechsplit2.0: Unsupervised speech disentanglement for voice conversion without tuning autoencoder bottlenecks,” in Proceedings of ICASSP, 2022, pp. 6332–6336. * [5] Y. Chen, D. Wu, T. Wu, and H. 
Lee, “Again-vc: A one-shot voice conversion using activation guidance and adaptive instance normalization,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 5954–5958. * [6] Huaizhen Tang, Xulong Zhang, Jianzong Wang, Ning Cheng, and Jing Xiao, “Avqvc: One-shot voice conversion by vector quantization with applying contrastive learning,” in Proceedings of ICASSP. IEEE, 2022, pp. 4613–4617. * [7] Andreas Nautsch, Xin Wang, Nicholas Evans, Tomi H Kinnunen, Ville Vestman, Massimiliano Todisco, Héctor Delgado, Md Sahidullah, Junichi Yamagishi, and Kong Aik Lee, “Asvspoof 2019: spoofing countermeasures for the detection of synthesized, converted and replayed speech,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 2, pp. 252–265, 2021. * [8] H. Delgado, N. Evans, T. Kinnunen, K. Lee, X. Liu, A. Nautsch, J. Patino, M. Sahidullah, M. Todisco, X. Wang, et al., “Asvspoof 2021: Automatic speaker verification spoofing and countermeasures challenge evaluation plan,” arXiv preprint arXiv:2109.00535, 2021. * [9] Jiangyan Yi, Ye Bai, Jianhua Tao, Haoxin Ma, Zhengkun Tian, Chenglong Wang, Tao Wang, and Ruibo Fu, “Half-truth: A partially fake audio detection dataset,” in Proceedings of Interspeech, 2021, pp. 1654–1658. * [10] Lin Zhang, Xin Wang, Erica Cooper, Junichi Yamagishi, Jose Patino, and Nicholas Evans, “An initial investigation for detecting partially spoofed audio,” in Proceedings of Interspeech, 2021, pp. 4264–4268. * [11] Lin Zhang, Xin Wang, Erica Cooper, and Junichi Yamagishi, “Multi-task learning in utterance-level and segmental-level spoof detection,” arXiv preprint arXiv:2107.14132, 2021. * [12] Zhiqiang Lv, Shanshan Zhang, Kai Tang, and Pengfei Hu, “Fake audio detection based on unsupervised pretraining models,” in Proceedings of ICASSP, 2022, pp. 9231–9235. 
* [13] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli, “wav2vec 2.0: A framework for self-supervised learning of speech representations,” Advances in neural information processing systems, vol. 33, pp. 12449–12460, 2020. * [14] Brecht Desplanques, Jenthe Thienpondt, and Kris Demuynck, “Ecapa-tdnn: Emphasized channel attention, propagation and aggregation in tdnn based speaker verification,” arXiv preprint arXiv:2005.07143, 2020. * [15] Jiangyan Yi, Ruibo Fu, Jianhua Tao, Shuai Nie, Haoxin Ma, Chenglong Wang, Tao Wang, Zhengkun Tian, Ye Bai, Cunhang Fan, et al., “Add 2022: the first audio deep synthesis detection challenge,” in Proceedings of ICASSP. IEEE, 2022, pp. 9216–9220. * [16] Lin Zhang, Xin Wang, Erica Cooper, Nicholas Evans, and Junichi Yamagishi, “The partialspoof database and countermeasures for the detection of short fake speech segments embedded in an utterance,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022. * [17] Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, et al., “Xls-r: Self-supervised cross-lingual speech representation learning at scale,” arXiv preprint arXiv:2111.09296, 2021. * [18] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin, “Attention is all you need,” Advances in neural information processing systems, vol. 30, 2017. * [19] Zhixi Cai, Kalin Stefanov, Abhinav Dhall, and Munawar Hayat, “Do you really mean that? content driven audio-visual deepfake dataset and multimodal method for temporal forgery localization,” in 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2022, pp. 1–10. 
* [20] Xin Wang, Junichi Yamagishi, Massimiliano Todisco, Héctor Delgado, Andreas Nautsch, Nicholas Evans, Md Sahidullah, Ville Vestman, Tomi Kinnunen, Kong Aik Lee, et al., “Asvspoof 2019: A large-scale public database of synthesized, converted and replayed speech,” Computer Speech & Language, vol. 64, pp. 101114, 2020. * [21] Y. Zhang, F. Jiang, and Z. Duan, “One-class learning towards synthetic voice spoofing detection,” IEEE Signal Processing Letters, vol. 28, pp. 937–941, 2021. * [22] Yuankun Xie, Haonan Cheng, Yutian Wang, and Long Ye, “Learning A Self-Supervised Domain-Invariant Feature Representation for Generalized Audio Deepfake Detection,” in Proc. INTERSPEECH 2023, 2023, pp. 2808–2812.
# A full degree-of-freedom photonic crystal spatial light modulator Christopher L. Panuski<EMAIL_ADDRESS>Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Ian R. Christen Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Momchil Minkov Flexcompute, Inc., 130 Trapelo Rd., Belmont, MA 02478, USA Cole J. Brabec Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Sivan Trajtenberg-Mills Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Alexander D. Griffiths Institute of Photonics, Dept. of Physics, University of Strathclyde, Technology and Innovation Centre, Glasgow G1 1RD, UK Jonathan J.D. McKendry Institute of Photonics, Dept. of Physics, University of Strathclyde, Technology and Innovation Centre, Glasgow G1 1RD, UK Gerald L. Leake State University of New York Polytechnic Institute, Albany, NY 12203, USA Daniel J. Coleman State University of New York Polytechnic Institute, Albany, NY 12203, USA Cung Tran State University of New York Polytechnic Institute, Albany, NY 12203, USA Jeffrey St Louis State University of New York Polytechnic Institute, Albany, NY 12203, USA John Mucci State University of New York Polytechnic Institute, Albany, NY 12203, USA Cameron Horvath Applied Nanotools, Inc., Edmonton, AB T6G2M9, CA Jocelyn N. Westwood-Bachman Applied Nanotools, Inc., Edmonton, AB T6G2M9, CA Stefan F. Preble Microsystems Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA Martin D. Dawson Institute of Photonics, Dept. of Physics, University of Strathclyde, Technology and Innovation Centre, Glasgow G1 1RD, UK Michael J. Strain Institute of Photonics, Dept. of Physics, University of Strathclyde, Technology and Innovation Centre, Glasgow G1 1RD, UK Michael L. Fanto Air Force Research Laboratory, Information Directorate, Rome, NY, 13441, USA Dirk R. 
Englund<EMAIL_ADDRESS>Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA ###### Abstract Harnessing the full complexity of optical fields requires complete control of all degrees-of-freedom within a region of space and time — an open goal for present-day spatial light modulators (SLMs), active metasurfaces, and optical phased arrays. Here, we solve this challenge with a programmable photonic crystal cavity array enabled by four key advances: (i) near-unity vertical coupling to high-finesse microcavities through inverse design, (ii) scalable fabrication by optimized, 300 mm full-wafer processing, (iii) picometer-precision resonance alignment using automated, closed-loop “holographic trimming”, and (iv) out-of-plane cavity control via a high-speed $\upmu$LED array. Combining each, we demonstrate near-complete spatiotemporal control of a 64-resonator, two-dimensional SLM with nanosecond- and femtojoule-order switching. Simultaneously operating wavelength-scale modes near the space- and time-bandwidth limits, this work opens a new regime of programmability at the fundamental limits of multimode optical control. ## I Introduction Programmable optical transformations are of fundamental importance across science and engineering, from adaptive optics in astronomy [1] and neuroscience [2, 3, 4], to dynamic matrix operations in machine learning accelerators [5] and quantum computing [6, 7, 8]. Despite this importance, fast, energy-efficient, and compact manipulation of multimode optical systems — the core objective of spatial light modulators (SLMs) — remains an open challenge [9, 10]. Specifically, the limited modulation bandwidth and/or pixel density of liquid crystal (LC) SLMs, digital micromirror displays, and other two-dimensional (2D) modulator arrays prevents complete control over the optical fields they tune. 
Figure 1a illustrates these limitations for a typical SLM comprised of a two- dimensional (2D), $\Lambda$-pitch array of tunable pixels (subscript $p$) emitting at wavelength $\lambda$ into the solid angle $\Omega_{p}$ with a system (subscript $s$) modulation bandwidth $\omega_{s}$. Given these parameters, each “spatiotemporal” degree-of-freedom (DoF) simultaneously satisfying the minimum-uncertainty space- and time-bandwidth relations ($\delta A/\lambda^{2}\cdot\delta\Omega=1$ and $\delta t\cdot\delta\omega=1$, respectively) can be illustrated as a real-space voxel with area $\lambda^{2}/\Omega_{p}$ and time duration $1/\omega_{p}$ for pixel bandwidth $\omega_{p}$. The optical-delay-limited pixel bandwidth $\omega_{p}\approx(\Delta\epsilon_{p}/\epsilon)ck$ can be approximated as a function of the achievable permittivity swing $\Delta\epsilon_{p}$ (for the speed of light $c$) using first-order perturbation theory [11] or similarly derived from linear scattering theorems [12]. Figure 1: Full degree-of-freedom (DoF) spatiotemporal optical programming. Present-day SLMs (a) feature a 2D array of $\Lambda$-pitch pixels within an aperture area $A$. Each pixel radiates at wavenumber $k=2\pi/\lambda$ into the solid angle $\Omega_{p}$ and can be switched (blue $\leftrightarrow$ red color change indicates a $\pi$ phase change of the emitted field) over the timescale $T=1/\omega_{s}$ (given a modulation bandwidth $\omega_{s}$) with a large but slow fractional permittivity perturbation $\Delta\epsilon_{p}/\epsilon$ (e.g. liquid crystal rotation). The shaded volume indicates the smallest controllable near-field spatiotemporal mode. In the far-field (right), the corresponding shaded spatiotemporal bandwidth $\nu=\Omega_{s}\omega_{s}=(\lambda/\Lambda)^{2}\omega_{s}$ counts the controllable DoF per unit area and time in a single diffraction order. 
Trade-offs between $\omega_{s}$ and $\Omega_{s}$ in liquid crystal- (LC) [13, 3, 14], thermal- [15, 16, 17], micro-electro-mechanical system- (MEMS) [18, 19, 20, 21, 22], and electro-optic-driven (EO) [23, 24, 25, 26] SLMs (b) limit $\nu\ll\Omega_{p}\omega_{p}$, the accessible pixel bandwidth given the delay-limited bandwidth $\omega_{p}\sim ck\Delta\epsilon_{p}/\epsilon$. Spatiotemporal control is thus limited and scattering into undesired diffraction orders (grey $\times$s) reduces diffraction efficiency. Alternatively, a fully-filled array of wavelength-scale resonant apertures (c) emitting into the solid angle $\Omega_{p}^{\prime}$ can enhance the effect of fast (modulation frequency $\omega_{s}^{\prime}$), low-energy perturbations $\Delta\epsilon_{p}^{\prime}\ll\epsilon$ to simultaneously achieve the space- and time-bandwidth limits (C1 and C2, respectively), yielding near-complete spatiotemporal control with $\nu^{\prime}\approx\Omega_{p}^{\prime}\omega_{p}^{\prime}$. Integrating over the switching interval $T=1/\omega_{s}$ and aperture area $A$ then gives the total DoF count [27] $F=\int_{A,\Omega_{p}}\frac{{\rm d}A}{\lambda^{2}}\cdot{\rm d}\Omega\int_{T,\omega_{p}}{\rm d}t\cdot{\rm d}\omega.$ (1) (The exact coefficient of proportionality in Eq. 1 depends on the number of polarizations, the complex amplitude and phase controllability of each mode, and the exact definition of distinguishability when defining the Fourier uncertainty relations; for simplicity, we have omitted these $\mathcal{O}(1)$ coefficients.) By comparison, the same switching period contains $N=A/\Lambda^{2}\leq F$ controllable modes, each confined to the pixel area $\Lambda^{2}$ and time window $T$ (shaded box in Fig. 1a). 
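As a numerical illustration of this counting argument: for a uniform aperture the integrals in Eq. (1) reduce to $F=(A/\lambda^{2})\,\Omega_{p}\,(\omega_{p}/\omega_{s})$, so the controlled fraction is $N/F=\omega_{s}/\omega_{p}$ when the near field is fully filled. Every parameter value below is our own assumption for illustration, not a value from the text:

```python
import math

# DoF counting for a uniform SLM aperture (Eq. 1 with constant integrands):
# F = (A / lambda^2) * Omega_p * (omega_p / omega_s). Illustrative values only.
lam = 1.55e-6                       # operating wavelength (m), assumed
pitch = 3 * lam                     # pixel pitch Lambda (m), assumed
aperture = (64 * pitch) ** 2        # aperture area A of a 64 x 64 array (m^2)
omega_p = 2 * math.pi * 1e9         # resonant pixel bandwidth (rad/s), assumed
omega_s = 2 * math.pi * 1e3         # system modulation bandwidth (rad/s), LC-like
solid_angle_p = (lam / pitch) ** 2  # emitters fill the aperture, so (C1) holds

N = aperture / pitch ** 2           # modes addressed per switching period T = 1/omega_s
F = (aperture / lam ** 2) * solid_angle_p * (omega_p / omega_s)  # total DoF, Eq. (1)
fraction_controlled = N / F         # equals omega_s / omega_p under (C1)
```

With these numbers, N = 4096 while F is roughly a million times larger: a kilohertz modulator addresses only one millionth of the degrees-of-freedom its pixels could in principle support, and setting $\omega_{s}=\omega_{p}$ recovers $N=F$.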
Complete spatiotemporal control with $N=F$ is only achieved under the following criteria: (C1) emitters fully “fill” the near-field aperture such that $\Omega_{p}$ matches the field-of-view $\Omega_{s}=(\lambda/\Lambda)^{2}$ of a single array diffraction order; and (C2) $\omega_{s}=\omega_{p}$. In the Fourier domain, the system’s “spatiotemporal bandwidth” $\nu=\Omega_{s}\omega_{s}$ counts the controllable DoF per unit area and time within a single far-field diffraction order. As illustrated by the shaded pillbox in Fig. 1a, (C1) and (C2) are both satisfied when $\nu$ matches the accessible pixel bandwidth $\Omega_{p}\omega_{p}$. Practical constraints have prevented present-day SLM technology from achieving this bound. In general, commercial devices approximate (C1) without achieving (C2). Specifically, they offer excellent near-field fill-factor across megapixel-scale apertures but use large $\mathcal{O}(\epsilon)$, slow index perturbations. Liquid crystal SLMs, for example, are limited to $\omega_{s}\sim 2\pi\times 10^{3}\text{ Hz}\ll\omega_{p}$ by the slow rotation of viscous, anisotropic molecules that modulate the medium’s phase delay [29, 30]. Digital micromirror-based SLMs offer moderately faster ($\mathord{\sim}10^{5}$ Hz) binary amplitude modulation by displacing a mechanical reflector, but at the expense of diffraction efficiency [31]. Mechanical phase shifters [32, 33, 19, 20, 34] improve this efficiency but still require design trade-offs between pixel size and response time. Recent research has focused on surmounting the speed limitations of commercial SLMs with integrated photonic phased arrays [16, 35, 36, 37] and active metasurfaces comprised of thermally [38, 39, 40], mechanically [41, 20], or electrically [25, 42, 43, 44, 26] actuated elements. These devices, however, do not satisfy (C1) (Table A2). 
Silicon photonics in particular has attracted significant interest due to its fabrication scalability; however, the combination of standard routing waveguides, high-power ($\mathord{\sim}$ mW/$\pi$ phase shift) thermal phase shifters, and vertical grating couplers in each pixel reduces the fill-factor of emitters, yielding $\Omega_{p}\gg\Omega_{s}$ [16]. Scattering into the numerous diffraction orders within $\Omega_{p}$ then reduces the achievable zero-order and overall diffraction efficiencies ($\eta_{0}$ and $\eta$, respectively). For this reason, $\eta_{0}$ is a useful measure of near-field fill. Various workarounds, including 1D phased arrays with transverse wavelength tunability [35, 45, 46], sparse antenna arrays [15], and switched arrays [47, 22] improve steering performance but restrict the spatiotemporal basis (i.e. limit $F$). Alternative nanophotonics-based approaches, often limited to 1D modulation, have their drawbacks as well: phase change materials [38, 39, 40] have slow crystallization rates and large switching energies, while electro-optic devices [48, 23, 25, 43, 44, 26, 49], to date, have primarily relied on large-area grating-based resonators to achieve appreciable modulation. Figure 2: The photonic crystal spatial light modulator (PhC-SLM). Complete spatiotemporal control is achieved by modulating an array of high-quality-factor ($Q>10^{5}$), small-mode-volume ($V<0.1\lambda^{3}$) silicon PhC cavities with a high-speed incoherent $\upmu$LED array (a). Absorbed $\upmu$LED pulses control the detuning $\Delta$ of resonant pixels via free carrier dispersion, which varies the amplitude and phase (illustrated by the length and color, respectively, of emission arrows at each cell) of the pixel’s complex reflection coefficient $r(\Delta)$.
Despite sub-wavelength near-field confinement (b, inset simulated mode profile overlaid on a SEM micrograph of an $L4/3$-type cavity [50]), each pixel is designed for directional ($\Omega_{p}\approx\Omega_{s}=\lambda^{2}/\Lambda_{x}\Lambda_{y}$) far-field scattering $S(\vec{k})$ into the zeroth diffraction order (marked by $\times$s) to satisfy (C1). Combining the reflection from each resonant “antenna” in a large-scale aperture fabricated via optimized 300 mm wafer-scale processing (b, inset photograph) enables near-ideal SLM performance per the design criteria (C1-5) (c). Figure 1b compares the performance of these and other experimentally-demonstrated, active, 2D SLMs as a function of the spatiotemporal bandwidth’s two components: modulation bandwidth $\omega_{s}$ and field-of-view $\Omega_{s}$. Controllability aside, the evident trade-off between these parameters illustrates the difficulty of creating fast, compact modulator arrays with high $\nu$. Thus, in addition to satisfying the complete control criteria (C1) and (C2), an “ideal” SLM would (Fig. 2c): (C3) maximize $\nu$ by combining wavelength-scale pitches (for full-field $\Omega_{s}\rightarrow 2\pi$ beamforming) with gigahertz (GHz)-order bandwidths $\omega_{s}$ competitive with electronic processors; (C4) support femtojoule (fJ)-order switching energies as desired for information processing applications [51]; and (C5) have scalability to state-of-the-art megapixel-scale apertures. These criteria motivate the resonant architecture in Fig. 1c. Here, (C3) and (C4) are achieved by switching a fully-filled array of wavelength-scale resonant optical antennas with fast, fJ-order perturbations $\Delta\epsilon_{p}/\epsilon\ll 1$. Each resonator’s far-field scattering and quality factor $Q$ can then be tuned to achieve (C1) and (C2), respectively. Combined, this resonant SLM architecture enables complete, efficient control of the large spatiotemporal bandwidth supported by its constituent pixels.
Figure 2 illustrates our specific implementation of this full-DoF resonant SLM: the photonic crystal spatial light modulator (PhC-SLM) [52]. Coherent signal light is reflected off a semiconductor slab (permittivity $\epsilon$) hosting a 2D array of semiconductor PhC cavities with instantaneous resonant frequency $\omega_{0}+\Delta_{mn}(t)$. A short-wavelength incoherent control plane imaged onto the cavity array controls each resonator’s detuning via the fractional shift $\Delta_{mn}(t)/\omega_{0}\approx-\Delta\epsilon_{p}(t)/2\epsilon$ set by the permittivity change $\Delta\epsilon_{p}(t)$ of photo-excited free carriers [53, 54]. We optimize the resonator bandwidth $\Gamma\approx\omega_{s}\approx 2\pi\times\mathrm{GHz}$ (corresponding to a quality factor $Q=\omega_{0}/\Gamma\sim 10^{5}$) to maximize the linewidth-normalized detuning $\Delta/\Gamma$ without significantly attenuating the cavity’s response at the carrier lifetime ($\tau$)-limited modulation rate $\omega_{s}=1/\tau$. Under these conditions, free carrier dispersion efficiently modulates the complex cavity reflectivity $r(\Delta)$ to enable fast ($>100$ MHz given a $\mathord{\sim}$ns free carrier lifetime [55]), low-energy (fJ-order) conversion of incoherent control light into a dense array of coherent, modulated signal modes (Appendix A). This out-of-plane, all-optical switching approach is motivated by the recent development of high-speed, high-brightness $\upmu$LED arrays [56, 57] integrated with complementary metal-oxide-semiconductor (CMOS) drive electronics for consumer displays [58, 59] and high-speed visible light communication [60, 61]. In particular, gallium nitride $\upmu$LED arrays with GHz-order modulation bandwidths [61, 62], sub-micron pixel pitches [63], and large pixel counts [64] have been demonstrated within the past few years.
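To make the operating point concrete, a minimal numerical sketch (the wavelength, bandwidth, and perturbation size below are assumed round numbers in the ranges quoted above) shows how the standard first-order cavity perturbation relation $\Delta\omega/\omega_{0}\approx-\Delta\epsilon/2\epsilon$ turns a tiny free-carrier permittivity change into a linewidth-scale detuning at $Q\sim 10^{5}$:

```python
import math

c = 2.998e8                       # speed of light [m/s]
lambda0 = 1.55e-6                 # assumed telecom-band resonance [m]
omega0 = 2 * math.pi * c / lambda0
Gamma = 2 * math.pi * 1e9         # ~GHz cavity bandwidth, as in the text [rad/s]
Q = omega0 / Gamma                # quality factor ~ 1e5

# Assumed fractional permittivity drop from photo-excited free carriers.
d_eps_over_eps = -1e-5
# First-order perturbation theory: d_omega / omega0 ~ -d_eps / (2 eps)
d_omega = -0.5 * d_eps_over_eps * omega0
# Linewidth-normalized detuning Delta/Gamma = Q * |d_eps| / (2 eps):
# order one at Q ~ 1e5, i.e. a full on/off resonance swing.
normalized_detuning = d_omega / Gamma
```

The point of the sketch is the scaling $\Delta/\Gamma=Q\,|\Delta\epsilon|/2\epsilon$: high $Q$ is what lets a $10^{-5}$ index-scale perturbation shift the cavity by a full linewidth.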
Applying these arrays for reconfigurable, “wireless” all-optical cavity control eliminates electronic tuning elements at each pixel to avoid optical loss, pixel pitch limitations, and interconnect bottlenecks for planar architectures (as aperture area $A$ grows, $\mathcal{O}(A)$ pixel controls eventually cannot be routed through the $\mathcal{O}(\sqrt{A})$ perimeter) [65]. Free of these constraints, we designed high-finesse, vertically-coupled microcavities offering coupling efficiencies $>90\%$, phase-dominant reflection spectra [17, 66], and directional emission $\Omega_{p}\approx\Omega_{s}$ for high-efficiency beamforming (Section II). Bespoke, wafer-scale processing allows us to fabricate these “resonant antennas” in arrays with mean quality factors $\langle Q\rangle>10^{6}$ and sub-nm resonant wavelength standard deviation (Section III). For fine tuning, we developed a parallel laser-assisted thermal oxidation protocol [67, 68] to then trim $8\times 8$ cavity arrays to picometer-order uniformity (Section IV), enabling high-speed spatial light modulation with fJ-order switching energies and $\omega_{s}>2\pi\times 100$ MHz (Section V). Compared to the previous devices surveyed in Fig. 1b, our PhC-SLM offers near-complete control over an order-of-magnitude larger spatiotemporal bandwidth.

## II Inverse-Designed Resonant Pixels

The sub-wavelength (i.e. normalized volume $\tilde{V}=V/(\lambda/n)^{3}<1$ relative to a cubic wavelength in the confining dielectric of refractive index $n$), high-$Q$ (up to $\mathord{\sim}10^{7}$) [69, 70] modes of 2D PhC cavities enable (C4) [71], but at the expense of (C1) since $Q$ optimization (via hole displacements as in Fig. 3a [72]) cancels radiative leakage. The exact displacement parameters are typically numerically optimized in computationally expensive finite-difference time-domain (FDTD) simulations, which ultimately limits the number of free parameters. Compared to the ideal apertures in Fig.
1c, the optimized cavity unit cell confines a spatially complex mode with $\Omega_{p}\gg\Omega_{s}$ (Fig. 4b, background), violating (C1) and limiting the zero-order diffraction efficiency to $\eta_{0}\approx 0.04$. The result is poor beamforming performance as exemplified by the distorted, low-efficiency far-field pattern emitted by a $64\times 64$ cavity array with optimized detunings (derived with the algorithm in Appendix F) to match a target far-field image (MIT logo). Figure 3: Optimized holography with inverse-designed, vertically-coupled microcavity arrays. (a) Silicon $L3$ slab defect cavity design (hexagonal lattice constant $a=0.4~{}\upmu$m; hole radius $r/a=0.25$; slab thickness $t=220$ nm) with overlaid midplane magnetic field profile $H_{z}$ after $Q$ optimization by displacing ($\delta x_{i},~{}\delta y_{i}$) and resizing ($\delta r_{i}$) the shaded holes in the $16a\times 16(\sqrt{3}/2)a$ periodic unit cell. Hole shifts are magnified by $3\times$ for visualization. The confined cavity mode radiates into the broad far-field profile in (b, background), violating (C1) and yielding a zero-order diffraction efficiency $\eta_{0}\ll 1$. As a result, simulated trial holograms (c) from a $64\times 64$ cavity array with optimized detunings (Appendix F) have minimal overall diffraction efficiency $\eta$. Inverse design (b) solves these problems. Guided mode expansion (GME) approximates the mode’s $Q$ and far-field profile by sampling the losses $\{c\}$ at the array’s diffraction orders (white $\times$s) displaced by Bloch boundary conditions $\vec{k}_{i}$ (i.e. at the colored dots). An objective function $f$ that maximizes $Q$, confines $\vec{H}$, and minimizes $\{c\}$ at any non-zero diffraction order can then be efficiently optimized with respect to all hole parameters using reverse-mode automatic differentiation (b).
The resulting devices with high-$Q$, efficient coupling, and directional emission enable high-performance ($\eta\sim 1$) resonant holography (d). Figure 4: Experimental comparison of existing (a-c) and inverse-designed PhC cavities with high-$Q$ and near-diffraction-limited vertical beaming. Superimposing a grating perturbation (a, green; $\delta r_{i}$ magnified by $20\times$ for visualization) on the $Q$-optimized design of Fig. 3a improves vertical coupling at the expense of reduced $Q$, yielding the simulated far-field intensity profile in (b, left) with $\eta_{0}=0.18$. Our measured far-field profile (b, right), collected from a grating-coupled cavity using a cross-polarized imaging setup (Appendix D), confirms the broad emission relative to the array field-of-view (dashed white line) $\Omega_{s}$. This mismatch explains the low effective “fill factor” and poor coupling observed in our resonant imaging (c, inset) and near-field reflection spectra (c, blue), respectively. An input Gaussian beam (with waist matched to the unit cell dimensions) is undercoupled and exhibits an amplitude-dominant power reflectivity $R=|r|^{2}$ modulation (c, solid green) with low phase variation $\Delta\phi$ (c, dashed green). Our inverse-designed cavities (d) overcome these issues by optimizing every hole in the unit cell to vertically scatter cavity leakage for any target $Q$, producing “ideal” resonant SLM pixels satisfying (C1). Specifically, they support near-diffraction-limited emission (e) with $\eta_{0}\sim 1$ due to fully-filled near-field resonant scattering (f, inset), a $\mathord{\sim}5\times$ experimental resonance contrast enhancement (f), and $>94\%$ single-sided (i.e. assuming an ideal back-reflector described in Appendix G) coupling to an input Gaussian beam for phase-dominant modulation (f, green).
Fortunately, these limitations are not fundamental: the effective scattering aperture $A_{0}=\lambda^{2}/\Omega_{p}$ of a resonant mode can extend beyond its $1/e$ decay area $A_{e}$. This apparent space-bandwidth violation $(A_{e}/\lambda^{2})\cdot\Omega_{p}=A_{e}/A_{0}<1$ is enabled by resonant scattering from the mode’s evanescent field, which raises the basic question: how should scatterers be arranged to produce a desired far-field emission pattern? One established approach is a harmonic $2a$-period grating perturbation (Fig. 4a) that “folds” energy concentrated at the band-edge $k_{x}=\pi/a$ back to $k_{||}=0$, yielding vertical radiation at the expense of reduced $Q$ [73, 74, 75, 76]. In the perturbative regime, the far-field scattering profile is an image of the broad band-edge mode. Thus, once the grating-induced loss becomes dominant, further magnifying the perturbation reduces $Q$ without significantly improving directivity. Fig. 4b shows the narrowed far-field profile produced by a $\delta r_{i}/r\approx 0.02$ grating perturbation, which balances the reduced $Q\approx 8\times 10^{5}$ and a modest diffraction efficiency improvement ($\eta_{0}=0.18$). By contrast, our design strategy (Fig. 3b) combines semi-analytic guided mode expansion (GME) simulations with automatic differentiation to maximize $\eta_{0}$ (and thereby the effective near-field fill factor) for a given target $Q$ using all of the hole parameters. In each iteration, GME approximates the cavity eigenmode and radiative loss rates $c_{mn}^{(i)}$ at each of the array’s reciprocal lattice vectors (i.e. diffraction orders) offset by the Bloch periodic boundary conditions $\vec{k}_{i}$ [77]. These coupling coefficients coarsely sample the cavity’s approximate far-field emission (Appendix C). 
Scanning $\vec{k}_{i}$ over the irreducible Brillouin zone of the rectangular cavity array improves the sampling resolution, and an overall $Q$ can be estimated by averaging the total loss rates $\Gamma^{(i)}=\sum_{mn}c_{mn}^{(i)}$ in each simulation. Reverse-mode automatic differentiation then allows us to efficiently optimize an objective function $f=\frac{1}{N}\sum_{i=1}^{N}\frac{c_{00}^{(i)}}{\Gamma^{(i)}}\arctan\left(\frac{Q}{Q_{0}}\right)|E_{0}|^{2}$ (2) targeting three main goals: 1) increase $Q$ to a design value $Q_{0}$; 2) force the associated radiative loss into the array’s zeroth diffraction order for efficient vertical coupling; and 3) minimize $V$ by maximizing $|E_{0}|$, the electric-field magnitude at the center of the unit cell. The resulting designs support tunable-$Q$ resonances with near-diffraction-limited ($\Omega_{p}\approx\Omega_{s}$) vertical beaming comparable to the ideal planar apertures of Fig. 1c. The example design of Fig. 4d, for instance, maintains $Q\approx 8\times 10^{5}$ with $\eta_{0}=0.86$ based on the simulated far-field profile in Fig. 4e. We prototyped each design at a commercial electron beam lithography (EBL) foundry (Applied Nanotools, Inc., https://www.appliednt.com/) before transitioning to the wafer-scale foundry process described in Sec. III. The near- and far-field reflection characteristics of the fabricated devices were measured with the cross-polarized microscopy setup detailed in Appendix D. Fig. 4b-c and Fig. 4e-f show the results for the grating-coupled and inverse-designed cavities, respectively. The optimal grating-coupled cavities offer $Q\sim 4\times 10^{5}$ at $\lambda\approx 1553\text{ nm}$ with a near-field resonant scattering profile well-centered on the cavity defect (Fig. 4c, inset).
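The structure of Eq. (2) is easy to prototype. The sketch below is a plain-NumPy stand-in: the real pipeline evaluates the loss rates $c_{mn}^{(i)}$ with GME and differentiates through them with reverse-mode automatic differentiation, which this toy version does not do, and the array layout (index $(0,0)$ as the zeroth diffraction order) is an assumption for illustration:

```python
import numpy as np

def objective(c, Q, E0, Q0=1e5):
    """Eq. (2): mean over Bloch-boundary simulations i of
    (c_00 / Gamma) * arctan(Q / Q0) * |E0|^2.
    `c` has shape (N_sims, M, M): radiative loss rates into each sampled
    diffraction order (m, n), with c[i, 0, 0] taken as the zeroth order
    (assumed index convention)."""
    Gamma = c.sum(axis=(1, 2))            # total loss rate per simulation
    directivity = c[:, 0, 0] / Gamma      # fraction radiated vertically
    return float(np.mean(directivity * np.arctan(Q / Q0) * abs(E0)**2))

# Limiting case: all loss already vertical (directivity 1) and Q = Q0,
# so f = arctan(1) * |E0|^2 = pi/4 for E0 = 1.
c = np.zeros((4, 3, 3))
c[:, 0, 0] = 1.0
f = objective(c, Q=1e5, E0=1.0)
```

The saturating $\arctan(Q/Q_{0})$ term is what makes the trade explicit: once $Q$ exceeds the design target, further gains contribute little, so the optimizer spends the remaining hole parameters on directivity and field confinement instead.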
The mode mismatch between this wavelength-scale PhC mode and the wide-field input beam (Gaussian beam with $\mathord{\sim}150~{}\upmu\text{m}$ waist diameter for array-level excitation) is further evidenced by the small normalized reflection amplitude (relative to that of the inverse-designed cavities) on resonance as well as the broad far-field profile (Fig. 4b) with $\eta_{0}=0.24$. By comparison, inverse design non-perturbatively modifies the cavity mode (Fig. 4d) to produce the near-ideal measured far-field profile in Fig. 4e satisfying (C1) with $\eta_{0}=0.98$ while simultaneously increasing $Q$ to $5.7\times 10^{5}$. We attribute the slight increase in zero-order diffraction efficiency over the simulated value $\eta_{0}=0.86$ to the substrate-dependent effects described in Appendix G. The fully-filled near-field resonant scattering image in Fig. 4f explains the close resemblance between this measured $S(\vec{k})$ and that of an ideal uniform aperture [79]. In addition, the narrowed emission profile $S(\vec{k})$ yields a $\mathord{\sim}5\times$ increase in cross-polarized reflection and the phase-dominant simulated direct reflection spectrum in Fig. 4f. The latter is achieved by 94% one-sided coupling to a Gaussian beam with optimized waist diameter (Appendix E). Combined, these results break the traditional coupling–$Q$ tradeoff (offering an order-of-magnitude improvement in the figure-of-merit $\eta_{0}\cdot Q$ for the prototype devices in Fig. 4) to enable high-performance beamforming at the space-bandwidth limit (C1). These results are supported by the simulated hologram in Fig. 3d: an array of optimally detuned, inverse-designed cavities forms a clear far-field image with a several order-of-magnitude improvement in overall diffraction efficiency ($\eta=0.83$) over existing designs.
## III Foundry-Fabricated High-Finesse Microcavity Arrays

While EBL enables fabrication of few-pixel prototypes with state-of-the-art resolution and accuracy, serial direct-write techniques do not satisfy (C5). Field stitching issues and sample preparation aside, a single cm$^{2}$, megapixel-scale sample would require a full day of EBL write time alone. We therefore developed a full-wafer deep-ultraviolet photolithography process specifically optimized for wavelength-pitch arrays of high-$Q/V$ PhC microcavities in a commercial foundry [80]. A central goal was to create vertical etch sidewalls. The transmission electron microscope (TEM) cross-section in Fig. 5i shows that the default fabrication process (optimized for isolated waveguides) yielded an oblique ($100^{\circ}$), incomplete etch through the silicon device layer for the target PhC lattice parameters. Both nonidealities erase the membrane’s vertical reflection symmetry, leading to coupling between even- and odd-symmetry (about the slab midplane) modes that ultimately limits the achievable $Q$ of bandgap-confined resonances [81]. By contrast, our revised fabrication process achieves near-vertical $91^{\circ}$ sidewall angles (Fig. 5ii), yielding high-quality PhC lattices for a range of hole diameters between the $\mathord{\sim}100$ nm critical-dimension and $2r\approx a$ (Fig. 5c). Using TEM cross-sectioning and automated optical metrology as feedback over multiple 300 mm wafer runs in the AIM Photonics foundry’s 193 nm DUV water-immersion lithography line, this new process relies on a combination of dose-optimized reverse (positive) tone lithography, high-accuracy laser written masks, and optimized etch termination. Following fabrication and dicing, we post-processed individual die with a backside silicon nitride anti-reflection coating and, as required, suspended the PhC membranes with a timed wet etch. Figure 5: Full-wafer photonic crystal fabrication in an optimized 300 mm foundry process.
A wafer (a) contains 64 complete reticles (b) each comprising millions of inverse-designed PhC cavities. The before (i) and after (ii) false-color (blue: metal fill; red: silicon; yellow: silicon dioxide; green: etch mask) transmission electron microscope cross-sections show how process optimization enables high-quality PhC lattices (c) that support $Lm$-type cavity arrays with $\langle Q\rangle>10^{6}$ and sub-nanometer wavelength standard deviation (d). The resulting die contain isolated and arrayed PhC cavities with swept dimensions to offset systematic fabrication biases. We chose $Lm$-type cavity designs — formed by removing $m$ holes from the PhC lattice as demonstrated by the $L3$ unit cells in Fig. 4 — to host tunable-volume (via variable $m$), high-$Q$ resonant modes with even reflection symmetry (about the unit cell axes) as required for vertical emission [82]. The highest-performance isolated devices feature $Q>10^{6}$ with normalized volumes $\tilde{V}\approx 0.3$. With joint spectral and spatial confinement (quantified by the figure-of-merit $Q/\tilde{V}\approx 4\times 10^{6}$), these devices are among the highest-finesse optical cavities ever fabricated in a foundry process. Our optimized foundry processing extends this exceptional single-device performance (rivaling record EBL-fabricated devices) to large-scale cavity arrays. We developed a fully-automated measurement system (Appendix D) to locate and characterize hundreds of cavities per second via parallel camera readout. The resulting data, extracted from over $10^{5}$ devices measured across the wafer, allow us to statistically analyze resonator performance and fabrication variability at the die, reticle, and wafer level. Fig. 5d, for example, shows resonant wavelength and $Q$ variations within $8\times 8$ arrays of four different cavity designs. Using camera readout of the reflected wide-field excitation, each data set is extracted from a single wavelength scan of a tunable laser.
Besides the expected correlation between uniformity and mode volume [83], the data demonstrate — for the first time, to our knowledge — the ability to fabricate sub-wavelength ($\tilde{V}<1$) microcavity arrays with $\langle Q\rangle>10^{6}$ and sub-nanometer resonant wavelength standard deviation ($\sigma_{\lambda}\approx 0.6$ nm). Critically for beamforming, this uniformity also extends to the far-field: Appendix H shows that each cavity in an $8\times 8$ array emits vertically with $\eta_{0}=0.86\pm 0.07$, in quantitative agreement with the simulated result in Fig. 4.

## IV Holographic Trimming

Figure 6: Parallel, fully-automated, and low-loss microcavity trimming via structured laser oxidation. In each iteration of the trimming loop (a), a weighted GS algorithm distributes a visible trimming laser with power $P_{0}$ to powers $\{P_{i}\}$ at desired cavities based on the measured resonant wavelength $\lambda_{i}$ of each cavity. A few nm-thick layer of thermal oxide grows at each optical focus (photographed spots in the inset cavity array image), reducing the as-fabricated standard deviation in hole parameters $\sigma\sim\text{nm}$ (b) and permanently shifting the targeted resonances. The initial and final near-field hyperspectral reflection images (c, color-coded by each device’s wavelength normalized detuning $\Delta/\Gamma$) show the $>100\times$ reduction in resonant-wavelength standard deviation to $\sigma_{\lambda}=2.5~{}\mathrm{pm}$ without affecting the mean quality factor $Q>10^{5}$. The effective dimensions of the final array (d) are thus homogenized to $\sigma^{\prime}\sim\text{pm}$ length-scales by oxidation. Regions of local oxide growth in helium ion microscopy images of the trimmed PhC-SLM (d) appear as bright areas. The device summary (e) shows the wavelength shift, $Q$ variation (quantified by dot area), and aligned reflection spectra of each device.
In addition to these overlapping far-field emission profiles, programmable multimode interference requires each cavity to operate near a common resonant wavelength $\lambda_{0}$. For sufficiently high-$Q$ resonators, this tolerance cannot be solely achieved through optimized fabrication since $\mathcal{O}(\text{nm})$ fabrication fluctuations translate to $\mathcal{O}(\text{nm})$ resonant wavelength variations [84, 85]. Our prototype $8\times 8$ arrays of $L3$ cavities (chosen to optimally balance requirements on $Q$, $V$, directive emission, and fabrication tolerance) typically span a $\mathord{\sim}3~{}\text{nm}$ peak-to-peak wavelength variation (given $\sigma_{\lambda}\approx 0.6$ nm), corresponding to hundreds of linewidths for the target $Q\sim 10^{5}$. To correct this nonuniformity, we developed an automated, low-loss, and picometer-precision trimming procedure based on laser-assisted thermal oxidation (Fig. 6). Two features of our approach resolve the speed and controllability limitations of prior single-device implementations [67, 68]: 1) accelerated oxidation in a high-pressure chamber with in-situ characterization; and 2) holographic fanout of the trimming laser to simultaneously address multiple devices. In each iteration of the automated trimming loop (Fig. 6a), the resonant wavelengths $\{\lambda_{i}\}$ are measured and a subset $T$ containing $N$ devices is selected to maximize the total trimming distance $N(\min_{T}\{\lambda_{i}\}-\lambda_{t})$ to a target wavelength $\lambda_{t}$. Each cavity in $T$ is then targeted by a visible laser distributed by the liquid crystal SLM setup described in Appendix D. To generate the required phase masks, we developed an open-source, GPU-accelerated experimental holography software package (slm-suite, https://github.mit.edu/cpanuski/qp-slm) that implements fixed-phase, weighted Gerchberg-Saxton (GS) phase retrieval algorithms.
Using camera feedback, the algorithm can generate thousands of near-diffraction-limited foci with $\mathord{\sim}1\%$ peak-to-peak power uniformity and single-camera-pixel-order location accuracy within a few iterations (Appendix I). The holographically-targeted pixels are then laser-heated with a computed exposure power and duration (based on the current trimming rates, resonance locations, and other array characteristics) to grow thermal oxide at the membrane surface. For thin oxide layers, the consumption of silicon during the reaction with ambient oxygen permanently blueshifts the cavity resonance in proportion to the oxide thickness $t_{\text{SiO}_{2}}$ (Fig. 6b) [68]. Per the Deal-Grove model, the rate-limiting diffusion of oxygen through the grown oxide accelerates with increasing oxygen pressure — a well-known technique in microelectronics fabrication [87]. We therefore oxidize our samples in pure oxygen with partial pressure $P_{\text{O}_{2}}=5~{}\text{bar}$, enabling ${\rm d}\lambda_{0}/{\rm d}t\approx 0.1~{}\text{nm/s}$ resonance trimming rates over $\Delta\lambda_{0}>20~{}\text{nm}$ wavelength ranges. After each trimming exposure, we remeasure the resonance statistics and repeat the loop until all devices are aligned within a set tolerance about $\lambda_{t}$. The trimming algorithm also accounts for long-term moisture adsorption to the membrane surface, thermal cross-talk, and trimming rate variations (Appendix J). Fig. 6 demonstrates the results of this trimming procedure applied to our prototype $8\times 8$ pixel PhC-SLM. Prior to trimming, the hyperspectral near-field reflection image in Fig. 6c shows the large ($>200$ linewidths for the mean quality factor $\langle Q\rangle=1.6\times 10^{5}$) resonant wavelength variation between the otherwise spatially uniform and high-fill resonant modes.
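The weighted GS fanout described above can be sketched in a few lines. This is a toy re-implementation, not code from slm-suite (which adds fixed-phase refinement, camera feedback, and GPU acceleration); the grid size, iteration count, and weighting rule are illustrative:

```python
import numpy as np

def weighted_gs(targets, shape=(64, 64), iters=40):
    """Find a phase-only SLM mask whose far field (modeled as a 2D FFT)
    concentrates power at the target pixels with near-uniform weights.
    Toy weighted Gerchberg-Saxton loop."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, shape)
    weights = np.ones(len(targets))
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))          # unit-amplitude SLM plane
        amps = np.array([abs(far[r, c]) for r, c in targets])
        weights *= amps.mean() / amps                  # boost the weak spots
        target = np.zeros(shape, dtype=complex)
        for w, (r, c) in zip(weights, targets):
            # keep the current far-field phase, impose the weighted amplitude
            target[r, c] = w * np.exp(1j * np.angle(far[r, c]))
        phase = np.angle(np.fft.ifft2(target))         # project back to phase-only
    return phase

spots = [(10, 10), (20, 45), (50, 30)]
phase = weighted_gs(spots)
far = np.abs(np.fft.fft2(np.exp(1j * phase)))**2
efficiency = sum(far[r, c] for r, c in spots) / far.sum()
```

The multiplicative weight update is the essential trick: spots that come out dim are amplified in the next target, which is what drives the peak-to-peak spot uniformity toward the percent level reported above.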
Holographic trimming reduces the wavelength standard deviation and peak-to-peak spread by $>100\times$ to $\sigma_{\lambda}=2.5~{}\text{pm}$ and $\Delta\lambda_{0}^{\text{p-p}}=1.3\Gamma=13~{}\text{pm}$, respectively, enabling all 64 devices (imaged in Fig. 6d) to be resonantly excited at a common operating wavelength (Fig. 6e). Since $\sigma_{\lambda}$ is directly related to the corresponding hole radius and placement variability ($\sigma_{r}$ and $\sigma_{h}$, respectively) with an $\mathcal{O}(1)$ design-dependent constant of proportionality, the thermal oxide homogenizes the effective dimensions of each microcavity to the picometer scale. The mean quality factor and near-field reflection profile of the array remain largely unmodified throughout the process as evidenced by Fig. 6c and Fig. 6e. To our knowledge, these results are the first demonstration of parallel, in situ, non-volatile microcavity trimming. The achievable scale is currently limited by environmental factors that could be overcome with stricter process control (Appendix J). Even without these improvements, the current uniformity, scale, and induced loss outperform the corresponding metrics of the previous techniques reviewed in Appendix J, paving the way towards scalable integrated photonics with high-$Q$ resonators.

## V All-Optical Spatial Light Modulation

Once trimmed to within a linewidth, each resonator reflects an incident coherent field $E_{\text{i}}(\vec{r},t)$, producing a far-field output [88] $E_{\text{r}}(\vec{k},t)=S(\vec{k})\sum_{m,n}r\{\Delta_{mn}(t)\}E_{\text{i}}(\vec{r}_{mn},t)e^{j\vec{k}\cdot\vec{r}_{mn}}$ (3) that can be dynamically controlled within $S(\vec{k})$ by setting the detuning $\Delta_{mn}(t)$ (and therefore the near-field reflection coefficient $r$) of each resonator. Experimentally, we measure the intensity pattern $|E_{r}(\vec{k})|^{2}$ on the back focal plane of a microscope objective above the PhC-SLM (as with the single-device characterization in Sec.
II) and optically program $\Delta_{mn}(t)$ via photo-excited free carriers. The corresponding setups are detailed in Appendix D. Figure 7: Nanosecond switching (a-c) and spatial light modulation (d-f). (a) Peak phase shift $\Delta\phi$ and half-maximum switching interval $T_{\text{switch}}$ produced by pulses from a CMOS-integrated $\upmu$LED array imaged onto the cavity array (inset: cavities illuminated to form the letters ‘S’, ‘L’, and ‘M’) as a function of trigger duration $T_{\text{CMOS}}$ and pulse energy density $E_{\upmu\text{LED}}$. (b) Complex reflectivity ($r=\sqrt{R}e^{j\phi}$) modulation with femtojoule-order pulse energies $E_{\text{laser}}$ from a focused visible laser. (c) Output probe to input visible (pump) power transfer function $T(\omega)$ fit to a second-order response function, yielding $\omega_{s}=2\pi\times 135$ MHz limited by the free carrier lifetime $\tau\approx 1.1$ ns and cavity bandwidth $\Gamma\approx 1$ GHz. (d) Far-field intensity profile, half-maximum beam widths, and zero-order diffraction efficiency $\eta_{0}$ (integrated within the dashed white box) of the trimmed array at time $t_{0}$ with horizontal and vertical cross-sectional profiles (blue traces) compared to those of an $8\times 8$ array of planar apertures with 80% linear fill (black, dashed). (e/f) Analogous results for the switched array with an optically-patterned horizontal (vertical) amplitude grating at the maximum extinction time $t_{0}+6$ ns, producing $\pm 1^{\text{st}}$-order diffraction peaks over a $10.6^{\circ}$ ($14.5^{\circ}$) field-of-view and diffraction efficiency $\eta_{x}=0.22$ ($\eta_{y}=0.20$). Absent a control input ($\Delta_{mn}\approx 0$), Fig. 7d shows the static far-field intensity pattern $|E_{r}(\vec{k})|^{2}$ of a wide-field-illuminated (i.e. $E_{i}(\vec{r})\approx E_{i}$) $8\times 8$ trimmed array with $\langle Q\rangle=1.85\times 10^{5}$ and $\sigma_{\lambda}=5$ pm at $\lambda=1562$ nm.
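On a discrete $k$-grid with uniform illumination, the sum in Eq. (3) is just a 2D DFT of the per-pixel reflection coefficients, which makes the binary-amplitude-grating experiments easy to model. The sketch below uses the amplitude-dominant Lorentzian $r(\Delta)\propto 1/(1+j\Delta)$ of the cross-polarized readout (with $\Delta$ in linewidths); the $8\times 8$ grid and detuning values are illustrative:

```python
import numpy as np

def far_field_intensity(detunings):
    """Eq. (3) on a discrete k-grid with uniform illumination and S(k) = 1:
    the array factor is the 2D DFT of the per-pixel reflection coefficients.
    Lorentzian r(Delta) = 1/(1 + 1j*Delta), Delta in cavity linewidths."""
    r = 1.0 / (1.0 + 1j * detunings)
    E = np.fft.fftshift(np.fft.fft2(r))
    return np.abs(E)**2

# Binary amplitude grating on an 8x8 array: pumped columns are detuned far
# off resonance (Delta >> 1, "extinguished"); unpumped columns reflect fully.
delta = np.zeros((8, 8))
delta[:, 1::2] = 100.0
I = far_field_intensity(delta)
# For this period-2 grating, power splits between the zeroth order at the
# grid center, I[4, 4], and the folded +/-1st grating order at the edge of
# the k-grid, I[4, 0]; all other orders are dark.
```

Because only alternate columns are switched, the near field has exactly two Fourier components (DC and Nyquist), so the model predicts the clean two-peak far field seen in the grating experiments of Fig. 7e-f.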
The inverse-designed cavity unit cells minimize scattering into undesired diffraction orders, producing a high-efficiency ($\eta_{0}=0.66$) zero-order beam with the expected $1.3^{\circ}$ and $1.6^{\circ}$ horizontal and vertical beamwidths given the $42.0\lambda\times 36.4\lambda$ aperture size. The cross-sectional beam profiles are well matched to the simulated emission profile of uniform apertures with width $w=0.8\lambda$, suggesting an 80% effective linear fill of the array. This extracted value agrees with the observed zero-order efficiency and the array’s physical design (each $16a\times 16a$ cavity offering near-unity fill was padded to $20a\times 20a$ to limit coupling to adjacent cells). After confirming the static performance of the array, we conducted optical switching experiments with two sources: an incoherent $\upmu$LED array and a pulsed visible laser. The $\upmu$LED array contains $16\times 16$ individually-addressable gallium nitride $\upmu$LEDs with $>$150 MHz small-signal bandwidth and $\mathord{\sim}10^{6}$ cd/m$^{2}$ peak luminances (at 450 nm) flip-chip bonded to high-efficiency CMOS drivers [89, 60]. Using the setup in Appendix D, we imaged the 100 $\upmu$m-pitch array with variable demagnification and rotation onto the PhC cavity array. Digitally triggering the CMOS drivers then enables reconfigurable, binary optical addressing as illustrated by the imaged projections of three letters on the PhC-SLM (Fig. 7a). We measured the resulting pixel reflection amplitude and phase using locked, shot-noise-limited balanced homodyne detection (Appendix D). Fig. 7a depicts the maximum phase shift $\Delta\phi$ as a function of CMOS trigger duration $T_{\text{CMOS}}$ and imaged pump energy density $E_{\upmu\text{LED}}$. Single-cavity switching is possible with energy densities below $10$ fJ/$\upmu$m$^{2}$ (corresponding to $\mathord{\sim}100$ fJ total energy for our chosen demagnification) and a minimum trigger duration $T_{\text{CMOS}}\approx 5$ ns.
Shorter trigger pulses produce relatively constant-width pulses (due to the $\upmu$LED fall time) with insufficient energy for high-contrast switching. Confining visible pump pulses in space and time to the silicon free-carrier diffusion length ($\sim 1~{}\upmu$m) and lifetime ($\tau\approx 1$ ns), respectively, would reduce the required switching energy and maximize bandwidth. While either metric is achievable with existing $\upmu$LED arrays [90, 63] and optimization to achieve both simultaneously is ongoing [91], we demonstrated the expected performance enhancement with a pulsed visible ($\lambda=515$ nm) laser. Fig. 7b shows that 3 dB power reflectivity changes and high-contrast phase modulation are feasible for 5 fJ pump pulses over a switching interval $T_{\text{switch}}\approx 1$ ns, thereby satisfying (C4). Free-carrier dispersion is the dominant switching mechanism for these isolated, ns-order switching events (Appendix A). While repeated switching over $\upmu$s-order timescales leads to a slowly-varying thermo-optic detuning [92], various optical communications techniques (constant-duty line codes, for example [93]) can maintain average device temperature during high-speed free carrier modulation. To demonstrate this decoupling of switching mechanisms, we measured the normalized small-signal transfer function $T(\omega)$ between a harmonic pump power (produced by a network analyzer-driven amplitude electro-optic modulator) and the phase-locked homodyne response. When aligned to the thermally-detuned resonance, the results (Fig. 7c) match the expected second-order response $T(\omega)=1/\{\left[1+(\omega\tau)^{2}\right]\left[1+(\omega/\pi\Gamma)^{2}\right]\}$ set by carrier- and cavity-lifetime-limited bandwidths ($1/\tau$ for a fitted carrier lifetime $\tau=1.1$ ns and the measured $\Gamma=1.0$ GHz, respectively).
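The fitted response can be evaluated directly to recover the reported modulation bandwidth. A short sketch, assuming only the fitted $\tau$ and measured $\Gamma$ quoted in the text, that numerically locates the half-power frequency of $T(\omega)$:

```python
import numpy as np

# Fitted free-carrier lifetime and measured cavity linewidth (from the text).
tau = 1.1e-9    # s
Gamma = 1.0e9   # Hz

def T(f):
    """Second-order small-signal response T(w) for w = 2*pi*f."""
    w = 2 * np.pi * f
    return 1.0 / ((1 + (w * tau) ** 2) * (1 + (w / (np.pi * Gamma)) ** 2))

# Numerically locate the half-power (3 dB) modulation bandwidth.
f = np.linspace(1e6, 1e9, 200_000)
f3dB = f[np.argmin(np.abs(T(f) - 0.5))]
print(f"3 dB bandwidth ~ {f3dB/1e6:.0f} MHz")  # ~135 MHz
```

The result reproduces the quoted $\omega_{s}=2\pi\times 135$ MHz, i.e. the carrier lifetime (not the 1 GHz cavity linewidth) sets the bandwidth in this regime.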
While satisfying (C2) therefore requires higher-$Q$ resonators, the current regime of operation enables near-complete control over a larger bandwidth $\omega_{s}=2\pi\times 135~{}\text{MHz}\approx 1/\tau$, i.e. without significantly degrading the carrier-lifetime-limited modulation bandwidth. Combining this optimized switching with the space-bandwidth-limited vertical beaming of each resonator enables multimode programmable optics approaching the fundamental limits of spatiotemporal control. We currently probe the PhC-SLM in a wide-field, cross-polarized setup that produces amplitude-dominant Lorentzian reflection profiles $r(\Delta)\propto 1/(1+j\Delta)$ regardless of the resonator coupling regime (cavity emission is isolated from specular reflection). For simplicity, we therefore conducted proof-of-concept demonstrations using the PhC-SLM as an array of high-speed binary amplitude modulators. In this modality, a nanosecond-class pulsed visible laser is passively fanned out to the desired devices. Devices targeted by pump light are detuned far from resonance ($\Delta\gg\Gamma$) and effectively extinguished, whereas unactuated cavities retain their high $\Delta\approx 0$ reflectivity. We used pump-probe spectroscopy for wide-field imaging of these few-nanosecond switching events (Appendix D). Short infrared probe pulses were carved with an electro-optic amplitude modulator (DC biased to an intensity null) and variably delayed to coincide with the arrival of visible pump light at the PhC membrane, gating probe field transmission to the IR camera. We then measured the near- and far-field reflection as a function of the probe delay to reconstruct switching events with sub-nanosecond resolution. Fig. 7e-f plots the resulting far-field intensity profiles $|E_{r}|^{2}$ for horizontal and vertical on-off gratings.
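The on-off grating modality can be illustrated with a toy far-field calculation: extinguished pixels reflect $\approx 0$, unactuated pixels reflect $\approx 1$, and the Fourier transform of the resulting aperture concentrates power at the grating's spatial frequency. A 1D sketch with illustrative sampling parameters (the 8-pixel count and 80% linear fill follow the text; the discretization is an arbitrary choice for this example):

```python
import numpy as np

# Illustrative parameters: 8 pixels, 80% linear fill, alternating pixels
# extinguished by the pump (a horizontal on-off grating).
n_pix, samp, fill = 8, 64, 0.8
N = n_pix * samp
field = np.zeros(N)
lo = int(samp * (1 - fill) / 2)
for j in range(n_pix):
    if j % 2:  # unactuated pixels keep near-unity reflectivity
        field[j * samp + lo : j * samp + samp - lo] = 1.0

pad = 16 * N  # zero-pad for a smoothly sampled far field
I = np.abs(np.fft.rfft(field, n=pad)) ** 2

# Expected +1st order: the grating period is 2 pixels = 2*samp samples,
# i.e. FFT bin pad / (2*samp).
k1 = pad // (2 * samp)
first = I[k1 - samp // 4 : k1 + samp // 4].max()
zero = I[0]
print(f"I(+1st order) / I(zero order) ~ {first / zero:.2f}")
```

Even this idealized binary amplitude grating leaves substantial power in the zero order (the mean reflectivity is nonzero), consistent with the text's observation that the measured per-order efficiencies $\eta_{x},\eta_{y}\approx 0.2$ rather than near-unity values obtainable only with phase gratings.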
For a $5$ ns probe pulse width, the maximum near-field extinction of targeted cavities (7.4 dB and 9.8 dB for horizontal and vertical gratings, respectively) occurs within a $\mathord{\sim}6$ ns delay, i.e. just after the pump and probe pulses completely overlap. This minimum probe pulse width is limited by the requirement for high imaging contrast between probe pulses and leakage (due to the imperfect probe modulator extinction) given the instrument-limited trigger repetition rate ($\mathord{\sim}$MHz) and camera integration time. As expected, the input field is primarily scattered into first-order diffraction peaks within the (greater than $10^{\circ}$) 2D field-of-view of $S(\vec{k})$. The illustrated cross-sectional beam profiles again agree with analytic results for an 80% filled linear array of uniform apertures (black dashed lines). For the horizontal grating (Fig. 7e), the fit is scaled by a factor of $\mathord{\sim}2$ to account for the increased reflectivity of unactuated cavities during switching events, which we attribute to residual coupling between adjacent cavities. In both cases, the pattern diffraction efficiencies $(\eta_{x},\eta_{y})=(0.22,0.20)$, measured as the fraction of integrated power within the outlined regions in Fig. 7, compare favorably to the efficiency of the fitted uniform aperture array. Even with amplitude-dominant modulation, these metrics exceed the efficiencies of previous resonator-based experiments due to our high-directivity PhC antenna array [17].

## VI Summary and Outlook

These proof-of-concept experiments demonstrate near-complete spatiotemporal control of a narrow-band optical field filtered in space and time by an array of wavelength-scale, high-speed resonant modulators. While the general resonant architecture (Fig.
1c) is applicable to a range of microcavity geometries and modulation schemes, the combination of our high-$Q$, vertically-coupled PhC cavities with efficient, all-optical free-carrier modulation achieves (C1-5) with an ultrahigh per-pixel spatiotemporal bandwidth $\nu\approx 5.6~{}\text{MHz}\cdot\text{sr}$. This MHz-order modulation bandwidth per aperture-limited spatial mode corresponds to a more than ten-fold improvement over the 2D spatial light modulators reviewed in Fig. 1b. Our wafer-scale fabrication and parallel trimming offer a direct route towards scaling this performance to spectrally-multiplexed, $\mathcal{O}(\text{cm}^{2})$ apertures for exascale interconnects beyond the reach of current electronic systems, thus motivating the continued development of optical addressing and control techniques. The PhC-SLM opens the door to a number of applications and opportunities, including: high-definition, high-frame-rate holographic displays by the integration of a back-reflector (see Appendices E-G) for one-sided, phase-only, and full-DoF spatiotemporal modulation; compact device integration via direct transfer printing of our cavity arrays onto a high-bandwidth $\upmu$LED array [94]; three-dimensional optical addressing and imaging by combining on-demand $\upmu$LED control with statically trimmed detuning profiles that continuously steer pre-programmed patterns [95]; large-scale programmable unitary transformations for universal linear optics processors [96]; focal plane array sensors for high-spatial-resolution readout of refractive index perturbations in imaging applications from endoscopy to bolometry and quantum-limited superresolution [97, 98, 99]; optical neural network acceleration via low-power, high-density unitary transformation of free-space optical inputs [5, 100]; and high-speed adaptive optics enabling free-space compressive sensing, deep-brain neural stimulation, and real-time scattering matrix inversion in complex media [101, 102].
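The per-pixel spatiotemporal bandwidth can be estimated as the modulation bandwidth multiplied by the diffraction-limited solid angle $\lambda^{2}/A_{\text{pixel}}$ of a single pixel. A rough sketch, with the pixel pitch inferred from the $42.0\lambda\times 36.4\lambda$ aperture of the $8\times 8$ array (an inference for this estimate, not a quoted value), and noting that the exact $\mathcal{O}(1)$ prefactor depends on the mode-counting convention:

```python
lam = 1.562e-6  # operating wavelength (m), from the text

# Pixel pitch inferred from the 42.0*lam x 36.4*lam aperture of the
# 8 x 8 array (an assumption for this sketch, not a quoted value).
pitch_x = 42.0 * lam / 8
pitch_y = 36.4 * lam / 8

# Solid angle per aperture-limited spatial mode (sr).
omega = lam ** 2 / (pitch_x * pitch_y)

f_mod = 135e6  # modulation bandwidth (Hz)
nu = f_mod * omega
print(f"per-pixel spatiotemporal bandwidth ~ {nu/1e6:.1f} MHz*sr")
```

The estimate lands within a few percent of the quoted $\nu\approx 5.6~\text{MHz}\cdot\text{sr}$, with the residual difference attributable to the mode-counting prefactor.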
Moreover, whereas we have so far considered only mode transformations, the PhC-SLM’s high-$Q/V$ resonant enhancement suggests the possibility of programming the quantum optical excitations/fields of these modes for applications ranging from multimode squeezed light generation [103], to multiplexed single photon sources for linear optics quantum computing [6, 104] or deterministic photonic logic [105, 106]. ###### Acknowledgements. The authors thank Flexcompute, Inc. for supporting FDTD simulations, the MIT.nano staff for fabrication assistance, and M. ElKabbash (MIT) for useful discussions. C.P. was supported by the Hertz Foundation Elizabeth and Stephen Fantone Family Fellowship. S.T.M. is funded by the Schmidt Postdoctoral Award and the Israeli Vatat Scholarship. Experiments were supported in part by Army Research Office grant W911NF-20-1-0084, supervised by M. Gerhold, the Engineering and Physical Sciences Research Council (EP/M01326X/1, EP/T00097X/1), and the Royal Academy of Engineering (Research Chairs and Senior Research Fellowships). This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8650-21-2-1000. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the United States Air Force, the Air Force Research Laboratory or the U.S. Government. C.P. and D.E. conceived the idea, developed the theory, and led the research. M.M. and C.P. developed the far-field optimization technique and designs. C.B. developed the optimized resonator detuning theory. C.P. conducted the experiments with assistance from I.C. (trimming experiments), S.T.M. (holography software), and A.G. ($\upmu$LED measurements). J.J.M., M.D., and M.S. 
contributed the $\upmu$LED arrays and guided the incoherent switching experiments. C.H. and J.W.B. fabricated the initial samples for evaluation prior to foundry process development by C.T., J.S.L., J.M., and G.L. S.P. assisted with wafer post-processing. M.F. coordinated and led the foundry fabrication with assistance from G.L. and D.C. C.P. wrote the manuscript with input from all authors.

## References

* Gardner _et al._ [2006] J. P. Gardner, J. C. Mather, M. Clampin, R. Doyon, M. A. Greenhouse, H. B. Hammel, J. B. Hutchings, P. Jakobsen, S. J. Lilly, K. S. Long, J. I. Lunine, M. J. Mccaughrean, M. Mountain, J. Nella, G. H. Rieke, M. J. Rieke, H.-W. Rix, E. P. Smith, G. Sonneborn, M. Stiavelli, H. S. Stockman, R. A. Windhorst, and G. S. Wright, The James Webb Space Telescope, Space Science Reviews 123, 485 (2006).
* Packer _et al._ [2013] A. M. Packer, B. Roska, and M. Häusser, Targeting neurons and photons for optogenetics, Nature neuroscience 16, 805 (2013).
* Marshel _et al._ [2019] J. H. Marshel, Y. S. Kim, T. A. Machado, S. Quirin, B. Benson, J. Kadmon, C. Raja, A. Chibukhchyan, C. Ramakrishnan, M. Inoue, J. C. Shane, D. J. McKnight, S. Yoshizawa, H. E. Kato, S. Ganguli, and K. Deisseroth, Cortical layer–specific critical dynamics triggering perception, Science 365, eaaw5202 (2019).
* Demas _et al._ [2021] J. Demas, J. Manley, F. Tejera, K. Barber, H. Kim, F. M. Traub, B. Chen, and A. Vaziri, High-speed, cortex-wide volumetric recording of neuroactivity at cellular resolution using light beads microscopy, Nature Methods 18, 1103 (2021).
* Hamerly _et al._ [2019] R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, Large-scale optical neural networks based on photoelectric multiplication, Physical Review X 9, 021032 (2019).
* Kok _et al._ [2007] P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn, Linear optical quantum computing with photonic qubits, Reviews of modern physics 79, 135 (2007).
* Barredo _et al._ [2018] D. Barredo, V. Lienhard, S. De Leseleuc, T. Lahaye, and A. Browaeys, Synthetic three-dimensional atomic structures assembled atom by atom, Nature 561, 79 (2018). * Ebadi _et al._ [2021] S. Ebadi, T. T. Wang, H. Levine, A. Keesling, G. Semeghini, A. Omran, D. Bluvstein, R. Samajdar, H. Pichler, W. W. Ho, S. Choi, S. Sachdev, M. Greiner, V. Vuletić, and M. D. Lukin, Quantum phases of matter on a 256-atom programmable quantum simulator, Nature 595, 227 (2021). * Shaltout _et al._ [2019a] A. M. Shaltout, V. M. Shalaev, and M. L. Brongersma, Spatiotemporal light control with active metasurfaces, Science 364, eaat3100 (2019a). * Piccardo _et al._ [2021] M. Piccardo _et al._ , Roadmap on multimode light shaping, Journal of Optics 24, 013001 (2021). * Joannopoulos _et al._ [2011] J. D. Joannopoulos, S. G. Johnson, and J. N. Winn, _Photonic Crystals: Molding the Flow of Light - Second Edition_ (Princeton, 2011). * Miller [2007] D. A. Miller, Fundamental limit to linear one-dimensional slow light structures, Physical review letters 99, 203903 (2007). * McKnight _et al._ [1994] D. J. McKnight, K. M. Johnson, and R. A. Serati, 256$\times$ 256 liquid-crystal-on-silicon spatial light modulator, Applied Optics 33, 2775 (1994). * Li _et al._ [2020] J. Li, P. Yu, S. Zhang, and N. Liu, Electrically-controlled digital metasurface device for light projection displays, Nature communications 11, 1 (2020). * Fatemi _et al._ [2019] R. Fatemi, A. Khachaturian, and A. Hajimiri, A nonuniform sparse 2D large-FOV optical phased array with a low-power PWM drive, IEEE Journal of Solid-State Circuits 54, 1200 (2019). * Sun _et al._ [2013] J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, Large-scale nanophotonic phased array, Nature 493, 195 (2013). * Horie _et al._ [2017] Y. Horie, A. Arbabi, E. Arbabi, S. M. Kamali, and A. 
Faraon, High-speed, phase-dominant spatial light modulation with silicon-based active resonant antennas, Acs Photonics 5, 1711 (2017). * Shrauger and Warde [2001] V. Shrauger and C. Warde, Development of a high-speed high-fill-factor phase-only spatial light modulator, in _Diffractive and Holographic Technologies for Integrated Photonic Systems_ , Vol. 4291 (International Society for Optics and Photonics, 2001) pp. 101–108. * Yang _et al._ [2014] W. Yang, T. Sun, Y. Rao, M. Megens, T. Chan, B.-W. Yoo, D. A. Horsley, M. C. Wu, and C. J. Chang-Hasnain, High speed optical phased array using high contrast grating all-pass filters, Optics express 22, 20038 (2014). * Wang _et al._ [2019] Y. Wang, G. Zhou, X. Zhang, K. Kwon, P.-A. Blanche, N. Triesault, K.-s. Yu, and M. C. Wu, 2D broadband beamsteering with large-scale MEMS optical phased array, Optica 6, 557 (2019). * Bartlett _et al._ [2019] T. A. Bartlett, W. C. McDonald, and J. N. Hall, Adapting Texas Instruments DLP technology to demonstrate a phase spatial light modulator, in _Emerging Digital Micromirror Device Based Systems and Applications XI_ , Vol. 10932 (International Society for Optics and Photonics, 2019) p. 109320S. * Zhang _et al._ [2022] X. Zhang, K. Kwon, J. Henriksson, J. Luo, and M. C. Wu, A large-scale microelectromechanical-systems-based silicon photonics LiDAR, Nature 603, 253 (2022). * Shuai _et al._ [2017] Y.-C. Shuai, D. Zhao, Y. Liu, C. Stambaugh, J. Lawall, and W. Zhou, Coupled bilayer photonic crystal slab electro-optic spatial light modulators, IEEE Photonics Journal 9, 1 (2017). * Junique _et al._ [2005] S. Junique, Q. Wang, S. Almqvist, J. Guo, H. Martijn, B. Noharet, and J. Y. Andersson, GaAs-based multiple-quantum-well spatial light modulators fabricated by a wafer-scale process, Applied optics 44, 1635 (2005). * Smolyaninov _et al._ [2019] A. Smolyaninov, A. El Amili, F. Vallini, S. Pappert, and Y. 
Fainman, Programmable plasmonic phase modulation of free-space wavefronts at gigahertz rates, Nature Photonics 13, 431 (2019). * Benea-Chelmus _et al._ [2021] I.-C. Benea-Chelmus, M. L. Meretska, D. L. Elder, M. Tamagnone, L. R. Dalton, and F. Capasso, Electro-optic spatial light modulator from an engineered organic layer, Nature communications 12, 1 (2021). * Gabor [1961] D. Gabor, Light and information, in _Progress in optics_ , Vol. 1 (Elsevier, 1961) pp. 109–153. * Note [1] Note that the exact coefficient of proportionality in Eqn. 1 depends on the number of polarizations, the complex amplitude and phase controllability of each mode, and the exact definition of distinguishability when defining the Fourier uncertainty relations. For simplicity, we have omitted these $\mathcal{O}(1)$ coefficients. * Heilmeier _et al._ [1968] G. H. Heilmeier, L. A. Zanoni, and L. A. Barton, Dynamic scattering: A new electrooptic effect in certain classes of nematic liquid crystals, Proceedings of the IEEE 56, 1162 (1968). * Zhang _et al._ [2014] Z. Zhang, Z. You, and D. Chu, Fundamentals of phase-only liquid crystal on silicon (LCOS) devices, Light: Science & Applications 3, e213 (2014). * Ren _et al._ [2015] Y.-X. Ren, R.-D. Lu, and L. Gong, Tailoring light with a digital micromirror device, Annalen der physik 527, 447 (2015). * Hornbeck [1983] L. J. Hornbeck, 128$\times$ 128 deformable mirror device, IEEE Transactions on Electron Devices 30, 539 (1983). * Greenlee _et al._ [2011] C. Greenlee, J. Luo, K. Leedy, B. Bayraktaroglu, R. Norwood, M. Fallahi, A.-Y. Jen, and N. Peyghambarian, Electro-optic polymer spatial light modulator based on a fabry–perot interferometer configuration, Optics express 19, 12750 (2011). * Tzang _et al._ [2019] O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform, Nature Photonics 13, 788 (2019). * Chung _et al._ [2017] S. Chung, H. Abediasl, and H. 
Hashemi, A monolithically integrated large-scale optical phased array in silicon-on-insulator CMOS, IEEE Journal of Solid-State Circuits 53, 275 (2017). * Poulton _et al._ [2019] C. V. Poulton, M. J. Byrd, P. Russo, E. Timurdogan, M. Khandaker, D. Vermeulen, and M. R. Watts, Long-range LiDAR and free-space data communication with high-performance optical phased arrays, IEEE Journal of Selected Topics in Quantum Electronics 25, 1 (2019). * Rogers _et al._ [2021] C. Rogers, A. Y. Piggott, D. J. Thomson, R. F. Wiser, I. E. Opris, S. A. Fortune, A. J. Compston, A. Gondarenko, F. Meng, X. Chen, G. T. Reed, and R. Nicolaescu, A universal 3D imaging sensor on a silicon photonics platform, Nature 590, 256 (2021). * Wang _et al._ [2016] Q. Wang, E. T. Rogers, B. Gholipour, C.-M. Wang, G. Yuan, J. Teng, and N. I. Zheludev, Optically reconfigurable metasurfaces and photonic devices based on phase change materials, Nature photonics 10, 60 (2016). * Zhang _et al._ [2021] Y. Zhang, C. Fowler, J. Liang, B. Azhar, M. Y. Shalaginov, S. Deckoff-Jones, S. An, J. B. Chou, C. M. Roberts, V. Liberman, M. Kang, C. Ríos, K. A. Richardson, C. Rivero-Baleine, T. Gu, H. Zhang, and J. Hu, Electrically reconfigurable non-volatile metasurface using low-loss optical phase-change material, Nature Nanotechnology 16, 661 (2021). * Wang _et al._ [2021] Y. Wang, P. Landreman, D. Schoen, K. Okabe, A. Marshall, U. Celano, H.-S. P. Wong, J. Park, and M. L. Brongersma, Electrical tuning of phase-change antennas and metasurfaces, Nature Nanotechnology 16, 667 (2021). * Arbabi _et al._ [2018] E. Arbabi, A. Arbabi, S. M. Kamali, Y. Horie, M. Faraji-Dana, and A. Faraon, MEMS-tunable dielectric metasurface lens, Nature communications 9, 1 (2018). * Wu _et al._ [2019] P. C. Wu, R. A. Pala, G. Kafaie Shirmanesh, W.-H. Cheng, R. Sokhoyan, M. Grajower, M. Z. Alam, D. Lee, and H. A. 
Atwater, Dynamic beam steering with all-dielectric electro-optic III-V multiple-quantum-well metasurfaces, Nature communications 10, 1 (2019). * Park _et al._ [2021a] J. Park, B. G. Jeong, S. I. Kim, D. Lee, J. Kim, C. Shin, C. B. Lee, T. Otsuka, J. Kyoung, S. Kim, K.-Y. Yang, Y.-Y. Park, J. Lee, I. Hwang, J. Jang, S. H. Song, M. L. Brongersma, K. Ha, S.-W. Hwang, H. Choo, and B. L. Choi, All-solid-state spatial light modulator with independent phase and amplitude control for three-dimensional LiDAR applications, Nature nanotechnology 16, 69 (2021a). * Shirmanesh _et al._ [2020] G. K. Shirmanesh, R. Sokhoyan, P. C. Wu, and H. A. Atwater, Electro-optically tunable multifunctional metasurfaces, ACS nano 14, 6912 (2020). * Kim _et al._ [2019a] T. Kim, P. Bhargava, C. V. Poulton, J. Notaros, A. Yaacobi, E. Timurdogan, C. Baiocco, N. Fahrenkopf, S. Kruger, T. Ngai, T. Yukta, M. Watts, and V. Stojanović, A single-chip optical phased array in a wafer-scale silicon photonics/CMOS 3D-integration platform, IEEE Journal of Solid-State Circuits 54, 3061 (2019a). * Poulton _et al._ [2020] C. V. Poulton, M. J. Byrd, B. Moss, E. Timurdogan, R. Millman, and M. R. Watts, 8192-element optical phased array with 100∘ steering range and flip-chip CMOS, in _CLEO: Applications and Technology_ (Optical Society of America, 2020) pp. JTh4A–3. * Ito _et al._ [2020] H. Ito, Y. Kusunoki, J. Maeda, D. Akiyama, N. Kodama, H. Abe, R. Tetsuya, and T. Baba, Wide beam steering by slow-light waveguide gratings and a prism lens, Optica 7, 47 (2020). * Huang _et al._ [2016] Y.-W. Huang, H. W. H. Lee, R. Sokhoyan, R. A. Pala, K. Thyagarajan, S. Han, D. P. Tsai, and H. A. Atwater, Gate-tunable conducting oxide metasurfaces, Nano letters 16, 5319 (2016). * Ye _et al._ [2021] X. Ye, F. Ni, H. Li, H. Liu, Y. Zheng, and X. Chen, High-speed programmable lithium niobate thin film spatial light modulator, Optics Letters 46, 1037 (2021). * Minkov _et al._ [2017] M. Minkov, V. Savona, and D. 
Gerace, Photonic crystal slab cavity simultaneously optimized for ultra-high $Q/V$ and vertical radiation coupling, Applied Physics Letters 111, 131104 (2017). * Miller [2017] D. A. Miller, Attojoule optoelectronics for low-energy information processing and communications, Journal of Lightwave Technology 35, 346 (2017). * Panuski and Englund [2021] C. L. Panuski and D. R. Englund, All-optical spatial light modulators (2021), US Patent 11,022,826. * Soref and Bennett [1987] R. A. Soref and B. R. Bennett, Electrooptical effects in silicon, IEEE journal of quantum electronics 23, 123 (1987). * Panuski _et al._ [2019] C. Panuski, M. Pant, M. Heuck, R. Hamerly, and D. Englund, Single photon detection by cavity-assisted all-optical gain, Physical Review B 99, 205303 (2019). * Tanabe _et al._ [2008] T. Tanabe, H. Taniyama, and M. Notomi, Carrier diffusion and recombination in photonic crystal nanocavity optical switches, Journal of Lightwave Technology 26, 1396 (2008). * Huang _et al._ [2020] Y. Huang, E.-L. Hsiang, M.-Y. Deng, and S.-T. Wu, Mini-LED, Micro-LED and OLED displays: Present status and future perspectives, Light: Science & Applications 9, 1 (2020). * Lin and Jiang [2020] J. Lin and H. Jiang, Development of microLED, Applied Physics Letters 116, 100502 (2020). * Templier _et al._ [2018] F. Templier, L. Dupré, B. Dupont, A. Daami, B. Aventurier, F. Henry, D. Sarrasin, S. Renet, F. Berger, F. Olivier, and L. Mathieu, High-resolution active-matrix 10-um pixel-pitch GaN LED microdisplays for augmented reality applications, in _Advances in Display Technologies VIII_ , Vol. 10556 (International Society for Optics and Photonics, 2018) p. 105560I. * Chen _et al._ [2019] C.-J. Chen, H.-C. Chen, J.-H. Liao, C.-J. Yu, and M.-C. Wu, Fabrication and characterization of active-matrix $960\times 540$ blue GaN-based micro-LED display, IEEE Journal of Quantum Electronics 55, 1 (2019). * Herrnsdorf _et al._ [2015] J. Herrnsdorf, J. J. D. McKendry, S. Zhang, E. Xie, R. 
Ferreira, D. Massoubre, A. M. Zuhdi, R. K. Henderson, I. Underwood, S. Watson, A. E. Kelly, E. Gu, and M. D. Dawson, Active-matrix GaN micro light-emitting diode display with unprecedented brightness, IEEE Transactions on Electron Devices 62, 1918 (2015). * Ferreira _et al._ [2016] R. X. G. Ferreira, E. Xie, J. J. D. McKendry, S. Rajbhandari, H. Chun, G. Faulkner, S. Watson, A. E. Kelly, E. Gu, R. V. Penty, I. H. White, D. C. O’Brien, and M. D. Dawson, High bandwidth GaN-based micro-LEDs for multi-Gb/s visible light communications, IEEE Photonics Technology Letters 28, 2023 (2016). * Cai _et al._ [2021] Y. Cai, J. I. Haggar, C. Zhu, P. Feng, J. Bai, and T. Wang, Direct epitaxial approach to achieve a monolithic on-chip integration of a HEMT and a single micro-LED with a high-modulation bandwidth, ACS applied electronic materials 3, 445 (2021). * Park _et al._ [2021b] J. Park, J. H. Choi, K. Kong, J. H. Han, J. H. Park, N. Kim, E. Lee, D. Kim, J. Kim, D. Chung, S. Jun, M. Kim, E. Yoon, J. Shin, and S. Hwang, Electrically driven mid-submicrometre pixelation of ingan micro-light-emitting diode displays for augmented-reality glasses, Nature Photonics 15, 449 (2021b). * Hassan _et al._ [2021] N. B. Hassan, F. Dehkhoda, E. Xie, J. Herrnsdorf, M. J. Strain, R. Henderson, and M. D. Dawson, Ultra-high frame rate digital light projector using chipscale LED-on-CMOS technology, arXiv preprint arXiv:2111.13586 (2021). * Miller [2010] D. A. Miller, Optical interconnects to electronic chips, Applied optics 49, F59 (2010). * Peng _et al._ [2019] C. Peng, R. Hamerly, M. Soltani, and D. R. Englund, Design of high-speed phase-only spatial light modulators with two-dimensional tunable microcavity arrays, Optics express 27, 30669 (2019). * Lee _et al._ [2009] H. Lee, S. Kiravittaya, S. Kumar, J. Plumhof, L. Balet, L. H. Li, M. Francardi, A. Gerardino, A. Fiore, A. Rastelli, and O. 
Schmidt, Local tuning of photonic crystal nanocavity modes by laser-assisted oxidation, Applied Physics Letters 95, 191109 (2009). * Chen _et al._ [2011] C. J. Chen, J. Zheng, T. Gu, J. F. McMillan, M. Yu, G.-Q. Lo, D.-L. Kwong, and C. W. Wong, Selective tuning of high-$Q$ silicon photonic crystal nanocavities via laser-assisted local oxidation, Optics express 19, 12480 (2011). * Asano _et al._ [2017] T. Asano, Y. Ochi, Y. Takahashi, K. Kishimoto, and S. Noda, Photonic crystal nanocavity with a $Q$ factor exceeding eleven million, Optics express 25, 1769 (2017). * Hu _et al._ [2018] S. Hu, M. Khater, R. Salas-Montiel, E. Kratschmer, S. Engelmann, W. M. Green, and S. M. Weiss, Experimental realization of deep-subwavelength confinement in dielectric optical resonators, Science advances 4, eaat2355 (2018). * Nozaki _et al._ [2010] K. Nozaki, T. Tanabe, A. Shinya, S. Matsuo, T. Sato, H. Taniyama, and M. Notomi, Sub-femtojoule all-optical switching using a photonic-crystal nanocavity, Nature Photonics 4, 477 (2010). * Minkov and Savona [2014] M. Minkov and V. Savona, Automated optimization of photonic crystal slab cavities, Scientific reports 4, 1 (2014). * Tran _et al._ [2009] N.-V.-Q. Tran, S. Combrié, and A. De Rossi, Directive emission from high-$Q$ photonic crystal cavities through band folding, Physical Review B 79, 041101 (2009). * Tran _et al._ [2010] N.-V.-Q. Tran, S. Combrié, P. Colman, A. De Rossi, and T. Mei, Vertical high emission in photonic crystal nanocavities by band-folding design, Physical Review B 82, 075120 (2010). * Portalupi _et al._ [2010] S. L. Portalupi, M. Galli, C. Reardon, T. Krauss, L. O’Faolain, L. C. Andreani, and D. Gerace, Planar photonic crystal cavities with far-field optimization for high coupling efficiency and quality factor, Optics express 18, 16064 (2010). * Qiu _et al._ [2012] C. Qiu, J. Chen, Y. Xia, and Q. Xu, Active dielectric antenna on chip for spatial light modulation, Scientific reports 2, 1 (2012). 
* Andreani and Gerace [2006] L. C. Andreani and D. Gerace, Photonic-crystal slabs with a triangular lattice of triangular holes investigated using a guided-mode expansion method, Physical Review B 73, 235114 (2006). * Note [2] Applied Nanotools, Inc. https://www.appliednt.com/. * Hansen [1981] R. C. Hansen, Fundamental limitations in antennas, Proceedings of the IEEE 69, 170 (1981). * Fahrenkopf _et al._ [2019] N. M. Fahrenkopf, C. McDonough, G. L. Leake, Z. Su, E. Timurdogan, and D. D. Coolbaugh, The AIM Photonics MPW: A highly accessible cutting edge technology for rapid prototyping of photonic integrated circuits, IEEE Journal of Selected Topics in Quantum Electronics 25, 1 (2019). * Asano _et al._ [2006] T. Asano, B.-S. Song, and S. Noda, Analysis of the experimental $Q$ factors ($\mathord{\sim}$1 million) of photonic crystal nanocavities, Optics express 14, 1996 (2006). * Kim _et al._ [2006] S.-H. Kim, S.-K. Kim, and Y.-H. Lee, Vertical beaming of wavelength-scale photonic crystal resonators, Physical Review B 73, 235117 (2006). * Sekoguchi _et al._ [2014] H. Sekoguchi, Y. Takahashi, T. Asano, and S. Noda, Photonic crystal nanocavity with a $Q$-factor of $\mathord{\sim}$9 million, Optics Express 22, 916 (2014). * Taguchi _et al._ [2011] Y. Taguchi, Y. Takahashi, Y. Sato, T. Asano, and S. Noda, Statistical studies of photonic heterostructure nanocavities with an average $Q$ factor of three million, Optics express 19, 11916 (2011). * Minkov _et al._ [2013] M. Minkov, U. P. Dharanipathy, R. Houdré, and V. Savona, Statistics of the disorder-induced losses of high-$Q$ photonic crystal cavities, Optics express 21, 28233 (2013). * Note [3] Slm-suite. https://github.mit.edu/cpanuski/qp-slm. * Lie _et al._ [1982] L. N. Lie, R. R. Razouk, and B. E. Deal, High pressure oxidation of silicon in dry oxygen, Journal of The Electrochemical Society 129, 2828 (1982). * Haus [1984] H. A. 
Haus, _Waves and fields in optoelectronics_ , Prentice-Hall series in solid state physical electronics (Prentice-Hall, Englewood Cliffs, NJ, 1984). * Zhang _et al._ [2013] S. Zhang, S. Watson, J. J. McKendry, D. Massoubre, A. Cogman, E. Gu, R. K. Henderson, A. E. Kelly, and M. D. Dawson, 1.5 Gbit/s multi-channel visible light communications using CMOS-controlled GaN-based LEDs, Journal of lightwave technology 31, 1211 (2013). * McKendry _et al._ [2009] J. J. McKendry, B. R. Rae, Z. Gong, K. R. Muir, B. Guilhabert, D. Massoubre, E. Gu, D. Renshaw, M. D. Dawson, and R. K. Henderson, Individually addressable alingan micro-led arrays with cmos control and subnanosecond output pulses, IEEE Photonics Technology Letters 21, 811 (2009). * Lan _et al._ [2020] H.-Y. Lan, I.-C. Tseng, Y.-H. Lin, G.-R. Lin, D.-W. Huang, and C.-H. Wu, High-speed integrated micro-led array for visible light communication, Optics letters 45, 2203 (2020). * Barclay _et al._ [2005] P. E. Barclay, K. Srinivasan, and O. Painter, Nonlinear response of silicon photonic crystal microresonators excited via an integrated waveguide and fiber taper, Optics express 13, 801 (2005). * Winzer and Essiambre [2006] P. J. Winzer and R.-J. Essiambre, Advanced modulation formats for high-capacity optical transport networks, Journal of Lightwave Technology 24, 4711 (2006). * Carreira _et al._ [2020] J. Carreira, E. Xie, R. Bian, J. Herrnsdorf, H. Haas, E. Gu, M. Strain, and M. Dawson, Gigabit per second visible light communication based on algainp red micro-led micro-transfer printed onto diamond and glass, Optics Express 28, 12149 (2020). * Shaltout _et al._ [2019b] A. M. Shaltout, K. G. Lagoudakis, J. van de Groep, S. J. Kim, J. Vučković, V. M. Shalaev, and M. L. Brongersma, Spatiotemporal light control with frequency-gradient metasurfaces, Science 365, 374 (2019b). * Bogaerts _et al._ [2020] W. Bogaerts, D. Pérez, J. Capmany, D. A. Miller, J. Poon, D. Englund, F. Morichetti, and A. 
Melloni, Programmable photonic circuits, Nature 586, 207 (2020). * Pahlevaninezhad _et al._ [2018] H. Pahlevaninezhad, M. Khorasaninejad, Y.-W. Huang, Z. Shi, L. P. Hariri, D. C. Adams, V. Ding, A. Zhu, C.-W. Qiu, F. Capasso, and M. J. Suter, Nano-optic endoscope for high-resolution optical coherence tomography in vivo, Nature photonics 12, 540 (2018). * Watts _et al._ [2007] M. R. Watts, M. J. Shaw, and G. N. Nielson, Microphotonic thermal imaging, Nature Photonics 1, 632 (2007). * Grace _et al._ [2020] M. R. Grace, Z. Dutton, A. Ashok, and S. Guha, Approaching quantum-limited imaging resolution without prior knowledge of the object location, JOSA A 37, 1288 (2020). * Wetzstein _et al._ [2020] G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. Miller, and D. Psaltis, Inference in artificial intelligence with deep optics and photonics, Nature 588, 39 (2020). * Mosk _et al._ [2012] A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, Controlling waves in space and time for imaging and focusing in complex media, Nature photonics 6, 283 (2012). * Yoon _et al._ [2020] S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, Deep optical imaging within complex scattering media, Nature Reviews Physics 2, 141 (2020). * Bourassa _et al._ [2021] J. E. Bourassa, R. N. Alexander, M. Vasmer, A. Patil, I. Tzitrin, T. Matsuura, D. Su, B. Q. Baragiola, S. Guha, G. Dauphinais, K. K. Sabapathy, N. C. Menicucci, and I. Dhand, Blueprint for a scalable photonic fault-tolerant quantum computer, Quantum 5, 392 (2021). * Bartolucci _et al._ [2021] S. Bartolucci, P. Birchall, H. Bombin, H. Cable, C. Dawson, M. Gimeno-Segovia, E. Johnston, K. Kieling, N. Nickerson, M. Pant, F. Pastawski, T. Rudolph, and C. Sparrow, Fusion-based quantum computation, arXiv preprint arXiv:2101.09310 (2021). * Heuck _et al._ [2020] M. Heuck, K. Jacobs, and D. R. 
Englund, Controlled-phase gate using dynamically coupled cavities and optical nonlinearities, Physical Review Letters 124, 160501 (2020). * Krastanov _et al._ [2021] S. Krastanov, M. Heuck, J. H. Shapiro, P. Narang, D. R. Englund, and K. Jacobs, Room-temperature photonic logical qubits via second-order nonlinearities, Nature Communications 12, 1 (2021). * Komma _et al._ [2012] J. Komma, C. Schwarz, G. Hofmann, D. Heinert, and R. Nawrodt, Thermo-optic coefficient of silicon at 1550 nm and cryogenic temperatures, Applied Physics Letters 101, 041905 (2012). * Panuski _et al._ [2020] C. Panuski, D. Englund, and R. Hamerly, Fundamental thermal noise limits for optical microcavities, Physical Review X 10, 041046 (2020). * Nedeljkovic _et al._ [2011] M. Nedeljkovic, R. Soref, and G. Z. Mashanovich, Free-carrier electrorefraction and electroabsorption modulation predictions for silicon over the 1–14-$\upmu$m infrared wavelength range, IEEE Photonics Journal 3, 1171 (2011). * Vercruysse _et al._ [2021] D. Vercruysse, N. V. Sapra, K. Y. Yang, and J. Vuckovic, Inverse-designed photonic crystal circuits for optical beam steering, ACS Photonics 8, 3085 (2021). * Tamanuki _et al._ [2021] T. Tamanuki, H. Ito, and T. Baba, Thermo-optic beam scanner employing silicon photonic crystal slow-light waveguides, Journal of Lightwave Technology 39, 904 (2021). * Sakata _et al._ [2020] R. Sakata, K. Ishizaki, M. De Zoysa, S. Fukuhara, T. Inoue, Y. Tanaka, K. Iwata, R. Hatsuda, M. Yoshida, J. Gelleta, and S. Noda, Dually modulated photonic crystals enabling high-power high-beam-quality two-dimensional beam scanning lasers, Nature Communications 11, 1 (2020). * Yaacobi _et al._ [2014] A. Yaacobi, J. Sun, M. Moresco, G. Leake, D. Coolbaugh, and M. R. Watts, Integrated phased array for wide-angle beam steering, Optics Letters 39, 4575 (2014). * Minkov _et al._ [2020] M. Minkov, I. A. Williamson, L. C. Andreani, D. Gerace, B. Lou, A. Y. Song, T. W. Hughes, and S.
Fan, Inverse design of photonic crystals through automatic differentiation, ACS Photonics 7, 1729 (2020). * Vuckovic _et al._ [2002] J. Vuckovic, M. Loncar, H. Mabuchi, and A. Scherer, Optimization of the $Q$ factor in photonic crystal microcavities, IEEE Journal of Quantum Electronics 38, 850 (2002). * Munsch _et al._ [2013] M. Munsch, N. S. Malik, E. Dupuy, A. Delga, J. Bleuse, J.-M. Gérard, J. Claudon, N. Gregersen, and J. Mørk, Dielectric GaAs antenna ensuring an efficient broadband coupling between an InAs quantum dot and a Gaussian optical beam, Physical Review Letters 110, 177402 (2013). * Note [4] Flexcompute, Inc. Tidy3D. https://simulation.cloud. * Hughes _et al._ [2021] T. W. Hughes, M. Minkov, V. Liu, Z. Yu, and S. Fan, A perspective on the pathway toward full wave simulation of large area metalenses, Applied Physics Letters 119, 150502 (2021). * Levi and Stark [1984] A. Levi and H. Stark, Image restoration by the method of generalized projections with application to restoration from magnitude, JOSA A 1, 932 (1984). * Cala’Lesina _et al._ [2020] A. Cala’Lesina, D. Goodwill, E. Bernier, L. Ramunno, and P. Berini, On the performance of optical phased array technology for beam steering: effect of pixel limitations, Optics Express 28, 31637 (2020). * Liu and Nocedal [1989] D. C. Liu and J. Nocedal, On the limited memory BFGS method for large scale optimization, Mathematical Programming 45, 503 (1989). * Shechtman _et al._ [2015] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, Phase retrieval with application to optical imaging: A contemporary overview, IEEE Signal Processing Magazine 32, 87 (2015). * Kim _et al._ [2012] S.-H. Kim, J. Huang, and A. Scherer, From vertical-cavities to hybrid metal/photonic-crystal nanocavities: towards high-efficiency nanolasers, JOSA B 29, 577 (2012). * Barnes _et al._ [2020] W. L. Barnes, S. A. Horsley, and W. L.
Vos, Classical antennas, quantum emitters, and densities of optical states, Journal of Optics 22, 073501 (2020). * Iwase _et al._ [2012] E. Iwase, P.-C. Hui, D. Woolf, A. W. Rodriguez, S. G. Johnson, F. Capasso, and M. Lončar, Control of buckling in large micromembranes using engineered support structures, Journal of Micromechanics and Microengineering 22, 065028 (2012). * Čižmár _et al._ [2010] T. Čižmár, M. Mazilu, and K. Dholakia, In situ wavefront correction and its application to micromanipulation, Nature Photonics 4, 388 (2010). * Di Leonardo _et al._ [2007] R. Di Leonardo, F. Ianni, and G. Ruocco, Computer generation of optimal holograms for optical trap arrays, Optics Express 15, 1913 (2007). * Nogrette _et al._ [2014] F. Nogrette, H. Labuhn, S. Ravets, D. Barredo, L. Béguin, A. Vernier, T. Lahaye, and A. Browaeys, Single-atom trapping in holographic 2D arrays of microtraps with arbitrary geometries, Physical Review X 4, 021034 (2014). * Kim _et al._ [2019b] D. Kim, A. Keesling, A. Omran, H. Levine, H. Bernien, M. Greiner, M. D. Lukin, and D. R. Englund, Large-scale uniform optical focus array generation with a phase spatial light modulator, Optics Letters 44, 3178 (2019b). * Jayatilleka _et al._ [2021] H. Jayatilleka, H. Frish, R. Kumar, J. Heck, C. Ma, M. N. Sakib, D. Huang, and H. Rong, Post-fabrication trimming of silicon photonic ring resonators at wafer-scale, Journal of Lightwave Technology 39, 5083 (2021). * Biryukova _et al._ [2020] V. Biryukova, G. J. Sharp, C. Klitis, and M. Sorel, Trimming of silicon-on-insulator ring-resonators via localized laser annealing, Optics Express 28, 11156 (2020). * Hagan _et al._ [2019] D. E. Hagan, B. Torres-Kulik, and A. P. Knights, Post-fabrication trimming of silicon ring resonators via integrated annealing, IEEE Photonics Technology Letters 31, 1373 (2019). * Han and Shi [2018] S. Han and Y.
Shi, Post-fabrication trimming of photonic crystal nanobeam cavities by electron beam irradiation, Optics Express 26, 15908 (2018). * Gil-Santos _et al._ [2017] E. Gil-Santos, C. Baker, A. Lemaître, S. Ducci, C. Gomez, G. Leo, and I. Favero, Scalable high-precision tuning of photonic resonators by resonant cavity-enhanced photoelectrochemical etching, Nature Communications 8, 1 (2017). * Spector _et al._ [2016] S. Spector, J. M. Knecht, and P. W. Juodawlkis, Localized in situ cladding annealing for post-fabrication trimming of silicon photonic integrated circuits, Optics Express 24, 5996 (2016). * Lipka _et al._ [2014] T. Lipka, M. Kiepsch, H. K. Trieu, and J. Müller, Hydrogenated amorphous silicon photonic device trimming by UV-irradiation, Optics Express 22, 12122 (2014). * Atabaki _et al._ [2013] A. H. Atabaki, A. A. Eftekhar, M. Askari, and A. Adibi, Accurate post-fabrication trimming of ultra-compact resonators on silicon, Optics Express 21, 14139 (2013). * Cai _et al._ [2013] T. Cai, R. Bose, G. S. Solomon, and E. Waks, Controlled coupling of photonic crystal cavities using photochromic tuning, Applied Physics Letters 102, 141118 (2013). * Hennessy _et al._ [2006] K. Hennessy, C. Högerle, E. Hu, A. Badolato, and A. Imamoğlu, Tuning photonic nanocavities by atomic force microscope nano-oxidation, Applied Physics Letters 89, 041118 (2006).

## Appendix A Analytic Model for Slab Switching

Here, we develop an analytic model for all-optical switching in slab-type PhC cavities to estimate the required tuning energy. An absorbed control pulse produces a refractive index change $\delta n(\vec{r},t)=-\alpha_{c}N(\vec{r},t)+\alpha_{t}T(\vec{r},t)$ (4) proportional to the photo-excited carrier density $N$ and induced temperature change $T$ through the plasma dispersion and thermo-refractive effects, respectively.
The thermo-refractive coefficient $\alpha_{t}={\rm d}n/{\rm d}T$ and empirical free-carrier “scattering volume” $\alpha_{c}=-{\rm d}n/{\rm d}N$ are typically both positive such that the two effects counteract one another. The evolution of $T$ and $N$ is governed by the diffusion equations $\displaystyle\frac{\partial N(\vec{r},t)}{\partial t}$ $\displaystyle=\nabla\cdot(D_{c}\nabla N)-\frac{N}{\tau}+g(\vec{r},t)$ (5a) $\displaystyle\frac{\partial T(\vec{r},t)}{\partial t}$ $\displaystyle=\nabla\cdot(D_{t}\nabla T)+q(\vec{r},t)$ (5b) given the thermal diffusivity $D_{t}$ and assuming ambipolar diffusion of carriers with lifetime $\tau$ and diffusivity $D_{c}$. Over relevant timescales $t>w^{2}/D_{c}$ in a $w$-thick uniform slab, vertical diffusion can be neglected to yield solutions $\displaystyle N(\vec{r},t)$ $\displaystyle=g(\vec{r},t)*_{\vec{r},t}G(\vec{r},2D_{c}t)e^{-t/\tau}$ (6a) $\displaystyle T(\vec{r},t)$ $\displaystyle=q(\vec{r},t)*_{\vec{r},t}G(\vec{r},2D_{t}t)$ (6b) expressed as convolutions ($*$) of the inhomogeneous sources $g(\vec{r},t)$ and $q(\vec{r},t)$ with the two-dimensional Green’s function $G(\vec{r},\sigma^{2})=\frac{1}{2\pi w\sigma^{2}}\exp{-\frac{|\vec{r}|^{2}}{2\sigma^{2}}}.$ (7) All variables are considered uniform along the vertical axis; $\vec{r}$ in our notation thus corresponds only to transverse coordinates in the slab plane. We specifically consider solutions to Eqns. 6 in response to a focused, square-wave Gaussian control pulse with beam waist $2\sigma_{p}$, pulse-width $T$, and pulse (photon) energy $E$ ($E_{0}$) absorbed into the cavity with efficiency $\eta_{\text{abs}}$. The results can be considerably simplified with the conservative (i.e. underestimating plasma dispersion at short timescales $t\lesssim\tau$), albeit crude, assumption of instantaneous carrier diffusion to the diffusion length $\sqrt{D_{c}\tau}$.
This method decouples carrier decay and diffusion to yield the carrier density $N(\vec{r},t)=N_{0}(t)G(\vec{r},2D_{c}\tau+\sigma_{p}^{2})$ (8) with time-dependent total population $N_{0}(t)=\eta_{\text{abs}}\frac{\tau}{T}\frac{E}{E_{0}}\begin{cases}(1-e^{-t/\tau}),&t\leq T\\\ e^{-t/\tau}(e^{T/\tau}-1),&t>T.\end{cases}$ (9) The recombination of each carrier pair releases the bandgap energy $E_{g}$ back into the slab with volumetric heat capacity $c_{v}$, yielding the source $q(\vec{r},t)=-\left(\frac{\partial N}{\partial t}\right)_{\text{decay}}\frac{E_{g}}{c_{v}}=\frac{N(\vec{r},t)}{\tau}\frac{E_{g}}{c_{v}}$ (10) that produces the temperature profile $T(\vec{r},t)=\frac{E_{g}}{c_{v}\tau}N_{0}(t)*_{t}G(\vec{r},2D_{t}t+2D_{c}\tau+\sigma_{p}^{2}).$ (11) Note that we neglect additional initial heating from above-band absorption. Given sufficiently small $|\delta n|$, the resulting linewidth-normalized resonance shift $\tilde{\Delta}(t)=\Delta(t)/\Gamma=-Q\int\frac{\delta n}{n}(\vec{r},t)|\vec{E}(\vec{r})|^{2}{\rm d}^{3}\vec{r}$ (12) for the electric field profile $\vec{E}(\vec{r})$ with normalization ($\int|\vec{E}(\vec{r})|^{2}{\rm d}^{3}\vec{r}=1$) is well-approximated by first-order perturbation theory [11]. We consider a Gaussian-shaped mode envelope $|\vec{E}|^{2}=G(\vec{r},\sigma_{0}^{2})$ fully-confined with uniform transverse amplitude within the high-index slab. Since Eqn. 11 must be evaluated numerically, we assume a constant temperature change $T(\vec{r},t)=T(0,t)$ across the mode — valid for typical experimental regimes of interest where $\sigma_{0}^{2}\ll 2D_{t}t+2D_{c}\tau+\sigma_{p}^{2}$ — to avoid the additional integration in Eqn. 12.
The overlap between the optical mode and the static free carrier profile, on the other hand, can be analytically evaluated to yield the combined result $\displaystyle\tilde{\Delta}(t)=$ $\displaystyle\frac{\alpha_{c}}{n}N_{0}(t)\frac{\sigma_{0}^{2}}{2D_{c}\tau+\sigma_{p}^{2}+\sigma_{0}^{2}}\left(\frac{Q}{V}\right)$ $\displaystyle-Q\frac{\alpha_{t}}{n}\frac{E_{g}}{c_{v}\tau}N_{0}(t)*_{t}G(0,2D_{t}t+2D_{c}\tau+\sigma_{p}^{2})$ (13) for the cavity mode volume $V=\int\epsilon|E|^{2}{\rm d}^{3}\vec{r}/\max{\\{\epsilon|E|^{2}\\}}=2\pi w\sigma_{0}^{2}$. Since the reflected signal directly tracks the cavity amplitude in cross-polarization, the normalized reflectivity $r(t)=\int_{0}^{t}{\rm d}t^{\prime}e^{-\Gamma(t-t^{\prime})-i\int_{t}^{t^{\prime}}{\rm d}t^{\prime\prime}2\Delta(t^{\prime\prime})}$ (14) is finally found by numerically integrating the cavity evolution as dictated by coupled mode theory [88].

Figure A1: Estimated normalized reflection coefficient $r(t)=\sqrt{R(t)}e^{j\phi(t)}$ as a function of switching energy for the parameters in Table A1. Insets: same results in polar form for comparison to the Lorentzian reflection profile $r(\Delta)=1/(1+j\Delta)$ of a cavity under cross-polarized excitation with static detuning $\Delta$ (black dashed line).

Fig. A1 plots the switching characteristics for the parameters in Table A1. Free-carrier dispersion dominates the response for the nanosecond-order timescales of interest, followed by a slow ($\upmu$s-order), weak ($|\tilde{\Delta}|\ll 1$) thermal rebound. The three-order-of-magnitude timescale difference effectively decouples the two modulation mechanisms. Note that the true reflection coefficient deviates from the Lorentzian response of a quasi-static cavity due to the fast (relative to the cavity decay rate $\Gamma$) carrier dynamics. These results indicate that an SLM with $10^{6}$ pixels operating with $\omega_{s}>2\pi\times 100$ MHz could be realized with $\mathcal{O}(\text{watt})$ optical control power.
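The pulse-driven carrier population of Eqn. 9, which sets the free-carrier detuning term in Eqn. 13, is straightforward to evaluate numerically. A minimal sketch using the lifetime and pulse width of Table A1 (the absorbed energy ratio $E/E_{0}$ below is an arbitrary illustrative value, not a measured quantity):

```python
import numpy as np

# Table A1 parameters (the energy ratio E/E0 is an illustrative assumption)
tau = 1e-9          # carrier lifetime tau [s]
T_pulse = 0.5e-9    # control pulse width T [s]
eta_abs = 0.6       # absorption efficiency
E_ratio = 1.0       # absorbed pulse energy E in units of the photon energy E0

def N0(t):
    """Total photo-excited carrier population, Eqn. 9 (piecewise in t)."""
    pref = eta_abs * (tau / T_pulse) * E_ratio
    rise = pref * (1 - np.exp(-t / tau))                          # t <= T: filling
    fall = pref * np.exp(-t / tau) * (np.exp(T_pulse / tau) - 1)  # t > T: decay
    return np.where(t <= T_pulse, rise, fall)

t = np.linspace(0, 5e-9, 1001)
population = N0(t)  # peaks at the end of the pulse, then decays with lifetime tau
```

Both branches of Eqn. 9 agree at $t=T$, so the population is continuous and maximal at the trailing edge of the pulse, consistent with the switching dynamics in Fig. A1.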
Parameter | Value | Source
---|---|---
$n_{\text{Si}}$ | 3.48 | [107]
$E_{g}$ | 1.12 eV | [107]
$\alpha_{t}$ | $1.8\times 10^{-4}$ K${}^{-1}$ | [107]
$c_{v}$ | 1.64 J/cm${}^{3}\cdot$K | [108]
$D_{t}$ | 0.26 cm${}^{2}$/s | [108]
$\alpha_{c}$ | $8\times 10^{-9}$ $\upmu$m${}^{3}$ | [109] (Linearized)
$D_{c}$ | 19 cm${}^{2}$/s | [55]
$\tau$ | 1 ns | [55]
$\lambda_{0}$ | 1550 nm | Assumed
$Q$ | 200,000 | Assumed
$\tilde{V}$ | 0.95 | [108]
$w$ | 220 nm | Measured
$2\sigma_{0}$ | $0.66~{}\upmu\text{m}$ | $2\sqrt{V/(2\pi w)}$
$\lambda_{p}$ | $0.53~{}\upmu\text{m}$ | Assumed
$T$ | 0.5 ns | Assumed
$2\sigma_{p}$ | $0.22~{}\upmu\text{m}$ | $\lambda_{p}/2$
$\eta_{\text{abs}}$ | 0.6 | FDTD

Table A1: Parameters used for the simulated switching results in Fig. A1. The pulse parameters were selected to mimic the typical experimental conditions of Section V in the main text.

## Appendix B Performance Comparisons

Table A2 compares the PhC-SLM demonstrated here to other actively-controlled, 2D SLMs (Fig. 1b). Wavelength-steered devices and switch arrays are omitted to restrict focus to the typical SLM architectures in Fig. 1. Notably, while beamsteering with PhC waveguides [47, 110, 111] and laser arrays [112] has recently been demonstrated, our device is the first (to our knowledge) to feature simultaneous emission from a 2D array of individually controllable PhC pixels.
Class [Year] | Device | $\bm{N_{x}\times N_{y}}$ | $\bm{\Omega_{s}=\frac{\lambda}{\Lambda_{x}}\times\frac{\lambda}{\Lambda_{y}}}$ | $\bm{\zeta}$ [$\bm{\%}$] | $\bm{\omega_{s}/2\pi}$ [Hz]
---|---|---|---|---|---
EO [2022] | PhC-SLM | $\bm{8\times 8}$ | $\bm{10.6^{\circ}\times 14.5^{\circ}}$ | 64 | $\bm{1.4\times 10^{8}}$
EO [2021] | $\chi^{(2)}$ polymer-coated grating [26] | $2\times 2$ | $0.2^{\circ}\times 0.2^{\circ}$ | — | $5.0\times 10^{7}$
EO [2019] | $\chi^{(3)}$ thin-film plasmonic resonator [25] | $4\times 4$ | $0.8^{\circ}\times 1.1^{\circ}$ | 20* | $1.0\times 10^{9}$
EO [2017] | Bilayer guided resonators [23] | $6\times 6$ | $0.3^{\circ}\times 0.3^{\circ}$ | 40* | $2\times 10^{8}$
EO [2011] | $\chi^{(2)}$ polymer-coated grating [33] | $4\times 4$ | $0.1^{\circ}\times 0.1^{\circ}$ | 18* | $8.0\times 10^{5}$
EO [2005] | MQW micropillar modulators [24] | $128\times 128$ | $1.3^{\circ}\times 1.3^{\circ}$ | 50 | $1.3\times 10^{7}$
Thermal [2018] | Asymmetric Fabry-Perot cavity [17] | $6\times 6$ | $3.4^{\circ}\times 3.4^{\circ}$ | 59 | $1.4\times 10^{4}$
Thermal [2013] | Waveguided phased array [16, 113] | $8\times 8$ | $9.9^{\circ}\times 9.9^{\circ}$ | 10* | $1.1\times 10^{5}$
MEMS [2019] | Grating phase shifters [20] | $160\times 160$ | $4.4^{\circ}\times 4.1^{\circ}$ | 85* | $5.5\times 10^{4}$
MEMS [2019] | Piston mirrors [21] | $960\times 540$ | $3.4^{\circ}\times 3.4^{\circ}$ | — | $2.0\times 10^{4}$
MEMS [2014] | High-contrast gratings [19] | $8\times 8$ | $2.7^{\circ}\times 2.7^{\circ}$ | 36* | $5.0\times 10^{5}$
MEMS [2001] | Piston mirrors [18] | $256\times 256$ | $0.9^{\circ}\times 0.9^{\circ}$ | 86 | $5.0\times 10^{5}$
LC [2020] | Plasmonic metasurface [14] | $3\times 2$ | $0.3^{\circ}\times 0.3^{\circ}$ | — | $2.5\times 10^{1}$
LC [2019] | “MacroSLM” [3] | $1536\times 1536$ | $3.0^{\circ}\times 3.0^{\circ}$ | 95 | $6.0\times 10^{2}$
LC [1994] | Binary ferroelectric LC [13] | $256\times 256$ | $2.2^{\circ}\times 2.2^{\circ}$ | 79 | $8.3\times 10^{3}$
Table A2: Performance comparison of selected active 2D spatial light modulators from Fig. 1b. Estimated fill factors $\zeta$ are marked by a *.

## Appendix C Inverse Design Strategy

We implement the inverse design strategy in Fig. 3b using the open-source guided mode expansion (GME) package Legume [114]. GME approximates the cavity eigenmode using the incomplete basis set of waveguide modes in an “effective” unpatterned slab (in effect transforming the 3D eigenproblem to 2D) and perturbatively computes the loss due to coupling to the radiative continuum [77]. During each optimization step, we aggregate the losses of the fundamental slab mode over four Bloch boundary conditions $\vec{k}_{i}$ at all wave vectors $\vec{g}_{mn}=\vec{k}_{i}+\vec{G}_{mn}=\vec{k}_{i}+2\pi(\frac{m}{\Lambda_{x}},\frac{n}{\Lambda_{y}})$ satisfying $|\vec{g}_{mn}|<g_{\text{max}}=2.5\times 2\pi/a$ given the reciprocal lattice vectors $\vec{G}_{mn}$. The objective function Eqn. 2 converges within tens of iterations, and the resulting design is then verified with $g_{\text{max}}=3\times 2\pi/a$ using a $3\times 3$ $\vec{k}_{i}$ grid in the Brillouin zone of the rectangular lattice of unit cells. Exemplary GME-approximated far-field profiles for an $L3$ cavity with two target quality factors $Q_{0}$ are shown in Fig. A2a,c for comparison to those computed using near-to-far-field transformations of FDTD-simulated fields [115, 82]. These results confirm that the perturbatively-computed GME coupling coefficients can be used to accurately estimate a cavity’s far-field scattering profile.

Figure A2: Comparison of resonant wavelengths $\lambda_{0}$, quality factors $Q$, and far-field emission spectra computed from GME (left) and FDTD (right) simulations for two target $Q_{0}$ (top, bottom).

The inverse design objective function (Eqn. 2) maximizes the directivity $D=4\pi S(0)/\int_{\Omega}S(\vec{k})~{}{\rm d}\Omega$ (for the light cone $\Omega$) of the emission profile $S(\vec{k})$ for any $Q_{0}$.
The resulting aperture efficiency $\eta_{a}=\frac{D}{\max D}=\frac{\lambda^{2}}{A}\frac{S(0)}{\int_{\Omega}S(\vec{k})~{}{\rm d}\Omega}=\frac{A_{0}}{A}$ (15) compares $D$ to the maximum directivity $4\pi A/\lambda^{2}$ of an area $A=\Lambda_{x}\Lambda_{y}$ aperture at wavelength $\lambda$, and can therefore be interpreted as the fill factor of light scattered from an effective area $A_{0}$. Fig. A3 compares $\eta_{a}$ for grating-coupled and inverse-designed $L3$ cavities. For most inverse designs, $\eta_{a}\approx 1$ regardless of $Q_{0}$. Since GME assumes periodic boundary conditions (indicative of the true array design), scattering from neighboring unit cells enables designs with $\eta_{a}>1$. However, this “super-directive” performance is undesirable since the steerable field-of-view is narrowed to $\Omega_{p}<\Omega_{s}$.

Figure A3: FDTD-computed aperture efficiency $\eta_{a}$ of vertically coupled $L3$ cavities based on a grating perturbation $\delta r/r\in[0,0.05]$ or an inverse-design target quality factor $Q_{0}\in[10^{2},10^{6}]$. Inset arrows illustrate parameter trends.

## Appendix D Experimental Setups

Fig. A4 schematically illustrates the major components of our experimental setup. Here, we describe the design and function of each sub-assembly.

Figure A4: Overview of experimental setups for measuring and controlling the photonic crystal SLM (PhC-SLM). A cross-polarized microscope (a) featuring balanced homodyne measurement (b) enables near- and far-field characterization of cavity arrays controlled by SLM-distributed coherent light (c) or high-speed incoherent $\upmu$LED arrays (d).
TL: tunable infrared laser (Santec TSL-710); EOM: electro-optic amplitude modulator; $\lambda/2$: half-wave plate; PBS: polarizing beamsplitter; L1: 250 mm back-focal-plane lens; DM: long-pass dichroic mirror; OL: objective lens (Nikon Plan Fluor 40$\times$/0.60 NA or Nikon LU Plan 100$\times$/0.95 NA); L2: 250 mm back-focal-plane lens; SF: spatial filter; L3: 200 mm tube lens; v-SWIR: visible-short-wave infrared camera (Xenics Cheetah 640); DAQ: data acquisition unit (NI USB-6343); $\Delta t$: trigger delay generator (SRS DG645); LO: local oscillator; PM: piezo mirror; BD: balanced detector (Thorlabs PDB480C-AC); Phase Lock: TEM LaseLock; LPF: low-pass filter; CWTL: continuous-wave trimming laser (Coherent Verdi V18); MLD: modulated laser diode (Hubner Cobolt or PicoLAS LDP); BE: $5\times$ visible beam expander; LCOS: high-power liquid crystal SLM (Santec SLM-300); L4: 300 mm; L5: 250 mm; PD: photo-detector; CL: collection lens (Zeiss Fluar 5$\times$/0.25 NA); VBE: $0.5\times-2\times$ variable beam expander; DP: Dove prism.

### D.1 Near-Field Reflection Spectra

The wide-field, cross-polarized microscope in Fig. A4(a) allows us to simultaneously measure the reflection from every cavity within a camera’s field-of-view. A visible illumination path (not illustrated) is joined with collimated infrared light from a tunable laser with a dichroic mirror and focused onto the back-focal-plane (BFP) of an objective by lens L1. The angle-of-incidence and spot size of the infrared beam on the sample are therefore controlled by translating L1 and varying the collimated beam diameter, respectively. In our typical wide-field configuration, a 7.2 mm beam diameter focused to the center of a $40\times$ objective’s BFP yields a $\mathord{\sim}150~{}\upmu$m waist-diameter, vertically-incident field that quasi-uniformly illuminates $10\times 10$ PhC cavity arrays.
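As a quick consistency check, Gaussian-beam optics reproduces this field size: focusing through L1 and re-collimation by the objective act as a telescope that demagnifies the input beam by $f_{\text{obj}}/f_{L1}$. The sketch below assumes a 5 mm focal length for the $40\times$ objective (i.e., a standard 200 mm tube lens), which is not stated explicitly in the text:

```python
import math

wavelength = 1.55e-6  # infrared laser wavelength [m]
f_L1 = 250e-3         # back-focal-plane lens L1 focal length [m]
f_obj = 200e-3 / 40   # 40x objective focal length [m]; assumes a 200 mm tube lens
w_in = 7.2e-3 / 2     # input waist radius for a 7.2 mm collimated beam diameter [m]

# Gaussian focus at the objective back-focal-plane produced by L1
w_bfp = f_L1 * wavelength / (math.pi * w_in)
# the objective re-collimates that focus into a wide field at the sample;
# algebraically this reduces to w_sample = f_obj * w_in / f_L1
w_sample = f_obj * wavelength / (math.pi * w_bfp)
print(f"waist diameter at sample: {2 * w_sample * 1e6:.0f} um")
```

The result is on the order of the quoted $\mathord{\sim}150~\upmu$m waist diameter under these assumed focal lengths.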
By orienting the input polarization at a $45^{\circ}$ angle relative to the dominant cavity polarization axis (with a half-wave plate or by physically rotating the sample), light coupled into and reflected by the PhC cavity is polarization rotated and can be isolated from direct, specular reflections with a polarizing beamsplitter. A kHz-rate free-running, dual-band (visible and infrared) camera images this cross-polarized reflection signal through the tube lens L3. For each frame collected during a laser sweep, the wavelength is interpolated from the recorded camera and laser output triggers and each cavity’s reflection is integrated over a fraction of pixels within its imaged unit cell boundary. We use the resulting high-contrast reflection spectra (across all devices within the field-of-view) to characterize device performance and monitor the cavity trimming process. The sample mount below the objective (OL) is temperature stabilized to within $10$ mK with a Peltier plate and feedback controller. For trimming experiments, the sample is placed in a high-pressure oxygen environment within a custom chamber offering in situ optical access through a glass window.

### D.2 Calibrated Far-Field Measurement

Inserting a lens (L2) in the collection path one focal length from the objective BFP allows us to measure the far-field profile $S(\vec{k})$ of individual or multiple cavities using the same setup. We position an iris at the intermediate image plane — located with a removable lens (not shown) placed before L3 — to spatially filter the emission from desired devices. We also calibrate the BFP scale using a reflective reference grating with known pitch. Due to the cross-polarized configuration, only a single polarization $\tilde{S}(\vec{k})\big{|}_{\theta}$ is imaged for any cavity-input polarization angle difference $\theta$.
The complete cavity emission profile $S(\vec{k})=\tilde{S}(\vec{k})\big{|}_{\theta}+\tilde{S}(\vec{k})\big{|}_{\theta\pm\pi/2}$ (16) can therefore be reconstructed by sequentially imaging both polarizations as in Figs. A5a-c for $\theta=45^{\circ}$. For maximum accuracy, we used this technique for the experimental results in Fig. 4. Alternatively, the specific choice $\theta=45^{\circ}$ allows $S(\vec{k})$ to be reconstructed from a single measurement. Due to mirror symmetry about the cavity’s principal polarization axis $\hat{y}$, Fig. A5a-b show that $\hat{\sigma}_{\hat{y}}\\{\tilde{S}(\vec{k})\big{|}_{\pm 45^{\circ}}\\}=\tilde{S}(\vec{k})\big{|}_{\mp 45^{\circ}}$ for the reflection operator $\hat{\sigma}$. This alternative reconstruction $S(\vec{k})=\left[1+\hat{\sigma}_{\hat{y}}\right]S(\vec{k})\big{|}_{\pm 45^{\circ}}$ (17) is demonstrated experimentally in Fig. 4d, yielding excellent agreement with Fig. 4c. This technique simplifies high-throughput far-field measurements across cavity arrays (Fig. A10, for example).

Figure A5: Cross-polarized back-focal-plane (BFP) imaging techniques for a grating-coupled $L3$ cavity. Two orthogonally polarized far-field profiles are imaged by orienting the input polarization $E_{\text{in}}$ at a $+45^{\circ}$ (a) or $-45^{\circ}$ (b) angle from the dominant cavity polarization axis (dashed line in inset). The complete cavity emission profile $S(\vec{k})$ can be reconstructed by summing both images (c) or approximated from a single polarized image (d), yielding near-identical images with quantitative agreement between the extracted $\eta_{0}$.

### D.3 Homodyne Measurement

The shot-noise-limited balanced homodyne detection setup in Fig. A4c enables complex reflection coefficient measurements with greater than 3 dB shot-noise clearance below $1$ GHz [108].
Signal light reflected from the cavity combines with a path-length-matched (to within $\mathord{\sim}$mm based on time-delay measurements with a picosecond-class pulsed laser) local oscillator (LO), and both signals are coupled into a balanced detector using anti-reflection coated fibers. The in-phase ($I(t)$) and quadrature ($Q(t)$) components of the cavity reflection were sequentially measured by locking to the first and second harmonics of the balanced output in the presence of a piezo-driven LO phase dither. The resonant, cross-polarized cavity reflection $R$ and phase shift $\phi$ are then reconstructed as $\displaystyle R=\frac{\left[V_{p}-I(t)\right]^{2}+Q^{2}(t)}{V_{p}^{2}}$ $\displaystyle\phi=\arctan\frac{Q(t)}{V_{p}-I(t)}$ (18) by normalizing to the measured peak voltage swing $V_{p}$ of the interference signal.

### D.4 Parallel Cavity Trimming

A liquid crystal on silicon (LCOS) SLM (Fig. A4c) actively distributes a high-power, continuous-wave visible laser to target devices during the cavity trimming procedure. The input laser was tunably attenuated with a motorized half-wave plate (preceding a PBS) and subsequently expanded to overfill the LCOS aperture. The LCOS SLM was re-imaged onto the objective BFP (as confirmed by imaging with L2 in place) using two lenses (L4, L5) with focal lengths chosen to optimally match the imaged SLM and objective pupil dimensions. Phase retrieval-computed holograms then evenly distribute power to an array of focused spots on the sample (Appendix I) when the mechanical flip-mirror shutter is opened.

### D.5 $\upmu$LED Imaging

The collection optics in Fig. A4d maximize the intensity of a $\upmu$LED array projected onto the PhC membrane within the constraints dictated by the constant radiance theorem of incoherent imaging.
Assuming a Lambertian emission profile, geometric optics gives the collection efficiency $\eta_{c}=\alpha_{c}^{2}$ for an objective lens (CL) with numerical aperture $\alpha_{c}$ focused on the $\upmu$LED array. The projection efficiency $\eta_{p}$ through the projection objective (OL, with numerical aperture $\alpha_{p}$) depends on the relative pupil sizes of both objectives and can be similarly approximated from geometric optics. The resulting intensity enhancement $\zeta=\eta_{c}\eta_{p}/M^{2}$ between the source and image (with magnification $M$) reaches a maximum $\zeta_{\text{max}}=\frac{1}{M^{2}+(1-\alpha_{p}^{2})/\alpha_{p}^{2}}$ when the CL-collimated light overfills the back aperture of OL. The resulting design criterion, $\alpha_{c}>\sqrt{\frac{M^{2}\alpha_{p}^{2}}{(M^{2}-1)\alpha_{p}^{2}+1}},$ (19) is achieved for our imaging setup with $\alpha_{c}=0.25$, $\alpha_{p}=0.95$, and $M\approx 1/30$. After CL, the overall magnification and rotation are fine-tuned with a variable beam expander and Dove prism, respectively.

## Appendix E Optimum Gaussian Coupling

The far-field spatial overlap integral [116] $O(w_{0})=R^{2}\iint_{\Omega}\vec{E}_{c}(\theta,\phi)\times\vec{H}_{g}^{*}(w_{0},\theta,\phi)\sin\theta d\theta d\phi$ (20) over the hemisphere $\Omega$ at distance $R$ yields the power coupling $|O(w_{0})|^{2}$ between a cavity mode $c$ and fundamental Gaussian beam $g$ as a function of the Gaussian waist radius $w_{0}$. We compute the cavity electric field profile $\vec{E}_{c}(\theta,\phi)$ by applying a near-to-far field transformation to the FDTD-simulated cavity mode in a plane just above and parallel to the PhC slab (Flexcompute, Inc., Tidy3D, https://simulation.cloud) [118].
The resulting far-field profile is finely discretized — relative to the diffraction-limited beamwidth $\lambda/\Lambda\approx 14^{\circ}$ for typical cavity unit cell dimensions $\Lambda$ and resonant wavelength $\lambda$ — on a $1^{\circ}$ grid in zenith and azimuth ($\theta$ and $\phi$, respectively). For the co-polarized Gaussian mode, we instead convert the magnetic field [88] $\displaystyle\vec{H}(x,y,z)=$ $\displaystyle\sqrt{\frac{\pi}{Z_{0}}}\frac{2w_{0}}{z+j\pi w_{0}^{2}}\exp{\frac{-j2\pi(x^{2}+y^{2})}{2(z+j\pi w_{0}^{2})}}$ $\displaystyle\times$ $\displaystyle\left[\hat{x}+\left(\frac{-xz-jx\pi w_{0}^{2}}{z^{2}+(\pi w_{0}^{2})^{2}}\right)\hat{z}\right]e^{-j2\pi z}$ (21) derived from paraxial diffraction theory (for the free space impedance $Z_{0}$) to spherical coordinates at a far-field distance $R$. Both modes are normalized to carry unit power $\int_{\Omega}\real\\{\vec{E}\times\vec{H}^{*}\\}/2$.

Figure A6: Comparison of coupling between a fundamental Gaussian beam with waist $w_{0}$ and two different cavity designs with maximum unit cell dimension $\Lambda$.

Fig. A6 compares the resulting power coupling as a function of normalized waist $2w_{0}/\Lambda$ for grating-coupled and inverse-designed cavity designs with maximum dimension $\Lambda$. In both cases, a backreflector is assumed for unidirectional emission into $\Omega$. Besides the increase in maximum coupling to 94%, the optimized waist diameter $2w_{0}\approx\Lambda$ indicates that the inverse-designed cavity’s effective near-field scattering profile fully fills the design unit cell. The power coupling $|O|_{\text{max}}^{2}$ to a single desired free-space mode also allows us to compute that mode’s amplitude reflection spectrum [88] $r(\tilde{\Delta})=\frac{2|O|_{\text{max}}^{2}/p-j\tilde{\Delta}}{1+j\tilde{\Delta}}$ (22) using temporal coupled mode theory assuming a normalized detuning $\tilde{\Delta}=\Delta/\Gamma$ from the cavity resonance and $p$-directional emission (i.e.
$p=1$ for unidirectional emission with a backreflector or $p=2$ for symmetric emission). Fig. 4 of the main text plots these optimized spectra. Whereas the grating-coupled cavity is undercoupled with an amplitude-dominant reflection spectrum, the inverse-designed cavity is phase-dominant as desired for high-efficiency beamforming.

## Appendix F Beamforming with a Coupled Amplitude-Phase Response

Phase retrieval algorithms (Appendix I) optimize unity-magnitude near-field reflection coefficients $r=e^{j\phi}$ to generate a desired far-field intensity pattern, but fail for the coupled amplitude and phase reflection coefficients of Eqn. 22 [119, 120]. Here, we derive an alternative algorithm to compensate for this coupling. Assuming a spatially uniform field $E_{i}(\vec{r})=E_{i}$ incident on a $\Lambda_{x}\times\Lambda_{y}$-pitch array, the reflected far-field defined in Eqn. 3 is the product of the single-element far-field pattern $S(\vec{k})$ and the array factor $\mathcal{F}\\{r\\}=\sum_{m,n}r_{m,n}\exp{jk(m\Lambda_{x}u+n\Lambda_{y}v)},$ (23) i.e., the 2D discrete Fourier transform ($\mathcal{F}$) of the reflection coefficients $r$ with respect to the cosine-space coordinates $u=k_{x}/k$ and $v=k_{y}/k$. Since $S(\vec{k})$ is determined by element design (Section II), far-field pattern synthesis is achieved by optimizing the array factor. Given Eqn. 23, we can represent constrained far-field synthesis with the nonlinear program $\min_{\Delta}f(\Delta)=\min_{\Delta}\frac{1}{2}\norm{\frac{|\mathcal{F}\\{r(\Delta)\\}|^{2}}{\norm{|\mathcal{F}\\{r(\Delta)\\}|^{2}}_{F}}-\frac{|\textbf{Y}|^{2}}{\norm{|\textbf{Y}|^{2}}_{F}}}_{F}^{2},$ (24) where $r(\Delta)$ is the matrix of reflection coefficients as a function of the detuning matrix $\Delta$, $|\cdot|$ represents the complex modulus, $\norm{\cdot}_{F}$ is the Frobenius norm, and $\textbf{Y}$ is the goal image.
By normalizing the Fourier transforms, this program optimizes image appearance as opposed to absolute intensity, which varies in the presence of reflection loss. We approximate the solution to the nonlinear program using the L-BFGS-B method — a low-memory quasi-Newton method with simple box constraints to limit detunings [121]. L-BFGS-B uses the gradient and a low-rank approximation of the objective function’s Hessian to approximate Newton’s method, yielding superlinear convergence. To efficiently optimize Eqn. 24, we computed its analytic gradient $\nabla f=\frac{2\left(g(a_{1},b_{1})-\frac{\langle|\mathcal{F}\\{r(\Delta)\\}|^{2},h\rangle_{F}}{\norm{|\mathcal{F}\\{r(\Delta)\\}|^{2}}_{F}}g(a_{2},b_{2})\right)}{\norm{|\mathcal{F}\\{r(\Delta)\\}|^{2}}_{F}^{2}},$ (25) for $g(a,b)=\big{[}(\Re{\mathcal{F}\\{a\\}}+\Im{\mathcal{F}\\{b\\}})\odot\Re{\nabla r}+(\Re{\mathcal{F}\\{b\\}}-\Im{\mathcal{F}\\{a\\}})\odot\Im{\nabla r}\big{]},$ $h=\frac{|\mathcal{F}\\{r(\Delta)\\}|^{2}}{\norm{|\mathcal{F}\\{r(\Delta)\\}|^{2}}_{F}}-\frac{|\textbf{Y}|^{2}}{\norm{|\textbf{Y}|^{2}}_{F}},$ (26) and $a_{1}=h\odot\Re{\mathcal{F}\\{r(\Delta)\\}}$, $a_{2}=|\mathcal{F}\\{r(\Delta)\\}|^{2}\odot\Re{\mathcal{F}\\{r(\Delta)\\}}$, $b_{1}=h\odot\Im{\mathcal{F}\\{r(\Delta)\\}}$, $b_{2}=|\mathcal{F}\\{r(\Delta)\\}|^{2}\odot\Im{\mathcal{F}\\{r(\Delta)\\}}$, where $\odot$ and $\langle\cdot,\cdot\rangle_{F}$ are the Hadamard/element-wise product and Frobenius inner product, respectively. Since Eqn. 25 contains only Fourier transforms and element-wise operations, it can be calculated in $\mathcal{O}\left(N\log(N)\right)$ time. We further accelerated L-BFGS-B by initializing $\Delta$ with a modified phase retrieval algorithm that enforces the near-field constraint $r(\Delta)$ [122]. The described algorithm was implemented in Python using the PyTorch package for GPU acceleration.
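The structure of this optimization can be sketched in a few lines of NumPy/SciPy. This is a toy version rather than the authors' GPU implementation: the reflection model, array size, goal image, and detuning bounds are all illustrative, and SciPy's finite-difference gradients stand in for the analytic gradient of Eqn. 25.

```python
import numpy as np
from scipy.optimize import minimize

def reflection(delta, coupling=0.9, p=1):
    # Toy coupled amplitude-phase reflection vs. normalized detuning
    # (illustrative coupling fraction and emission directionality p).
    return (2 * coupling / p - 1 - 1j * delta) / (1 + 1j * delta)

def objective(delta_flat, target, shape):
    # Squared Frobenius distance between unit-norm far-field intensities,
    # so image appearance (not absolute power) is what gets optimized.
    delta = delta_flat.reshape(shape)
    intensity = np.abs(np.fft.fft2(reflection(delta))) ** 2
    intensity /= np.linalg.norm(intensity)      # Frobenius normalization
    goal = np.abs(target) ** 2
    goal /= np.linalg.norm(goal)
    return 0.5 * np.sum((intensity - goal) ** 2)

shape = (8, 8)
target = np.zeros(shape)
target[2, 3] = 1.0                              # toy single-spot goal image
x0 = np.random.default_rng(0).normal(0.0, 0.1, shape[0] * shape[1])
res = minimize(objective, x0, args=(target, shape), method="L-BFGS-B",
               bounds=[(-3.0, 3.0)] * x0.size)  # box-constrained detunings
```

The box bounds play the role of the achievable detuning range; swapping the finite-difference gradient for the analytic one in Eqn. 25 is what makes the full-scale problem tractable.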
Using an RTX 2080, the generation of the far-field patterns for Fig. 3 took only a few seconds. The success of the algorithm is demonstrated by the lack of a conjugate image (even in the highly undercoupled case).

## Appendix G Effect of Substrate Reflections

Isolated slab PhC cavities feature symmetric, bi-directional emission due to vertical reflection symmetry about the slab midplane. When placed above a reflective substrate, however, interference between the (reflected) downwards and upwards emission paths alters the cavity’s radiation pattern $S(\vec{k})$ and $Q$ [123]. The results are analogous to the modified spontaneous emission from a quantum emitter placed above a mirror [124]. Figure A7: Quality factor ($Q$) trends for an $L3$ PhC cavity (resonant wavelength $\lambda_{0}$) placed a distance $d$ above a silicon back-reflector. Insets show the far-field emission pattern $S(\vec{k})$ at select points. Fig. A7 plots the FDTD-simulated quality factor trends of an inverse-designed $L3$ cavity as a function of the membrane-substrate gap spacing $d$. Since the optical thickness of the PhC slab is approximately $\lambda/2$, constructive interference at $d\approx m\lambda/2$ ($m\in\\{1,2,\dots\\}$) maximizes vertical emission and minimizes $Q$. Destructive interference at $d\approx(2m-1)\lambda/4$ has the opposite effects. The resulting variation in $\eta_{0}$ explains the minor discrepancy between the simulated and measured $S(\vec{k})$ in Fig. 4. We mitigated the impact of these effects through optimized sample preparation. Compressive stress in our silicon-on-insulator (SOI) die buckles suspended cavity arrays, yielding variations in $d$ — and therefore cavity reflectivity — across the membrane. As an alternative to previously proposed stress-engineered support structures [125], we flattened the suspended arrays by mechanically bowing the die with a backside set pin in a custom sample mount (Fig. A8).
Unfortunately, the resulting (uniform) gap spacing $d=2~{}\upmu\text{m}$ minimizes vertical coupling at the design wavelength $\lambda_{0}=1.55~{}\upmu\text{m}$. We therefore added a back-side silicon nitride anti-reflection coating (ARC) on the substrate and timed the release etch to form a front-side ARC with the remaining oxide (Fig. A9). Figure A8: Sample mount with set pin to flatten stress-buckled suspended membranes. Figure A9: Comparison of simulated (dashed) and measured (solid) silicon-on-insulator (SOI) sample reflection before (blue) and after (green) anti-reflection optimization for normal incidence at $\lambda=1550$ nm. Measured values were calibrated with a known reference mirror. The final layer stack of silicon (Si, $n=3.48$), oxide (SiO2, $n=1.44$), and deposited silicon nitride (SiNx, $n=1.90$) is shown in the inset with optimized layers highlighted in green.

## Appendix H Far-Field Uniformity

Fig. A10 demonstrates the far-field uniformity characteristic of inverse-designed and grating-coupled $L3$ cavity arrays. Averaged across the $8\times 8$ arrays, the former offers a $\mathord{\sim}3\times$ improvement in zero-order diffraction and aperture efficiencies ($\langle\eta_{0}\rangle=0.86$, $\langle\eta_{a}\rangle=0.99$). Figure A10: Imaged far-field profiles $S(\vec{k})$ (over a 0.9 numerical aperture) for each device in an $8\times 8$ array of inverse-designed (top) and grating-coupled (bottom) $L3$ PhC cavities. The extracted zero-order efficiencies $\eta_{0}$ and standard deviations are also provided.

## Appendix I The slm-suite Toolbox

We developed the open-source Python package slm-suite to simplify the creation of high-uniformity, arbitrary-geometry optical focus arrays using various phase retrieval algorithms. The package features:
1. Automated wavefront calibration routines that measure the Fourier-plane source amplitude and phase using a super-pixel interference technique to compensate for aberrations along the SLM imaging train [126]
2. Various graphics processing unit (GPU)-accelerated Gerchberg-Saxton (GS) algorithms that use the measured source constraints (1) to produce optimized spot array phase masks [127, 128, 129]
3. Automated affine transformations between grating wave vectors applied to the SLM and image-space coordinates (i.e. camera pixels) by projecting and detecting a GS-computed spot array
4. Camera-based feedback of measured spot amplitudes at known (calibrated) locations into phase retrieval algorithms to improve the uniformity of image-space spot arrays
5. Automated evaluation metrics to monitor diffraction efficiency, spot amplitude and position tolerance, and spot quality.
6. Simplified hardware interface and control.

After calibration, high-uniformity optical foci can be generated at arbitrary image plane locations specified by the user. For example, Fig. A11 shows a $10\times 10$ spot array with $\mathord{\sim}1\%$ power uniformity and sub-micron placement accuracy formed on a $10\times 10$ cavity array during the trimming procedure of Section IV. Figure A11: Overlaid images of $10\times 10$ cavity (grey) and trimming spot (color) arrays demonstrating the $\ll\upmu$m placement accuracy and percent-order power uniformity of weighted Gerchberg-Saxton phase retrieval with experimental camera feedback.

## Appendix J Parallel Laser Oxidation

As illustrated by the wavelength trends as a function of incident laser power and exposure time in Fig. A12, we found that laser-assisted thermal oxidation could be accelerated in a high-pressure oxygen environment. We therefore mount samples in a custom pressure chamber with a partial oxygen pressure ranging from 5 to 10 atm.
Figure A12: Wavelength and quality factor trends as a function of incident visible laser power, demonstrating accelerated trimming with increased oxygen partial pressure $P_{\text{O}_{2}}$.

Technique [Year] | Cavity Type | $\bm{N}$ | $\bm{\Delta\lambda_{0}^{\textbf{p-p}}}$ [pm] | $\bm{\langle Q\rangle}$ | In situ? | Parallel?
---|---|---|---|---|---|---
“Holographic” oxidation [2022] | Si PhC | 64 | 13 | $2\times 10^{5}$ | Y | Y
Germanium implantation [2021] | Si ring [130] | 58 | 32 | $4\times 10^{3}$ | Y | N
Laser-annealed cladding [2020] | Si ring [131] | 2 | 20* | $2\times 10^{4}$ | Y | N
Boron implantation [2019] | Si ring [132] | 4 | 15 | $5\times 10^{3}$ | Y | N
Electron-beam irradiation [2018] | Si PhC [133] | 4 | 400 | $3\times 10^{5}$ | N | N
Photo-electro-chemical etching [2017] | GaAs disk [134] | 5 | 200* | $2\times 10^{4}$ | Y | N
Annealed cladding [2016] | Si ring [135] | 5 | 90* | $3\times 10^{3}$ | Y | N
Ultraviolet irradiation [2014] | a-Si ring [136] | 4 | 45 | $8\times 10^{3}$ | Y | N
Post-fabrication etching [2013] | GaAs PhC [137] | 18 | 100* | $3\times 10^{4}$ | N | Y
Photochromatic thin-film [2011] | GaAs PhC [138] | 3 | 340 | $8\times 10^{3}$ | Y | N
Anodic oxidation [2006] | GaAs PhC [139] | 2 | 100* | $5\times 10^{3}$ | N | N

Table A3: Comparison of microcavity array trimming techniques. Estimated values are marked with a *. $\Delta\lambda_{0}^{\text{p-p}}$ = peak-to-peak wavelength error; $\langle Q\rangle$ = mean quality factor.

Figure A13: Flowchart of the holographic trimming algorithm. Trimming holograms are formed with weighted Gerchberg-Saxton (GS) algorithms and projected onto desired cavities for duration $\Delta t$ with power $P_{\text{trim}}$. Alternating trimming and resonance readout periods continue until the instantaneous wavelength $\lambda_{i}$ of any targeted cavity blueshifts past the target wavelength $\lambda_{t}$. Thereafter, a new set of target cavities is selected and trimmed.
This selection and trimming sub-loop continues until all resonant wavelengths $\\{\lambda_{0}\\}$ are below the “rest” wavelength $\lambda_{\text{rest}}$, at which point trimming is halted and the resonances are continuously monitored at readout interval $\Delta t_{\text{rest}}$. When the resonances are sufficiently stable (redshifting from moisture adsorption to the silicon membrane is arrested), the total “rehydration” redshift $\Delta\lambda_{0}$ of each cavity is updated to better estimate the true resonant wavelength $\lambda_{0}\approx\lambda_{i}+\Delta\lambda_{0}$ from the instantaneous wavelengths $\\{\lambda_{i}\\}$ during trimming. The entire process terminates when the peak-to-peak static resonant wavelength uniformity $\Delta\lambda_{0}^{\text{p-p}}$ drops below the desired tolerance $\Delta\lambda_{\text{tol}}.$ The main loop of the trimming process (Fig. 6) consists of device selection, hologram setup, parallel laser oxidation, and resting intervals. The algorithm monitors two resonant wavelengths: the instantaneous wavelength $\lambda_{i}$ and the steady-state wavelength $\lambda_{0}$. Initially $\lambda_{i}=\lambda_{0}$; however, focusing high-power ($\mathord{\sim}10$ mW) visible light onto the cavity (as required to sufficiently heat the PhC membrane for thermal oxidation) causes a temporary blueshift $\Delta\lambda_{0}$ due to the desorption of moisture attached to hydrophilic hydroxyl surface terminations. For any target rest wavelength $\lambda_{t}$, we therefore trim devices to an instantaneous wavelength $\lambda_{i}=\lambda_{t}-\Delta\lambda_{0}$ that relaxes over $\mathcal{O}(\text{minute})$ timescales to $\lambda_{i}=\lambda_{0}=\lambda_{t}$ as moisture re-adsorbs to the surface. In practice, the stability and estimation of the “overtune” $\Delta\lambda_{0}$ limit the uniformity and scale of the trimming process, respectively. 
After initializing the cavity locations, scanning the device resonances, and calibrating the SLM (Section I), a spot array targeting every cavity (Fig. A11, for example) is projected on the membrane for a short (few second) duration. Monitoring the resonances at fixed intervals $\Delta t\approx 10$ s until $\lambda_{i}$ stabilizes to the rest wavelength $\lambda_{0}$ gives an initial estimate $\Delta\lambda_{0}=\lambda_{0}-\min\\{\lambda_{i}\\}$ for the overtune parameter of each cavity. We also update the target wavelength $\lambda_{t}=\min\\{\lambda_{0}\\}$ and rest wavelengths before continuing the trimming procedure. To update $\Delta\lambda_{0}$, we periodically conduct this same “rest loop” when $\lambda_{0}$ of each cavity is below an algorithmically chosen “checkpoint” wavelength $\lambda_{\text{rest}}$. As described in Section IV, a subset of $N$ cavities is then selected to maximize the total possible trimming distance to $\lambda_{t}$. The number of targeted cavities neighboring each untargeted cavity is also limited to reduce crosstalk. A spot array is then formed to evenly distribute the trimming laser to the selected devices. After confirming that the location accuracy and power uniformity of the array are within tolerance, we alternate exposure and readout intervals to grow thermal oxide with in situ monitoring. The laser power is progressively increased to reach a desired, wavelength-uniformity-dependent trimming rate. As evidenced by Fig. A12, the rate is relatively power-independent until reaching a threshold power. We detect and save these threshold powers for use when selecting the initial exposure power in each trimming loop. The trimming sub-loop continues until the estimated $\lambda_{0}$ of any targeted cavity crosses $\lambda_{t}$. New cavities are then selected, targeted, and trimmed until a rest period is triggered.
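The select-and-trim logic above can be sketched as a toy control loop. This is a simplified model, not the authors' implementation: the blueshift step size, its variability, and the termination tolerance are all made-up illustrative numbers, and overtune estimation, crosstalk limits, and rest loops are omitted.

```python
import numpy as np

def holographic_trim(lam0, lam_tol=5e-3, step=0.02, n_select=4, seed=0):
    """Toy model of the parallel trimming loop: repeatedly target the
    subset of cavities with the largest trimming distance to the bluest
    resonance and blueshift them until the peak-to-peak spread is
    within tolerance. Rates and noise levels are illustrative."""
    rng = np.random.default_rng(seed)
    lam = np.array(lam0, dtype=float)
    while np.ptp(lam) > lam_tol:
        lam_t = lam.min()                         # target = bluest resonance
        targets = np.argsort(lam)[-n_select:]     # maximize trimming distance
        # one exposure/readout interval: blueshift with small variability,
        # stopping each cavity once it reaches the target wavelength
        blueshift = step * (1 + 0.1 * np.abs(rng.standard_normal(n_select)))
        lam[targets] = np.maximum(lam[targets] - blueshift, lam_t)
    return lam

# eight hypothetical resonances (nm) converging onto the bluest one
lams = holographic_trim(
    [1550.0, 1550.3, 1550.1, 1550.45, 1550.2, 1550.05, 1550.6, 1550.15])
```

Termination mirrors the flowchart of Fig. A13: trimming halts once the peak-to-peak wavelength spread drops below the tolerance, and the bluest resonance sets the common target since oxidation can only blueshift.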
When the peak-to-peak wavelength uniformity at the end of a rest period is below the user-defined tolerance $\Delta\lambda_{\text{tol}}$, the process is terminated. Table A3 compares the demonstrated performance with other arrayed microcavity trimming techniques.
# Stimulated emission of signal photons from dark matter waves Ankur Agrawal<EMAIL_ADDRESS>Present address: AWS Center for Quantum Networking, Boston, MA 02145, USA James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Kavli Institute for Cosmological Physics, University of Chicago, Chicago, Illinois 60637, USA Akash V. Dixit Present address: National Institute of Standards and Technology, Boulder, CO 80305, USA James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Kavli Institute for Cosmological Physics, University of Chicago, Chicago, Illinois 60637, USA Tanay Roy Present address: Superconducting Quantum Materials and Systems Center, Fermi National Accelerator Laboratory, Batavia, IL 60510, USA James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Srivatsan Chakram Present address: Department of Physics and Astronomy, Rutgers, the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Department of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854, USA Kevin He James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Ravi K. Naik Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA David I. 
Schuster James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Pritzker School of Molecular Engineering, University of Chicago, Chicago, Illinois 60637, USA Department of Applied Physics, Stanford University, Stanford, California 94305, USA Aaron Chou Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA

###### Abstract

The manipulation of quantum states of light has resulted in significant advancements in both dark matter searches and gravitational wave detectors [1, 2, 3, 4]. Current dark matter searches operating in the microwave frequency range use nearly quantum-limited amplifiers [5, 3, 6]. Future high-frequency searches will use photon counting techniques [1] to evade the standard quantum limit. We present a signal enhancement technique that utilizes a superconducting qubit to prepare a superconducting microwave cavity in a non-classical Fock state and stimulate the emission of a photon from a dark matter wave. By initializing the cavity in an $\left|n=4\right\rangle$ Fock state, we demonstrate a quantum enhancement technique that increases the signal photon rate and hence also the dark matter scan rate each by a factor of 2.78. Using this technique, we conduct a dark photon search in a band around $\mathrm{5.965\,GHz\,(24.67\,\mu eV)}$, where the kinetic mixing angle $\epsilon\geq 4.35\times 10^{-13}$ is excluded at the $90\%$ confidence level.

## Introduction

The existence of dark matter (DM) is one of the greatest mysteries in physics, which has puzzled scientists for nearly a century. Despite the lack of direct detection, there is compelling evidence for its existence, including its estimated contribution of 27% to the universe’s energy density and its gravitational effects on galaxy dynamics and structure formation [7, 8, 9].
Axions and dark photons have emerged as leading candidates for dark matter due to their cosmological origins and low-energy properties, which allow them to exist as coherent waves with macroscopic occupation numbers [10, 11, 12, 13, 14]. Dark matter haloscope experiments in the microwave frequency range use a cavity to resonantly enhance the oscillating electric field generated by the DM field at a frequency corresponding to the mass of a hypothetical particle ($\nu=mc^{2}/h$) [15, 14]. Since the mass of DM is unknown a priori, experimental searches are typically conducted as radio scans in which the resonant cavity is tuned one step $d\nu$ at a time to test different frequency hypotheses. A key figure of merit is therefore the frequency scan speed, which in a photon counting experiment scales as $d\nu/dt\propto d\nu\,R_{s}^{2}/R_{b}$ where $R_{s}$ and $R_{b}$ are the signal and background count rates respectively. Quantum techniques have proven useful in accelerating the scan rate of axion and wavelike dark matter searches. Superconducting parametric amplifiers, which are quantum-limited, have reached the standard quantum limit (SQL) and add only 1/2 photon of noise per mode, as required by the Heisenberg uncertainty principle for phase-preserving measurements [16, 17, 18, 19, 20]. Alternatively, qubit-based single photon detection [1] does not consider the photon phase and can in principle measure without any added detection noise by achieving extremely low, sub-SQL background rates. While both of these methods employ quantum technologies, their operation is in some sense recognizable as an ideal classical amplifier and microwave photomultiplier. Parametric amplifiers and cavity-qubit systems can also synthesize inherently quantum mechanical states of light such as squeezed states [21, 22, 23] in the former and Fock states [24, 25] or Schrödinger cat states [26, 27, 28, 29, 30] for the latter.
Recently, squeezed state injection paired with phase-sensitive amplification was used to improve the scan rate of the HAYSTAC experiment [2]. In this work, we develop a new method in which we prepare an $n$-photon Fock state which enhances the signal rate by $\eta\,(n+1)$ times, where $\eta$ is the efficiency of detecting the state. By creating a Fock state with $\left|n=4\right\rangle$ photons in the cavity, we observe a $2.78$-fold enhancement in the signal rate. We show that this technique is compatible with the previously demonstrated noise reduction from photon counting [1]. The power delivered to the cavity by a current density generated by dark matter $\textbf{J}_{DM}$ is given by $P_{s}=\int dV\,\textbf{J}_{DM}(x)\cdot\textbf{E}(x)$ (1) and is proportional to the magnitude of the oscillating electric field $\textbf{E}(x)$ in the cavity. In the conventional scenario, the cavity is cooled to the vacuum state, and the signal electric field builds up monotonically over the coherence time of the cavity or of the dark matter wave in a process akin to spontaneous emission. Alternatively, we may initialize the cavity with a non-zero $\textbf{E}(x)$ field from a coherent state sine wave or from a Fock state to induce stimulated emission. The Fock state has some advantages. First, unlike homodyne or heterodyne detection using a coherent state pump, the Fock state is free from any shot noise, making it possible to measure small signal amplitudes far below the SQL. Second, a Fock state is symmetric in phase, making it equally sensitive to any instantaneous phase of the incoming DM wave, which is unknown a priori. We can model the action of the DM wave on a Fock state as a classical drive amplitude $\xi$ which shifts this phase-symmetric state away from the origin in the Wigner phase space by $\alpha$ (see Supp Fig. S6).
The resultant state comprises both in-phase components which extracted excess power from the DM wave and also out-of-phase components which delivered their power to the DM wave. The stimulated emission process for DM converting into photons is enhanced by a factor of $(n+1)$ while the stimulated absorption process is enhanced by a factor of $n$. Mathematically, the stimulated emission into the cavity state from a dark matter wave can be described as shown below, $\begin{split}|\left\langle n+1\right|\hat{\mathcal{D}}(\alpha)\left|n\right\rangle|^{2}&=|\left\langle n+1\right|e^{(\alpha a^{\dagger}-\alpha^{*}a)}\left|n\right\rangle|^{2}\\ &\sim|\left\langle n+1\right|\alpha a^{\dagger}\left|n\right\rangle|^{2}=(n+1)|\alpha|^{2}\end{split}$ (2) where $\hat{\mathcal{D}}(\alpha)$ is the displacement operator. From Eqn. (2), we can infer that the displacement ($\alpha\ll 1$) induced by the DM wave on a cavity prepared in the $\left|n\right\rangle$ Fock state results in population of the $\left|n+1\right\rangle$ state with probability proportional to $(n+1)$. Using number-resolving measurements, the signal can thus be observed with $(n+1)\times$ higher probability if the cavity is prepared in a larger $\left|n\right\rangle$ Fock state. We note that just as in other cases of quantum-enhanced metrology, the $(n+1)$ enhancement factor in the signal transition probability can be exactly canceled by the $1/(n+1)$ reduction in the coherence time of the probe. As a result, there would be no net improvement in the actual signal rate $R_{s}$. However, this consideration does not apply when the limiting coherence time is that of the dark matter wave rather than of the Fock photon state in the cavity. In such cases, the signal rate is not degraded by the reduction of the probe coherence time and retains the factor $(n+1)$. To our knowledge, this is one of the few cases where quantum metrology can provide a realizable improvement in a real-world application.
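The $(n+1)$ scaling in Eqn. (2), and the corresponding factor of $n$ for absorption, can be checked numerically with a displacement operator built in a truncated Fock basis. This is a quick sketch; the truncation dimension and drive amplitude are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def displacement(alpha, dim=30):
    """Displacement operator D(alpha) = exp(alpha*a^dag - alpha^* a)
    in a truncated Fock basis of dimension dim."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
    return expm(alpha * a.conj().T - np.conjugate(alpha) * a)

alpha, n = 0.05, 4
D = displacement(alpha)
p_emit = abs(D[n + 1, n]) ** 2     # |<n+1|D(alpha)|n>|^2 ~ (n+1)|alpha|^2
p_absorb = abs(D[n - 1, n]) ** 2   # |<n-1|D(alpha)|n>|^2 ~ n|alpha|^2
```

For $\alpha=0.05$ and $n=4$ the emission probability sits within a few percent of $(n+1)|\alpha|^{2}=0.0125$, with the small deficit coming from the higher-order Laguerre corrections to the linearized matrix element.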
Also, since the readout rate scales as the inverse of the probe coherence time, the background count rate may also scale linearly with $(n+1)$, for example for backgrounds associated with readout errors. For experiments with fixed tuning step size $d\nu$ given for example by the dark matter linewidth, the improvement in scan speed $d\nu/dt\propto d\nu\,R_{s}^{2}/R_{b}$ is therefore a single factor of $\eta\,(n+1)$.

## Fock state preparation and photon number resolving detector

We couple the cavity to a non-linear element, in this case a superconducting transmon qubit, to prepare and measure the Fock states, which are otherwise impossible to create in a linear system such as a cavity. The device used in this work is composed of three components: a high-quality-factor ($Q_{s}=4.06\times 10^{7}$) 3D multimode cavity [31] to accumulate and store the signal induced by the dark matter (storage, $\omega_{s}/2\pi=\mathrm{5.965\,GHz}$), a superconducting transmon qubit ($\omega_{q}/2\pi=\mathrm{4.95\,GHz}$), and a 3D cavity strongly coupled to a transmission line ($Q_{r}=9\times 10^{3}$) used to quickly read out the state of the qubit (readout, $\omega_{r}/2\pi=\mathrm{7.789\,GHz}$) (Fig. 1 (a)). We mount the device to the base stage of a dilution refrigerator operating at $\mathrm{10\,mK}$. Figure 1: Fock state preparation in a cavity dispersively coupled to a transmon qubit. (a) A schematic of the multimode flute cavity showing the location of the storage cavity (red), readout cavity (green), and transmon chip with a SEM image of the Josephson junction (blue) [31]. (b) Creation of Fock states in a particular mode of the cavity using the GRAPE method. Characterization of the cavity state using qubit spectroscopy (left) and Wigner tomography (right). Qubit spectroscopy is performed immediately after the optimal control (OCT) pulses; a single peak in each qubit excitation probability ($P_{e}$) distribution confirms the creation of the correct $\left|n\right\rangle$ Fock state.
The resultant probability distribution is fitted to a Gaussian to estimate the state preparation fidelity. Grey dashed lines correspond to the expected shift in frequency, accounting for the quadratic dispersive shift. (Right) Wigner tomography [32] is performed by coherently displacing the resultant cavity state in 2D phase space to map the average parity and thus, reconstruct the cavity state density matrix. The interaction between a superconducting transmon qubit [33, 34] and the field in a microwave cavity is described by the Jaynes-Cummings Hamiltonian [35] in the dispersive limit (qubit-cavity coupling $\ll$ qubit-cavity detuning) as, $\mathcal{H}/\hbar=\omega_{s}a^{\dagger}a+\frac{1}{2}(\omega_{q}+\chi a^{\dagger}a)\sigma_{z}$ (3) where $a$ and $a^{\dagger}$ are the annihilation and creation operators of the cavity mode and $\sigma_{z}$ is the Pauli $Z$ operator of the transmon. Eqn. 3 elucidates a key feature of this interaction: a photon-number-dependent shift ($\chi$) of the qubit transition frequency (see Fig. 1(b)) [36]. Another important feature of this Hamiltonian is the quantum non-demolition (QND) nature of the interaction between the qubit and cavity which preserves the cavity state upon the measurement of the qubit state and vice-versa [37, 38, 36]. By driving the qubit at the Stark shifted frequency ($\omega_{q}+n\,\chi$), one would selectively excite the qubit if and only if there are exactly $n$ photons in the cavity. Recent works have shown that a single transmon has the capability to prepare any quantum state in a cavity and perform universal control on it [39, 27, 40, 41, 42, 23]. In this study, we used a GRadient Ascent Pulse Engineering (GRAPE)-based method to generate optimal control pulses (OCT) [43, 27] that consider the full model of the time-dependent Hamiltonian and allow us to prepare non-classical states in a cavity. As shown in Fig.
1 (b), our approach successfully prepared cavity Fock states with pulse duration as short as $\mathcal{O}(1/\chi)$, which did not increase for higher Fock states.

## Stimulated emission protocol

The stimulated emission protocol is divided into two parts: the first part involves the preparation of the cavity in a desired Fock state, $\left|n\right\rangle$, and the second part involves the detection of the cavity in the $\left|n+1\right\rangle$ Fock state as depicted in Fig. 2(a). In order to actively suppress any false positive events such as the cavity accidentally starting in the $\left|n+1\right\rangle$ state, we conditionally excite the qubit with a resolved $\pi$-pulse at the $(n+1)$-shifted peak three times. If and only if the qubit fails to excite in all three attempts do we proceed with the rest of the protocol. By doing this, we can suppress the false positive rate to $\leq 3\%$. At the end of this sequence, we measure the efficiency of the state preparation for each $n$ by measuring the qubit excitation probability $P_{n}$ with a number resolved $\pi$-pulse centered at the $\left|n\right\rangle$ peak. The measured fidelities are $P_{0}=95.2\pm 0.3\%,P_{1}=91.2\pm 0.4\%,P_{2}=87.3\pm 0.5\%,P_{3}=81.6\pm 0.6\%,P_{4}=63.6\pm 0.7\%$. Figure 2: Stimulated emission protocol with number resolved $\pi$-pulse and hidden Markov model analysis. (a) Pulse sequence for stimulated emission includes cavity initialization in a Fock state, followed by three conditional checks to ensure the cavity did not accidentally start in the $\left|n+1\right\rangle$ state. The next part involves a cavity displacement drive to mimic a push from the DM and repeated conditional qubit measurements to detect the cavity in the $\left|n+1\right\rangle$ state, where the first measurement collapses the cavity state to either $\left|n+1\right\rangle$ or not.
If in $\left|n+1\right\rangle$, the subsequent measurements are QND and, via the quantum Zeno effect, each measurement resets the clock on the decay of the $\left|n+1\right\rangle$ state. (b) Examples showing two measured qubit readout sequences for a cavity initialized in the $\left|n\right\rangle=1$ Fock state followed by a small displacement drive $\alpha$. The left panel corresponds to no change in the cavity state after the DM drive as inferred by the absence of successful flips of the qubit state, which results in a very small probability $P(n=2)$ that the cavity was in the $\left|n\right\rangle=2$ Fock state. The right panel corresponds to an emission event where the cavity state changed from $\left|1\right\rangle\xrightarrow{}\left|2\right\rangle$, resulting in multiple successful flips of the qubit state. The HMM analysis of this sequence of flips then indicates a very high likelihood ratio to be in the $\left|n\right\rangle=2$ Fock state. In the case of successful detection, we observe an exponential suppression of the detector-error-based false positive probability with only a linear increase in the number of repeated measurements. After the state preparation, we apply a coherent drive to the cavity mimicking a push from the DM wave to characterize the detector. A series of repeated QND measurements are recorded by performing conditional $\pi$-pulses centered at the $(n+1)$-shifted peak. The time between two successive QND measurements is $\mathrm{5\,\mu s}$, which is relatively short compared to the lifetime of Fock states given by $T_{1}^{s}=\mathrm{1320\,\mu s}/n$ (see Table S2). This projective measurement resets the clock on the decay of the $\left|n+1\right\rangle$ state [44].
We then apply a hidden Markov model (HMM) analysis to reconstruct the cavity state and compute the probability that the cavity state had changed from $\left|n\right\rangle\rightarrow\left|n+1\right\rangle$ and assign a likelihood ratio $\lambda$ associated with such events (see the Supplementary section for the implementation of the HMM analysis). In principle, it is possible to prepare Fock states $\left|n>4\right\rangle$ in the device. However, due to the presence of multiple cavity modes, simulating the complete Hamiltonian to generate the OCT pulses becomes computationally challenging, and in practice higher $(n+1)$ Fock states are prepared with lower fidelity. Furthermore, to prevent excessive signal photon loss, the Fock state decay rate, which is also enhanced by a factor of $(n+1)$, must remain small compared to the sum of Stark shift and readout rate, which determines the maximum rate of number resolved measurements. For this study, we chose $\left|n=4\right\rangle$, such that the decay probability stays below $1\%$ between successive measurements.

## Signature of Fock enhancement

To assess the performance of the detector after preparing the cavity in a particular Fock state $\left|n\right\rangle$, we carry out a series of experiments. We apply a small variable displacement ($\alpha\ll 1$) to the cavity and measure the relationship between the number of injected ($n_{\rm inj}=|\alpha|^{2}$) and detected photons. We perform 30 repetitions of the qubit measurement and apply a likelihood threshold of $\lambda_{\mathrm{thresh}}=10^{3}$ to distinguish positive and negative events. This threshold is determined based on the background cavity occupation $n_{b}^{c}=6\times 10^{-3}$, which is assumed to be caused by photon shot noise from a hot cavity, as measured using the photon counting method described in [1] (refer to Fig. S15). Errors below this value are considered to be sub-dominant.
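A minimal version of this HMM reconstruction can be sketched with the forward algorithm. The transition and readout-error probabilities below are illustrative placeholders, not the measured device parameters, and the two-state model ignores further decay cascades.

```python
import numpy as np

def likelihood_ratio(readouts, p_flip=0.98, p_err=0.01, p_decay=0.005):
    """Forward-algorithm likelihood ratio for the hypothesis that the
    cavity jumped to |n+1> (pi-pulse flips the qubit with prob. p_flip,
    and the state decays with prob. p_decay per interval) versus staying
    in |n> (the qubit excites only through a readout error p_err)."""
    emit = np.array([[1 - p_err, p_err],      # P(readout | cavity in |n>)
                     [1 - p_flip, p_flip]])   # P(readout | cavity in |n+1>)
    trans = np.array([[1.0, 0.0],             # |n> is absorbing in this toy model
                      [p_decay, 1 - p_decay]])
    def forward(init):
        p = np.array(init, dtype=float)
        for r in readouts:
            p = (p @ trans) * emit[:, r]      # propagate, then weight by data
        return p.sum()
    return forward([0.0, 1.0]) / forward([1.0, 0.0])
```

A record of repeated successful qubit flips yields a likelihood ratio many orders of magnitude above one, while a record of failed flips yields a ratio below one, which is why the false positive probability falls exponentially with the number of repeated measurements.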
For a cavity initialized in $\left|n\right\rangle$, the probability of finding the cavity in the Fock state $\left|l\right\rangle$ for a complex displacement $\alpha$ is described by the analytical expression [45]: $P_{nl}(|\alpha|^{2})=\big{|}\left\langle l\right|\hat{\mathcal{D}}(\alpha)\left|n\right\rangle\big{|}^{2}=(n!/l!)\,|\alpha|^{2(l-n)}e^{-|\alpha|^{2}}\left[\mathcal{L}_{n}^{l-n}(|\alpha|^{2})\right]^{2}$, where $\mathcal{L}_{n}^{l-n}$ is an associated Laguerre polynomial. The data obtained from the characterization of the detector is fitted to the expression in Eqn. (4). $n_{\rm meas}=\eta\,P_{nl}(|\alpha|^{2}=\bar{n})+\delta$ (4) This equation takes into account the detection efficiency, $\eta$, and the false positive probability, $\delta$. In cases where the cavity displacement $\alpha\ll 1$, the relationship between the injected photons ($n_{\rm inj}=|\alpha|^{2}$) and measured photons can be approximated as $P_{n,n+1}(|\alpha|^{2})\approx(n+1)|\alpha|^{2}$, as shown in Eqn. (2) Figure 3: Stimulated emission enhancement. Mean number of measured photons (positive events) as a function of the mean number of injected photons in the cavity. After initializing the cavity in a Fock state $\left|n\right\rangle$ and applying a variable cavity displacement (mock DM drive), 30 repeated qubit measurements of the cavity photon state are performed and a threshold $\lambda_{\mathrm{thresh}}$ is applied to determine the cavity population at $\left|n+1\right\rangle$. Background events with $\alpha=0$ are subtracted to compare between different Fock states which may have systematic errors in the state preparation step. $\lambda_{\mathrm{thresh}}=10^{3}$ is chosen based on the observed background cavity occupation of $n_{b}^{c}=6\times 10^{-3}$ such that the detector based errors are still sub-dominant. The background cavity occupation is measured using the photon counting technique described in Ref. [1] with repeated qubit parity measurements. 
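As a numerical sanity check (not part of the paper's analysis), the displaced-Fock-state probability and its small-$\alpha$ limit can be evaluated with scipy; the ratio $P_{n,n+1}/P_{0,1}$ approaches the stimulated-enhancement factor $(n+1)$ as $|\alpha|^{2}\rightarrow 0$:

```python
import numpy as np
from scipy.special import genlaguerre, factorial

def p_nl(alpha_sq, n, l):
    """P_nl = |<l|D(alpha)|n>|^2 for a displaced Fock state, l >= n."""
    lag = genlaguerre(n, l - n)(alpha_sq)
    return (factorial(n) / factorial(l)) * alpha_sq ** (l - n) \
        * np.exp(-alpha_sq) * lag ** 2

# For alpha << 1, P_{n,n+1} -> (n+1)|alpha|^2: the stimulated enhancement.
a2 = 1e-4
for n in (0, 4, 10):
    print(n, p_nl(a2, n, n + 1) / p_nl(a2, 0, 1))
```

The same check reproduces the $P_{10,11}=11\times P_{0,1}$ relation quoted later in the Supplemental material.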
Detector efficiency ($\eta$) for each Fock state is determined from the fit, as reported in the legend. The monotonic decrease in the efficiency is attributed to the higher decay probability and demolition probability incurred in reaching the same false positive probability (see Supp. Fig. S13). Anomalous behavior in $\left|3\right\rangle$ is attributed to the state decaying to nearby modes in the band structure formed by the multiple modes of the cavity, qubit and readout, which are close in energy level $\left|q,s,r\right\rangle$. Fig. 3 displays the characteristic signature of stimulated emission enhancement: a higher number of detected photons is observed for a cavity that was initialized in a higher Fock state. This result aligns with expectations and highlights the effectiveness of stimulated emission as a method for amplifying weak signals. The figure also shows a clear, monotonic increase in the slope with increasing Fock state. The resultant enhancement between $\left|n=4\right\rangle$ and $\left|n=0\right\rangle$ is 2.78 ($(0.45\times 5)/(0.81\times 1)$). The reduced efficiency $\eta=0.45$, relative to the full $(n+1)$ enhancement, can be explained by the enhanced decay rate of higher Fock states, which makes it more difficult to achieve the fixed likelihood ratio threshold used for the comparison. For example, consider the $\left|n=5\right\rangle$ Fock state. During the course of 30 repeated measurements, it is 1.6 times more likely to decay than the $\left|n=1\right\rangle$ Fock state. In addition, the higher demolition probability of 0.074 for $\left|n=4\right\rangle$ makes it more difficult to achieve a high likelihood ratio, since this photon state only persists for the first 14 quasi-QND measurements. The false positive probabilities $\delta$ are smaller than $10^{-4}$ for all Fock states, comparable to the measured residual photon occupation in the cavity. 
Further advancements could be achieved by utilizing a system with a higher Q value and reduced demolition probability. By combining weakly coupled qubits with the Echoed Conditional Displacement (ECD) technique [23], Fock states can be prepared with high fidelity while also keeping the demolition probability low. Additionally, a detector with reduced errors from factors such as thermal population and readout infidelity would also improve the results. We observe anomalous behavior for the $\left|n\right\rangle=3$ data, which shows no signal enhancement. We cannot investigate the cause directly, but we suspect leakage to a nearby mode, as the 3D cavity contains multiple closely spaced modes [31]. We tried this experiment on a different cavity mode and observed similar behavior, but for $\left|n\right\rangle=1$. We have identified a couple of transitions involving different modes that are close in energy to $\left|g,3\right\rangle$ and that could be facilitated by the always-on interaction of the transmon with all the modes. This frequency collision issue can be easily resolved in future designs by spacing the cavity modes further apart so that the transmon has negligible overlap with the spectator modes. ## Dark photon search In order to conduct a dark photon search, we collect independent datasets for a cavity prepared in different Fock states and count the number of positive events in the absence of the mock DM drive. Additionally, we vary the dwell time ($\tau$) between the state preparation and the beginning of the measurement sequence in order to allow the coherent buildup of the cavity field due to the dark matter. Once the measurement sequence begins, the quantum Zeno effect prevents further build-up of the signal field. Ideally, one would like to choose the dwell time as close to the lifetime of the Fock state as possible to maximize the accumulation of signal and thus the scan rate. 
In this work, the dwell time was varied to compare the dependence on $n$ on an equal footing, and was not optimized for DM sensitivity. For this study, we chose a maximum dwell time of $\mathrm{20\,\mu s}$ to collect reasonable statistics with $\lambda_{\mathrm{thresh}}=10^{3}$ for all Fock states. Longer integration times, comparable to or larger than the dark matter coherence time ($\mathrm{75\,\mu s}$), which are needed to realize the full benefit of Fock enhancement, will be implemented in future dedicated experiments. Figure 4: Measured background counts for different Fock states in the cavity as a function of dwell time. There is no clear trend in the number of observed background counts, indicating systematic effects which could be due to the state preparation steps. The error bars are plotted as $\sqrt{\mathrm{Counts}}$ and $N_{\mathrm{trials}}\sim 20,000$ for each point. The number of measured counts shown in Fig. 4 is fit to a functional form given by Eqn. (5) (see Supplementary material), which has contributions from a coherent source (hence the ($n+1$) Fock enhancement factor), an incoherent source, and a state preparation dependent error $N_{\rm meas}=a_{0}\,(n+1)\,\tau\,(N_{\rm trials}\,\tau)+b\,(N_{\rm trials}\,\tau)+c_{n}\,N_{\rm trials}$ (5) where $a_{0}$, $b$, and the $c_{n}$ are the fit parameters extracted from fitting the measured counts. The first term has two factors of $\tau$: one from the coherent buildup of signal energy in the storage cavity for $\tau<T_{1}^{s}$, which is included in the average signal rate $dN/dt$, and a second factor of $\tau$ for the total integration time $t_{\rm tot}=N_{\rm trials}\tau$. Figure 5: Exclusion of dark photon parameter space with stimulated emission. Shaded regions in the dark photon parameter space [13, 46] of coupling ($\epsilon$) and mass ($m_{\gamma}$) are excluded. In the orange band, the dark photon is naturally produced in models of high scale cosmic inflation [14]. 
The exclusion set by the stimulated emission based dark photon search is shown by the blue and red curves. On resonance with the storage cavity ($m_{\gamma^{\prime}}c^{2}=\hbar\omega_{s}$), the dark photon kinetic mixing angle is constrained to $\epsilon\leq 4.24\times 10^{-13}$ with 90% confidence. (Inset) The horizontal extent of the excluded region is set by the bandwidth of the number resolved qubit $\pi$-pulse, which is insensitive to any drive outside the band. The vertical limit is set by the maximum $\epsilon$ that would result in a dark photon rate large enough to significantly degrade the fidelity of Fock state preparation. The region between the blue and red curves represents the exclusion from the stimulated emission experiment, whereas the excluded region above the red curve is mainly due to the failure of Fock state preparation, which is easily detectable in the experiment (see Supp. Fig. S16). By performing an ordinary least squares (OLS) fit to the measured counts, we extract the fit parameters with their uncertainties, tabulated in Table 1. The values of the fit parameters obtained from performing a Maximum Likelihood Estimate (MLE) were comparable.

Fitted Parameter | $\Theta$ | $\sigma_{\Theta}$
---|---|---
$a_{0}\,(\mathrm{s^{-2}})$ | $1.9\times 10^{3}$ | $7.662\times 10^{5}$
$b\,(\mathrm{s^{-1}})$ | $-7.26$ | $4.2\times 10^{1}$
$c_{0}$ | $3.402\times 10^{-4}$ | $4.292\times 10^{-4}$
$c_{1}$ | $1.419\times 10^{-3}$ | $4.0\times 10^{-4}$
$c_{2}$ | $5.860\times 10^{-4}$ | $4.222\times 10^{-4}$
$c_{4}$ | $7.330\times 10^{-3}$ | $6.732\times 10^{-4}$

Table 1: Fitted parameters. Best fit and statistical uncertainties corresponding to each source of background counts. Fock states with higher background counts have larger $c_{n}$ values, indicating that the source of counts is related to the state preparation step; it is thus valid to perform a background subtraction to demonstrate the enhancement technique. 
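Because the count model of Eqn. (5) is linear in the parameters $(a_{0}, b, c_{n})$, an OLS fit reduces to a single linear solve. A minimal sketch on synthetic (hypothetical) data, not the measured counts; the dwell-time grid and "true" parameter values are placeholders:

```python
import numpy as np

# Hypothetical synthetic data: the model of Eqn. (5) is linear in
# (a0, b, c_n), so ordinary least squares applies directly.
rng = np.random.default_rng(0)
fock_ns = [0, 1, 2, 4]
taus = np.array([5e-6, 10e-6, 20e-6])   # assumed dwell-time grid (s)
N_trials = 20_000
true_a0, true_b = 2e3, 1.0              # placeholder "true" values
true_c = [3e-4, 1.4e-3, 6e-4, 7e-3]

rows, counts = [], []
for i, n in enumerate(fock_ns):
    for tau in taus:
        # design-matrix row: [ (n+1)*tau*(N_trials*tau), N_trials*tau, c-indicators ]
        x = [(n + 1) * tau * N_trials * tau, N_trials * tau] + [0.0] * len(fock_ns)
        x[2 + i] = N_trials                 # indicator column for c_n
        mu = true_a0 * x[0] + true_b * x[1] + true_c[i] * N_trials
        rows.append(x)
        counts.append(rng.poisson(mu))      # Poisson-fluctuated counts

X, y = np.array(rows), np.array(counts, dtype=float)
theta, *_ = np.linalg.lstsq(X, y, rcond=None)   # [a0, b, c0, c1, c2, c4]
print(dict(zip(["a0", "b", "c0", "c1", "c2", "c4"], theta)))
```

As in Table 1, the $a_{0}$ column is tiny compared to the background columns, so its fitted value carries a very large statistical uncertainty.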
For $\tau=\mathrm{20\,\mu s}$ and $N_{\mathrm{trials}}\sim 20,000$, all three terms in Eqn. (5) contribute roughly equally to the counts observed in Fig. 4. The large statistical uncertainty on $a_{0}$ arises because the other two terms dominate the measured counts, so fluctuations in those backgrounds cause the fitted $a_{0}$ to swing over a wide range. With the extracted value of $a_{0}$, we can compute the kinetic mixing angle of the dark photon, given by $\epsilon=\sqrt{\frac{a_{0}}{\rho_{\mathrm{DM}}m_{\mathrm{DM}}GV}}$ (see Supplemental material for the derivation). With the measured uncertainties on all the parameters and using error propagation, we compute the $90\%$ confidence limit on $\epsilon$ to be $\epsilon_{\rm fit}+1.28\,\sigma_{\epsilon}$. Thus, a dark photon candidate on resonance with the storage cavity ($m_{\gamma^{\prime}}c^{2}=\hbar\omega_{s}$), with mixing angle $\epsilon\geq 4.35\times 10^{-13}$, is excluded at the 90% confidence level. Fig. 5 shows the regions of dark photon parameter space excluded by the stimulated emission based search, assuming the dark photon comprises all the dark matter density ($\rho_{\mathrm{DM}}=\mathrm{0.4\,GeV/cm^{3}}$). The detector is maximally sensitive to dark matter candidates with masses within a narrow window around the resonance frequency of the cavity. This window is set by the lineshape of the dark matter [47] ($Q_{\mathrm{DM}}\sim 10^{6}$). For a DM wave at $\sim\mathrm{6\,GHz}$, the FWHM linewidth is $\mathrm{6\,kHz}$, so the $-\mathrm{3\,dB}$ point is $\mathrm{3\,kHz}$ away from the cavity resonance and the efficiency of a finite bandwidth number resolved qubit $\pi$-pulse is non-zero. While the stimulated emission protocol is valid for a small induced signal, the cavity state preparation using OCT pulses excludes a DM wave corresponding to $|\alpha|>0.05$ (see Fig. S16), as it would result in a larger preparation error than experimentally observed. 
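The conversion from the fitted $a_{0}$ to a one-sided 90% confidence limit can be sketched as follows. The combined constant $K=\rho_{\mathrm{DM}}\,m_{\mathrm{DM}}\,G\,V$ and the propagated uncertainty $\sigma_{\epsilon}$ are treated here as hypothetical inputs, since their full evaluation lives in the Supplemental material:

```python
import math

def epsilon_90cl(a0, K, sigma_eps, z90=1.28):
    """One-sided 90% C.L. limit following the text:
    eps_fit = sqrt(a0 / K) with K = rho_DM * m_DM * G * V (hypothetical input),
    limit   = eps_fit + 1.28 * sigma_eps."""
    eps_fit = math.sqrt(max(a0, 0.0) / K)   # clip: a fit can fluctuate below zero
    return eps_fit + z90 * sigma_eps

# A fit consistent with zero is limited by the propagated uncertainty alone:
print(epsilon_90cl(0.0, 1.0, 1e-13))
```

This makes explicit that, when the background terms dominate, the quoted limit is driven almost entirely by $\sigma_{\epsilon}$ rather than by $\epsilon_{\rm fit}$.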
## Conclusions and outlook The preparation of a cavity in quantum states of light, such as a Fock state, has the potential to significantly enhance the DM signal rate and hence also the DM frequency scan rate compared to previous methods. In the current work, we have demonstrated a signal enhancement of $2.78\times$ by initializing the cavity in $\left|n=4\right\rangle$ versus $\left|n=0\right\rangle$ Fock state. The corresponding improvement in attainable scan speed is reduced because of the finite amounts of time required to prepare the initial Fock state and to read out the final state after integrating the signal, though in an optimized experiment the duty cycle should be comparable to that of other photon counting experiments. Despite these imperfections, even in this initial demonstration, the dark photon search sets an unprecedented sensitivity in an unexplored parameter space with $90\%$ confidence level. This method holds great promise for continued improvement as advancements in cavity coherence times [48] and state preparation methods [23] progress, and we expect the same improvement factor or better in future dedicated dark matter search experiments. While this study focuses on the detection of dark matter, the quantum-enhanced technique presented here can be applied more widely to sense ultra-weak forces in various settings, in cases where the signal accumulation is limited by the coherence time of the signal rather than by that of the probe. The Fock state stimulated emission can increase the rate of processes that involve the delivery of small amounts of energy, and the resulting signal quanta can be detected through number-resolved counting techniques that surpass the SQL. ## Acknowledgements We would like to acknowledge and thank Konrad Lehnert for proposing this idea. We thank Ming Yuan, A. Oriani, and A. Eickbusch for discussions, and the Quantum Machines customer support team for help with implementing the control software. 
We gratefully acknowledge the support provided by the Heising-Simons Foundation. This work made use of the Pritzker Nanofabrication Facility of the Institute for Molecular Engineering at the University of Chicago, which receives support from Soft and Hybrid Nanotechnology Experimental (SHyNE) Resource (NSF ECCS-1542205), a node of the National Science Foundation’s National Nanotechnology Coordinated Infrastructure. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics, with support from its QuantISED program. We acknowledge support from the Samsung Advanced Institute of Technology Global Research Partnership. Stimulated emission of signal photons from dark matter waves Supplemental Material Ankur Agrawal, Akash V. Dixit, Tanay Roy, Srivatsan Chakram, Kevin He, Ravi K. Naik, David I. Schuster, Aaron Chou ## Conceptual overview We use the QuTiP [49, 50] simulation toolkit to study the evolution of a cavity initialized in different Fock states under the action of a small displacement drive. As seen in Fig. S6, the probability of observing a dark matter induced signal is significantly enhanced for a cavity prepared in a large $\left|n\right\rangle$ Fock state. Figure S6: Phase-space representation of the cavity state before and after the dark matter wave push. (Left plots) Displacing the cavity initialized in $\left|0\right\rangle$ in an arbitrary direction by a small coherent push $\beta_{\rm coh}\ll 1$ results in a small probability $P_{0,1}\propto|\mathcal{M}_{0,1}|^{2}$ for creating a $\left|1\right\rangle$ component by spontaneous emission of a photon from the DM wave. The direction of displacement is determined by the instantaneous phase of the DM wave, which is randomized every DM coherence time, but since the initial cavity state is azimuthally symmetric, a displacement in any direction gives the same probability $P_{01}$. 
The red dashed line is shown as a guide to locate the origin with respect to the center of the distribution. (Right plots) The cavity is initialized in an $\left|n\right\rangle=10$ Fock state, which also has an azimuthally symmetric Wigner distribution. Displacing this distribution in an arbitrary direction shifts some part of the distribution to larger radius and other parts to smaller radius. For example, the lower plot shows a displacement of $\beta_{\rm coh}=0.1$ in the positive $X=Re(\beta)$ direction. The shift to larger radius corresponds to stimulated emission to states with larger photon number, for example $\left|11\right\rangle$, while the shift to smaller radius corresponds to stimulated absorption to states with smaller photon number, for example $\left|9\right\rangle$. As shown in the histograms, the stimulated enhancement factors ($N_{Fock}+1$) and $N_{Fock}$ give probabilities which satisfy $P_{10,11}=11\times P_{0,1}$ and $P_{10,9}=10\times P_{0,1}$. ## Dark matter conversion and scan rate The expected signal rate ($R_{s}$) from DM is proportional to the volume and the quality factor of the cavity. In DM experiments operating below $\mathrm{1\,GHz}$ ($h\nu\ll k_{B}T$), the background rate ($R_{b}$) is limited by the thermal occupation of the cavity, such that the noise added by an amplifier in the readout process is sub-dominant. However, at frequencies in the $\mathrm{5-30\,GHz}$ range, the DM search faces major challenges as the signal-to-noise ratio (SNR) rapidly plummets. The expected DM signal decreases at higher frequencies for the following reasons: (1) the cavity volume diminishes as the cavity dimensions shrink to maintain the resonance condition ($V\propto\nu^{-3}$) for a $\mathrm{TM_{0n0}}$ type mode, (2) the quality factor of a normal metal cavity degrades due to the anomalous skin effect [51, 52]. 
Moreover, the noise from the quantum-limited readout process increases at higher frequencies because the power scales as $h\nu\,(\frac{\nu}{Q_{\rm DM}})$, where $Q_{\rm DM}$ is the quality factor of a DM wave given by the escape velocity of DM from the galaxy. The first term in the expression represents the energy of one photon, which is determined by the standard quantum limit. This limit specifies that the minimum amount of noise added by an amplifier is equivalent to a mean photon number of $\bar{n}_{SQL}\geq 1$ [16, 17, 18, 19, 20]. The signal frequency scan rate $R$ (in Hz/s), a key figure of merit for haloscope-type experiments, scales as $R_{s}^{2}/R_{b}$, as shown in Eqn. (S6). Quantum-enhanced techniques to improve the signal rate and reduce noise are required to improve the SNR and hence accelerate the search for the extremely weak dark matter signal. The integration time ($\Delta t$) required for a background limited dark matter search is given by the time required to achieve $1\sigma$ sensitivity determined by Poisson counting statistics: $R_{s}\Delta t>\sqrt{R_{b}\Delta t}$, where $R_{s}=\bar{n}_{\mathrm{HP}}/\Delta t$ and the background rate $R_{b}=\bar{n}_{c}^{b}/\Delta t$. The signal accumulation time for each Fock state preparation is given by $\tau$. For an optimal dark matter scanning experiment, we would want to start with a high-Q cavity ($Q_{\rm cav}>Q_{\rm DM}$) in the largest possible Fock state $\left|n\right\rangle$, such that $\tau$ (which must be smaller than the Fock state coherence time $Q_{\rm cav}/(\nu\,n)$) can be matched with the signal coherence time $Q_{\rm DM}/\nu$. Hence, the integration time at each frequency point scales as $R_{b}/R_{s}^{2}$ and the scan rate is given by $R=\Delta\nu/\Delta t=(\frac{\nu}{Q_{\rm DM}})\frac{1}{N\,\tau}\propto R_{s}^{2}/R_{b},$ (S6) such that $\Delta t$ is the sum of many iterations of duration $\tau$ and $N$ is the total number of iterations for each Fock state preparation. 
While the stimulated emission technique will generally enhance $R_{s}$ by a factor of $(n+1)$, the coherence time of the prepared quantum state is also reduced by a factor of $(n+1)$, thus also restricting the accumulation time $\tau$. The smaller coherence time then necessitates a faster readout rate, which generally causes the background rate $R_{b}$ to increase by a factor of $(n+1)$ as well, for backgrounds associated with the readout procedure or with a residual non-zero mode occupation number in the cavity. The overall improvement in $d\nu/dt$ due to Fock enhancement is then a single factor $\eta\,(n+1)$, where $\eta$ is the detection efficiency in the experiment. Each measurement sequence consists of state preparation and validation ($\mathrm{2\,\mu s}+3\times t_{m}=\mathrm{17\,\mu s}$), signal integration ($\mathrm{20\,\mu s}$) and 30 repeated measurements (30$\times t_{m}=\mathrm{150\,\mu s}$). There is an additional dead time in the experiment due to imperfect state preparation, which constitutes $35\%\times\mathrm{17\,\mu s}\approx\mathrm{6\,\mu s}$ for the $\left|n=4\right\rangle$ Fock state. The total search time at each Fock state is roughly $20,000\times\mathrm{193\,\mu s}=\mathrm{36.6\,s}$ with a duty cycle of $\frac{20}{193}=10\%$ ($\mathrm{3.6\,s}$ of integration). For the present apparatus, the duty cycle could be increased to $\sim 30\%$ if longer signal integration times were chosen, at the expense of greater systematics due to Fock state decoherence. Higher Q cavities would enable even larger integration times and duty cycles. ## Experimental setup The cavities and qubit are mounted to the base plate of a dilution fridge (Bluefors LD400) operating at $\mathrm{10\,mK}$. The device is housed in two layers of $\mu$-metal to shield it from magnetic fields. Signals sent to the device are attenuated and thermalized at each temperature stage of the cryostat, as shown in Fig. S7. 
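The sequence-timing bookkeeping above can be reproduced in a few lines (values in microseconds, taken from the text); the net Fock-enhancement improvement factor $\eta\,(n+1)$ uses the measured $\eta=0.45$ for $\left|n=4\right\rangle$:

```python
# Per-sequence timing from the text (microseconds), t_m = 5 us per measurement.
t_m = 5.0
prep_validate = 2.0 + 3 * t_m          # OCT pulse + 3 validation measurements = 17 us
integrate = 20.0                       # chosen dwell (signal accumulation)
readout = 30 * t_m                     # 30 repeated measurements = 150 us
dead = 0.35 * prep_validate            # ~6 us dead time from failed preparation

total = prep_validate + integrate + readout + dead   # ~193 us per sequence
duty = integrate / total                             # ~10% duty cycle

# Net scan-rate improvement from Fock enhancement: eta * (n + 1),
# with the measured eta = 0.45 for |n=4>.
improvement = 0.45 * (4 + 1)
print(f"total = {total:.1f} us, duty = {duty:.1%}, improvement = {improvement}")
```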
The field probing the readout resonator is injected via the weakly coupled port (shorter dipole stub antenna). Control pulses for the qubit are inserted through the strongly coupled readout port (longer dipole stub antenna). The storage cavity is driven through a direct port which is weakly coupled to the external microwave chain. Both control lines also contain an inline copper coated XMA attenuator that is threaded to the base plate. The signal from the readout resonator reflects off a Josephson parametric amplifier before being amplified by a cryogenic HEMT amplifier at the $\mathrm{4\,K}$ stage. The output is mixed down to a $\mathrm{100\,MHz}$ IF signal before being digitized. Cavity and transmon fabrication details can be found in the Supplemental materials of Refs. [1, 42]. Figure S7: Wiring diagram inside the dilution refrigerator and the room temperature measurement setup. A Quantum Machines (QM) OPX controller was used to generate the arbitrary wave forms (DACs) and digitize the incoming readout signal (ADCs). All the tones are up-converted using IQ modulation, with the local oscillator (LO) and AWG synced to an SRS FS725 Rb source. The storage cavity is controlled via a direct port and the qubit is controlled by injecting a drive into the strongly coupled port of the readout. The readout signal is injected into the weakly coupled port, and the signal is routed to the JPA using non-reciprocal circulator elements. The amplified signal is routed to the HEMT for further amplification and the signal is then mixed down to a $\mathrm{100\,MHz}$ IF, further amplified, and finally digitized. All the RF lines are heavily filtered with homemade Eccosorb filters and attenuated to minimize stray radiation from entering the device. ## Fock state characterization A Fock state has a definite photon number but no definite phase associated with it. Therefore, a simple qubit spectroscopy reveals as much information as a Wigner tomogram would. 
However, we use both these methods to compute and confirm the presence of Fock states in the cavity. Fig. 1 shows the qubit spectroscopy on the left and Wigner tomography of the resultant Fock states on the right. By fitting a Gaussian curve to the spectrum, we estimate the fidelities to be $P_{0}=95.2\pm 0.3\%,P_{1}=91.2\pm 0.4\%,P_{2}=87.3\pm 0.5\%,P_{3}=81.6\pm 0.6\%,P_{4}=63.6\pm 0.7\%$. Moreover, we study the evolution of the cavity under the action of the OCT pulses by tracking the cavity state, as shown in Fig. S8 (b). We perform number resolved qubit spectroscopy at different points in time to map out the occupation probability of the different Fock levels, in agreement with the simulated trajectory. Figure S8: Fock state preparation in a cavity dispersively coupled to a transmon qubit. (a) Optimized control drives to the qubit (top) and cavity (bottom) to transfer the qubit-cavity state from $\left|g,0\right\rangle\xrightarrow{}\left|g,3\right\rangle$. The drive strengths are in units of $\mathrm{MHz}$. (b) Simulated trajectory of the cavity state using QuTiP under the control drive. The measured cavity occupation probability shows all the levels explored during the evolution before converging to the desired state $\left|g,3\right\rangle$ at the end. The experiment involves playing a fraction of the OCT pulses and performing qubit spectroscopy with a number resolved $\pi$-pulse. Each vertical slice corresponds to the cavity occupation probability in different number states as a function of the pulse duration. Note that the simulation does not include the presence of neighboring cavity modes or the measurement infidelity. Control fields for the qubit and cavity are generated by a room temperature field-programmable gate array (FPGA) controller, which also provides a low-latency feedback signal to actively monitor the state of the cavity. 
In several recent works, it has been demonstrated that a single transmon is capable of preparing any quantum state in the cavity and of performing universal control on it. The methods developed include resonant swaps of a qubit excitation to the cavity [40], SNAP gates [39], blockade [42], GRadient Ascent Pulse Engineering (GRAPE) based optimal control (OCT) pulses [41, 27], and Echoed Conditional Displacement (ECD) control pulses [23], all of which solve the inversion problem of finding control fields to transfer a quantum system from state A to state B. We use a GRAPE based method to generate a set of optimal control pulses [41, 27] to prepare non-classical states in a cavity. In the optimal control, we consider the full model of the time dependent Hamiltonian and generate control pulses that maximize the target state fidelity, as has been previously demonstrated in nuclear magnetic resonance experiments [43] as well as superconducting circuits [27]. The main advantage of this approach is that the duration of the state preparation pulses can be as short as 1/$\chi$ and does not necessarily increase for higher Fock states [27, 23]. We use OCT pulses to successfully prepare cavity Fock states as shown in Fig. 1 (b). We briefly investigated the SNAP protocol [39] to prepare Fock states but did not pursue it further, as it suffers from two issues limiting the maximum achievable fidelity. First, the number of constructed sequences scales as $(2n+1)$, requiring a large number of gates and limiting the operations feasible in the presence of decoherence. Second, the constructed model fails to account for the higher order Kerr non-linearity ($\mathcal{H}_{Kerr}/\hbar=K(a^{\dagger})^{2}(a)^{2}$) in the Hamiltonian, which is non-negligible at higher photon occupation. This results in finite occupation probability at other Fock states as well. 
## Calibration of the number resolved $\pi$-pulse In order to successfully probe the presence of Fock states in the cavity, we need a well-calibrated, narrow-bandwidth $\pi$-pulse, in both amplitude and frequency. We perform an amplitude Rabi experiment with a $3\,\mu$s (4$\sigma$) long Gaussian pulse such that its spectral spread is smaller than the $\chi$-shift. After a coarse amplitude sweep, we perform a finer sweep centered at the initial guess and apply a set of 12 Gaussian pulses in succession to amplify any gate errors, such as under/over-rotation or frequency drift, to get a better estimate of the $\pi$-pulse amplitude. For the qubit transition frequency, we initialize the cavity in a particular Fock state $\left|n\right\rangle$ and perform a qubit Ramsey interferometry experiment. It consists of two $\pi/2$ pulses separated by a variable delay time. A Fast Fourier Transform (FFT) of the resultant time oscillations gives the shift in the qubit transition frequency, which accounts for any higher order corrections as well. The computed transition frequencies and amplitude are used in the actual experiment to minimize the errors. ## Device calibration By applying a weak coherent tone at the storage cavity frequency, we induce a variable displacement $\alpha$ of the cavity state. We calibrate the number of photons injected into the storage cavity by varying the drive amplitude and performing qubit spectroscopy. By fitting the qubit spectrum, as shown in Fig. S9, to a Poisson distribution, we extract the cavity occupation, $\bar{n}=\mathrm{|\alpha|^{2}}$. Similarly, the qubit drive strength is calibrated by performing qubit Rabi experiments with varying AWG amplitude and pulse length. The measured mapping between drive strength and AWG amplitude is used to send the correct OCT pulses to the device. See the Supp. sections in [1] and [31] for more details on device parameter calibrations. 
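A minimal sketch of the photon-number calibration described above: the relative areas of the number-resolved spectroscopy peaks are fit to a Poisson distribution to extract $\bar{n}=|\alpha|^{2}$. The peak weights below are simulated placeholders, not measured data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import factorial

def poisson_weights(n, nbar):
    """Relative peak weights of a coherent state: P(n) = e^{-nbar} nbar^n / n!."""
    return np.exp(-nbar) * nbar ** n / factorial(n)

# Simulated peak areas (hypothetical data) for nbar = 0.6, with small noise.
n = np.arange(8)
rng = np.random.default_rng(1)
measured = poisson_weights(n, 0.6) + 0.002 * rng.normal(size=n.size)

(nbar_fit,), _ = curve_fit(poisson_weights, n, measured, p0=[1.0])
print(f"nbar = |alpha|^2 = {nbar_fit:.3f}")
```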
Figure S9: Qubit spectroscopy reveals cavity displacement. (a) Pulse sequence showing the calibration of the cavity photon number by performing qubit spectroscopy. (b) The cavity is displaced using a variable weak coherent drive for a finite period of time. The resulting population of the cavity is determined by performing qubit spectroscopy with a number resolved $\pi$-pulse. The cavity photon number dependent shift of the qubit transition frequency reveals the cavity population. By fitting the spectrum to a Poisson distribution we extract the weights of the cavity number states in the prepared coherent state and thus the mean photon occupation. ## Hidden Markov model analysis We adopt the hidden Markov model (HMM) approach presented in [1] to consider all the possible errors that may occur during the measurement process and alter the state of the cavity, qubit, and readout. The cavity and qubit states are treated as hidden variables that emit readout signals. The Markov chain is modeled by the transition matrix ($T$) (Eqn. (S8)), which describes the evolution of the joint cavity-qubit hidden state $s\in[\left|n^{\prime}g\right\rangle,\left|n^{\prime}e\right\rangle,\left|ng\right\rangle,\left|ne\right\rangle]$, and the emission matrix ($E$) (Eqn. (S9)), which gives the probability of a readout signal $R\in$ [$\mathcal{G}$, $\mathcal{E}$] given a certain hidden state. It is important to note that with a number-resolved $\pi$-pulse centered at the $n$-shifted peak, we can only determine whether the cavity is in the $\left|n\right\rangle$ Fock state or not ($\left|n^{\prime}\neq n\right\rangle$). The transition and emission matrices were first introduced in [53] and implemented in [1]. The change in qubit state probability is not affected by the cavity state. However, the repeated measurement of the cavity state with a number selective $\pi$-pulse introduces an additional channel through which the cavity loses its excitation. 
This is called the demolition probability ($p_{d}$), which quantifies the non-QNDness of the projective measurement of the cavity state with the qubit. In simple terms, a single qubit measurement is $100(1-p_{d})\%$ QND. We followed the protocol described in Ref. [54] for repeated parity measurement, replacing the parity pulse with a number selective $\pi$-pulse. Table S2 and Fig. S13 show the measured values for different Fock states. The elements of the emission matrix represent the readout fidelities for the ground and excited states of the qubit, which are influenced by noise from the first stage JPA. More information on the experimental protocols used can be found in the supplemental material. We used the backward algorithm (Eqn. (S7)) [53, 55] to calculate the probability ($P(n_{0})$) that the cavity state was in the $\left|n+1\right\rangle$ Fock state after the injection of a synthetic DM signal. The algorithm takes a set of $N+1$ measured readout signals ($R_{0},R_{1},...,R_{N}$) as input, where $n_{0}=n+1$ represents the target Fock state and $n_{0}^{\prime}$ can take on values in the range [$0,n,n+2,...$]. $P(n_{0})=\sum_{s_{0}\in[\left|n_{0},g\right\rangle,\left|n_{0},e\right\rangle]}\sum_{s_{1}}\cdots\sum_{s_{N}}E_{s_{0},R_{0}}T_{s_{0},s_{1}}E_{s_{1},R_{1}}\cdots T_{s_{N-1},s_{N}}E_{s_{N},R_{N}}$ (S7) In the reconstruction process, all possible scenarios are taken into account. For instance, a readout measurement of $\mathcal{G}$ followed by $\mathcal{E}$ could happen due to the successful detection of the cavity in the $\left|n+1\right\rangle$ state with a probability of $P_{n_{0}n_{0}}P_{gg}F_{e\mathcal{E}}/2$. On the other hand, it could also be caused by either a qubit heating event ($P_{n_{0}^{\prime}n_{0}^{\prime}}P_{ge}F_{e\mathcal{E}}/2$) or a readout error ($P_{n_{0}^{\prime}n_{0}^{\prime}}P_{gg}F_{g\mathcal{E}}/2$). The data in Fig. 
2(b) illustrates the results of the measurement of the readout signals and the reconstructed initial probabilities of the cavity state. The panels on the left show instances when there was no emission event, while the panels on the right depict cases where a positive emission event took place. The cavity state shifted from $\left|1\right\rangle$ to $\left|2\right\rangle$ as a result of a variable displacement drive $\mathcal{D}(\alpha)$, and the change was accurately reflected in the reconstructed probability. The reconstructed cavity state probabilities undergo a likelihood ratio test $(\lambda=\frac{P(n_{0}=n+1)}{P(n_{0}\neq n+1)})$ to determine whether the cavity state changed. A positive detection of a photon signal $\left|n\right\rangle\rightarrow\left|n+1\right\rangle$ is declared when the likelihood ratio exceeds the threshold, $\lambda>\lambda_{\mathrm{thresh}}$. This procedure limits the false positive rate to less than $\frac{1}{\lambda_{\mathrm{thresh}}+1}$. A higher detection threshold can be achieved by increasing the number of repeated measurements, but at the cost of decreased efficiency, which is linear in the number of measurements. Hence, $\lambda_{\mathrm{thresh}}$ is chosen to strike a balance between the detection efficiency and the false positive rate, keeping the latter below the observed physical photon background. This is discussed further in the next section.

## Elements of hidden Markov model

The hidden Markov model relies on independent measurements of the probabilities contained in the transition and emission matrices. The elements of these matrices depend on the parameters of the experiment and the device, including the lifetimes of the qubit and cavity, the qubit spurious population, and the readout fidelities.

### Transition matrix elements

The transition matrix captures the possible qubit (cavity) state changes.
Qubit relaxation $\left|e\right\rangle\rightarrow\left|g\right\rangle$ occurs with a probability $P_{eg}^{\downarrow}=1-e^{-t_{m}/T_{1}^{q}}$. The probability of spontaneous heating $\left|g\right\rangle\rightarrow\left|e\right\rangle$ of the qubit towards its steady state population is given by $P_{ge}^{\uparrow}=\bar{n}_{q}[1-e^{-t_{m}/T_{1}^{q}}]$. Unlike a two-level system such as a qubit, the cavity state may change from $\left|n\right\rangle\rightarrow\left|n^{\prime}\right\rangle$ via either decay ($\left|n\right\rangle\rightarrow\left|n-1\right\rangle$) or excitation ($\left|n\right\rangle\rightarrow\left|n+1\right\rangle$), with probabilities $P_{n,n-1}=1-e^{-t_{m}/T_{1}^{n}}$ and $P_{n,n+1}=\bar{n}_{c}[1-e^{-t_{m}/T_{1}^{n}}]$, respectively. The lifetime of the Fock state is reduced by the enhanced decay [40] ($T_{1}^{n}=T_{1}^{s}/n$) relative to the bare lifetime of a coherent state (see Supp. Fig. S12). Yet another possible source of change in the cavity state is the repeated qubit measurement itself. In most cases, we assume the interaction between the qubit and cavity to be QND, i.e., the measurement of the cavity state by the qubit does not perturb the state. However, we measured the QNDness of repeated qubit measurement for the $\left|n\right\rangle=1$ Fock state to be $97.4\%$, which corresponds to a demolition probability $p_{d}$ of $2.6\%$ per measurement (see Fig. S13). Hence, we add this term to the transition matrix. All these probabilities are computed using the independently measured qubit, cavity, and readout parameters described below.
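As a sketch, the per-step probabilities entering the transition matrix can be computed from the measured lifetimes. The measurement interval $t_{m}=10\,\mu\mathrm{s}$ below is an assumed illustrative value (the true cycle time is set by the pulse sequence); the other numbers are taken from the tables in this supplement:

```python
import numpy as np

# Measured parameters from this supplement (Tables S2, S3); t_m is assumed.
t_m = 10e-6       # duration of one measurement cycle (illustrative)
T1_q = 115e-6     # qubit lifetime
n_bar_q = 0.02    # qubit residual excited-state population
T1_s = 1.36e-3    # storage (coherent-state) lifetime
n_bar_c = 6.3e-3  # storage residual occupation
n = 1             # Fock state being monitored
p_d = 0.026       # demolition probability per measurement for |1>

T1_n = T1_s / n                             # enhanced Fock-state decay, T1^n = T1^s/n
P_eg = 1 - np.exp(-t_m / T1_q)              # qubit relaxation |e> -> |g>
P_ge = n_bar_q * (1 - np.exp(-t_m / T1_q))  # qubit heating |g> -> |e>
P_down = 1 - np.exp(-t_m / T1_n)            # cavity decay |n> -> |n-1>
P_up = n_bar_c * (1 - np.exp(-t_m / T1_n))  # cavity excitation |n> -> |n+1>
P_stay = 1 - P_down - P_up - p_d            # cavity remains in |n>
```

For these values, the cavity survival probability per step is dominated by the demolition term rather than by natural decay.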
In the hidden-state ordering $[\left|n^{\prime}g\right\rangle,\left|n^{\prime}e\right\rangle,\left|ng\right\rangle,\left|ne\right\rangle]$ (rows label the state before a step, columns the state after),

$T=\begin{pmatrix}P_{n^{\prime}n^{\prime}}P_{gg}&P_{n^{\prime}n^{\prime}}P_{ge}&P_{n^{\prime}n}P_{gg}&P_{n^{\prime}n}P_{ge}\\ P_{n^{\prime}n^{\prime}}P_{eg}&P_{n^{\prime}n^{\prime}}P_{ee}&P_{n^{\prime}n}P_{eg}&P_{n^{\prime}n}P_{ee}\\ P_{nn^{\prime}}P_{gg}&P_{nn^{\prime}}P_{ge}&P_{nn}P_{gg}&P_{nn}P_{ge}\\ P_{nn^{\prime}}P_{eg}&P_{nn^{\prime}}P_{ee}&P_{nn}P_{eg}&P_{nn}P_{ee}\end{pmatrix}$ (S8)

The lifetime of the qubit is determined by applying a $\pi$-pulse and waiting for a variable time before measuring the population, as shown in Fig. S10. We map out the qubit population as a function of the delay time, fit it with an exponential characterizing the decay process, and obtain $T_{1}^{q}=\mathrm{115\pm 10\,\mu s}$. The dephasing time of the qubit is measured by a Ramsey interferometry experiment with a $\pi/2$ pulse, a variable delay, and a final $\pi/2$ pulse with its phase advanced by $\omega_{r}t$, where $\omega_{r}$ is the Ramsey frequency. The phase advancement is implemented in software. During the variable delay period, a series of $\pi$ pulses is applied to perform spin echoes and reduce sensitivity to low frequency noise. We observe a dephasing time of $T_{2}^{q}=\mathrm{160\pm 10\,\mu s}$, which extends to $T_{2}^{e}=\mathrm{236\pm 6\,\mu s}$ with a single echo sequence. Figure S10: Qubit lifetime and dephasing time measurement. (Top) $T_{1}$ measurement by sending a $\pi$-pulse to excite the transmon to the $\left|e\right\rangle$ state and monitoring its decay as a function of variable delay time. By fitting an exponential function to the qubit excitation probability $P_{e}$, we extract $T_{1}^{q}=\mathrm{115\pm 10\,\mu s}$. (Bottom) $T_{2}$ measurement with a Ramsey experiment.
The sequence consists of two $\pi/2$-pulses separated by a variable delay time. The envelope of the measured oscillations yields $T_{2}^{e}=\mathrm{236\pm 6\,\mu s}$, and their frequency provides the detuning between the drive and the transmon resonance frequency. In this case, we intentionally introduced a 60 kHz synthetic detuning. The storage cavity lifetime is calibrated by performing a cavity $T_{1}$ experiment. We use the OCT pulse to prepare the cavity in the $\left|n\right\rangle=1$ Fock state, wait for a variable delay time, and probe the cavity state at the end by Rabi driving the qubit with a resolved $\pi$-pulse. The resultant cavity population is fitted to an exponential to obtain $T_{1}^{s}=\mathrm{1.36\pm 0.02\,ms}$, as shown in Fig. S11. To measure the cavity dephasing time, the cavity is initialized in a superposition state $(a\left|0\right\rangle+b\left|1\right\rangle)$ by applying a weak coherent drive ($\alpha$) such that only the first two photon states are appreciably populated. The contrast of the signal is set by the relative amplitude $\frac{a^{2}}{a^{2}+b^{2}}$, without any loss of information. After a variable delay time, an identical displacement drive with its phase advanced by $\omega_{r}\,t$ is applied before probing $\left|n\right\rangle=0$ with a resolved $\pi$-pulse on the qubit. The oscillations are fitted to obtain a dephasing time of $T_{2}^{s}=\mathrm{2.39\pm 0.02\,ms}$. Figure S11: Storage cavity lifetime and dephasing time from $\mathrm{T_{1}}$ and Ramsey measurements. The long lived storage cavity mode is ideal for holding a signal photon induced by the dark matter while a series of repeated photon counting measurements is performed. The qubit spurious population is determined by measuring the relative populations of its ground and excited states [56]. This is done by utilizing the $f$-level of the transmon.
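The exponential fits used for the $T_{1}$ extractions above can be sketched on synthetic data; the true lifetime, noise level, and delay grid here are assumptions for illustration, and a simple log-linear least-squares fit suffices:

```python
import numpy as np

# Synthetic T1 decay: true lifetime 115 us, small Gaussian readout noise.
rng = np.random.default_rng(0)
T1_true = 115e-6
t = np.linspace(0, 250e-6, 40)                        # variable delay times
P_e = np.exp(-t / T1_true) + rng.normal(0, 0.01, t.size)

# Fit log(P_e) = -t/T1 + const by linear least squares.
slope, intercept = np.polyfit(t, np.log(P_e), 1)
T1_fit = -1.0 / slope
```

In the experiment a weighted nonlinear fit would be preferable at long delays, where the log transform amplifies the noise; the sketch keeps the delay range short enough that this is not an issue.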
Two Rabi experiments are conducted, swapping population between the $\left|e\right\rangle$ and $\left|f\right\rangle$ levels. First, we apply a $\pi_{ge}$ pulse to invert the qubit population, followed by the $\left|e\right\rangle-\left|f\right\rangle$ Rabi experiment. Second, no $\pi_{ge}$ pulse is applied before the $ef$ Rabi oscillation. The ratio of the amplitudes of the oscillations gives the ratio of the populations of the excited and ground states. Assuming that $P(g)+P(e)=1$, the measured ratio $\frac{P(e)}{P(g)}=0.02$ corresponds to an effective qubit temperature of $\mathrm{54\,mK}$. Figure S12: Decay rate of Fock states. Measured lifetime of the different Fock states prepared in the cavity using GRAPE pulses. The decay rate is inversely proportional to the lifetime ($\Gamma=1/T_{1}$). The curve shows an enhancement in the decay rate as a function of the Fock state $\left|n\right\rangle$. The data are well described by a linear fit, as predicted by [57], where $T^{n}_{1}=T_{1}/n$. In a dispersive interaction, we assume that the measurement of the cavity state via a parity or number resolved $\pi$-pulse does not perturb the state, i.e., it is a quantum non-demolition (QND) measurement and does not induce additional relaxation in the cavity mode. However, a recent study has shown that parity measurements, while highly QND, can induce a small amount of additional relaxation [54]. To estimate this effect in the context of number resolved qubit measurements, we follow the method described in [54]: we perform a cavity $T_{1}$ experiment interleaved with a varying number of repeated number resolved qubit measurements during the delay time. The total relaxation rate is modeled as a combination of the bare storage lifetime $\tau_{s}$ and a demolition probability $p_{d}$ associated with each qubit measurement. In Fig.
S13, we show the extracted total decay time ($\tau_{\rm tot}$) and demolition probability $p_{d}=2.6\pm 0.2\%$ when the cavity is prepared in the $\left|n\right\rangle=1$ Fock state. In other words, a single number resolved qubit measurement is $97.4\%$ QND. For $\left|n\right\rangle=2$, the demolition probability worsens to $p_{d}=4.0\pm 0.4\%$, which likely explains why the detection efficiency in Fig. 3 falls off at higher Fock states. This acts as an additional source of loss for the cavity mode, but only when the resolved $\pi$-pulse is on resonance with the shifted qubit frequency. Figure S13: QNDness of storage cavity measurement. Storage cavity $T_{1}$ measurements were performed with repeated number resolved qubit measurements interleaved during the delay time with a variable repetition interval time $\tau_{\rm rep}$. The extracted total decay time was fit to the model $1/\tau_{\rm tot}=1/\tau_{s}+p_{d}/\tau_{\rm rep}$. From the fit (red line), we infer a demolition probability per readout of $p_{d}=2.6\%$, corresponding to a QNDness of 97.4%, slightly lower than reported for a parity protocol [54]. The natural decay time of the storage $\tau_{s}=\mathrm{1360\,\mu s}$ is indicated by a dashed grey line.

$\left|n\right\rangle$ | $\tau_{s}(\mu s)$ | $p_{d}$ | $\sigma_{p_{d}}$
---|---|---|---
$\left|1\right\rangle$ | 1360 | 0.026 | 0.002
$\left|2\right\rangle$ | 660 | 0.040 | 0.004
$\left|3\right\rangle$ | 527 | 0.10 | 0.05
$\left|4\right\rangle$ | 319 | 0.074 | 0.012

Table S2: Demolition probability. Measured lifetime of the different Fock states and their fitted demolition probabilities with error bars. $p_{d}$ can be approximated with a linear dependence on $n$.

### Emission matrix elements

In order to characterize the emission matrix it is necessary to measure the readout infidelity of a particular transmon state.
We consider only two possible transmon states ($\left|g\right\rangle,\left|e\right\rangle$) in this case, as the number resolved $\pi$-pulses have very narrow spectral width ($\sigma_{\nu}\ll\alpha_{q}$). Each state is prepared 20,000 times and the resultant quadrature values are digitized to assign a voltage in the ($I-Q$) space. The phase of the readout pulse is pre-calibrated to align the signal along the $I$ axis. The histogram corresponding to each state is fitted with a sum of two Gaussian functions to estimate the overlap region and calculate the readout fidelity, $\mathcal{F}=97\%$ (Fig. S14). A discriminator value (red dashed line) is used to assign each readout signal either $\mathcal{G}$ or $\mathcal{E}$ in real time. In the hidden-state ordering $[\left|n^{\prime}g\right\rangle,\left|n^{\prime}e\right\rangle,\left|ng\right\rangle,\left|ne\right\rangle]$, with columns $[\mathcal{G},\mathcal{E}]$,

$E=\frac{1}{2}\begin{pmatrix}F_{g\mathcal{G}}&F_{g\mathcal{E}}\\ F_{e\mathcal{G}}&F_{e\mathcal{E}}\\ F_{g\mathcal{G}}&F_{g\mathcal{E}}\\ F_{e\mathcal{G}}&F_{e\mathcal{E}}\end{pmatrix}$ (S9)

Figure S14: QND readout of the transmon state. (Top) The two quadrature values of the down-converted readout signal for the qubit prepared in the $\left|g\right\rangle$ and $\left|e\right\rangle$ states. Each state preparation contains 20,000 single shot points. (Bottom) By fitting a sum of two Gaussians we estimate the overlap region and assign a single-shot readout fidelity of $\mathcal{F}=97\%$. The red dashed line shows the optimal threshold for tagging a qubit state based on a single shot readout. Readout errors are due to voltage excursions from amplifier noise or spurious qubit transitions. The emission matrix should only contain readout errors that occur due to voltage fluctuations. Errors due to qubit transitions during the readout window are accounted for in the transition matrix.
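The overlap-based fidelity assignment can be sketched analytically for two Gaussian blobs along the $I$ quadrature; the blob centers and width below are illustrative, not the measured histogram parameters:

```python
from math import erf, sqrt

# Two Gaussian readout distributions along the I quadrature (illustrative).
mu_g, mu_e, sigma = -1.0, 1.0, 0.4
threshold = 0.5 * (mu_g + mu_e)   # midpoint discriminator (red dashed line)

def tail(mu, x):
    """P(signal > x) for a Gaussian of mean mu and width sigma."""
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

p_g_as_E = tail(mu_g, threshold)        # |g> mis-tagged as E
p_e_as_G = 1.0 - tail(mu_e, threshold)  # |e> mis-tagged as G
F = 1.0 - 0.5 * (p_g_as_E + p_e_as_G)   # separation fidelity
```

For equal widths the midpoint threshold is optimal; in the experiment the threshold is chosen from the fitted histograms rather than assumed.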
To disentangle the two contributions, we subtract the readout errors caused by the spontaneous heating and decay of the qubit to obtain $F_{g\mathcal{G}}=\mathrm{97.5\pm 1\%}$ and $F_{e\mathcal{E}}=\mathrm{96.8\pm 1\%}$.

Device Parameter | Value
---|---
Qubit frequency | $\omega_{q}=2\pi\times\mathrm{4.961\,GHz}$
Qubit anharmonicity | $\alpha_{q}=-2\pi\times\mathrm{143.2\,MHz}$
Qubit decay time | $T_{1}^{q}=\mathrm{115\pm 10\,\mu s}$
Qubit dephasing time | $T_{2}^{q}=\mathrm{160\pm 10\,\mu s}$
Qubit echo time | $T_{2}^{e}=\mathrm{236\pm 6\,\mu s}$
Qubit residual occupation | $\bar{n}_{q}=\mathrm{2\pm 1}\times 10^{-2}$
Storage frequency | $\omega_{s}=2\pi\times\mathrm{5.965\,GHz}$
Storage decay time | $T_{1}^{s}=\mathrm{1360\pm 23\,\mu s}$
Storage dephasing time | $T_{2}^{s}=\mathrm{2390\pm 286\,\mu s}$
Storage-Qubit Stark shift | $\chi=-2\pi\times\mathrm{1.285\,MHz}$
Storage residual occupation | $\bar{n}_{c}=\mathrm{6.3\pm 3}\times 10^{-3}$
Readout frequency | $\omega_{r}=2\pi\times\mathrm{7.790\,GHz}$
Readout $\left|e\right\rangle$ shift | $2\chi_{r}^{e}=-2\pi\times\mathrm{1.53\,MHz}$
Readout fidelity ($\left|g\right\rangle$) | $F_{g\mathcal{G}}=\mathrm{97.5\pm 1\%}$
Readout fidelity ($\left|e\right\rangle$) | $F_{e\mathcal{E}}=\mathrm{96.8\pm 1\%}$

Table S3: Device parameters. Measured qubit, storage, and readout cavity parameters. These independently measured values are needed to construct the transition and emission matrices, enabling the hidden Markov model to capture the behavior of the system during the measurement sequence.

## Detector characterization

To characterize the detector, the cavity population is varied by applying a weak drive and the cavity photon number is counted using the technique described in the main text.
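The detector calibration described in this section, a line $\bar{n}_{\mathrm{meas}}=\eta\,\bar{n}_{\mathrm{inj}}+\delta$ through the measured versus injected populations, can be sketched on synthetic data; the efficiency, offset, and noise level here are placeholders, not the measured values:

```python
import numpy as np

# Synthetic calibration data: assumed efficiency and false positive offset.
eta_true, delta_true = 0.4, 1e-3
rng = np.random.default_rng(1)
n_inj = np.linspace(0.0, 0.2, 10)
n_meas = eta_true * n_inj + delta_true + rng.normal(0, 1e-4, n_inj.size)

# Least-squares line through the calibration points: slope = eta, intercept = delta.
eta_fit, delta_fit = np.polyfit(n_inj, n_meas, 1)
```

The slope directly gives the detection efficiency and the intercept the false positive probability, mirroring the fit used for the real calibration data.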
In order to extract the efficiency ($\eta$) and false positive probability ($\delta$) of the detector, the relationship between the injected photon population ($\bar{n}_{\mathrm{inj}}$) and the measured photon population ($\bar{n}_{\mathrm{meas}}$) is fit to $\bar{n}_{\mathrm{meas}}=\eta\times\bar{n}_{\mathrm{inj}}+\delta$.

### Detector efficiency

The detector efficiency and false positive probability are determined at varying detection thresholds $\lambda_{\mathrm{thresh}}$. As the detection threshold is increased, more repeated number resolved qubit measurements are required to determine the presence of a photon. This suppresses false positives due to qubit errors, but also decreases the detector efficiency, as events with low likelihood ratio are now rejected. Also, the maximum achievable likelihood ratio decreases with the Fock state prepared in the cavity. Hence, we keep the false positive probability the same when comparing different Fock states, at the expense of detection efficiency. We performed a single photon counting experiment [1] to determine the occupation number of the background photons in the cavity (Fig. S15) and chose the threshold such that $\frac{1}{\lambda_{\mathrm{thresh}}+1}<\bar{n}_{b}^{c}\Rightarrow\lambda_{\rm thresh}=10^{3}$. This measured background translates into a metrological gain of $\mathrm{11.0\,dB}$ below the SQL. Figure S15: Photon counting to measure the cavity background. Repeated parity measurement technique, following [1], to measure the real photon occupation in the cavity, $\bar{n}_{b}^{c}=6.23\cdot 10^{-3}$, with detector based errors less than $10^{-8}$.

### Cavity backgrounds

The number of events which cross $\lambda_{\mathrm{thresh}}$ for the cavity prepared in different Fock states is not the same, and these events are subtracted from the $\alpha\neq 0$ events to demonstrate the enhancement effect. The number of such events is listed in the table below and is similar to the false positive probability from the fits in Fig.
3. The counts are corrected for the detection efficiency to reflect the actual counts. There is no reason to expect the number of counts to increase with $n$, and we include them in the systematic uncertainties when conducting a dark photon search. In an independent dataset, we observe a similar number of counts, as reported in Fig. 4.

$\left|n\right\rangle$ | $N_{\rm trials}$ | $N_{\rm bkgd}$ | $\%$
---|---|---|---
$\left|0\right\rangle$ | 114915 | 23 | 0.020
$\left|1\right\rangle$ | 111601 | 315 | 0.282
$\left|2\right\rangle$ | 113364 | 60 | 0.053
$\left|4\right\rangle$ | 108626 | 827 | 0.761

Table S4: Background counts. Number of background counts $N_{\rm bkgd}$ reported for the cavity initialized in different Fock states, out of $N_{\rm trials}$ trials.

## Converting background counts to dark photon exclusion

As described in the main text, for a coherent source we expect the signal rate to be proportional to the term $a_{0}$. We will derive the expression and compute the kinetic mixing angle which corresponds to $a_{0}$. See the Supp. Section in Ref. [1] for a discussion of the dark matter induced signal.

### Kinetic mixing angle exclusion

For a dark matter candidate on resonance with the cavity frequency ($m_{\mathrm{DM}}c^{2}=\hbar\omega_{c}$), the rate of photons deposited in the cavity prepared in a Fock state $\left|n\right\rangle$ by the coherent build up of electric field in time $\tau$ ($\tau<T_{1}^{s},Q_{\rm DM}/{\nu}$) is given by: $\frac{dN_{\mathrm{HP}}}{dt}=\frac{U/\omega_{s}}{\tau}=\frac{1}{2}\frac{E^{2}V}{\omega_{s}}\frac{1}{\tau}=\frac{1}{2}J^{2}_{\mathrm{DM}}(n+1)\tau^{2}\frac{GV}{\omega_{s}}\frac{1}{\tau}$ (S10) The stimulated emission factor appears via the enhancement of the magnitude of the electric field generated inside the cavity. The volume of the cavity is $34.5\times 0.5\times\mathrm{2.5\,cm^{3}}=\mathrm{43.13\,cm^{3}}$. $G$ encompasses the total geometric factor of the particular cavity used in the experiment.
This includes a factor of $1/3$ due to the dark matter field polarization being randomly oriented every coherence time. For the lowest order mode of the rectangular cavity coupled to the qubit, with $\textbf{E}=\sin\!\left(\frac{\pi x}{l}\right)\sin\!\left(\frac{\pi y}{w}\right)\hat{\textbf{z}}$, the geometric form factor is given by: $G=\frac{1}{3}\frac{\left|\int dVE_{z}\right|^{2}}{V\int dV\left|E_{z}\right|^{2}}=\frac{1}{3}\frac{2^{6}}{\pi^{4}}$ (S11) The dark photon generated current is set by the density of dark matter in the galaxy, $\rho_{\mathrm{DM}}=\mathrm{0.4\,GeV/cm^{3}}=2\pi\times\mathrm{9.67\times 10^{19}\,GHz/cm^{3}}$: $J^{2}_{\mathrm{DM}}=2\epsilon^{2}m^{4}A^{\prime 2}=2\epsilon^{2}m^{2}\rho_{\mathrm{DM}}$ (S12) where $\epsilon$ is the kinetic mixing angle between the dark photon and the visible matter. Substituting Eqn. S12 into Eqn. S10 yields the signal rate of photons deposited in the cavity by a dark photon dark matter candidate: $\frac{dN_{\mathrm{HP}}}{dt}=(n+1)\epsilon^{2}\rho_{\mathrm{DM}}m_{\mathrm{DM}}GV\tau$ (S13) The total number of photons we expect to be deposited is determined by the photon rate and the integration time ($N_{\mathrm{\rm trials}}\,\tau$) for each Fock state: $N_{\mathrm{HP}}=\frac{dN_{\mathrm{HP}}}{dt}\times\tau\times N_{\mathrm{\rm trials}}=(n+1)\epsilon^{2}\rho_{\mathrm{DM}}m_{\mathrm{DM}}GV\tau^{2}N_{\mathrm{\rm trials}}$ (S14) Comparing with the first term in Eq. 5, we obtain $\epsilon^{2}=\frac{a_{0}}{\rho_{\mathrm{DM}}m_{\mathrm{DM}}GV}$.

### Calculating 90% confidence limit
Parameter | $\Theta$ | $\sigma_{\Theta}$
---|---|---
$a_{m}$ | $1.9\times 10^{3}\,(\mathrm{s^{-2}})$ | $\sigma_{a}=9.807\times 10^{5}\,(\mathrm{s^{-2}})$
$\omega_{s}$ | $\mathrm{5.965\,GHz}$ | $\sigma_{\omega_{s}}=\mathrm{25\,Hz}$
$Q_{s}$ | $5.11\times 10^{7}$ | $\sigma_{Q_{s}}=1.4\times 10^{5}$
$V$ | $\mathrm{43.13\,cm^{3}}$ | $\sigma_{V}=\mathrm{1.2\,cm^{3}}$
$G$ | $0.002$ | $\sigma_{G}=0.0002$

Table S5: Stimulated emission experimental parameters. Systematic uncertainties of the physical parameters in the experiment must be incorporated in determining the excluded dark photon mixing angle $\epsilon$. The uncertainty in the dark photon (HP) conversion is determined in the previous section. The storage cavity frequency uncertainty is obtained by Ramsey interferometry. The quality factor of the cavity is given by $Q_{s}=\omega_{s}T_{1}^{s}$, so its uncertainty is calculated as $\sigma_{Q_{s}}^{2}=(\omega_{s}\sigma_{T_{1}^{s}})^{2}+(T_{1}^{s}\sigma_{\omega_{s}})^{2}$. The volume uncertainty is estimated by assuming machining tolerances of 0.005 inches in each dimension. The form factor uncertainty is estimated by assuming a $1\%$ error in the simulated structure. Of the experimental quantities, the DP conversion has the largest systematic uncertainty. By estimating the strength of the coherent drive in the absence of an external drive, together with the measured background counts for different Fock states, we perform a dark photon search. We determine the dark photon mixing angle $\epsilon$ that can be excluded at the 90% confidence level by using the standard error propagation formula, computing the standard deviation of $\epsilon$ from the error estimates for all the parameters tabulated above. The estimated value is $\epsilon_{0}=1.6\times 10^{-15}\pm 3.30\times 10^{-13}$, dominated by the error on $a_{0}$. We can now set the $90\%$ confidence limit on the kinetic mixing angle as $\epsilon^{90\%}=\epsilon_{0}+1.28\sigma_{\epsilon}=4.24\times 10^{-13}$.
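Since $\epsilon=\sqrt{a_{0}/(\rho_{\mathrm{DM}}m_{\mathrm{DM}}GV)}$, a naive quadrature propagation of the relative uncertainties in Table S5 (dominated by $\sigma_{a}$) reproduces the scale, though not the exact value, of the quoted $\sigma_{\epsilon}$; the number in the text comes from the full analysis:

```python
from math import sqrt

# Relative uncertainties from Table S5; the square root halves each one.
rel = {
    "a0": 9.807e5 / 1.9e3,   # dominant: uncertainty on the fitted rate a0
    "G": 0.0002 / 0.002,     # geometric form factor
    "V": 1.2 / 43.13,        # cavity volume
}
rel_eps = 0.5 * sqrt(sum(r ** 2 for r in rel.values()))

eps0 = 1.6e-15                     # central value quoted in the text
sigma_eps = rel_eps * eps0         # naively propagated standard deviation
eps_90 = eps0 + 1.28 * sigma_eps   # one-sided 90% C.L. (1.28 sigma)
```

Note that with $\sigma_{a}\gg a_{0}$ linear propagation is only indicative; this sketch is meant to show which term dominates the limit, not to reproduce the quoted $4.24\times 10^{-13}$ exactly.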
This leads us to exclude, with $90\%$ confidence, dark photons with mixing angle greater than $\epsilon^{90\%}=4.24\times 10^{-13}$, as shown in Fig. S17. Figure S16: Fock state preparation with non-zero occupation in the cavity. Fock state preparation probability as a function of the cavity occupation injected before the OCT pulses. The red dashed line corresponds to the maximum tolerable injected photon number, beyond which the fidelity changes significantly.

## Dark photon parameter space exclusion

A dark photon candidate that would produce more detector counts than background counts is also constrained by the cavity occupation it induces, which degrades the fidelity of Fock state preparation in the cavity. To estimate this, the cavity is prepared with a varying mean photon number before applying the OCT pulse. The resultant state is measured with the same procedure as the stimulated emission protocol to compute the fidelity. We observe that the fidelity changes significantly when the mean injected photon number exceeds $\bar{n}>0.05$, shown by the red dashed line in Fig. S16. This sets the maximum number of photons sourced from the dark photon that can be tolerated before state preparation fails. In principle, we can exclude any value of $\epsilon$ above the red curve, as it would break the first step of the stimulated emission experiment. Figure S17: Excluded $\epsilon$ versus $m_{\gamma^{\prime}}$. Shaded regions in the dark photon parameter space of coupling ($\epsilon$) and mass ($m_{\gamma^{\prime}}$) are excluded with $90\%$ confidence. The horizontal extent is set by the bandwidth of the number resolved qubit $\pi$-pulse, which is insensitive to any drive outside the band. The vertical limit is set by the minimum $\epsilon$ which would result in a dark photon rate large enough to significantly degrade the fidelity of Fock state preparation.
The above calculations assume an infinitely narrow dark matter line. To obtain the excluded region of the dark photon kinetic mixing angle, we must account for the lineshape of the dark matter [47]. We convolve with the dark matter lineshape, characterized by $Q_{\mathrm{DM}}\sim\mathrm{10^{6}}$, to obtain the region shown in Fig. S17. The storage cavity contains an infinite set of discrete resonances, each with a unique coupling to the dark matter. We focus only on the lowest order cavity mode that has a non-zero coupling to both the dark matter and the qubit. In principle, the interactions between other modes and the dark matter could result in additional sensitivity to the dark photon. This would require the mode of interest to have a sufficiently large geometric form factor as well as a resolvable photon number dependent qubit shift. Future dark matter searches could employ structures with multiple resonances to enable multiple simultaneous searches [31].

## References

* [1] Dixit, A. V. _et al._ Searching for dark matter with a superconducting qubit. _Phys. Rev. Lett._ 126, 141302 (2021). URL https://link.aps.org/doi/10.1103/PhysRevLett.126.141302. * [2] Backes, K. M. _et al._ A quantum enhanced search for dark matter axions. _Nature_ 590, 238–242 (2021). URL https://doi.org/10.1038/s41586-021-03226-7. * [3] Brubaker, B. M. _et al._ First results from a microwave cavity axion search at $24\text{ }\text{ }\mu\mathrm{eV}$. _Phys. Rev. Lett._ 118, 061302 (2017). URL https://link.aps.org/doi/10.1103/PhysRevLett.118.061302. * [4] Tse, M. _et al._ Quantum-enhanced advanced LIGO detectors in the era of gravitational-wave astronomy. _Physical Review Letters_ 123 (2019). URL https://doi.org/10.1103/physrevlett.123.231107. * [5] Braine, T. _et al._ Extended search for the invisible axion with the axion dark matter experiment. _Physical Review Letters_ 124 (2020). URL https://doi.org/10.1103/physrevlett.124.101303. * [6] Kim, J.
_et al._ Near-quantum-noise axion dark matter search at CAPP around $9.5\text{ }\text{ }\mathrm{\mu}\mathrm{eV}$. _Phys. Rev. Lett._ 130, 091602 (2023). URL https://link.aps.org/doi/10.1103/PhysRevLett.130.091602. * [7] Chou, A. S. _et al._ Snowmass cosmic frontier report. _arXiv preprint arXiv:2211.09978_ (2022). URL https://doi.org/10.48550/arXiv.2211.09978. eprint 2211.09978v1. * [8] Tanabashi, M. _et al._ Review of particle physics. _Physical Review D_ 98 (2018). URL https://doi.org/10.1103/physrevd.98.030001. * [9] Rubin, V. C., Thonnard, N. & Ford, Jr., W. K. Rotational properties of 21 SC galaxies with a large range of luminosities and radii, from NGC 4605 /r = 4kpc/ to UGC 2885 /r = 122 kpc/. _The Astrophysical Journal_ 238, 471 (1980). URL https://doi.org/10.1086/158003. * [10] Preskill, J., Wise, M. B. & Wilczek, F. Cosmology of the invisible axion. _Physics Letters B_ 120, 127–132 (1983). URL https://doi.org/10.1016/0370-2693%2883%2990637-8. * [11] Abbott, L. & Sikivie, P. A cosmological bound on the invisible axion. _Physics Letters B_ 120, 133–136 (1983). URL https://doi.org/10.1016/0370-2693%2883%2990638-x. * [12] Dine, M. & Fischler, W. The not-so-harmless axion. _Physics Letters B_ 120, 137–141 (1983). URL https://doi.org/10.1016/0370-2693%2883%2990639-1. * [13] Arias, P. _et al._ WISPy cold dark matter. _Journal of Cosmology and Astroparticle Physics_ 2012, 013–013 (2012). URL https://doi.org/10.1088/1475-7516/2012/06/013. * [14] Graham, P. W., Mardon, J. & Rajendran, S. Vector dark matter from inflationary fluctuations. _Physical Review D_ 93 (2016). URL https://doi.org/10.1103/physrevd.93.103520. * [15] Sikivie, P. Experimental tests of the "invisible" axion. _Physical Review Letters_ 51, 1415–1417 (1983). URL https://doi.org/10.1103/physrevlett.51.1415. * [16] Caves, C. M. Quantum limits on noise in linear amplifiers. _Physical Review D_ 26, 1817–1839 (1982). URL https://doi.org/10.1103/physrevd.26.1817. * [17] Yamamoto, T.
# Simulating spacetime with indefinite causal order via Rindler observers Aleksandra Dimić Faculty of Physics, University of Belgrade, Studentski Trg 12-16, 11000 Belgrade, Serbia<EMAIL_ADDRESS>Marko Milivojević Faculty of Physics, University of Belgrade, Studentski Trg 12-16, 11000 Belgrade, Serbia<EMAIL_ADDRESS>Dragoljub Gočanin Faculty of Physics, University of Belgrade, Studentski Trg 12-16, 11000 Belgrade, Serbia <EMAIL_ADDRESS>Časlav Brukner Vienna Center for Quantum Science and Technology (VCQ), University of Vienna, Faculty of Physics, Boltzmanngasse 5, A-1090 Vienna, Austria Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria<EMAIL_ADDRESS> ###### Abstract Realization of indefinite causal order, a theoretical possibility that even causal relations between physical events can be subjected to quantum superposition, apart from its general significance for fundamental physics research, would also enable quantum information processing that outperforms protocols in which the underlying causal structure is definite. In this paper, we propose a way to simulate a specific spacetime with indefinite metric structure by exploiting the equivalence between stationary observers sitting in the vicinity of the event horizon of a Schwarzschild black hole and Rindler observers in Minkowski space. Namely, by putting a Rindler observer, who resides in a causally definite Minkowski background, in a state of quantum superposition of having two different values of proper acceleration, we can simulate the experience of a stationary observer in a gravitational field with an indefinite metric generated by a Schwarzschild black hole in a state of quantum superposition of being at two different spatial locations with respect to the observer. 
In this manner, a pair of entangled Rindler observers can be used to simulate quantum communication protocols such as the gravitational quantum switch or the violation of Bell’s inequality for temporal order. We also discuss the possibility of experimental realization by means of optomechanical resonators. ## 1 Introduction The principle of causality lies at the core of every physical theory and, depending on the context, it has various interpretations. From an operational point of view, causality can be understood as signaling/communication relations between physical observers (systems, in general), an information flow whose properties are intimately related to the nature of space and time, the notions of which have evolved through several stages. In the old Newtonian picture, space and time are two generically different entities, universal for all observers. There is a single flat Euclidean space and a single global time that enables us to universally distinguish between past, present and future. Together, they constitute an absolute, independent background structure relative to which every physical event takes place. Signals can propagate in space with unlimited speed (action at a distance) and, consequently, each event can be caused by any other in its present or past. Einstein’s theory of special relativity (SR) changed this paradigm: space and time became united into a four-dimensional spacetime continuum, Minkowski space, in which signals cannot travel faster than the speed of light, forcing them to stay inside, or on, the local light cone. Nevertheless, the structure of Minkowski space retained the character of an independent, fixed background on which dynamical matter fields propagate. The radical change came with Einstein’s theory of general relativity (GR). The dynamics of the gravitational field, that is, spacetime itself according to GR, is not given a priori, since it is coupled to the dynamics of matter fields. 
There is no prescribed, independent metric structure, no absolute background stage relative to which locations of physical events are to be defined; there are just dynamical fields, spacetime being one of them, and physical events are located only relative to one another. Every event has its own past and future: a class of events from which it can receive information and a class of events to which it can send information. The possibility of communication between different observers is completely determined by the dynamical configuration of light cones. While in flat Minkowski space all light cones have the same slope, in curved spacetimes of GR, they can be tilted relative to each other, according to the distribution of matter. For a given observer in a spacetime with definite causal order, that is, an observer in a gravitational field with a definite metric structure, the causal relations between physical events (as they appear to the observer) are uniquely determined. The next logical question is: can spacetime have indefinite causal order, that is, can it be in a state in which a particular observer experiences a quantum superposition of metric structures? It is generally expected that the unification of quantum mechanics (QM) and gravitational physics will provide us with some deeper insights concerning the nature of space and time at the microscopic (Planck) scale. That is the main subject of quantum gravity. However, the standard methods of quantization of matter fields employed in Quantum Field Theory (QFT) do not seem to work for Einstein gravity; it has the status of a non-renormalizable theory with undetermined high-energy degrees of freedom. In order to transcend the traditional concepts established within GR and QFT, various radical approaches to “quantizing” gravity have been proposed so far, stemming from String Theory, Loop Quantum Gravity, Noncommutative Field Theory, etc. 
To date, there has been no empirical evidence that would support or disprove any of the proposed “high-energy theories”. This state of affairs motivates us to take an _operational point of view_ and reconsider in which sense and to what extent the fundamental principles of QM can be applied to gravity while adhering to the tenets of GR [1, 2]. From a broader perspective, our goal is to find ways to “lure out”, in laboratory conditions, effects that would distinctly characterize gravity as having quantum features. An important question that arises out of these considerations is whether it is possible to impose the principle of quantum superposition upon the causal structure of spacetime, that is, can we have a quantum superposition of two (or more) _macroscopically distinct_ metric structures? In the work of Oreshkov, Costa and Brukner [3], it was found that it is possible to formulate quantum mechanics without any reference to a global causal structure. The resulting framework, the process matrix formalism, allows for processes incompatible with any definite order between operations performed on quantum systems. These indefinite causal structures are shown to be advantageous for quantum computing [4, 5] and quantum communication [6, 7, 8]. One particular example that has been experimentally demonstrated is the “quantum switch” [4, 9, 10, 11, 12, 13], where the main idea is to use an auxiliary quantum system which can coherently control the order in which certain operations are applied. In the case of the so-called gravitational quantum switch (GQS) [14], the role of the control system is played by a gravitating object prepared in a state of quantum superposition of being at two different spatial locations. Due to entanglement with the gravitating object, the spacetime itself is expected to be in a state of quantum superposition of having two macroscopically distinct metrics generated by the gravitating object. 
Here we propose a method for simulating some specific indefinite causal structures, potentially in laboratory conditions, by utilizing the equivalence between stationary observers in the vicinity of the event horizon of a Schwarzschild black hole and Rindler observers in Minkowski space [15]. It is based on the generalization of Einstein’s equivalence principle to spacetimes with indefinite metric structure by which we claim that _quantum superposition of two macroscopically distinct metric structures of spacetime is locally equivalent to a “quantum reference frame”_ [16] _in flat spacetime with two superposed proper accelerations_. This allows us to understand the experience of an observer sitting in an indefinite gravitational field, in particular, in two distinct “superposed Schwarzschild metrics”, in terms of a Rindler observer in flat Minkowski space in a state of superposition of having two different values of its proper acceleration. We present a Rindler version of GQS and the protocol for violation of Bell’s inequality for temporal order that involves three Rindler observers. Finally, we discuss the possibility of experimental realization of these Rindler protocols by means of optomechanical oscillators [17, 18, 19, 20]. ## 2 Rindler observers Consider an arbitrary inertial observer in $(1+1)$-dimensional Minkowski space, and a light cone of a single event of its worldline. This observer defines global time $t$ running along its worldline. With respect to it, we introduce an observer that has constant proper acceleration of magnitude $\alpha$ in the $x$-direction, called the _Rindler observer_. The metric of the $(1+1)$-dimensional Minkowski space covered by globally inertial coordinates $x^{\mu}=(t,x)$ is given by $ds^{2}=-dt^{2}+dx^{2}$, where we set $c=1$, implying the same dimensions of space and time. 
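For later use, it is worth recording the Rindler-coordinate form of the wedge metric. The following standard computation is our addition and is not spelled out in the text:

```latex
% Rindler coordinates (\eta, \rho) on the right wedge:
%   t = \rho\sinh\eta, \qquad x = \rho\cosh\eta,
% so a worldline of constant \rho is the hyperbola x^2 - t^2 = \rho^2,
% i.e. a Rindler observer with proper acceleration \alpha = 1/\rho, and
ds^{2} = -dt^{2} + dx^{2} = -\rho^{2}\, d\eta^{2} + d\rho^{2}.
% Near a Schwarzschild horizon (see Section 3), the substitution
%   \rho = 2\sqrt{R_{S}(r - R_{S})}, \qquad \eta = t/(2R_{S}),
% brings the metric to this same form to leading order in r - R_{S},
% which is the content of the near-horizon equivalence used later.
```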
The worldline of a Rindler observer, parameterized by its proper time $\tau$, is given by the parametric equations: $t(\tau)=\frac{1}{\alpha}\sinh(\alpha\tau),\;\;x(\tau)=\pm\frac{1}{\alpha}\cosh(\alpha\tau).$ (1) Thus, the shape of the Rindler observer’s worldline is a hyperbola, $t^{2}(\tau)-x^{2}(\tau)=-1/\alpha^{2}$, with branches embedded in the spacelike-separated wedges of the above mentioned light cone, called the left (L) and the right (R) Rindler wedge (see Fig. 1 (left panel)). A Rindler observer with larger proper acceleration has a more strongly curved worldline. The structure of light cones in Minkowski space is such that Rindler observers in the R-wedge can only witness the events from regions R and P, and so, the null surface $t=x$ acts as an event horizon for these observers. Regions L and R are causally disconnected from each other, meaning that Rindler observers in the L-wedge cannot communicate with Rindler observers in the R-wedge. Consider now two Rindler observers in the R-wedge, with different proper accelerations $\alpha_{1}$ and $\alpha_{2}$. Let the second one be more curved than the first one, that is, let $\alpha_{2}>\alpha_{1}$. A photon sent to the left from the source $S$, with spacetime coordinates $t_{s}=0$ and $x_{s}=x_{0}>0$, intersects the worldlines of these Rindler observers at proper times $\tau_{1}$ and $\tau_{2}$, respectively (see Fig. 1 (right panel)). At $t=0$ both observers are closer to the origin than $S$. This implies that $\alpha_{2}x_{0}>\alpha_{1}x_{0}>1$. This configuration has an interesting feature that will turn out to be important. Namely, given the values of $x_{0}$ and $\alpha_{1}$, there exists a unique value for $\alpha_{2}$, defined as the nontrivial solution ($\alpha_{2}\neq\alpha_{1}$) of the equation $\alpha_{2}x_{0}=(\alpha_{1}x_{0})^{\frac{\alpha_{2}}{\alpha_{1}}},$ (2) for which $\tau_{1}=\tau_{2}$ (for details, see Appendix A). Figure 1: Rindler observers. 
(Left panel): _Hyperbolic worldlines of left and right Rindler observers in Minkowski space._ Patches L and R, called the left and the right Rindler wedge, respectively, are causally disconnected. This prevents left and right Rindler observers from communicating with each other. (Right panel): _Photon’s worldline intersects the Rindlers._ Two Rindler observers in the R-wedge with different proper accelerations $\alpha_{1}$ and $\alpha_{2}$, $\alpha_{1}<\alpha_{2}$. A photon sent from the point-like source $S$, with spacetime coordinates $t_{s}=0$ and $x_{s}=x_{0}>0$, intersects the worldlines of these Rindler observers at proper times $\tau_{1}$ and $\tau_{2}$, respectively. ## 3 Indefinite causal order via Rindler observers Imagine that we have a simple system involving a Schwarzschild black hole111Having in mind that we are going to make a connection to Rindler observers in Minkowski space, it is more suitable to talk about a Schwarzschild black hole rather than some ordinary spherically symmetric gravitating object, e.g. a planet, because of the importance of an event horizon., and a stationary observer sitting in his/her isolated laboratory that is well enough localized and has negligible effect on the gravitational field. We do not assume any fixed background spacetime to which we could refer in order to define the locations of objects. The black hole and the observer are located only relative to each other. Now suppose that the black hole and the observer are in a state of quantum superposition of being at two different relative distances from each other, where by relative distance we mean the physical proper distance between the observer and the black hole’s horizon, the length of a stationary observer’s meter stick if he/she were to touch the horizon with it. The observer would “feel” that he/she resides in a gravitational field with indefinite metric structure, a kind of “quantum spacetime” whose nature we want to comprehend. 
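Returning to the two right-wedge observers of Section 2: the nontrivial root of Eq. (2) is easy to find numerically. Below is a minimal sketch (the helper names are ours); it uses the meeting time $\tau=\ln(\alpha x_{0})/\alpha$, which follows from Eq. (1) for a left-moving photon launched at $(t,x)=(0,x_{0})$, and assumes $1<\alpha_{1}x_{0}<e$ so that a root with $\alpha_{2}>\alpha_{1}$ exists:

```python
import math

def photon_meeting_time(alpha, x0):
    """Proper time at which a left-moving photon from (t=0, x=x0) meets the
    right-wedge hyperbola of Eq. (1): cosh + sinh gives e^{alpha*tau} = alpha*x0."""
    return math.log(alpha * x0) / alpha

def alpha2_nontrivial(alpha1, x0, u_hi=50.0, tol=1e-12):
    """Nontrivial root of Eq. (2), alpha2*x0 = (alpha1*x0)**(alpha2/alpha1).

    In terms of u = alpha2/alpha1 and a = alpha1*x0, the condition reads
    u = a**(u - 1); we bisect g(u) = (u - 1)*ln(a) - ln(u) on u > 1,
    which brackets the root whenever 1 < a < e."""
    a = alpha1 * x0
    assert 1.0 < a < math.e, "root with alpha2 > alpha1 exists only for 1 < alpha1*x0 < e"
    g = lambda u: (u - 1.0) * math.log(a) - math.log(u)
    lo, hi = 1.0 + 1e-9, u_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return alpha1 * 0.5 * (lo + hi)

alpha1, x0 = 2.0, 1.0                 # alpha1*x0 = 2, exact nontrivial root: alpha2 = 4
alpha2 = alpha2_nontrivial(alpha1, x0)
tau1 = photon_meeting_time(alpha1, x0)
tau2 = photon_meeting_time(alpha2, x0)
print(alpha2, tau1, tau2)             # tau1 == tau2, as Eq. (2) guarantees
```

For $\alpha_{1}x_{0}=2$ the root is exactly $\alpha_{2}=2\alpha_{1}$, and both observers indeed meet the photon at the same proper time $\ln 2/\alpha_{1}$.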
From an operational viewpoint, one could equivalently say that the observer is in a state of superposition of being at two different locations in spacetime with definite Schwarzschild metric. In the latter case, having a definite spacetime structure, we can introduce the standard Schwarzschild coordinates $(t,r,\theta,\phi)$ outside the horizon. The metric is given by: $ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega^{2}_{2},$ (3) with $f(r)=1-\frac{R_{S}}{r}$ and $d\Omega_{2}^{2}=d\theta^{2}+\sin^{2}{\theta}\,d\phi^{2}$, the metric of a $2$-sphere $S^{2}$. The proper radial distance between the stationary observer sitting at $r_{lab}$ and the event horizon at $R_{S}$ is $\rho=\int_{R_{S}}^{r_{lab}}\frac{dr}{\sqrt{f(r)}}.$ (4) To be in “two superposed Schwarzschild gravitational fields” is effectively the same as being in a state of superposition of having two different radial distances from the horizon of a Schwarzschild black hole. In general, for every stationary $(r=\mathrm{const.})$ observer in a gravitational field of a Schwarzschild black hole there is an equivalent Rindler observer in Minkowski space and vice versa (according to Einstein’s principle of equivalence, a gravitational field is _locally equivalent_ to an accelerating reference frame in flat spacetime). An even more natural correspondence holds if a stationary Schwarzschild observer is close to the horizon, because the spacetime metric in the vicinity of the horizon reduces to the metric of Minkowski space in Rindler coordinates. In that case, if the Schwarzschild observer’s proper distance from the horizon is $\rho$, then its corresponding Rindler observer’s distance from the origin will also be $\rho$. These two observers have the same proper acceleration, inversely proportional to $\rho$. 
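These near-horizon statements can be checked numerically. The sketch below works in units $G=c=1$ and uses the textbook proper acceleration of a static observer, $a=R_{S}/(2r^{2}\sqrt{f(r)})$, which the text does not write out explicitly; it confirms both the approximation $\rho\approx 2\sqrt{R_{S}(r-R_{S})}$ and $a\,\rho\to 1$ close to the horizon:

```python
import math

def proper_distance(r_lab, R_S, n=200_000):
    """rho = integral of dr/sqrt(1 - R_S/r) from R_S to r_lab.  The substitution
    r = R_S + s**2 (so dr = 2 s ds) removes the integrable endpoint singularity."""
    s_max = math.sqrt(r_lab - R_S)
    ds = s_max / n
    total = 0.0
    for i in range(n):                      # midpoint rule in s
        s = (i + 0.5) * ds
        r = R_S + s * s
        f = 1.0 - R_S / r
        total += 2.0 * s / math.sqrt(f) * ds
    return total

R_S, r_lab = 1.0, 1.0 + 1e-4                      # observer very close to the horizon
rho = proper_distance(r_lab, R_S)
rho_near = 2.0 * math.sqrt(R_S * (r_lab - R_S))   # near-horizon approximation
f = 1.0 - R_S / r_lab
accel = R_S / (2.0 * r_lab**2 * math.sqrt(f))     # static-observer proper acceleration
print(rho, rho_near, accel * rho)                 # accel*rho approaches 1 at the horizon
```

The product $a\rho$ differs from $1$ only at order $(r_{lab}-R_{S})/R_{S}$, which is the precise sense in which the near-horizon Schwarzschild observer behaves as a Rindler observer at distance $\rho$ from the origin.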
For a Rindler observer this relation between its proper acceleration and its distance from the origin always holds, but for a stationary Schwarzschild observer it holds only in the vicinity of the black hole’s horizon, and so, this condition becomes important for obtaining genuine equivalence (see Appendix B for details). Hence, we can effectively transcribe the original system (an observer and a black hole in a state of quantum superposition of being at two different relative distances from each other) in terms of the corresponding Rindler observer in Minkowski space having two superposed values of its proper acceleration. This entails the claim that Einstein’s equivalence principle holds even in spacetime with indefinite metric structure, in other words, that quantum superposition of two macroscopically distinct metric structures of spacetime is _locally equivalent_ to a “quantum reference frame” in flat spacetime with two superposed proper accelerations. It is an extension of Einstein’s equivalence principle that assumes its compatibility with the linearity of quantum mechanics applied to spacetime. Figure 2: Possible configurations involving two stationary observers, $A_{S}$ and $B_{S}$, and a Schwarzschild black hole that is in a state of superposition of being at two different locations relative to them, and the corresponding configurations that involve entangled Rindler observers in Minkowski space. Let us now take two stationary observers, Schwarzschild-Amber $(A_{S})$ and Schwarzschild-Blue $(B_{S})$222Note that amber and blue are, conveniently, the actual names of the colors designating the two observers., sitting in their isolated laboratories, and a Schwarzschild black hole that is in a state of superposition of being at two different locations with respect to them. $A_{S}$ and $B_{S}$ then reside in a spacetime with indefinite metric structure. 
These two observers correspond to the pair of Rindler observers in Minkowski space, Rindler-Amber $(A_{R})$ and Rindler-Blue $(B_{R})$, with entangled proper accelerations. By examining all possible configurations of this system, we conclude that there are four nonequivalent cases (see Fig. 2). For example, the first configuration corresponds to the superposition of a state in which communication between observers $A_{S}$ ($A_{R}$) and $B_{S}$ ($B_{R}$) is impossible due to the presence of the horizon, and the other state in which they can communicate. The correspondence can be extended to the case of many observers residing in spacetime with indefinite metric structure, and here we will use it to simulate two simple quantum information protocols that can be naturally established using indefinite causal structures: the gravitational quantum switch (for this the second configuration will be relevant) and the protocol for the violation of Bell’s inequality for temporal order (for which we need three observers). ## 4 Simulating gravitational quantum switch via entangled Rindler observers The idea that a gravitating object in spatial superposition can induce a superposition of two gravitational fields dates back to Feynman [21] and was promoted, for example, in [14, 22, 23, 24]. Most importantly for this work, it was employed in [14] as a way of obtaining the gravitational quantum switch. Basically, a gravitating object is prepared in a state of quantum superposition of being at two different spatial locations, thus producing, due to its entanglement with the gravitational field, a spacetime with indefinite metric structure. This opens a possibility of defining a communication protocol in which one can obtain a superposition of temporal order for two operationally defined physical events. The state of the gravitating object plays the role of a quantum control for the order of these events (see [14] for the complete account). 
We will now present a somewhat simpler version of the gravitational quantum switch that can be more easily transcribed in terms of Rindler observers. The setup is illustrated in Fig. $3$. It involves two observers, $A_{S}$ and $B_{S}$, sitting in their isolated laboratories, a photon source $S$, and a Schwarzschild black hole that is in the state of superposition of having two different locations relative to the observers. In the first case the black hole is closer to $A_{S}$, state $|L\rangle$, and in the other it is closer to $B_{S}$, state $|R\rangle$. In both superposed states, $A_{S}$ and $B_{S}$ lie on the same radial ray, sit at fixed (but different) distances from the horizon, and are separated by the same distance. We can think of this as having two observers, with fixed relative distance, in a gravitational field with two superposed macroscopically distinct Schwarzschild metrics. A photon source is connected to the observers and it can send photons to them. The source is gravity-sensitive and it is adjusted so that it emits a photon in the polarization state $|{\Psi}\rangle$ towards the black hole. The position of the black hole thus plays the role of a quantum control for the whole process. Due to the fact that time runs slower for the observer closer to the horizon, we can arrange things so that the photon passes through both laboratories at the same moment of their local proper time (see Appendix C for details). This is analogous to the case of two Rindler observers from Section $2$. When the photon gets inside the laboratory, instantaneously, a unitary transformation, $U_{A}$ or $U_{B}$, depending on the laboratory, is applied to its polarization state. The meeting of the photon and the laboratory $A_{S}$ and instantaneous application of unitary $U_{A}$ is event $a$, and likewise, the meeting of the photon and the laboratory $B_{S}$ and instantaneous application of unitary $U_{B}$ is event $b$. 
Figure 3: Gravitational quantum switch. The system involves a photon source $S$, two stationary observers, $A_{S}$ and $B_{S}$, and a black hole in the state of superposition of having two different locations relative to them. The photon source shoots the photon towards the black hole, and so, the position of the black hole plays the role of a quantum control for the whole process. The photon is in a superposition of traveling in two opposite directions. If $A_{S}$ and $B_{S}$ are, in both superposed states, close to the horizon, then this whole system corresponds to the system involving two Rindler observers in Minkowski space with _entangled proper accelerations_ (Fig. $4$). In the Rindler scenario, the worldline of a photon emitted by the source $S$ intersects the worldlines of the two Rindler observers $A_{R}$ and $B_{R}$ with entangled proper accelerations. On the left panel, the proper acceleration of $A_{R}$ ($\alpha_{1}$) is smaller than the proper acceleration of $B_{R}$ ($\alpha_{2}$), and on the right panel it is the other way around: $A_{R}$ has larger proper acceleration ($\alpha_{2}$) and $B_{R}$ has smaller proper acceleration ($\alpha_{1}$). By choosing suitable values for the proper accelerations $\alpha_{1}$ and $\alpha_{2}$, with $\alpha_{1}<\alpha_{2}$, these meetings occur at the same proper time $\tau^{*}$ (see discussion in Section $2$). The photon is in the polarization state $|{\Psi}\rangle$ and it is acted upon by the Rindler observers333By “observer” we mean the internal dynamical degrees of freedom inside the laboratory, whatever they may be. according to their “color” degree of freedom, “amber” ($A$) or “blue” ($B$), that identifies and distinguishes them. Observer $A_{R}$ performs a unitary transformation $U_{A}$ at $\tau^{*}$ without disturbing the trajectory of the photon, and observer $B_{R}$ performs a unitary transformation $U_{B}$ at $\tau^{*}$ also without disturbing the trajectory of the photon. 
In general, the degrees of freedom of a laboratory (its kinematic mode and internal degrees of freedom) can get entangled with the state of the photon. The internal state of the laboratories evolves according to the observer’s proper time, since it accounts for the physical rate of change. Thus, we will take a state $|{\tau_{\alpha}}\rangle$ of the observer’s “clock” and its “ticking” (which depends on $\alpha$) to be abstractions of its entire actual state and its evolution, respectively, without getting into details of what the observer’s actual degrees of freedom and Hamiltonian are. We now define event $a$ to be the meeting of the photon with $A_{R}$ and instantaneous application of the unitary $U_{A}(\tau^{*})$, and likewise, we define event $b$ to be the meeting of the photon with $B_{R}$ and instantaneous application of the unitary $U_{B}(\tau^{*})$. Figure 4: Quantum switch via Rindler observers. The worldline of a photon emitted by the source $S$ intersects the worldlines of the two Rindler observers $A_{R}$ and $B_{R}$ that have entangled proper accelerations. On the left panel, $A_{R}$ has smaller proper acceleration ($\alpha_{1}$) than $B_{R}$ ($\alpha_{2}$), and on the right panel, $A_{R}$ has larger proper acceleration ($\alpha_{2}$) than $B_{R}$ ($\alpha_{1}$). By choosing suitable values for the proper accelerations, these meetings occur at the same proper time $\tau^{*}$. Rindler observers act upon the photon according to their “color” degree of freedom, “amber” ($A$) or “blue” ($B$), that identifies them. Observer $A_{R}$ performs a unitary transformation $U_{A}$ at $\tau^{*}$ without disturbing the trajectory of the photon, and observer $B_{R}$ performs a unitary transformation $U_{B}$ at $\tau^{*}$ also without disturbing the trajectory of the photon. 
When the laboratories meet the photon, they instantaneously come to rest and remain that way until some particular moment $t_{m}$ of inertial observer $C$’s proper time at which a projective measurement is performed in order to disentangle the state of the photon from that of the laboratories. Observer $C$ eventually receives the photon. It would also be convenient to neutralize the accelerations of the Rindler laboratories at the moment they meet the photon because the kinematic state of the laboratory can also get entangled with the state of the photon444This is analogous to letting the corresponding Schwarzschild observers $A_{S}$ and $B_{S}$ become freely falling at the time they meet the radially falling photon. In this way we can avoid the difficulty of disentangling these kinematic degrees of freedom from the state of the photon afterwards. Neutralization can be achieved by instantaneously putting to rest each of the laboratories when they meet the photon, making them inertial from that point on. For the sake of reference, we will update the state of the whole system (Rindlers $\otimes$ photon) by using the proper time $t$ of inertial observer $C$ sitting at $x=0$. If the system is prepared at $t=0$ in the composite state $|{\tau_{\alpha_{1}}(0),A}\rangle|{\tau_{\alpha_{2}}(0),B}\rangle|{\Psi}\rangle$, the photon is first sent to $A_{R}$ (left panel), and transmitted in the same direction to $B_{R}$. In the other case, when the state of the system is $|{\tau_{\alpha_{1}}(0),B}\rangle|{\tau_{\alpha_{2}}(0),A}\rangle|{\Psi}\rangle$ (right panel), the signal first gets to $B_{R}$ and then to $A_{R}$. 
If the system is prepared in a state of superposition of these two states, at $t=0$ we have $\frac{1}{\sqrt{2}}\Big(|{\tau_{\alpha_{1}}(0),A}\rangle|{\tau_{\alpha_{2}}(0),B}\rangle+|{\tau_{\alpha_{1}}(0),B}\rangle|{\tau_{\alpha_{2}}(0),A}\rangle\Big)|{\Psi}\rangle.$ (5) At $t<t_{1}$ (where $t_{1}$ is $C$’s time coordinate of the intersection of the photon’s worldline with the less curved Rindler worldline) the state is $\frac{1}{\sqrt{2}}\Big(|{\tau_{\alpha_{1}}(t),A}\rangle|{\tau_{\alpha_{2}}(t),B}\rangle+|{\tau_{\alpha_{1}}(t),B}\rangle|{\tau_{\alpha_{2}}(t),A}\rangle\Big)|{\Psi}\rangle.$ (6) When the photon passes through the laboratories, the unitary transformation $U_{A}(\tau^{*})$ or $U_{B}(\tau^{*})$ is applied to it, depending on the laboratory. After the passage of the photon through both laboratories, at some instant $t$ such that $t>t_{2}$ (where $t_{2}$ is $C$’s time coordinate of the intersection of the photon’s worldline with the more curved Rindler worldline) the state of the whole system is given by $\frac{1}{\sqrt{2}}\Big(|{\tau^{*}+t-t_{1},{A}}\rangle|{\tau^{*}+t-t_{2},{B}}\rangle U_{B}(\tau^{*})U_{A}(\tau^{*})|{\Psi}\rangle+|{\tau^{*}+t-t_{1},{B}}\rangle|{\tau^{*}+t-t_{2},{A}}\rangle U_{A}(\tau^{*})U_{B}(\tau^{*})|{\Psi}\rangle\Big),$ (7) where the differences $t-t_{1}$ and $t-t_{2}$ are the time intervals during which the respective Rindler laboratories are at rest relative to $C$. Finally, we need to disentangle the state of the photon from the internal state of the Rindler laboratories. To this end, at some fixed moment $t_{m}$ of $C$’s global time, a projective measurement (postselection of the internal state of the Rindler laboratories) is performed in the superposition basis $\\{|{m_{i}}\rangle,|{m_{i}^{\perp}}\rangle\,|\,i=1,2\\}$, separately for each laboratory. 
The basis states are given by $|{m_{i}}\rangle=\frac{1}{\sqrt{2}}\Big(|{\tau^{*}+t_{m}-t_{i},A}\rangle+|{\tau^{*}+t_{m}-t_{i},B}\rangle\Big),\quad|{{m_{i}}^{\perp}}\rangle=\frac{1}{\sqrt{2}}\Big(|{\tau^{*}+t_{m}-t_{i},A}\rangle-|{\tau^{*}+t_{m}-t_{i},B}\rangle\Big).$ (8) Postselection on any pair of possible measurement results leads to the following state of the photon: $\frac{1}{\sqrt{2}}(U_{B}(\tau^{*})U_{A}(\tau^{*})\pm U_{A}(\tau^{*})U_{B}(\tau^{*}))|{\Psi}\rangle,$ (9) which corresponds to an indefinite temporal order of events $a$ and $b$. ## 5 Violation of Bell’s inequality for temporal order In Ref. [14] it was shown that by using a massive object in a spatial superposition (and, by extension, a superposition of spacetimes with two different geometries) as a control system, one can realize events with “entangled temporal order”. This allows a violation of Bell’s inequalities [25, 26, 27, 28, 29, 30, 31] for temporal order. Here we present an alternative realization of this protocol using Rindler observers. To this end, we consider the following situation: laboratory $A_{R}$ is in the left Rindler wedge, laboratory $B_{R}$ is in the right Rindler wedge, and laboratory $C_{R}$ is in a superposition of being in the right and the left Rindler wedge. Figure 5: Protocol for the violation of Bell’s inequality. In the “gravitational” scenario, the black hole is in a superposition of being between $A_{S}$ and $C_{S}$ and between $C_{S}$ and $B_{S}$. This corresponds to the Rindler scenario where $C_{R}$ is in a superposition of being in both wedges, left and right, while $A_{R}$ and $B_{R}$ have swapped magnitudes of acceleration while staying in their respective wedges. 
This corresponds to the three stationary laboratories $A_{S}$, $B_{S}$ and $C_{S}$ residing in the spacetime generated by a Schwarzschild black hole that is in a state of superposition of being between $A_{S}$ and $C_{S}$ and between $C_{S}$ and $B_{S}$ (see Fig. 5). The protocol for the violation of Bell’s inequality goes as follows: two sources $S_{1}$ and $S_{2}$ send two photons (one photon each), which are initially in the uncorrelated state $|{\Psi_{L}}\rangle\otimes|{\Psi_{R}}\rangle$, into the right and left Rindler wedges, respectively. When a photon meets a Rindler laboratory $X_{R}$ ($A_{R}$, $B_{R}$ or $C_{R}$), the local unitary transformation $U_{X}(\tau^{*})$ is performed on it and the photon proceeds in the same direction. As in the previous protocol, after the passage of the photons, the states of the laboratories are decoupled from the states of the photons. The initial state of the laboratories and the photons is given by $\frac{1}{\sqrt{2}}\Big(|{\tau_{-\alpha_{2}}^{A}}\rangle|{\tau_{\alpha_{2}}^{C}}\rangle|{\tau_{\alpha_{1}}^{B}}\rangle+|{\tau_{-\alpha_{1}}^{A}}\rangle|{\tau_{-\alpha_{2}}^{C}}\rangle|{\tau_{\alpha_{2}}^{B}}\rangle\Big)|{\Psi_{L}}\rangle|{\Psi_{R}}\rangle,$ (10) where $|{\tau_{\beta}^{\rm lab}}\rangle=|{\tau_{\beta}(0),{\rm lab}}\rangle$, $\beta\in\{\alpha_{1},\alpha_{2},-\alpha_{1},-\alpha_{2}\}$, ${\rm lab}\in\{A,B,C\}$. One can readily check that the joint state of the two photons, after performing appropriate measurements on the internal degrees of freedom of the Rindler laboratories (tracing out the degrees of freedom of the laboratories can be done, in principle, in many ways; one suggestion is given in the previous section), 
is $\frac{1}{\sqrt{2}}(U_{A}(\tau^{*})|{\Psi_{L}}\rangle U_{C}(\tau^{*})U_{B}(\tau^{*})|{\Psi_{R}}\rangle\pm U_{C}(\tau^{*})U_{A}(\tau^{*})|{\Psi_{L}}\rangle U_{B}(\tau^{*})|{\Psi_{R}}\rangle).$ (11) If we perform unitary transformations such that the states $U_{A}|{\Psi_{L}}\rangle$, $U_{C}U_{B}|{\Psi_{R}}\rangle$ are orthogonal to $U_{C}U_{A}|{\Psi_{L}}\rangle$, $U_{B}|{\Psi_{R}}\rangle$, respectively, then the state (11) is maximally entangled. One possible choice of states and operations is $|{\Psi_{L}}\rangle=|{+}\rangle=\frac{1}{\sqrt{2}}(|{0}\rangle+|{1}\rangle)$, $|{\Psi_{R}}\rangle=|{0}\rangle$, $U_{A}=H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)$, $U_{B}=\sigma_{z}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)$ and $U_{C}=\sigma_{x}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)$. Finally, we can imagine two inertial observers sitting at $x=0$, for example, who can choose suitable measurements on the corresponding photons and perform a Bell test, thus confirming the violation of Bell’s inequality for temporal order. ## 6 Conclusion To conclude, we applied an equivalence between stationary observers near the event horizon of a Schwarzschild black hole and Rindler observers in Minkowski space to simulate quantum information protocols in a gravitational field with indefinite metric structure. We claim that such a gravitational field is locally equivalent to a quantum non-inertial reference frame in Minkowski space that has superposed proper acceleration. An important example is the gravitational quantum switch, where one uses a gravitating object in a state of superposition of being at two different spatial positions as a quantum control; to simulate it, we need two Rindler observers in Minkowski space with entangled proper accelerations. Likewise, the violation of Bell’s inequality for temporal order can be simulated by using three Rindler observers. 
Thus, we are able to “mimic” the experience of a stationary observer in a spacetime with two superposed Schwarzschild metrics by preparing the corresponding Rindler observer in a state of superposition of having two different proper accelerations. There is a growing effort in demonstrating quantum features of nano-to-mesoscale optomechanical systems. This may provide challenging, yet feasible, experimental realizations for the proposed Rindler protocols [17]. Recently, mesoscopic mechanical resonators were considered as quantum non-inertial reference frames [18, 19], and entanglement of two massive mechanical oscillators has been achieved [20]. It has been proposed to utilize quantum optical fields in order to prepare and measure the quantum states of mechanical resonators, conceivably opening the possibility to quantumly control the acceleration of such quantum non-inertial reference frames [17]. A potential drawback of the proposed protocols might arise due to the Unruh effect, that is, the fact that a Rindler observer experiences the ordinary Minkowski vacuum as a thermal state. In this context, the Rindler observer should be viewed as an Unruh-DeWitt detector [32], which interacts with, for example, a scalar quantum field in Minkowski space. The temperature detected by the Rindler observer is related to its proper acceleration $\alpha$ by the relation $T=\hbar\alpha/2\pi k_{B}c$. The increase of the thermal noise may affect the final state, such that it can no longer be considered a coherent superposition, but rather a (convex) classical mixture. However, since in our scheme we can choose $\alpha$ to be arbitrarily small by tuning the other parameters, the Unruh effect can always be made negligible. 
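To get a sense of the magnitudes involved, the Unruh temperature $T=\hbar\alpha/2\pi k_{B}c$ can be evaluated directly. The illustrative sketch below (using CODATA values of the constants) shows that even for a proper acceleration of $10^{20}\,\mathrm{m/s^{2}}$ the associated temperature is below a kelvin, supporting the claim that the effect can be made negligible for small $\alpha$:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB   = 1.380649e-23      # Boltzmann constant, J/K
C    = 2.99792458e8      # speed of light, m/s

def unruh_temperature(alpha):
    """Unruh temperature T = hbar * alpha / (2 pi k_B c) for proper acceleration alpha [m/s^2]."""
    return HBAR * alpha / (2 * math.pi * KB * C)

for alpha in (9.81, 1e20):
    print(f"alpha = {alpha:.3g} m/s^2  ->  T = {unruh_temperature(alpha):.3g} K")
```

For $\alpha = 9.81\,\mathrm{m/s^{2}}$ (Earth's gravity) this gives $T \approx 4\times 10^{-20}\,\mathrm{K}$, many orders of magnitude below any realistic environmental temperature.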
In future work, it would be interesting to explore further the possibility of transcribing quantum information protocols in general spacetimes with indefinite causal order in terms of equivalent quantum non-inertial reference frames in Minkowski space, with the goal of establishing the full correspondence. This could be a step towards a better understanding of the quantum nature of spacetime. Acknowledgments. The authors thank Natália Móller, Dejan Simić, Marko Vojinović, Nikola Paunković and Ämin Baumeler for helpful comments. A.D. and M.M. acknowledge support from the project No. ON171035 and D.G. from the project No. ON171031 of the Serbian Ministry of Education and Science. Additionally, A.D. acknowledges support from a scholarship awarded by The Austrian Agency for International Cooperation in Education and Research (OeAD-GmbH). A.D. and D.G. acknowledge grant FQXi-MGA-1806 that supported their stay in Vienna. A.D. and D.G. would also like to thank the University of Vienna and IQOQI for hospitality during their stay. Č.B. acknowledges the support of the Austrian Academy of Sciences through Innovationsfonds Forschung, Wissenschaft und Gesellschaft, the Austrian Science Fund (FWF) through the projects I-2526-N27 and I-2906, and the University of Vienna through the research platform TURIS. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. Author Contribution. A.D., M.M., D.G. and Č.B. equally contributed to all aspects of the research. ## References * [1] L. Hardy, arXiv:0509120 [gr-qc]. * [2] L. Hardy, _J. Phys. A_ 40, 3081 (2007). * [3] O. Oreshkov, F. Costa, and Č. Brukner, _Nat. Commun._ 3, 1092 (2012). * [4] G. Chiribella, G. M. D’Ariano, P. Perinotti, and B. Valiron, _Phys. Rev. A_ 88, 022318 (2013). * [5] M. Araújo, F. Costa, and Č. Brukner, _Phys. Rev. Lett._ 113, 250402 (2014). 
* [6] A. Feix, M. Araújo, and Č. Brukner, _Phys. Rev. A_ 92, 052326 (2015). * [7] P. A. Guérin, A. Feix, M. Araújo, and Č. Brukner, _Phys. Rev. Lett._ 117, 100502 (2016). * [8] D. Ebler, S. Salek and G. Chiribella, _Phys. Rev. Lett._ 120, 120502 (2018). * [9] G. Chiribella, _Phys. Rev. A_ 86, 040301(R) (2012). * [10] N. Friis, V. Dunjko, W. Dür, and H. J. Briegel, _Phys. Rev. A_ 89, 030303(R) (2014). * [11] L. M. Procopio, A. Moqanaki, M. Araújo, F. Costa, I. A. Calafell, E. G. Dowd, D. R. Hamel, L. A. Rozema, Č. Brukner, and P. Walther, _Nat. Commun._ 6, 7913 (2015). * [12] T. M. Rambo, J. B. Altepeter, P. Kumar, and G. M. D’Ariano, _Phys. Rev. A_ 93, 052321 (2016). * [13] G. Rubino, L. A. Rozema, A. Feix, M. Araújo, J. M. Zeuner, L. M. Procopio, Č. Brukner, and P. Walther, _Sci. Adv._ 3, e1602589 (2017). * [14] M. Zych, F. Costa, I. Pikovski, and Č. Brukner, arXiv: 1708.00248. * [15] F. Dahia, and P.J.F. da Silva, _General Relativity and Gravitation_ , 43, 269 (2011). * [16] F. Giacomini, E. Castro-Ruiz, Č. Brukner, arXiv:1712.07207 [quant-ph]. * [17] R. Kaltenbaek, M. Aspelmeyer, P. F. Barker, A. Bassi, J. Bateman, K. Bongs, et al., _EPJ Quantum Technology_ 3, 5 (2015). * [18] B. N. Katz, M. P. Blencowe, and K. C. Schwab, _Phys. Rev. A_ 92, 042104 (2015). * [19] M. Abdi, P. Degenfeld-Schonburg, M. Sameti, C. Navarrete-Benlloch, and M. J. Hartmann, _Phys. Rev. Lett._ 116, 233604 (2016). * [20] C. F. Ockeloen-Korppi, E. Damskagg, J.-M. Pirkkalainen, M. Asjad, A. A. Clerk, F. Massel, M. J. Woolley and M. A. Sillanpaa, _Nature_ 556, 478 (2018). * [21] R. Feynman, Chapel Hill Conference Proceedings, 1957. * [22] C. Anastopoulos and B.-L. Hu, _Class. Quant. Grav._ 32, 165022 (2015). * [23] S. Bose, A. Mazumdar, G. W. Morley, H. Ulbricht, M. Toro, M. Paternostro et al., _Phys. Rev. Lett._ 119, 240401 (2017). * [24] C. Marletto and V. Vedral, _Phys. Rev. Lett._ 119, 240402 (2017). * [25] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, _Rev. Mod. 
Phys._ 81, 865 (2009). * [26] J. S. Bell, _Physics_ 1, 195 (1964). * [27] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, _Phys. Rev. Lett._ 23, 880 (1969). * [28] S. J. Freedman and J. F. Clauser, _Phys. Rev. Lett._ 28, 938 (1972). * [29] B. Hensen et al., _Nature_ 526, 682 (2015). * [30] M. Giustina et al., _Phys. Rev. Lett._ 115, 250401 (2015). * [31] L. K. Shalm et al., _Phys. Rev. Lett._ 115, 250402 (2015). * [32] N. D. Birrell and P. C. W. Davies, _Quantum fields in curved space_ (Cambridge University Press, 1984). ## APPENDIX ## Appendix A Equal proper time condition for two Rindler observers Here we give a simple derivation of the relation between the proper accelerations of two Rindler observers that must be satisfied so that a photon’s worldline intersects the worldlines of the Rindler observers at the same moment of their individual proper times. From Fig. A1 we can see that $x_{0}-x_{i}=t_{i},$ (12) where $i\in\{1,2\}$. Given the parametric equations of a Rindler observer’s worldline, $t_{i}=\frac{1}{\alpha_{i}}\sinh(\alpha_{i}\tau_{i}),\;\;x_{i}=\frac{1}{\alpha_{i}}\cosh({\alpha_{i}\tau_{i}}),$ (13) we can deduce the instant of the observer’s proper time $\tau_{i}$ at which he/she receives the signal from the point-source $S$ at $x_{0}>0$. From (12) and (13) it follows that $\frac{{\rm e}^{\alpha_{i}\tau_{i}}+{\rm e}^{-\alpha_{i}\tau_{i}}}{2\alpha_{i}}+\frac{{\rm e}^{\alpha_{i}\tau_{i}}-{\rm e}^{-\alpha_{i}\tau_{i}}}{2\alpha_{i}}=x_{0}.$ (14) Figure A1: A photon intersecting two Rindler worldlines. The worldline of a photon sent from a point-like source $S$ intersects the worldlines of two Rindler observers at instances $\tau_{1}$ and $\tau_{2}$ of their respective proper times. Inertial reference frame coordinates of the intersection points are denoted by $t_{i}$, $x_{i}$, $i\in\{1,2\}$. Figure A2: Numerical analysis of the ratio of accelerations for the two Rindler observers. 
If we set $X=\alpha_{1}x_{0}$ and $Y=\alpha_{2}x_{0}$, then (16) becomes $Y=\Phi_{X}(Y)=X^{\frac{Y}{X}}$. We found three classes of solutions; under the assumption $Y>X$, only one class is relevant. One trivial solution in all three cases is $Y=X$. In (a), the case $X<1$ is illustrated, with $X=1/2$, where we have only the trivial solution. When $X\in(1,{\rm e})$, we have a non-trivial solution with $Y>X$; that case is represented in (b), for $X={\rm e}/2$. Finally, in the case $X>{\rm e}$, we get two solutions: the trivial one and another for which $Y<X$; that case is illustrated in (c), where $X$ is chosen to be $2{\rm e}$. From (14) it then follows that $\tau_{i}=\frac{1}{\alpha_{i}}\ln(\alpha_{i}x_{0}).$ (15) Note that both the prefactor and the argument of the logarithm are positive. The condition of equality of the proper times $\tau_{1}$ and $\tau_{2}$ gives us the following relation between $\alpha_{1}$ and $\alpha_{2}$: $\alpha_{2}x_{0}=(\alpha_{1}x_{0})^{\frac{\alpha_{2}}{\alpha_{1}}}.$ (16) By introducing the new variables $X:=\alpha_{1}x_{0}$ and $Y:=\alpha_{2}x_{0}$, the previous equation can be formulated as $Y=X^{\frac{Y}{X}}.$ (17) Numerical analysis of (17) shows that the solution $Y=X$, which exists for each value of $X$, is unique for $X<1$ (Fig. A2 (a)). This solution is trivial, since it corresponds to a single Rindler observer (or two overlapping Rindler observers with $\alpha_{1}=\alpha_{2}$). If $X>1$, there are two possible solutions for $Y$: the trivial one and another that can be greater or less than $X$ (Fig. A2 (b,c)). Note that for every value of $x_{0}$ (the position of the source) we have a continuous infinity of pairs of Rindler trajectories that satisfy (16). ## Appendix B Schwarzschild metric in the vicinity of a horizon Worldlines of stationary observers in curved spacetime do not conform to the geodesics defined by the spacetime metric. To maintain a fixed position they must oppose their inertia with some proper acceleration. 
Generally, the $4$-acceleration of an observer in curved spacetime is given by $a^{\mu}=U^{\nu}\nabla_{\nu}U^{\mu}$, where $U^{\mu}$ is its $4$-velocity, and its proper acceleration by $a=\sqrt{g_{\mu\nu}a^{\mu}a^{\nu}}$. In particular, for a stationary observer in the Schwarzschild metric $ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega^{2}_{2},$ (18) with $f(r)=1-\frac{R_{S}}{r}$ and $d\Omega_{2}^{2}=d\theta^{2}+\sin^{2}{\theta}\,d\phi^{2}$, the metric of a $2$-sphere $S^{2}$, we have $a=\frac{R_{S}}{2r^{2}\sqrt{f(r)}}.$ (19) Analogously to the proper time $d\tau=\sqrt{f(r)}dt$ of an observer sitting at $r={\rm const}$, we can introduce the proper radial distance, $d\rho=\frac{dr}{\sqrt{f(r)}}.$ (20) Integrating from $R_{S}$ to $r$ we get the proper radial distance of the stationary observer at some fixed $r$ from the event horizon, $\rho=r\left(1-\frac{R_{S}}{r}\right)^{1/2}+\frac{R_{S}}{2}\ln\left[\frac{2r}{R_{S}}-1+\frac{2r}{R_{S}}\left(1-\frac{R_{S}}{r}\right)^{1/2}\right].$ (21) From (21) it follows that $\rho\sim r$ in the limit $r/R_{S}\rightarrow+\infty$, and so from (19) we get $a\sim R_{S}/2\rho^{2}$, which is just the Newtonian inverse square law. Now, let $r=R_{S}+\epsilon$ for some small $\epsilon$. In the limit $\epsilon/R_{S}\rightarrow 0$ we have $\rho\sim 2\sqrt{R_{S}\epsilon}$, and the proper acceleration of a stationary observer in the vicinity of the event horizon is inversely proportional to its proper distance from the horizon, $a\sim 1/\rho$. In the intermediate region, the proper acceleration is a complicated function of the observer’s proper radial distance. The Schwarzschild metric near the horizon becomes $ds^{2}=-\rho^{2}d\eta^{2}+d\rho^{2}+R_{S}^{2}d\Omega^{2}_{2},$ (22) where we introduced the new time coordinate $\eta=\frac{t}{2R_{S}}$. The non-angular part of the above metric is the metric of $(1+1)$-dimensional Minkowski space, denoted by $\mathcal{M}_{2}$, in Rindler coordinates. 
This becomes evident if we start with the metric of $\mathcal{M}_{2}$ in Minkowski coordinates $(T,X)$, $ds^{2}_{\mathcal{M}_{2}}=-dT^{2}+dX^{2},$ (23) and introduce Rindler coordinates $(\rho,\eta)$ by $T=\rho\sinh\eta$ and $X=\rho\cosh\eta$, in which the above metric takes the form $ds^{2}_{\mathcal{M}_{2}}=-\rho^{2}d\eta^{2}+d\rho^{2}.$ (24) Coordinate $\eta$ is time-like and $\rho$ is space-like. Since $X^{2}-T^{2}=\rho^{2}\geq 0$, the coordinates $(\rho,\eta)$ cover only part of $\mathcal{M}_{2}$: the right Rindler wedge. Rindler coordinates $(\rho,\eta)$ become singular at $\rho=0$ but, using the Minkowski coordinates $(T,X)$, one can analytically continue them from the right Rindler wedge to the whole Minkowski space. Similarly, in the case of a Schwarzschild black hole, we use Kruskal coordinates to make an analytic continuation of the Schwarzschild coordinates $(t,r)$ across the horizon, thus obtaining their maximal extension. The event horizon of a black hole, defined by $\rho=0$ as seen from (22), corresponds to the light cone $T=\pm X$, and the near-horizon black-hole geometry is ${\rm Rindler}\times S^{2}$. An observer at $r={\rm const}$ $(r\approx R_{S})$ in the Schwarzschild metric corresponds to a uniformly accelerating observer with $\rho={\rm const}$ in the Rindler wedge, i.e. an observer in Minkowski space whose worldline is the hyperbola $X^{2}-T^{2}=\rho^{2}={\rm const}$ and whose constant proper acceleration is given by $\alpha=\frac{1}{\rho}=\frac{1}{2\sqrt{R_{S}}\sqrt{r-R_{S}}}.$ (25) Figure C1: Schematic representation of the black hole and laboratories $A_{1}$ and $A_{2}$. A radially falling photon passes through stationary laboratories $A_{1}$ and $A_{2}$ sitting at two different radial distances from the black hole. The different ticking rates of the local clocks make it possible to arrange the relative distances so that the meetings of the photon with the laboratories occur at the same moment of their local proper times. 
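The correspondence between (19) and (25) can be checked numerically: close to the horizon, the exact Schwarzschild proper acceleration approaches the Rindler value $\alpha=1/(2\sqrt{R_{S}(r-R_{S})})$. A small illustrative check, in units with $c=G=1$ and $R_{S}=1$:

```python
import math

R_S = 1.0  # Schwarzschild radius in geometric units

def a_schwarzschild(r):
    """Exact proper acceleration of a stationary observer, eq. (19)."""
    f = 1.0 - R_S / r
    return R_S / (2.0 * r * r * math.sqrt(f))

def a_rindler(r):
    """Near-horizon Rindler approximation, eq. (25): alpha = 1/rho."""
    return 1.0 / (2.0 * math.sqrt(R_S) * math.sqrt(r - R_S))

# The relative difference shrinks as the observer approaches the horizon
for eps in (1e-2, 1e-4, 1e-6):
    r = R_S + eps
    exact, approx = a_schwarzschild(r), a_rindler(r)
    print(eps, exact, approx, abs(exact - approx) / exact)
```

Expanding (19) for $r=R_{S}+\epsilon$ shows the relative error is of order $\epsilon/R_{S}$, which the printed values confirm.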
## Appendix C Equal proper times for a free-falling photon We start with the equation for a radially free-falling photon in Schwarzschild coordinates: $\frac{dt}{dr}=-\frac{1}{1-\frac{R_{S}}{r}}=-\frac{r}{r-R_{S}},$ (26) where $R_{S}$ is the Schwarzschild radius. A radially falling photon, emitted at $r=R_{0}$, passes through the laboratory $A_{1}$, sitting at $r=r_{1}$, at the time $t_{1}$. A simple calculation gives $t_{1}=R_{0}-r_{1}+R_{S}\ln{\frac{R_{0}-R_{S}}{r_{1}-R_{S}}}.$ (27) The same photon passes through the laboratory $A_{2}$, sitting at $r=r_{2}$, at the time $t_{2}$ given by (see Fig. C1) $t_{2}=R_{0}-r_{2}+R_{S}\ln{\frac{R_{0}-R_{S}}{r_{2}-R_{S}}}.$ (28) We are looking for the condition for both events to happen at the same local proper times measured in laboratories $A_{1}$ and $A_{2}$, i.e. $\tau_{1}=\tau_{2}.$ (29) The proper times of the laboratories are related to the global time $t$ (the proper time of a stationary observer at infinity) by $\tau_{i}=\sqrt{1-\frac{R_{S}}{r_{i}}}t_{i}=\sqrt{1-\frac{R_{S}}{r_{i}}}\left(R_{0}-r_{i}+R_{S}\ln{\frac{R_{0}-R_{S}}{r_{i}-R_{S}}}\right).$ (30) Numerical analysis of this equation shows that it is not possible to satisfy (29) for arbitrary values of $R_{0}$ and $R_{S}$. However, if the ratio $R_{S}/R_{0}<10^{-4}$, pairs of $r_{1}$ and $r_{2}$ that satisfy the condition of equal proper times are always present. It is important to mention that one position, $r_{1}$, is very close to the black hole, that is, $r_{1}<10R_{S}$, while the other position, $r_{2}$, can be further away.
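The equal-proper-time condition (29) and (30) can be explored with a short numerical search. The sketch below uses illustrative parameter choices ($R_{S}=1$, $R_{0}=10^{5}$, so $R_{S}/R_{0}=10^{-5}$, and an inner position $r_{1}=5R_{S}$) and finds the matching outer position $r_{2}$ by bisection on $\tau(r_{2})-\tau(r_{1})$:

```python
import math

R_S, R_0 = 1.0, 1.0e5   # Schwarzschild radius and emission radius

def tau(r):
    """Local proper time, eq. (30), at which the photon passes a laboratory at radius r."""
    return math.sqrt(1.0 - R_S / r) * (R_0 - r + R_S * math.log((R_0 - R_S) / (r - R_S)))

def matching_r2(r1, lo=1e3, hi=R_0 - 1.0, tol=1e-9):
    """Find r2 on the outer branch, where tau decreases toward R_0, with tau(r2) = tau(r1)."""
    target = tau(r1)
    g = lambda r: tau(r) - target
    assert g(lo) > 0 > g(hi)  # bracket: tau(lo) above target, tau near R_0 below it
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

r1 = 5.0 * R_S
r2 = matching_r2(r1)
print(r1, r2, tau(r1), tau(r2))
```

With these numbers $r_{2}$ comes out on the order of $10^{4}R_{S}$, illustrating the asymmetry noted above: one laboratory sits close to the hole while the other can be much further away.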
# CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images Shailaja Keyur Sampat, Akshay Kumar, Yezhou Yang and Chitta Baral Arizona State University, USA {ssampa17,akuma216,yz.yang<EMAIL_ADDRESS>corresponding author ###### Abstract Most existing research on visual question answering (VQA) is limited to information explicitly present in an image or a video. In this paper, we take visual understanding to a higher level, where systems are challenged to answer questions that involve mentally simulating the hypothetical consequences of performing specific actions in a given scenario. Towards that end, we formulate a vision-language question answering task based on the CLEVR Johnson et al. (2017a) dataset. We then modify the best existing VQA methods and propose baseline solvers for this task. Finally, we motivate the development of better vision-language models by providing insights about the capability of diverse architectures to perform joint reasoning over image-text modality. (Dataset setup scripts and code for baselines are made available at https://github.com/shailaja183/clevr_hyp. For additional details about the dataset creation process, refer to the supplementary material.) ## 1 Introduction In 2014, Michael Jordan, in an interview Gomes (2014), said that “Deep learning is good at certain problems like image classification and identifying objects in the scene, but it struggles to talk about how those objects relate to each other, or how a person/robot would interact with those objects. For example, humans can deal with inferences about the scene: what if I sit down on that?, what if I put something on top of something? etc. There exists a range of problems that are far beyond the capability of today’s machines." This interview was six years ago, and since then there has been a lot of progress in deep learning and its applications to visual understanding. 
Additionally, a large body of visual question answering (VQA) datasets Antol et al. (2015); Ren et al. (2015); Hudson and Manning (2019) have been compiled and many models have been developed over them, but the above-mentioned “inferences about the scene” issue stated by Jordan remains largely unaddressed. In most existing VQA datasets, scene understanding is holistic and questions are centered around information explicitly present in the image (i.e. objects, attributes and actions). As a result, advanced object detection and scene graph techniques have been quite successful in achieving good performance over these datasets. However, provided an image, humans can speculate about a wide range of implicit information: for example, the purpose of various objects in a scene, events that might have happened before, numerous imaginary situations and their possible future outcomes, the intentions of a subject performing particular actions, and many more. Figure 1: Motivation for the proposed CLEVR_HYP dataset: an example demonstrating how humans can do mental simulations and reason over the resulting scenario. Among the above, the ability to imagine taking specific actions and simulating probable results without actually acting or experiencing is an important aspect of human cognition (Figure 1 gives an example of this). Thus, we believe that having autonomous systems equipped with a similar capability will further advance AI research. This is particularly useful for robots performing on-demand tasks in safety-critical situations or navigating through dynamic environments, where they imagine possible outcomes for various situations without executing instructions directly. Motivated by the above, we propose a challenge that attempts to bridge the gap between state-of-the-art AI and human-level cognition. 
The main contributions of this paper (our work focuses on the capability of neural models to reason about the effects of actions given a visual-linguistic context, and not on models that deal with intuitive physics) are as follows: * We formalize a novel question answering task with respect to a hypothetical state of the world (in a visual form) when some action (described in a textual form) is performed. * We create a large-scale dataset for this task, and refer to it as CLEVR_HYP, i.e. VQA with hypothetical actions performed over images in CLEVR Johnson et al. (2017a) style. * We first evaluate the direct extensions of top VQA and NLQA (natural language QA) solvers on this dataset. Then, we propose new baselines to solve CLEVR_HYP and report their results. * Through analysis and ablations, we provide insights about the capability of diverse architectures to perform joint reasoning over image-text modality. I: 1. TA: Paint the small green ball with cyan color. QH: Are there equal yellow cubes on left of purple object and cyan spheres? (A: yes) 2. TA: Add a brown rubber cube behind the blue sphere that inherits its size from the green object. QH: How many things are either brown or small? (A: 6) 3. TA: John moves the small red cylinder on the large cube that is to the right of purple cylinder. QH: What color is the object that is at the bottom of the small red cylinder? (A: yellow) Figure 2: Three examples from the CLEVR_HYP dataset: given image (I), action text (TA), question about a hypothetical scenario (QH) and corresponding answer (A). The task is to understand possible perturbations in I with respect to the action(s) described in TA. Questions test various reasoning capabilities of a model with respect to the results of those action(s). 
## 2 Related Work In this section we situate and compare our work with the related areas closest to CLEVR_HYP: implicit text generation/retrieval for a visual, visual question answering (VQA) over synthetic images, question answering (QA) involving hypothetical reasoning, and language-based manipulation in visual domains. #### Implicit Text Generation for a Visual: VisualComet Park et al. (2020) and Video2Commonsense Fang et al. (2020) have made initial attempts to derive implicit information about images/videos, contrary to traditional factual descriptions which leverage only visual attributes. VisualComet aims to generate commonsense inferences about events that could have happened before, events that can happen after, and people’s intents at present for each subject in a given image. They use a vision-language transformer that takes a sequence of inputs (image, event, place, inference) and train a model to predict the inference in a language-model style. Video2Commonsense focuses on generating video descriptions that incorporate commonsense facts related to intentions, effects, and implicit attributes of actions being performed by a subject. They extract top-ranked commonsense texts from the Atomic dataset and modify the training objective to incorporate this information. While both involve a visual-textual component and actions, their key focus is on generating plausible events and commonsense, respectively, whereas our work is about performing certain actions and reasoning about their effect on the overall visual scene. #### Language-based Manipulation in Visual Domain: Learning a mapping from natural language instructions to a sequence of actions to be performed in a visual environment is a common task in robotics Kanu et al. (2020); Gaddy and Klein (2019); Shridhar et al. (2020). Another relevant task is vision-and-language navigation Anderson et al. (2018); Chen et al. (2018); Nguyen et al. 
(2019), where an agent navigates in a visual environment to find a goal location by following natural language instructions. Both of the above works include visuals, natural language instructions and a set of actions that can be performed to achieve desired goals. In this way, they are similar to our CLEVR_HYP, but in our case, models require reasoning about the effect of the actions performed rather than determining which action to perform. Also, we frame this as a QA-style evaluation rather than producing instructions for low-level controls. Manipulation of natural images with language is an emerging research direction in computer vision. Teney et al. (2020) proposed a method for generating counterfactuals of VQA samples using image in-painting and masking. Also, there are works Dong et al. (2017); Nam et al. (2018); Reed et al. (2016) which use Generative Adversarial Networks (GANs) Goodfellow et al. (2014) for language-conditioned image generation and manipulation. However, both of the above tasks are focused on object- and attribute-level manipulation rather than action-level manipulation. #### VQA over Synthetic Images: While natural-image-based VQA datasets reflect challenges one can encounter in real-life situations, the requirement of costly human annotations and vulnerability to biases are two major drawbacks. Contrary to them, synthetic datasets allow controlled data generation at scale while being flexible enough to test specific reasoning skills. For the above reasons, the following benchmark VQA datasets have incorporated synthetic images: COG Yang et al. (2018) and Shapes Andreas et al. (2016) contain images with rendered 2D shapes; SHRDLU Winograd (1971), CLEVR Johnson et al. (2017a), and CLEVR-dialog Kottur et al. (2019) have rendered scenes with 3D objects; DVQA Kafle et al. (2018) and FigureQA Kahou et al. (2017) have synthetically generated charts (bar chart, pie chart, dot-line etc.); VQA-abstract Antol et al. (2015) and IQA Gordon et al. 
(2018) involve question-answering over synthetically rendered clipart-style scenes and interactive environments, respectively. Our proposed dataset CLEVR_HYP uses CLEVR Johnson et al. (2017a) style rendered scenes with 3D objects as the visual component. It is distinct from all other synthetic VQA datasets for two key reasons: first, the integration of the action domain in synthetic VQA and, second, the requirement of mental simulation in order to answer the question. #### QA involving Hypothetical Reasoning: In the language domain, the WIQA Tandon et al. (2019) dataset tests a model’s ability to do what-if reasoning over procedural text as a 3-way classification (the influence between a pair of events as positive, negative or no-effect). In vision-language domains, portions of TQA Kembhavi et al. (2017) and VCR Zellers et al. (2019) are relevant. Questions in TQA and VCR involve hypothetical scenarios about multi-modal science contexts and movie scenes, respectively. However, neither of the above two datasets focuses on a model’s capability to imagine changes performed over the image. As shown in Figure 3, the setting of TIWIQ (a benchmark dataset for “physical intelligence”) Wagner et al. (2018) has some similarity to ours. It has synthetically rendered table-top scenes, four types of actions (push, rotate, remove and drop) being performed on an object, and what-if questions. Figure 3: Example from TIWIQ Wagner et al. (2018). To our best knowledge, the TIWIQ dataset is not publicly available. Based on our understanding of their manuscript, we observe the following important distinctions from this work. Our questions focus on the impact of actions on the whole image, while in TIWIQ questions are about the impact of actions on a specific object in the image. Moreover, we frame CLEVR_HYP as a classification task, contrary to TIWIQ, which is a generative task. 
Our CLEVR_HYP dataset has 175k automatically generated image-action text-question samples, which is much larger than TIWIQ with its 1020 samples and manually crafted ground-truths. ## 3 CLEVR_HYP Task and Dataset Figure 2 gives a glimpse of the CLEVR_HYP task. We opt for synthetic dataset creation as it allows automated and controlled data generation at scale with minimal biases. More details are described below. #### 3 Inputs: Image (I), Action Text (TA) and Hypothetical Question (QH) #### 1\. Image (I): It is the given visual for our task. Each image in the dataset contains 4-10 randomly selected 3D objects rendered using Blender Blender Online Community (2019) in CLEVR Johnson et al. (2017a) style. Objects have 4 attributes, listed in Table 1. Additionally, these objects can be referred to using 5 relative spatial relations (left, right, in front, behind and on). We provide scene graphs (scene graphs and functional programs, for action text and question, are not provided at test time) containing all ground-truth information about a scene, which can be considered a visual oracle for a given image.

Attr. | Possible values in CLEVR_HYP
---|---
Color | gray, blue, brown, yellow, red, green, purple, cyan
Shape | cylinder, sphere or cube
Size | small or big
Material | metal (shining) or rubber (matte)

Table 1: Object attributes in CLEVR_HYP scenes. (a) Function catalog for CLEVR_HYP, extended from CLEVR Johnson et al. (2017a) (b) Dataset creation pipeline Figure 4: CLEVR_HYP dataset creation process with example and function catalog used for ground-truth answer generation. (For more details, see Appendix A.4.) #### 2\. Action Text (TA): It is a natural language text describing various actions performed over the current scene. The action can be one of four: (i) Add new object(s) to the scene; (ii) Remove object(s) from the scene; (iii) Change attributes of the object(s); (iv) Move object(s) within the scene (might be in plane, i.e. 
left/right/front/back, or out of plane, i.e. move one object on top of another object444For simplicity, we assume that any object can be put on another object regardless of its size, material or shape.) To generate action text, we start with manually written templates involving the aforementioned actions. For example, for the action that changes an attribute of object(s) to a given value, we have a template of the following kind: ‘Change the $<$A$>$ of $<$Z$><$C$><$M$><$S$>$ to $<$V$>$’, where $<$A$>$, $<$Z$>$, $<$C$>$, $<$M$>$, $<$S$>$, $<$V$>$ are placeholders for the attribute, size, color, material, shape and attribute value respectively. Each action text in CLEVR_HYP is associated with a functional program which, if executed on an image’s scene graph, yields the new scene graph that simulates the effects of the actions. Functional programs for action texts3 are built from basic functions that correspond to elementary action operations (right part of Figure 4a). For the above-mentioned ‘change’ attribute action template, the equivalent functional program can be written as ‘change_attr($<$A$>$, filter_size($<$Z$>$, filter_color($<$C$>$, filter_material($<$M$>$, filter_shape($<$S$>$, scene())))), $<$V$>$)’. It essentially means: first filter out the objects with the desired attributes, then update the value of their attribute A to value V. #### 3\. Question about Hypothetical Situation (QH): It is a natural language query that tests various visual reasoning abilities after simulating the effects of the actions described in TA. There are 5 possible reasoning types, similar to CLEVR: 1. (i) Counting objects fulfilling a condition 2. (ii) Verify the existence of certain objects 3. (iii) Query an attribute of a particular object 4. (iv) Compare attributes of two objects 5. (v) Integer comparison of two object sets (same, larger or smaller) Similar to action texts, we have templates and corresponding programs for questions.
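The template-plus-program pattern above can be illustrated with a minimal sketch: instantiating the ‘change’ template and executing its filter-chain program on a toy scene graph. The helper names mirror the function catalog in Figure 4a, but this is illustrative code, not the released generator.

```python
# Illustrative sketch (not the released generator): instantiate the
# 'change' template and execute its functional program on a toy scene graph.

def scene(objects):
    return list(objects)

def filter_attr(attr, value, objs):
    # An empty placeholder value means "no constraint" in the template.
    return [o for o in objs if not value or o[attr] == value]

def change_attr(attr, objs, value):
    for o in objs:
        o[attr] = value
    return objs

# Toy scene graph: one dict per object, as in the visual oracle.
sg = [
    {"size": "small", "color": "red", "material": "rubber", "shape": "cube"},
    {"size": "big", "color": "blue", "material": "metal", "shape": "sphere"},
]

# Template: 'Change the <A> of <Z> <C> <M> <S> to <V>'
A, Z, C, M, S, V = "color", "", "red", "rubber", "cube", "cyan"
text = f"Change the {A} of the {C} {M} {S} to {V}"

# Equivalent program: change_attr(A, filter chain over scene(), V)
objs = scene(sg)
for attr, val in [("size", Z), ("color", C), ("material", M), ("shape", S)]:
    objs = filter_attr(attr, val, objs)
change_attr(A, objs, V)

print(sg[0]["color"])  # the red rubber cube is now cyan
```

Executing the program on the scene graph in this way yields the updated graph from which ground-truth answers are computed.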
Functional programs for questions3 are executed on the image’s updated scene graph (after incorporating the effects of the action text) and yield the ground-truth answer to the question. Functional programs for questions are made of the primitive functions shown in the left part of Figure 4a. Paraphrasing: In order to create a challenging dataset from a linguistic point of view and to prevent models from overfitting on templated representations, we leverage noun synonyms, object name paraphrasing and sentence-level paraphrasing. For noun synonyms, we use a pre-defined dictionary (such as cube~block, sphere~ball and so on). We programmatically generate all possible ways to refer to an object in the image (i.e. object name paraphrasing) and randomly sample one among them. For sentence-level paraphrasing, we use the Text-To-Text Transfer Transformer (T5) Raffel et al. (2020) fine-tuned over positive samples from the Quora Question Pairs (QQP) dataset Iyer et al. (2017) for question paraphrasing. We use Fairseq Ott et al. (2019) for action text paraphrasing, which uses round-trip translation and a mixture of experts Shen et al. (2019). Note that we keep the action text and question as separate inputs for simplicity and to keep our focus on building solvers that can do mental simulation. One can create a simple template like “$<$QH$>$ if $<$proper-noun/pronoun$>$ $<$TA$>$?" or “If $<$proper-noun/pronoun$>$ $<$TA$>$, $<$QH$>$?" to process the action and question as a single text input. For example, “How many things are the same size as the cyan cylinder if I add a large brown rubber cube behind the blue object?" or “If I add a large brown rubber cube behind the blue object, how many things are the same size as the cyan cylinder?". However, having them together adds further complexity on the solver side, as it first has to figure out which actions are performed and what the question is.
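The single-input variant described above amounts to simple string templating. A minimal sketch (illustrative only; the released dataset keeps TA and QH separate, and the pronoun is fixed to “I” as in the example):

```python
# Merge an action text (TA) and a hypothetical question (QH) into one input
# using the two templates described above.

def merge(ta: str, qh: str, prefix: bool = True) -> str:
    ta = ta.rstrip(".")
    qh = qh.rstrip("?")
    if prefix:
        # "If <pronoun> <TA>, <QH>?"
        return f"If I {ta[0].lower() + ta[1:]}, {qh[0].lower() + qh[1:]}?"
    # "<QH> if <pronoun> <TA>?"
    return f"{qh} if I {ta[0].lower() + ta[1:]}?"

ta = "Add a large brown rubber cube behind the blue object."
qh = "How many things are the same size as the cyan cylinder?"
print(merge(ta, qh))
```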
We also provide ground-truth object information (as a visual oracle) and machine-readable forms of the questions and action texts (an oracle for the linguistic components). This information can be used to develop models that process semi-structured representations of images/text, or for explainability purposes (to precisely know which component of a model is failing). #### Output: Answer (A) to the Question (QH), which can be considered a 27-way classification over attributes (8 colors + 3 shapes + 2 sizes + 2 materials), numeric (0-9) and boolean (yes/no) answers. Dataset Partitions and Statistics: We create the CLEVR_HYP dataset containing 175k image-action text-question samples using the process shown in Figure 4b. For each image, we generate 5 kinds of action texts (one each for add, remove, move in-plane, move out-of-plane and change attribute). For each action text type, we generate 5 questions (one each for count, exist, compare integer, query attribute and compare attribute). Hence, we get 5*5 unique action text-question pairs for each image, covering all actions and reasoning types in a balanced manner, as shown in Figure 5a (referred to as the Original partition). However, it leads to a skewed distribution of answers, as observed in Figure 5b. Therefore, we curate a version of the dataset (referred to as the Balanced partition) consisting of 67.5k samples where all answer choices are equally likely as well. (a) Distribution based on Action Text types and Question types (b) Distribution of Answer types Figure 5: Visualization of distributions for actions, questions and answers in the Original_Train partition of CLEVR_HYP. Additionally, we create two small challenge test sets (1500 image-action text-question samples each), 2HopActionText (2HopTA) and 2HopQuestion (2HopQH), to test the generalization capability of the trained models. In 2HopTA, we create action texts which require the model to understand two different actions being taken on the scene.
For example, ‘Add a small blue metal cylinder to the right of large yellow cube and remove the large cylinder from the scene.’ and ‘Move the purple object on top of small red cube then change its color to cyan.’. In 2HopQH, we create questions which require the model to understand logical combinations of questions using ‘and’, ‘or’ and ‘not’. For example, ‘How many objects are either red or cylinder?’ and ‘Are there any rubber cubes that are not green?’. In Table 2, we provide the size of the various partitions and measure the diversity of the dataset in various aspects. For images, we calculate the average number of objects present in the scene from the length of the scene graph. For the balanced partition, the number of images is much smaller than in the original, but the average number of objects per image is higher. This is most likely due to the need to accommodate the integers 4-9 more frequently as ground-truth answers. For the textual components, we show average lengths (number of tokens separated by whitespace) and count unique utterances as a measure of diversity. The original partition of the resulting dataset has 80% and 83% unique action texts and questions respectively. For the balanced partition, the length and unique utterances for action texts are nearly the same as in the original partition, but for questions they decrease. Questions in the original partition have been observed to enforce more strict and specific object references (such as small red metal cubes) compared to the balanced partition (small cubes, red metal objects, etc.), reducing the average length and uniqueness. It is intuitive for the 2Hop partitions to have higher average length and uniqueness for $T_{A}$ and $Q_{H}$ respectively. This shows that despite having created this dataset from templates and rendered images with a limited set of attributes, it is still fairly challenging.

Split | #I | Avg. #Obj | #$T_{A}$ | Unique #$T_{A}$ | Avg. $T_{A}$ Len. | #$Q_{H}$ | Unique #$Q_{H}$ | Avg. $Q_{H}$ Len.
---|---|---|---|---|---|---|---|---
Original_Train | 5k | 6.4 | 25k | 20.7k | 12.8 | 125k | 103.7k | 22.6
Original_Val | 1k | 6.7 | 5k | 3.8k | 12.8 | 25k | 20.9k | 23.1
Original_Test | 1k | 6.4 | 5k | 3.6k | 12.6 | 25k | 20.7k | 22.8
Balanced_Train | 5k | 7.6 | 25k | 21.1k | 12.8 | 67.5k | 58.2k | 20.3
Balanced_Val | 1k | 7.6 | 5k | 3.9k | 12.7 | 13.5k | 11.5k | 20.7
Balanced_Test | 1k | 7.5 | 5k | 3.7k | 12.6 | 13.5k | 11.4k | 20.4
2Hop$T_{A}$_Test | 1k | 6.4 | 3k | 2.6k | 18.6 | 15k | 12.5k | 22.8
2Hop$Q_{H}$_Test | 1k | 6.4 | 3k | 2.2k | 12.6 | 15k | 13.7k | 29.3

Table 2: CLEVR_HYP dataset splits and statistics (# represents number of, k represents thousand). ## 4 Models that we experiment with Models tackling the CLEVR_HYP dataset have to address four key challenges: 1. (i) understand hypothetical actions and questions in complex natural language, 2. (ii) correctly disambiguate the objects of interest and obtain structured representations (i.e. scene graphs or functional programs) of the various modalities if required by the solver, 3. (iii) understand the dynamics of the world under the various actions performed on it, 4. (iv) perform various kinds of reasoning to answer the question. ### 4.1 Random The QA task in the CLEVR_HYP dataset can be considered a 27-class classification problem. Each answer choice is picked with a probability of 1/27, so the performance of the random baseline is 3.7%. ### 4.2 Human Performance We performed human evaluation on 500 samples from the CLEVR_HYP dataset. The accuracy of human evaluations on the original test, 2Hop$T_{A}$ and 2Hop$Q_{H}$ sets is 98.4%, 96.2% and 96.6% respectively. ### 4.3 Transformer Architectures Pre-trained transformer-based architectures have been observed Li et al. (2020) to capture a rich hierarchy of language structures (text-only models) and to effectively map entities/words to corresponding image regions (vision-language models).
We experiment with various transformer-based models to understand their capability to model the effects of actions on a visual domain. #### Baseline 1- Machine Comprehension using RoBERTa: To evaluate the hypothetical VQA task with a text-only model, we convert images into templated text using scene graphs. The templated text contains two kinds of sentences: one describing the properties of an object, i.e. “There is a $<$Z$>$ $<$C$>$ $<$M$>$ $<$S$>$", and the other describing a relative spatial location, i.e. “The $<$Z$>$ $<$C$>$ $<$M$>$ $<$S$>$ is $<$R$>$ the $<$Z1$>$ $<$C1$>$ $<$M1$>$ $<$S1$>$". For example, “There is a small green metal cube." and “The large yellow rubber sphere is to the left of the small green metal cube". We then concatenate the templated text with the action text to create a reading comprehension passage. We use the state-of-the-art machine comprehension baseline RoBERTa Liu et al. (2019) finetuned on the RACE dataset Lai et al. (2017)555architecture=roberta large, epochs=5, learning rate=$1\mathrm{e}{-05}$, batch size=2, update frequency=2, dropout=0.1, optimizer=adam with eps=$1\mathrm{e}{-06}$.. Finally, we predict an answer to the question from this reading comprehension passage. #### Baseline 2- Visual Question Answering using LXMERT: Proposed by Tan and Bansal (2019), LXMERT is one of the best transformer-based pre-trainable visual-linguistic representations that supports VQA as a downstream task. Typical VQA systems take an image and a language input. Therefore, to evaluate CLEVR_HYP in VQA style, we concatenate the action text and question to form a single text input. Since LXMERT is pre-trained on natural images, we finetune it over the CLEVR_HYP dataset666epochs=4, learning rate=$5\mathrm{e}{-05}$, batch size=8 and then use it to predict the answer.
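Baseline 1’s image-to-text conversion can be sketched as follows: each object in the scene graph is verbalized with the two sentence templates above. The helper names are illustrative, not the authors’ released code, and relations are given as (subject index, relation phrase, object index) triples for simplicity.

```python
# Sketch of Baseline 1's scene-graph-to-passage conversion using the two
# sentence templates described above (illustrative helper names).

def scene_graph_to_text(objects, relations):
    def name(o):
        return f"{o['size']} {o['color']} {o['material']} {o['shape']}"
    # One existence sentence per object.
    sents = [f"There is a {name(o)}." for o in objects]
    # One sentence per relative spatial relation.
    for i, rel, j in relations:
        sents.append(f"The {name(objects[i])} is {rel} the {name(objects[j])}.")
    return " ".join(sents)

objs = [
    {"size": "small", "color": "green", "material": "metal", "shape": "cube"},
    {"size": "large", "color": "yellow", "material": "rubber", "shape": "sphere"},
]
passage = scene_graph_to_text(objs, [(1, "to the left of", 0)])
print(passage)
```

Concatenating this passage with the action text then yields the reading comprehension input for RoBERTa.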
Nomenclature (Figure 6): I: Image, SG: Scene Graph, TT: Templated Text, $T_{A}$: Action Text, $Q_{H}$: Hypothetical Question, A: Answer, FP: Functional Program, ′: Updated Modality. Figure 6: Graphical visualization of the baseline models over CLEVR_HYP described above (flow diagrams for Baselines 1-4). ### 4.4 Systematically incorporating effects of actions into neural models #### Baseline 3- Text-editing Image Baseline: In this method, we break down the QA task with mental simulation into two parts: first, learn to generate an updated image (such that it incorporates the effects of the actions), and then perform visual question answering with respect to the updated image. We use the idea of Text Image Residual Gating proposed in Vo et al. (2019) to implement the first part. However, there are two important distinctions. Their focus is on retrieval from a given database; we modify their objective and develop a text-adaptive encoder-decoder with residual connections to generate a new image. Also, the editing instructions in their CSS dataset Vo et al. (2019) were quite simple, for example, ‘add red cube’ and ‘remove yellow sphere’. In that case, one can add the red cube anywhere in the scene. We modify their architecture to precisely place objects according to their relative spatial references (left/right/front/behind/on). Once we get the updated image, we feed it to LXMERT Tan and Bansal (2019) finetuned over the CLEVR Johnson et al. (2017a) dataset along with the question, and predict the answer. #### Baseline 4- Scene Graph Update Model: Instead of directly manipulating images, in this method we leverage image scene graphs to convert the image-editing problem into a graph-editing problem, conditioned on the action text.
This is an emerging research direction for dealing with changes in the visual modality over time or with new sources of information, as observed in recent parallel works Chen et al. (2020); He et al. (2020). We first use Mask R-CNN He et al. (2017) to get segmentation masks of the objects and predict their attributes (color, material, size, and shape) with an acceptance threshold of 0.9. The segmentation mask of each object, along with the original image, is then passed through ResNet-34 He et al. (2016) to extract precise 3D coordinates of the object. This gives us a structured scene graph for the image. Then we use the seq2seq with attention model originally proposed in Johnson et al. (2017b) to generate functional programs (FP) for the action text and question. The execution engine executes programs on the scene graph, implemented as a neural module network Andreas et al. (2017), to update the scene representation and answer questions. We learn to update scene graphs according to the functional program for the action text using reinforcement learning777finetuning learning rate=$1\mathrm{e}{-05}$, 1M iterations with early stopping, batch size=32.. The reward function is associated with our ground-truth program executor and generates a reward if the prediction exactly matches the ground-truth execution. Once we get the updated scene representation, we use the neural-symbolic model888supervised pretraining learning rate=$7\mathrm{e}{-04}$, num iterations=20k, batch size=32, then finetuning at learning rate $1\mathrm{e}{-05}$, at most 2M iterations with early stopping, batch size=32. proposed by Yi et al. (2018) to obtain the final answer. It is notable that Yi et al. (2018) achieved near-perfect performance on the CLEVR QA task in addition to being fully explainable. ## 5 Baseline Results In this section, we benchmark the models described above on CLEVR_HYP. The dataset is formulated as a classification task with exactly one correct answer, so we use standard accuracy as the evaluation metric.
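The graph-editing step at the heart of Baseline 4 can be illustrated with a minimal symbolic sketch of two elementary operations, ‘remove’ and ‘move on top’. This is illustrative only (hypothetical helper names); the actual model learns this update via reinforcement learning over generated functional programs.

```python
# Minimal symbolic sketch of scene-graph editing: elementary 'remove' and
# 'move on' operations applied to a toy scene graph (illustrative only).

def remove_objs(sg, **attrs):
    # Drop every object matching all given attribute constraints.
    keep = [o for o in sg if any(o.get(k) != v for k, v in attrs.items())]
    sg[:] = keep
    return sg

def move_on(sg, top_idx, base_idx):
    # Record the out-of-plane relation "top object is on base object".
    sg[top_idx]["on"] = base_idx
    return sg

sg = [
    {"shape": "cube", "color": "red", "material": "rubber"},
    {"shape": "sphere", "color": "blue", "material": "metal"},
    {"shape": "cylinder", "color": "blue", "material": "rubber"},
]
remove_objs(sg, material="metal")   # the metal sphere disappears
move_on(sg, 0, 1)                   # the red cube goes on the cylinder
print([o["shape"] for o in sg])     # ['cube', 'cylinder']
```

Question programs are then executed against the edited graph to produce the ground-truth answer.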
We then analyze their performance according to question and action types.

Overall baseline performance for the various test sets of CLEVR_HYP:

Test set | BL1 | BL2 | BL3 | BL4
---|---|---|---|---
Original Test | 57.2 | 63.9 | 64.7 | 70.5
Balanced Test | 55.3 | 65.2 | 69.5 | 68.6
2Hop$T_{A}$ Test | 53.3 | 49.2 | 55.6 | 64.4
2Hop$Q_{H}$ Test | 55.2 | 52.9 | 58.7 | 66.5

Performance break-down by action types and reasoning types for Baselines 3 and 4:

Action (Original Test) | BL3 | BL4
---|---|---
Add | 58.2 | 65.9
Remove | 89.4 | 88.6
Change | 88.7 | 91.2
Move (in-plane) | 61.5 | 69.4
Move (on) | 53.3 | 66.1

Actions (2Hop$T_{A}$ Test) | BL3 | BL4
---|---|---
Add+Remove | 53.6 | 63.2
Add+Change | 55.4 | 64.7
Add+Move | 49.7 | 57.5
Remove+Change | 82.1 | 85.5
Remove+Move | 52.6 | 66.4
Change+Move | 53.8 | 63.3

Reasoning (Original Test) | BL3 | BL4
---|---|---
Count | 60.2 | 74.3
Exist | 69.6 | 72.6
CompInt | 56.7 | 67.3
CompAttr | 68.7 | 70.5
QueryAttr | 65.4 | 68.1

Logic (2Hop$Q_{H}$ Test) | BL3 | BL4
---|---|---
And | 59.2 | 67.1
Or | 58.8 | 67.4
Not | 58.1 | 65.0

Table 3: Baseline performance over CLEVR_HYP (BLx denotes one of the four baselines described above). Quantitative results from the above experiments are shown in the top part of Table 3. Among the methods described above, the scene graph update model has the best overall performance, 70.5%, on the original test data. The text-editing model is best on the balanced set, but is observed to generalize poorly when two actions or two reasoning capabilities have to be combined. CLEVR_HYP requires models to reason about the effects of hypothetical actions taken over images; LXMERT is not directly trained for this objective and therefore struggles on this task.
The poor performance of the text-only baseline is due to its limited ability to incorporate detailed spatial locations into the templates we use to convert an image into a machine comprehension passage. Two of our models (scene graph update and text-editing image) are transparent enough to visualize intermediate changes in the scene after performing actions. We analyse their ability to understand actions and make appropriate changes, as shown in the bottom part of Table 3. For the scene graph method, we compare the ground-truth functional program with the generated program and measure their exact-match accuracy. For the text-editing image method, we generate scene graphs for both images (the original image and the image after text-editing) and compare them. For attributes we use exact match, whereas for location information we consider a match only on the basis of relative spatial location. Both the scene graph and text-editing models do quite well on ‘remove’ and ‘change’ actions, whereas they struggle when new objects are added or existing objects are moved around. The observation is consistent when multiple actions are combined: the remove+change combination is performed with the highest accuracy, whereas other combinations of actions achieve relatively lower performance. This leads to the conclusion that understanding the effects of different actions is of varied complexity. Most models demonstrate better performance on counting, existence and attribute-query questions than on comparison questions. The scene graph update and text-editing methods show performance drops of 6.1% and 9.1% respectively when multiple actions are performed on the scene. However, there is less of a performance gap for models on 2Hop$Q_{H}$ compared to the test set, suggesting that models generalize better with respect to multiple reasoning skills than to complex actions.
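The intermediate-state comparison described above (attributes by exact match, locations by relative relation only) can be sketched as a simple graph equality check. This is illustrative code with hypothetical helper names, and it assumes predicted and gold objects are already aligned in the same order.

```python
import copy

# Sketch of the intermediate-state evaluation: compare the scene graph of
# the text-edited image against the ground-truth updated scene graph.
# Attributes must match exactly; locations match only by relative relation.

ATTRS = ("size", "color", "material", "shape")

def graphs_match(pred, gold):
    if len(pred["objects"]) != len(gold["objects"]):
        return False
    # Assumes aligned object order (a simplification for this sketch).
    attr_ok = all(
        all(p[a] == g[a] for a in ATTRS)
        for p, g in zip(pred["objects"], gold["objects"])
    )
    # Relative spatial relations only, not exact 3D coordinates.
    rel_ok = set(pred["relations"]) == set(gold["relations"])
    return attr_ok and rel_ok

gold = {
    "objects": [
        {"size": "small", "color": "red", "material": "rubber", "shape": "cube"},
        {"size": "big", "color": "blue", "material": "metal", "shape": "sphere"},
    ],
    "relations": [("left", 1, 0)],  # object 1 is left of object 0
}
pred = copy.deepcopy(gold)
print(graphs_match(pred, gold))        # True
pred["objects"][0]["color"] = "green"
print(graphs_match(pred, gold))        # False
```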
## 6 Conclusion We introduce CLEVR_HYP, a dataset to evaluate the ability of VQA systems to reason about hypothetical actions performed over a given image. We create this dataset by extending the data generation framework of CLEVR Johnson et al. (2017a), which uses synthetically rendered images and templates for reasoning questions. Our dataset is challenging because rather than asking models to reason about objects already present in the image, it asks what would happen in an alternative world where changes have occurred. We provide ground-truth representations for images, hypothetical actions and questions to facilitate the development of models that systematically learn to reason about the underlying process. We create several baseline models to benchmark CLEVR_HYP and report their results. Our analysis shows that the models are able to perform reasonably well (70.5%) on the limited number of actions and reasoning types, but struggle with complex scenarios. While neural models have achieved almost perfect performance on CLEVR, and considering human performance (98%) as an upper bound, there is a lot of room for improvement on CLEVR_HYP. Our future work will include relaxing constraints by allowing a larger variety of actions, attributes and reasoning types. By extending this approach to natural images, we aim to contribute to the development of better vision-and-language models. ## Acknowledgements We are thankful to the anonymous reviewers for their constructive feedback. This work is partially supported by the grants NSF 1816039, DARPA W911NF2020006 and ONR N00014-20-1-2332. ## References * Anderson et al. (2018) Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In _Proceedings of the IEEE CVPR_. * Andreas et al.
(2016) Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In _Proceedings of IEEE conference on CVPR_ , pages 39–48. * Andreas et al. (2017) Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2017. Neural module networks. * Antol et al. (2015) Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In _ICCV_. * Blender Online Community (2019) Blender Online Community. 2019. _Blender - a 3D modelling and rendering package_. Blender Foundation, Blender Institute, Amsterdam. * Chen et al. (2018) Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2018. Touchdown: Natural language navigation and spatial reasoning in visual street environments. * Chen et al. (2020) Lichang Chen, Guosheng Lin, Shijie Wang, and Qingyao Wu. 2020. Graph edit distance reward: Learning to edit scene graph. _arXiv preprint arXiv:2008.06651_. * Dong et al. (2017) Hao Dong, Simiao Yu, Chao Wu, and Yike Guo. 2017. Semantic image synthesis via adversarial learning. In _Proceedings of the IEEE ICCV_ , pages 5706–5714. * Fang et al. (2020) Zhiyuan Fang, Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020\. Video2commonsense: Generating commonsense descriptions to enrich video captioning. * Gaddy and Klein (2019) David Gaddy and Dan Klein. 2019. Pre-learning environment representations for data-efficient neural instruction following. * Gomes (2014) Lee Gomes. 2014. Machine-learning maestro michael jordan on the delusions of big data and other huge engineering efforts. * Goodfellow et al. (2014) Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial networks. _arXiv preprint arXiv:1406.2661_ , 4(5):6. * Gordon et al. 
(2018) Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. 2018. Iqa: Visual question answering in interactive environments. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 4089–4098. * He et al. (2017) Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. 2017. Mask r-cnn. In _Proceedings of the IEEE international conference on computer vision_ , pages 2961–2969. * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _Proceedings of IEEE conference on computer vision and pattern recognition_ , pages 770–778. * He et al. (2020) Xuanli He, Quan Hung Tran, Gholamreza Haffari, Walter Chang, Trung Bui, Zhe Lin, Franck Dernoncourt, and Nhan Dam. 2020. Scene graph modification based on natural language commands. * Hudson and Manning (2019) Drew A. Hudson and Christopher D. Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. * Iyer et al. (2017) Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. 2017. First quora dataset release: Question pairs. _data. quora. com_. * Johnson et al. (2017a) Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017a. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In _Proceedings of IEEE Conference on Computer Vision and Pattern Recognition_ , pages 2901–2910. * Johnson et al. (2017b) Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017b. Inferring and executing programs for visual reasoning. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 2989–2998. * Kafle et al. (2018) Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. Dvqa: Understanding data visualizations via question answering. 
In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 5648–5656. * Kahou et al. (2017) Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, and Yoshua Bengio. 2017. Figureqa: An annotated figure dataset for visual reasoning. _arXiv preprint arXiv:1710.07300_. * Kanu et al. (2020) John Kanu, Eadom Dessalene, Xiaomin Lin, Cornelia Fermuller, and Yiannis Aloimonos. 2020. Following instructions by imagining and reaching visual goals. _arXiv preprint arXiv:2001.09373_. * Kembhavi et al. (2017) Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 4999–5007. * Kottur et al. (2019) Satwik Kottur, José M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2019. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. * Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. _arXiv preprint arXiv:1704.04683_. * Li et al. (2020) Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2020\. What does BERT with vision look at? In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5265–5275, Online. Association for Computational Linguistics. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_. * Nam et al. (2018) Seonghyeon Nam, Yunji Kim, and Seon Joo Kim. 2018. Text-adaptive generative adversarial networks: Manipulating images with natural language. 
In _Advances in NIPS_ , pages 42–51. * Nguyen et al. (2019) Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. 2019. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In _The IEEE Conference on Computer Vision and Pattern Recognition_. * Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In _Proceedings of NAACL-HLT 2019: Demonstrations_. * Park et al. (2020) Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, and Yejin Choi. 2020. Visualcomet: Reasoning about the dynamic context of a still image. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21(140):1–67. * Reed et al. (2016) Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. _arXiv preprint arXiv:1605.05396_. * Ren et al. (2015) Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Exploring models and data for image question answering. * Shen et al. (2019) Tianxiao Shen, Myle Ott, Michael Auli, and Marc’Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. _International Conference on Machine Learning_. * Shridhar et al. (2020) Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. * Tan and Bansal (2019) Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. 
_arXiv preprint arXiv:1908.07490_. * Tandon et al. (2019) Niket Tandon, Bhavana Dalvi Mishra, Keisuke Sakaguchi, Antoine Bosselut, and Peter Clark. 2019. Wiqa: A dataset for "what if…" reasoning over procedural text. * Teney et al. (2020) Damien Teney, Ehsan Abbasnedjad, and Anton van den Hengel. 2020. Learning what makes a difference from counterfactual examples and gradient supervision. _arXiv preprint arXiv:2004.09034_. * Vo et al. (2019) Nam Vo, Lu Jiang, Chen Sun, Kevin Murphy, Li-Jia Li, Li Fei-Fei, and James Hays. 2019. Composing text and image for image retrieval-an empirical odyssey. In _CVPR_. * Wagner et al. (2018) Misha Wagner, Hector Basevi, Rakshith Shetty, Wenbin Li, Mateusz Malinowski, Mario Fritz, and Ales Leonardis. 2018. Answering visual what-if questions: From actions to predicted scene descriptions. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pages 0–0. * Winograd (1971) Terry Winograd. 1971. Procedures as a representation for data in a computer program for understanding natural language. Technical report, MASSACHUSETTS INST OF TECH CAMBRIDGE PROJECT MAC. * Yang et al. (2018) Guangyu Robert Yang, Igor Ganichev, Xiao-Jing Wang, Jonathon Shlens, and David Sussillo. 2018. A dataset and architecture for visual reasoning with a working memory. * Yi et al. (2018) Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In _Advances in NIPS_ , pages 1031–1042. * Zellers et al. (2019) Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 6720–6731. 
## Appendix A Appendix ### A.1 Relation of the CLEVR_HYP dataset to real-world situations Teaching methodologies leverage our ability to mentally simulate scenarios, along with metaphors, to aid understanding of new concepts. In other words, to explain unfamiliar concepts, we often reference familiar concepts and provide additional clues to establish a mapping between them. This way, a person can create a mental simulation of the unfamiliar concept and gain a basic understanding of it. For example, if we want to explain to a person who has previously seen a ‘horse’ what a ‘zebra’ looks like, we can do so using the example in Figure 7a. This naturally extends to more complex concepts. Say one wants to describe the structure of an atom; one might use the analogy of a planetary system, where the components (planets $\sim$ electrons) circulate around a central entity (sun $\sim$ nucleus). One more such example is provided in Figure 7b. (a) learning the concept ‘zebra’ from the ‘horse’ (b) learning about ‘animal cell’ by comparison with ‘plant cell’ Figure 7: Extension of CLEVR_HYP for more complex real-world scenarios. For humans, learning new concepts and performing mental simulations is omnipresent in day-to-day life. Therefore, the CLEVR_HYP dataset is very much grounded in the real world. Models developed on this dataset can serve a broad range of applications, particularly ones where possible outcomes have to be predicted without actually executing the actions, for example, robots performing on-demand tasks in safety-critical situations or self-driving vehicles. In addition, these models can be an important component of other vision and language tasks such as automatic expansion of existing knowledge bases, zero-shot learning and spatio-temporal visual reasoning. ### A.2 Rejecting Bad Samples in CLEVR_HYP Automated methods of question generation sometimes create invalid items, classified as ‘ill-posed’ or ‘degenerate’ by the CLEVR Johnson et al.
(2017a) dataset generation framework. They consider the question “What color is the cube to the right of the sphere?" ill-posed if there are many cubes to the right of the sphere, and degenerate if there is only one cube in the scene, so that the reference to the sphere becomes unnecessary. In addition, we take one more step of quality control in order to prevent ordinary VQA models from succeeding on CLEVR_HYP without proper reasoning. In CLEVR_HYP, one has to perform the actions described in T over image I and then answer question Q with respect to the updated scenario. Therefore, to prevent ad-hoc models from exploiting biases in CLEVR_HYP, we pose the requirement that a question must have different ground-truth answers for CLEVR_HYP and an image-only model. One such example is shown in Figure 8. For image (I), Q1 leads to different answers for CLEVR and CLEVR_HYP, ensuring that one needs to correctly incorporate the effect of T. Q2 is invalid for the given image-action text pair in CLEVR_HYP, as one can answer it correctly without understanding T.

### A.3 More Examples from CLEVR_HYP

Figure 10 and the remaining pages show more examples from our CLEVR_HYP dataset. Each dataset item has four main components: image (I), action text (TA), question about the hypothetical state (QH), and answer (A). We classify samples based on what actions are taken over the image and the kind of reasoning required to answer the question.

Figure 8: Validity of questions in CLEVR_HYP.
Image-only model (I): Q1: Is there any large sphere? A: Yes. Q2: Is there any large cube? A: Yes.
CLEVR_HYP (T: Remove all matte objects from the scene; updated image I’): Q1: Is there any large sphere? A: No ✓. Q2: Is there any large cube? A: Yes ✗.

### A.4 Function Catalog

As described in Section 3 and shown in Figure 4, each action text and question is associated with a functional program. We provide more details about these basic functions in Table 4, which was used to generate the ground-truth answers for our dataset.
Each function has input and output arguments, which are limited to the following data types:

* object: a single object in the scene
* objset: a set of zero or more objects in the scene
* integer: an integer in [0,10]
* boolean: ‘yes’ or ‘no’
* values: possible attribute values mentioned in Table 1

### A.5 Paraphrasing

In order to create a dataset that is challenging from the linguistic point of view and to prevent models from overfitting to templated representations, we leverage word synonyms and paraphrasing methods. This section provides more details about the paraphrasing methods used in our dataset.

Function | Input Type → Output Type | Return Value
---|---|---
scene | $\phi$ → objset | Set of all objects in the scene
unique | objset → object | Object if objset is a singleton; else raise exception (to verify whether the input is unique or not)
relate | object × relation → objset | Objects satisfying the given spatial relation for the input object
count | objset → integer | Size of the input set
exist | objset → boolean | ‘Yes’ if the input set is non-empty and ‘No’ otherwise
filter_size | objset × size → objset | Subset of input objects that match the given size
filter_color | objset × color → objset | Subset of input objects that match the given color
filter_material | objset × material → objset | Subset of input objects that match the given material
filter_shape | objset × shape → objset | Subset of input objects that match the given shape
query_size | object → size | Size of the input object
query_color | object → color | Color of the input object
query_material | object → material | Material of the input object
query_shape | object → shape | Shape of the input object
same_size | object → objset | Set of objects that have the same size as the input (input excluded)
same_color | object → objset | Set of objects that have the same color as the input (input excluded)
same_material | object → objset | Set of objects that have the same material as the input (input excluded)
same_shape | object → objset | Set of objects that have the same shape as the input (input excluded)
equal_size | size × size → boolean | ‘Yes’ if the inputs are equal, ‘No’ otherwise
equal_color | color × color → boolean | ‘Yes’ if the inputs are equal, ‘No’ otherwise
equal_material | material × material → boolean | ‘Yes’ if the inputs are equal, ‘No’ otherwise
equal_shape | shape × shape → boolean | ‘Yes’ if the inputs are equal, ‘No’ otherwise
equal_integer | integer × integer → boolean | ‘Yes’ if the two integer inputs are equal, ‘No’ otherwise
less_than | integer × integer → boolean | ‘Yes’ if the first integer is smaller than the second, else ‘No’
greater_than | integer × integer → boolean | ‘Yes’ if the first integer is larger than the second, else ‘No’
and | objset × objset → objset | Intersection of the two input sets
or | objset × objset → objset | Union of the two input sets
not_size | object → objset | Subset of input objects that do not match the given size
not_color | object → objset | Subset of input objects that do not match the given color
not_material | object → objset | Subset of input objects that do not match the given material
not_shape | object → objset | Subset of input objects that do not match the given shape
add | objset × object → objset | Input set with the input object added to it
remove | objset × object → objset | Input set with the input object removed from it
add_rel | objset × object × object × relation → objset | Input set with the new object (first input) added at the given spatial location relative to the second input object
remove_rel | objset × object × object × relation → objset | Input set with the object (first input) removed from the given spatial location relative to the second input object
change_loc | objset × object × object × relation → objset | Input set with the object's (first input) location changed to the given spatial location relative to the second input object
change_size | objset × size → objset | Input set with the size updated to the given value
change_color | objset × color → objset | Input set with the color updated to the given value
change_material | objset × material → objset | Input set with the material updated to the given value
change_shape | objset × shape → objset | Input set with the shape updated to the given value

Table 4: (upper) Original function catalog for CLEVR proposed in Johnson et al. (2017a), which we reuse in our data creation process; (lower) new functions added to the function catalog for the CLEVR_HYP dataset.

#### small gray metal cube:
[small gray object, small metal object, small cube, small gray cube, small gray metal object, gray metal cube, small gray metal cube]

#### large brown rubber cylinder:
[brown object, large brown object, large cylinder, brown rubber object, brown cylinder, large brown rubber object, large brown cylinder, brown rubber cylinder, large brown rubber cylinder]

Figure 9: Object paraphrases for two objects in the scene.

#### Object Name Paraphrasing

There can be many ways an object can be referred to in a scene. For example, the ‘large purple metal sphere’ in the image below can also be referred to as ‘sphere’, since no other sphere is present in the image. To make the templates more challenging, we use these alternative expressions to refer to objects in the action text or question. We wrote a Python script that takes the scene graph of the image and generates all names by which each object in the scene can be uniquely referred to. When paraphrasing is performed, one of the generated names is randomly chosen as the replacement. Figure 9 lists all possible name variants for two objects in the given image.

#### Synonyms for Paraphrasing

We use the word-synonyms file provided with the CLEVR dataset generation code.

#### Sentence/Question Level Paraphrasing

For action text paraphrasing, we use a Fairseq Ott et al. (2019) based paraphrasing tool, which uses round-trip translation and a mixture of experts Shen et al. (2019). Specifically, we use pre-trained round-trip models (En-Fr and Fr-En) and manually choose the top-5 paraphrases for each template.
For question paraphrasing, the quality of round-trip translation and mixture of experts was not satisfactory. Therefore, we use the Text-To-Text Transfer Transformer (T5) Raffel et al. (2020) fine-tuned on positive samples from the Quora Question Pairs (QQP) dataset Iyer et al. (2017) and choose the top-5 paraphrases per template.

### A.6 Computational Resources

All of our experiments are performed on a Tesla V100-PCIE-16GB GPU.

[1] TA: A small red sphere is added to the right of the green object. QH: There is a gray cylinder; how many spheres are to the right of it? A: 2. Classification: Add action, Counting question. Split: val

[2] TA: All the purple objects become metallic. QH: What number of shiny things are to the left of the small yellow sphere? A: 3. Classification: Change action, Counting question. Split: val

[3] TA: John puts a large red metal cube behind the blue rubber cylinder. QH: There is a small green cylinder that is in front of the gray thing; are there any large red things behind it? A: Yes. Classification: Add action, Existence question. Split: val

[4] TA: Remove all matte objects from the scene. QH: Is there any large sphere? A: No. Classification: Remove action, Existence question. Split: val

[5] TA: The large cylinder behind the red shiny sphere is moved in front of the green sphere. QH: Is there a purple object that is to the right of the big yellow cube that is behind the cyan rubber sphere? A: No. Classification: Move (in-plane) action, Existence question. Split: val

[6] TA: A small green metal sphere is added behind the small red cube. QH: What color is the large cylinder that is to the right of the green object? A: Brown. Classification: Add action, Query Attribute question. Split: val

[7] TA: The purple cylinder behind the cube disappears from the scene. QH: What material is the object on the left of the brown metal cylinder?
A: Rubber. Classification: Remove action, Query Attribute question. Split: val

[8] TA: There is a sphere that is to the left of the gray cylinder; it shrinks in size. QH: What size is the blue object? A: Small. Classification: Change action, Query Attribute question. Split: val

[9] TA: The brown thing is moved in front of the pink rubber cube. QH: What shape is the object that is in front of the pink rubber cube? A: Cylinder. Classification: Move (in-plane) action, Query Attribute question. Split: val

Figure 10: More examples from the CLEVR_HYP dataset.

[10] TA: The small red sphere is moved onto the small cube that is in front of the gray sphere. QH: What material is the object that is below the small metal sphere? A: Rubber. Classification: Move (out-of-plane) action, Query Attribute question. Split: val

[11] TA: A small yellow metal object is placed to the right of the red cylinder; it inherits its shape from the blue object. QH: Are there any other things that have the same shape as the blue matte object? A: Yes. Classification: Add action, Compare Attribute question. Split: val

[12] TA: Hide all the cylinders from the scene. QH: Are there any other things that have the same size as the gray sphere? A: No. Classification: Remove action, Compare Attribute question. Split: val

[13] TA: The small block is displaced and put on the left of the blue cube. QH: Is there anything else on the right of the cyan sphere that has the same color as the large metal cylinder? A: No. Classification: Move (in-plane) action, Compare Attribute question. Split: val

[14] TA: Jill places the small cube on the large cube that is to the left of the cyan cylinder. QH: There is an object below the brown cube; does it have the same shape as the green object? A: Yes. Classification: Move (out-of-plane) action, Compare Attribute question. Split: val

[15] TA: A small brown cube is added to the scene which is made of the same material as the golden block. QH: Are there an equal number of green objects and brown cubes?
A: Yes. Classification: Add action, Compare Integer question. Split: val

[16] TA: The tiny cylinder is withdrawn from the scene. QH: Is the number of rubber objects greater than the number of shiny objects? A: No. Classification: Remove action, Compare Integer question. Split: val

[17] TA: All small metal spheres are transformed into cylinders. QH: Are there fewer brown objects that are to the right of the red sphere than cylinders? A: Yes. Classification: Change action, Compare Integer question. Split: val

[18] TA: The sphere is placed in front of the large blue cube that is to the left of the yellow shiny object. QH: Are there an equal number of gray things to the right of the brown rubber cube and cylinders? A: No. Classification: Move (in-plane) action, Compare Integer question. Split: val

Figure 11: More examples from the CLEVR_HYP dataset.
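To make the function catalog in Table 4 concrete, the following minimal Python sketch (not the authors' released code; the scene graph and helper names are hypothetical) executes the functional program behind example [4] above: the action "Remove all matte objects from the scene" followed by the question "Is there any large sphere?".

```python
# Minimal sketch (assumed implementation, not the authors' code) of executing
# a functional program from Table 4 over a symbolic scene graph.

def scene(objects):
    # scene: return the set of all objects in the scene.
    return list(objects)

def filter_size(objset, size):
    return [o for o in objset if o["size"] == size]

def filter_material(objset, material):
    return [o for o in objset if o["material"] == material]

def filter_shape(objset, shape):
    return [o for o in objset if o["shape"] == shape]

def remove(objset, to_remove):
    # remove: input set with the given objects removed (the action from T).
    return [o for o in objset if o not in to_remove]

def exist(objset):
    return "Yes" if objset else "No"

# Hypothetical scene graph with one rubber ("matte") sphere and one metal cube.
objects = [
    {"size": "large", "color": "gray", "material": "rubber", "shape": "sphere"},
    {"size": "large", "color": "red", "material": "metal", "shape": "cube"},
]

# T: "Remove all matte objects from the scene."
updated = remove(scene(objects), filter_material(scene(objects), "rubber"))

# QH: "Is there any large sphere?"
answer = exist(filter_shape(filter_size(updated, "large"), "sphere"))
print(answer)  # -> No
```

Chaining the filter, action, and query functions in this way mirrors how a functional program over the scene graph yields the ground-truth answer.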
# Linear and Nonlinear MMSE Estimation in One-Bit Quantized Systems under a Gaussian Mixture Prior

Benedikt Fesl and Wolfgang Utschick

The authors are with the Chair of Signal Processing, Technical University of Munich, Munich, Germany (e-mail: <EMAIL_ADDRESS>[email protected]).

###### Abstract

We present new fundamental results for the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems for a Gaussian mixture model (GMM) distributed signal of interest, possibly corrupted by additive white Gaussian noise (AWGN). We first derive novel closed-form analytic expressions for the Bussgang estimator, the well-known linear minimum mean square error (MMSE) estimator in quantized systems. Afterward, closed-form analytic expressions for the CME in special cases are presented, revealing that the optimal estimator is linear in the one-bit quantized observation, in contrast to higher-resolution cases. Through a comparison to the recently studied Gaussian case, we establish a novel MSE inequality and show that the signal of interest is correlated with the auxiliary quantization noise. We extend our analysis to multiple observation scenarios, examining the MSE-optimal transmit sequence and conducting an asymptotic analysis, yielding analytic expressions for the MSE and its limit. These contributions have broad impact on the analysis and design of various signal processing applications.

###### Index Terms: One-bit quantization, Bussgang, conditional mean estimator, mean square error, Gaussian mixture, MMSE.

©This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

## I Introduction

Bayesian estimators are a cornerstone in classical estimation theory, underpinning many signal processing applications.
In particular, the CME as the optimal estimator for all Bregman loss functions, with the MSE as the most prominent representative [1], is of great importance. This has led to the analysis of the CME and its properties under various conditions, e.g., in an AWGN channel [2] or with different noise models [3]. In particular, the cases where the CME is linear are of great interest due to the practical implications [4]. Despite its great importance in signal processing, the CME lacks theoretical understanding in quantized systems, which, e.g., occur in the modeling of analog-to-digital converters (ADCs), imposing a nonlinear inverse problem. Natural fields of application are, e.g., lossy compression [5], wireless sensor networks [6], audio coding [7], control theory [8], positioning [9], or channel estimation [10, 11]. Recently, the CME was examined in the case of one-bit quantization in a jointly Gaussian setting [12], [13]; it was shown that the CME is linear in the quantized observation in many special cases, although it necessitates an elaborate numerical evaluation in general. A viable alternative, in general, is the linear MMSE estimator, which can be derived via the Bussgang decomposition [14, 11, 15], motivated by Bussgang’s theorem [16], or, alternatively, the additive quantization noise model [17]. Moreover, the statistically equivalent linear model via the Bussgang decomposition allows for theoretical system analysis, e.g., spectral efficiency [11], capacity [18], nonideal hardware effects [19, 20], or nonlinear system characterization [21, 22, 23]. However, similar to the analysis of the CME, the Bussgang estimator was mainly investigated for the case of a zero-mean Gaussian distributed signal, allowing for closed-form solutions of the Bussgang gain [24, Sec. 9.2] and the covariance matrix of the quantized observation via the arcsine law [25, 26].
A natural generalization of the zero-mean Gaussian case is the zero-mean GMM, which covers a wide class of probability density functions (PDFs) that can be reasonably approximated, especially in wireless communications [27, 28]. This has motivated the analysis of the Bussgang gain for GMM distributed signals [29, 30]. However, the Bussgang decomposition has not been fully investigated for the general multivariate case with one-bit quantization. More importantly, the linear MMSE estimator and the CME for a GMM prior have not been investigated thus far. The contributions of this letter are as follows: We generalize the analytic expressions for the Bussgang gain and the arcsine law from the Gaussian case to the general GMM case, which is important for the evaluation of the linear MMSE estimator. Afterward, we study the CME in different special cases. We derive a novel closed-form solution for the CME in the univariate case, which turns out to be linear in the observation and thus equal to the Bussgang estimator. This allows for finding an analytic expression of the cross-correlation between the signal of interest and the auxiliary quantization noise from the Bussgang decomposition, which vanishes in both the low and high signal-to-noise ratio (SNR) regimes as well as in the degenerate Gaussian case. Furthermore, we derive a novel MSE inequality, revealing that the GMM distribution leads to a consistently higher MSE than the Gaussian distribution under a fixed global variance constraint. Subsequently, we investigate a multiple observation scenario, where the MSE-optimal observation sequence and two equivalent expressions of the CME for the noiseless case are derived. After finding analytic expressions of the MSE and its limit, the MSE inequality is shown to also hold for this case. All theoretical results are validated with numerical experiments. In addition, more general cases are evaluated, highlighting the strong impact of stochastic resonance.
## II System Model We consider the generic system equation ${\bm{R}}=Q({\bm{Y}})=Q({\bm{h}}{\bm{a}}^{\operatorname{T}}+{\bm{N}})\in\mathbb{C}^{N\times M}$ where ${\bm{R}}=[{\bm{r}}_{1},{\bm{r}}_{2},\dots,{\bm{r}}_{M}]$ contains $M$ quantized observations of the vector of interest ${\bm{h}}\in\mathbb{C}^{N}$ with the known vector ${\bm{a}}\in\mathbb{C}^{M}$ that fulfills the power constraint $\|{\bm{a}}\|_{2}^{2}=M$. Let the vector ${\bm{h}}\sim p({\bm{h}})$ be a zero-mean GMM random variable (RV), i.e., $\displaystyle p({\bm{h}})=\sum_{k=1}^{K}p_{k}\mathcal{N}_{\mathbb{C}}({\bm{h}};{\bm{0}},{\bm{C}}_{k})$ (1) with the global covariance matrix ${\bm{C}}_{{\bm{h}}}=\sum_{k=1}^{K}p_{k}{\bm{C}}_{k}$. Further, ${\bm{N}}=[{\bm{n}}_{1},\dots,{\bm{n}}_{M}]$ where ${\bm{n}}_{i}\sim\mathcal{N}_{\mathbb{C}}({\bm{0}},\eta^{2}\operatorname{\mathbf{I}})$ is AWGN and $Q(\cdot)=\frac{1}{\sqrt{2}}\left(\operatorname{sign}(\Re(\cdot))+{\operatorname{j}}\operatorname{sign}(\Im(\cdot))\right)$ is the complex-valued one-bit quantization function, which is applied element-wise to the input vector/matrix. The system model can be equivalently described in its (column-wise) vectorized form as ${\bm{r}}=Q({\bm{y}})=Q({\bm{A}}{\bm{h}}+{\bm{n}})\in\mathbb{C}^{NM}$ (2) with ${\bm{A}}={\bm{a}}\otimes\operatorname{\mathbf{I}}$, ${\bm{r}}=\operatorname{vec}({\bm{R}})$, ${\bm{y}}=\operatorname{vec}({\bm{Y}})$, and ${\bm{n}}=\operatorname{vec}({\bm{N}})$. ## III The Bussgang Estimator In the context of quantization, the linear MMSE estimator is referred to as the Bussgang estimator [11], as it is motivated by Bussgang’s theorem [16]. In particular, the Bussgang decomposition implies that the system (2) can be linearized as the statistically equivalent model $\displaystyle{\bm{r}}=Q({\bm{y}})={\bm{B}}{\bm{y}}+{\bm{q}},$ (3) where ${\bm{B}}$ is the Bussgang gain, enforcing that ${\bm{q}}$ is uncorrelated with (but not independent of) ${\bm{y}}$.
The Bussgang estimator reads as $\displaystyle\hat{{\bm{h}}}_{\text{LMMSE}}={\bm{C}}_{{\bm{h}}{\bm{r}}}{\bm{C}}_{{\bm{r}}}^{-1}{\bm{r}}=({\bm{C}}_{{\bm{h}}}{\bm{A}}^{\operatorname{H}}{\bm{B}}^{\operatorname{H}}+{\bm{C}}_{{\bm{h}}{\bm{q}}}){\bm{C}}_{{\bm{r}}}^{-1}{\bm{r}}.$ (4) In the case of a jointly zero-mean Gaussian quantizer input, it is well known that ${\bm{B}}$ and ${\bm{C}}_{{\bm{r}}}$ can be computed in closed form, and that ${\bm{C}}_{{\bm{h}}{\bm{q}}}={\bm{0}}$, cf. [11], [24, Sec. 9.2]. Although the zero-mean GMM is a natural generalization, covering a much larger class of PDFs that can be approximated, the expressions for ${\bm{C}}_{{\bm{h}}{\bm{r}}}$, ${\bm{B}}$, and ${\bm{C}}_{{\bm{r}}}$ in the case of one-bit quantization have not been fully investigated thus far. This motivates their derivation in the following. ###### Theorem 1. The involved quantities for the linear MMSE estimator (4) are computed as $\displaystyle{\bm{B}}$ $\displaystyle=\sqrt{\frac{2}{\pi}}\sum_{k=1}^{K}p_{k}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}{\bm{C}}_{{\bm{y}}|k}{\bm{C}}_{{\bm{y}}}^{-1},$ (5) $\displaystyle{\bm{C}}_{{\bm{h}}{\bm{r}}}$ $\displaystyle=\sqrt{\frac{2}{\pi}}\sum_{k=1}^{K}p_{k}{\bm{C}}_{k}{\bm{A}}^{\operatorname{H}}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}},$ (6) $\displaystyle{\bm{C}}_{{\bm{r}}}$ $\displaystyle=\frac{2}{\pi}\sum_{k=1}^{K}p_{k}(\sin^{-1}(\Re(\bar{{\bm{C}}}_{{\bm{y}}|k}))+{\operatorname{j}}\sin^{-1}(\Im(\bar{{\bm{C}}}_{{\bm{y}}|k})))$ (7) with $\bar{{\bm{C}}}_{{\bm{y}}|k}=\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}{\bm{C}}_{{\bm{y}}|k}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}$ and ${\bm{C}}_{{\bm{y}}}=\sum_{k=1}^{K}p_{k}{\bm{C}}_{{\bm{y}}|k}$ where ${\bm{C}}_{{\bm{y}}|k}={\bm{A}}{\bm{C}}_{k}{\bm{A}}^{\operatorname{H}}+\eta^{2}\operatorname{\mathbf{I}}$. Proof: See Appendix A. ###### Remark 1.
The Bussgang gain (5) is in accordance with the findings in [30]; however, the authors only discuss the univariate case, and one-bit quantization is not analyzed. In contrast to the Gaussian case, the Bussgang gain (5) is not a diagonal matrix in general, which aligns with the statement in [14]. The expression (7) can be interpreted as a weighted version of the arcsine law [25, 26]. Since GMMs are universal approximators [31], a straightforward application of the above results is to approximate an unknown density with a GMM, which allows one to compute the linear MMSE estimator. A similar approach was adopted in [27]. ###### Corollary 1. Based on the results of Theorem 1, the cross-covariance matrix ${\bm{C}}_{{\bm{h}}{\bm{q}}}={\bm{C}}_{{\bm{h}}{\bm{r}}}-{\bm{C}}_{{\bm{h}}}{\bm{A}}^{\operatorname{H}}{\bm{B}}^{\operatorname{H}}$ of the signal of interest ${\bm{h}}$ and the auxiliary quantization noise ${\bm{q}}$ in (3) is $\displaystyle\begin{aligned} {\bm{C}}_{{\bm{h}}{\bm{q}}}=\sqrt{\frac{2}{\pi}}\sum_{k=1}^{K}p_{k}&\left({\bm{C}}_{k}{\bm{A}}^{\operatorname{H}}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}\right.\\\ &\left.-{\bm{C}}_{{\bm{h}}}{\bm{A}}^{\operatorname{H}}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}{\bm{C}}_{{\bm{y}}|k}{\bm{C}}_{{\bm{y}}}^{-1}\right),\end{aligned}$ (8) contrary to the Gaussian case, where ${\bm{C}}_{{\bm{h}}{\bm{q}}}={\bm{0}}$ [11]. After deriving the linear MMSE estimator for the general case, we investigate the CME in the following, where we particularly discuss special cases in which it is linear. ## IV The Conditional Mean Estimator In the general case, the CME is not analytically tractable, necessitating a numeric approach to solve the involved integral expressions.
However, when rewriting the CME as $\displaystyle\operatorname{\mathbb{E}}[{\bm{h}}|{\bm{r}}]$ $\displaystyle=\sum_{k=1}^{K}p(k|{\bm{r}})\operatorname{\mathbb{E}}[{\bm{h}}|{\bm{r}},k]$ (9) $\displaystyle=\sum_{k=1}^{K}\frac{p_{k}}{\sum_{i=1}^{K}p_{i}p({\bm{r}}|i)}\int{\bm{h}}p({\bm{h}}|k)p({\bm{r}}|{\bm{h}})\operatorname{d}{\bm{h}},$ (10) utilizing the law of total expectation and Bayes’ rule, all involved densities are conditioned on a GMM component. This allows for effectively treating them as in the Gaussian case, directly enabling the simplified numerical evaluations discussed in [12, 13], which we omit due to space limitations. However, we derive novel closed-form analytic solutions of the CME for special cases in the following, accompanied by comparisons to the Gaussian case analyzed in [12, 13]. ### IV-A Univariate Case with a Single Observation We consider the case of a scalar system $r=Q(h+n)$ with $h\sim\sum_{k=1}^{K}p_{k}\mathcal{N}_{\mathbb{C}}(0,\sigma_{k}^{2})$ and $n\sim\operatorname{\mathcal{N}_{\mathbb{C}}}(0,\eta^{2})$. ###### Theorem 2. The CME for the scalar system is computed as $\displaystyle\operatorname{\mathbb{E}}[h|r]=\sqrt{\frac{2}{\pi}}\sum_{k=1}^{K}p_{k}\frac{\sigma_{k}^{2}}{\sqrt{\sigma_{k}^{2}+\eta^{2}}}r.$ (11) Proof: See Appendix B. ###### Remark 2. Remarkably, in contrast to the high-resolution case, the CME is linear in the quantized observation, i.e., the optimal estimator becomes linear through the specific nonlinearity of the quantization process. Furthermore, the jointly Gaussian case is not unique for the CME to be linear, in contrast to linear AWGN channels [4]. The result can be immediately extended to a multivariate zero-mean GMM with diagonal covariances.
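As a quick numerical sanity check on Theorem 2, the following sketch considers a real-valued analogue of the scalar system, $r=\operatorname{sign}(h+n)$, for which the same linear coefficient as in (11) applies; since $r^{2}=1$ and the setup is symmetric, the coefficient equals $\operatorname{\mathbb{E}}[hr]$, which is estimated by Monte Carlo. The mixture parameters are illustrative, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of the (real-valued analogue of the) scalar CME in
# Theorem 2: for r = sign(h + n) with h ~ sum_k p_k N(0, sigma_k^2) and
# n ~ N(0, eta^2), the linear coefficient of E[h | r] is
# sqrt(2/pi) * sum_k p_k * sigma_k^2 / sqrt(sigma_k^2 + eta^2).
rng = np.random.default_rng(0)
p = np.array([0.8, 0.2])          # illustrative mixture weights
sigma2 = np.array([0.1, 10.0])    # illustrative component variances
eta2 = 1.0                        # noise variance
n_samples = 2_000_000

k = rng.choice(len(p), size=n_samples, p=p)          # draw mixture components
h = rng.normal(0.0, np.sqrt(sigma2[k]))              # GMM signal samples
r = np.sign(h + rng.normal(0.0, np.sqrt(eta2), n_samples))

# Since r^2 = 1 and p(r = 1) = 1/2 by symmetry, E[h | r] = E[h r] * r,
# so the empirical E[h r] should match the analytic coefficient.
empirical = np.mean(h * r)
analytic = np.sqrt(2 / np.pi) * np.sum(p * sigma2 / np.sqrt(sigma2 + eta2))
assert abs(empirical - analytic) < 1e-2
```

The check confirms that the optimal estimate is simply a fixed scaling of the sign of the noisy observation, as Theorem 2 states.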
The MSE of the CME is then computed as $\displaystyle\text{MSE}_{\textup{GMM}}=\sigma_{\text{glob}}^{2}-\frac{2}{\pi}\left(\sum_{k=1}^{K}p_{k}\frac{\sigma_{k}^{2}}{\sqrt{\sigma_{k}^{2}+\eta^{2}}}\right)^{2}$ (12) where $\sigma^{2}_{\text{glob}}=\sum_{k=1}^{K}p_{k}\sigma_{k}^{2}$ is the global variance. An interesting analysis is a comparison to the case of a Gaussian distributed RV with the same global variance, i.e., $h\sim\operatorname{\mathcal{N}_{\mathbb{C}}}(0,\sigma_{\text{glob}}^{2})$, for which the closed-form MSE of the corresponding CME is given as, cf. [12], $\displaystyle\textup{MSE}_{\textup{Gauss}}=\sigma_{\text{glob}}^{2}-\frac{2}{\pi}\frac{\sigma_{\text{glob}}^{4}}{\sigma_{\text{glob}}^{2}+\eta^{2}}.$ (13) This allows us to compare the estimation performance of the CMEs when changing the distribution of the RV of interest from a Gaussian to a GMM while keeping the global variance fixed. We note that both estimators are optimal with respect to the considered distribution. ###### Theorem 3. For the considered scalar system, under a fixed global variance $\sigma_{\textup{glob}}^{2}$, it holds for all SNRs that $\displaystyle\textup{MSE}_{\textup{Gauss}}\leq\textup{MSE}_{\textup{GMM}}.$ (14) Proof: See Appendix C. ###### Remark 3. In the high SNR regime, we get $\displaystyle\lim_{\eta^{2}\to 0}\textup{MSE}_{\textup{GMM}}=\sigma_{\textup{glob}}^{2}-\frac{2}{\pi}\bar{\sigma}^{2}$ (15) with $\bar{\sigma}=\sum_{k=1}^{K}p_{k}\sigma_{k}$, and the inequality (14) directly follows from the weighted Cauchy-Schwarz inequality $\displaystyle\bar{\sigma}^{2}\leq\sigma_{\textup{glob}}^{2}$ (16) with equality if and only if $\sigma_{k}^{2}=\sigma^{2}$ for all $k=1,\dots,K$, i.e., when the GMM degenerates to a Gaussian. The observation that a GMM distribution leads to a strictly higher MMSE than the Gaussian distribution under a fixed global variance constraint has not been stated in the literature so far.
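The inequality of Theorem 3 can be checked directly from the closed-form expressions (12) and (13). The following sketch sweeps the noise variance $\eta^{2}$ for an illustrative two-component mixture normalized to unit global variance (the parameters are assumptions for illustration, not taken from the paper).

```python
import numpy as np

# Numerical check of Theorem 3: under a fixed global variance, the GMM MSE
# (12) upper-bounds the Gaussian MSE (13) for every noise level eta^2.
p = np.array([0.8, 0.2])          # illustrative mixture weights
sigma2 = np.array([0.1, 10.0])    # illustrative component variances
sigma2 = sigma2 / np.sum(p * sigma2)   # normalize to unit global variance
s2_glob = np.sum(p * sigma2)           # global variance, equals 1

for eta2 in np.logspace(-3, 3, 25):
    # eq. (12): MSE of the GMM-optimal CME
    mse_gmm = s2_glob - (2 / np.pi) * np.sum(p * sigma2 / np.sqrt(sigma2 + eta2)) ** 2
    # eq. (13): MSE of the Gaussian-optimal CME with the same global variance
    mse_gauss = s2_glob - (2 / np.pi) * s2_glob**2 / (s2_glob + eta2)
    assert mse_gauss <= mse_gmm + 1e-12   # inequality (14)
```

The assertion holds across the whole sweep, matching the Jensen-type argument behind the proof: $t\mapsto t/\sqrt{t+\eta^{2}}$ is concave, so mixing variances can only increase the MSE.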
Due to the linearity of the CME, it is equal to the Bussgang estimator, i.e., the linear MMSE estimator, cf. Section III. Using the result of Corollary 1, we can further compute $\displaystyle\operatorname{\mathbb{E}}[hq^{*}]=\sqrt{\frac{2}{\pi}}\sum_{k=1}^{K}p_{k}\left(\frac{\sigma_{k}^{2}}{\sqrt{\sigma_{k}^{2}+\eta^{2}}}-\frac{\sigma_{\text{glob}}^{2}\sqrt{\sigma_{k}^{2}+\eta^{2}}}{\sigma_{\text{glob}}^{2}+\eta^{2}}\right),$ (17) which is in contrast to the Gaussian case where $\operatorname{\mathbb{E}}[hq^{*}]=0$ [11, Appendix A]. Moreover, the correlation $\operatorname{\mathbb{E}}[hq^{*}]$ vanishes in both the low and high SNR regime, i.e., $\displaystyle\lim_{\eta\to\infty}\operatorname{\mathbb{E}}[hq^{*}]=\lim_{\eta\to 0}\operatorname{\mathbb{E}}[hq^{*}]=0.$ (18) ### IV-B Univariate Noiseless Case with Multiple Observations We consider the noiseless case with multiple pilot observations ${\bm{r}}=Q({\bm{a}}h)$, for which the closed-form CME together with the optimal pilot sequence was derived in [12] for the Gaussian case. First, we show that the pilot sequence for the Gaussian case is also MSE-optimal for the GMM case. ###### Theorem 4. The MSE-optimal pilot sequence for the considered system contains equidistant phase shifts $\psi_{m}=\frac{\pi(m-1)}{2M}$ for all $m=1,\dots,M$, such that $[{\bm{a}}]_{m}=\exp({\operatorname{j}}\psi_{m})$. Proof: See Appendix D. ###### Remark 4. In contrast to the Gaussian case, the amplitudes of the GMM are not Rayleigh distributed, which does not impact the design of the optimal pilot sequence since the amplitude information is lost through the one-bit quantization, and the circular symmetry property is not affected [32]. ###### Theorem 5.
The CME for the considered system has the two equivalent expressions $\displaystyle\operatorname{\mathbb{E}}[h|{\bm{r}}]$ $\displaystyle=\sqrt{\frac{2}{\pi}}\sum_{k=1}^{K}p_{k}\sigma_{k}{\bm{a}}^{\operatorname{H}}{\bm{C}}_{{\bm{r}}}^{-1}{\bm{r}}$ (19) $\displaystyle=\sum_{k=1}^{K}p_{k}\frac{2M\sigma_{k}}{\sqrt{\pi}}\sin\left(\frac{\pi}{4M}\right)\exp\left({\operatorname{j}}\varphi({\bm{r}})\right)$ (20) where ${\bm{C}}_{{\bm{r}}}^{-1}$ is equivalent to the analytic expression from the Gaussian case that solely depends on the number of pilots [12, Lemma 1], and $\varphi({\bm{r}})=\angle(\frac{1}{M}\sum_{m=1}^{M}[{\bm{r}}]_{m})-\frac{(M-1)\pi}{4M}$ [12]. Proof: See Appendix E. ###### Remark 5. The result of Theorem 5 is interesting since it leads to two equivalent formulations of the CME, one being linear and one being nonlinear in the observation. This allows for different but equivalent implementations of the optimal estimator in a practical system. Moreover, the expression (20) allows for a simplified formulation of the closed-form analytic MSE in the following. The MSE of the CME (20) is computed as $\displaystyle\text{MSE}_{\text{GMM}}$ $\displaystyle=\sigma_{\text{glob}}^{2}-\frac{4M^{2}}{\pi}\sin^{2}\left(\frac{\pi}{4M}\right)\bar{\sigma}^{2}.$ (21) Thus, we get in the limit of infinitely many pilots, cf. [12], $\displaystyle\lim_{M\to\infty}\text{MSE}_{\text{GMM}}=\sigma_{\text{glob}}^{2}-\frac{\pi}{4}\bar{\sigma}^{2}.$ (22) Observing the MSE expression for the Gaussian case in [12], under a fixed global variance $\sigma_{\text{glob}}^{2}$, we directly see by the weighted Cauchy-Schwarz inequality (16) that $\displaystyle\text{MSE}_{\text{Gauss}}\leq\text{MSE}_{\text{GMM}}$ (23) holds for all numbers of observations $M$, generalizing the result from (14).
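A small sketch verifies (21)-(23) numerically: the coefficient $\frac{4M^{2}}{\pi}\sin^{2}(\frac{\pi}{4M})$ increases monotonically toward its limit $\frac{\pi}{4}$ from (22), and under a fixed global variance the Gaussian MSE (taken here as (21) with $\bar{\sigma}$ replaced by $\sigma_{\text{glob}}$, which is what inequality (23) via (16) presumes) never exceeds the GMM MSE. The mixture parameters are illustrative.

```python
import math

# Check of (21)-(23): c(M) = (4 M^2 / pi) sin^2(pi/(4M)) = (pi/4)(sin(x)/x)^2
# with x = pi/(4M) grows monotonically toward pi/4, so the MSE in (21)
# decreases with the number of observations M toward the limit (22).
p = [0.8, 0.2]                    # illustrative mixture weights
sigma2 = [0.1, 10.0]              # illustrative component variances
s2_glob = sum(pk * s2 for pk, s2 in zip(p, sigma2))
sigma_bar = sum(pk * math.sqrt(s2) for pk, s2 in zip(p, sigma2))

def coeff(M):
    # note coeff(1) = 2/pi, recovering the noiseless single-observation MSE (15)
    return (4 * M**2 / math.pi) * math.sin(math.pi / (4 * M)) ** 2

prev = 0.0
for M in range(1, 65):
    c = coeff(M)
    assert prev < c < math.pi / 4         # monotone, bounded by the limit (22)
    mse_gmm = s2_glob - c * sigma_bar**2  # eq. (21)
    mse_gauss = s2_glob - c * s2_glob     # Gaussian case, sigma_bar -> sigma_glob
    assert mse_gauss <= mse_gmm           # eq. (23), since sigma_bar^2 <= s2_glob
    prev = c
```

The check also makes the connection to Section IV-A explicit: for $M=1$ the coefficient equals $\frac{2}{\pi}$, so (21) reduces to the noiseless limit (15).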
## V Numerical Results For each simulation, we draw $10{,}000$ samples from a GMM for estimating the normalized MSE $\operatorname{\mathbb{E}}[\|{\bm{h}}-\hat{{\bm{h}}}\|_{2}^{2}]/\operatorname{\mathbb{E}}[\|{\bm{h}}\|_{2}^{2}]$. In Fig. 1, we choose a ground-truth GMM with $K=2$ components with the weights $p_{1}=0.8$ and $p_{2}=0.2$ and $N=64$-dimensional randomly chosen covariances following the procedure in [12], which are afterward scaled with a factor of $0.1$ and $10$ for $k=1$ and $k=2$, respectively. This resembles a simple GMM where the differences to the Gaussian case are evident. We compare the linear MMSE estimator (4) with the quantities derived in Theorem 1 to the suboptimal linear Bussgang estimator assuming a Gaussian prior $\operatorname{\mathcal{N}_{\mathbb{C}}}({\bm{0}},{\bm{C}}_{{\bm{h}}})$ (mism. Gauss) for $M\in\\{1,8,16,32\\}$ with the pilot sequence from Theorem 4. The Gaussian approximation is tight in the low SNR regime but shows a considerable gap for medium and high SNRs, highlighting the importance of the newly derived linear MMSE estimator for the GMM case. In the following, we consider the same GMM but for $N=1$, where we choose $\sigma_{1}^{2}=0.1$ and $\sigma_{2}^{2}=10$, which are afterward normalized such that $\sigma_{\text{glob}}^{2}=1$. This ensures that the inequality (16) has a non-negligible gap. Figure 1: Comparison of the linear MMSE estimator with the suboptimal estimator assuming a Gaussian prior for $N=64$ and $M\in\\{1,8,16,32\\}$. Figure 2: MSE results of the CME from Theorem 2 for the univariate case $r=Q(h+n)$ in comparison with the Gaussian case, validating Theorem 3. We first verify the result of Theorem 2 for the univariate case with $N=M=1$ in Fig. 2. It can be seen that the analytic MSE expression (12) is on par with the evaluation of the CME (11), converging to the noiseless case (15).
Additionally, we have evaluated the estimator where the cross-correlation $\operatorname{\mathbb{E}}[hq^{*}]$ is neglected (mism. corr.), which shows a performance loss in the medium SNR regime, in accordance with (18); the estimator that erroneously assumes a Gaussian distributed input and evaluates the CME from [12] (mism. Gauss) deviates from the true CME by a considerable gap. When the distribution is changed to a Gaussian, the CME and its limit show a clearly lower MSE over the whole SNR range, validating Theorem 3 and Remark 3. Fig. 3 assesses the CME from Theorem 5 for the MSE-optimal pilot sequence in Theorem 4 in the noiseless case. It can be observed that the limit is achieved already with a few observations, similar to the Gaussian case [12], which yields a lower MSE for all observations, cf. (23). Finally, Fig. 4 evaluates the CME for the noisy case with multiple observations, i.e., ${\bm{r}}=Q({\bm{a}}h+{\bm{n}})$, which has no analytic expression and is computed numerically via the algorithm from [33], implemented in [34], cf. [12]. It can be seen that the MSE limit for infinitely many observations without AWGN is drastically outperformed with only a few observations and finite SNRs. This behavior is attributed to the fundamental effect of stochastic resonance [35], where noise can improve the performance in a quantized system. In comparison to the Gaussian case with the same global variance $\sigma_{\text{glob}}^{2}=1$ (CME Gauss), the stochastic resonance effect seems to be more pronounced, and the MSE inequality does not hold anymore, especially with more observations and in the low SNR regime. Moreover, the sub-optimal low-complexity linear MMSE estimator (4) (LMMSE GMM) degrades relative to the CME, especially for higher numbers of observations. Further results are shown in Appendix -F. Figure 3: Performance of the CME from Theorem 5 for the MSE-optimal pilot sequence from Theorem 4 in comparison to the Gaussian case. 
Figure 4: Comparison of the CME with the MSE-optimal transmit sequence to the Gaussian case for ${\bm{r}}=Q({\bm{a}}h+{\bm{n}})$ with $M\in\\{1,5,10\\}$. ## VI Conclusion We have presented novel fundamental results for the linear MMSE estimator and the CME in one-bit quantized systems for GMM distributed inputs. In addition to novel closed-form solutions for the CME, highlighting its linearity in special cases, a new MSE inequality regarding the Gaussian and GMM case was established, which also holds in the analyzed asymptotic regime. However, this inequality does not hold in the general case as the GMM shows a more pronounced stochastic resonance effect. The presented results are of use for various signal processing applications. ### -A Proof of Theorem 1 ###### Proof. We first observe that ${\bm{y}}\sim\sum_{k=1}^{K}p_{k}\operatorname{\mathcal{N}_{\mathbb{C}}}({\bm{y}};{\bm{0}},{\bm{C}}_{{\bm{y}}|k})$ with ${\bm{C}}_{{\bm{y}}|k}={\bm{A}}{\bm{C}}_{k}{\bm{A}}^{\operatorname{H}}+\eta^{2}\operatorname{\mathbf{I}}$ due to the Gaussianity of the noise. Thus, ${\bm{y}}|k\sim\operatorname{\mathcal{N}_{\mathbb{C}}}({\bm{y}};{\bm{0}},{\bm{C}}_{{\bm{y}}|k})$, which is used in the following. Utilizing the law of total expectation for the definition of the Bussgang gain, cf. [14], yields $\displaystyle{\bm{B}}$ $\displaystyle=\operatorname{\mathbb{E}}[Q({\bm{y}}){\bm{y}}^{\operatorname{H}}]\operatorname{\mathbb{E}}[{\bm{y}}{\bm{y}}^{\operatorname{H}}]^{-1}$ (24) $\displaystyle=\sum_{k=1}^{K}p_{k}\operatorname{\mathbb{E}}[Q({\bm{y}}){\bm{y}}^{\operatorname{H}}|k]\operatorname{\mathbb{E}}[{\bm{y}}{\bm{y}}^{\operatorname{H}}]^{-1}.$ (25) The solution of $\operatorname{\mathbb{E}}[Q({\bm{y}}){\bm{y}}^{\operatorname{H}}|k]=\sqrt{\frac{2}{\pi}}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}{\bm{C}}_{{\bm{y}}|k}$ is known from the Gaussian case, cf., e.g., [24, Sec. 9.2], yielding the result in (5). 
Similarly, the cross-covariance matrix (6) is computed as $\displaystyle{\bm{C}}_{{\bm{h}}{\bm{r}}}$ $\displaystyle=\operatorname{\mathbb{E}}[{\bm{h}}{\bm{r}}^{\operatorname{H}}]=\sum_{k=1}^{K}p_{k}\operatorname{\mathbb{E}}[{\bm{h}}Q({\bm{y}})^{\operatorname{H}}|k]$ (26) where $\operatorname{\mathbb{E}}[{\bm{h}}Q({\bm{y}})^{\operatorname{H}}]=\sqrt{\frac{2}{\pi}}{\bm{C}}_{k}{\bm{A}}^{\operatorname{H}}\operatorname{diag}({\bm{C}}_{{\bm{y}}|k})^{-\frac{1}{2}}$ is known from the Gaussian case, cf., e.g., [11]. Finally, the computation of the covariance matrix ${\bm{C}}_{{\bm{r}}}$ is a direct consequence of the law of total expectation, i.e., $\displaystyle\operatorname{\mathbb{E}}[{\bm{r}}{\bm{r}}^{\operatorname{H}}]=\operatorname{\mathbb{E}}[Q({\bm{y}})Q({\bm{y}})^{\operatorname{H}}]=\sum_{k=1}^{K}p_{k}\operatorname{\mathbb{E}}[Q({\bm{y}})Q({\bm{y}})^{\operatorname{H}}|k]$ (27) and the known solution for the Gaussian case, known as the arcsine law, cf. [25, 26]. ∎ ### -B Proof of Theorem 2 ###### Proof. We observe that $p(k|{\bm{r}})=p_{k}$ for all $k=1,\dots,K$ in (9) since the zero-mean GMM is symmetric around the origin, and thus, the quantized observation is uninformative for evaluating the responsibility. The solution of $\displaystyle\operatorname{\mathbb{E}}[h|r,k]=\sqrt{\frac{2}{\pi}}\frac{\sigma_{k}^{2}}{\sqrt{\sigma_{k}^{2}+\eta^{2}}}r$ (28) is known from the Gaussian case, see, e.g., [12]. ∎ ### -C Proof of Theorem 3 ###### Proof. Comparing (12) and (13), after removing the equivalent terms and taking the square root on both sides, we need to show that $\displaystyle\sum_{k=1}^{K}p_{k}\frac{\sigma_{k}^{2}}{\sqrt{\sigma_{k}^{2}+\eta^{2}}}\leq\frac{\sum_{k=1}^{K}p_{k}\sigma_{k}^{2}}{\sqrt{\sum_{k=1}^{K}p_{k}\sigma_{k}^{2}+\eta^{2}}}.$ (29) Since the weights $p_{k}$ form a convex combination, (29) holds if $f(x)=\frac{x}{\sqrt{x+\eta^{2}}}$ is a concave function for all $x>0$ based on the definition of concave functions [36, Sec. 3.1.8]. 
Since $\displaystyle\frac{\partial^{2}}{\partial x^{2}}f(x)=-\frac{x+4\eta^{2}}{4\sqrt{(\eta^{2}+x)^{5}}}<0\text{ for all }x,\eta^{2}>0,$ (30) it follows that $f(x)$ is a concave function for all $x>0$. Thus, (29) is fulfilled, finishing the proof. ∎ ### -D Proof of Theorem 4 ###### Proof. As a direct consequence of the circular symmetry of the individual Gaussians, it immediately follows that the zero-mean GMM distribution is also circularly symmetric and thus has uniformly distributed phases [32]. Based on this, the same proof holds as in [12, Appendix B], yielding the same MSE-optimal sequence. ∎ ### -E Proof of Theorem 5 ###### Proof. Similar to Theorem 2, the responsibility $p(k|{\bm{r}})=p_{k}$ since the pilot observations contain no amplitude information. The solution $\operatorname{\mathbb{E}}[h|{\bm{r}},k]$ is given in [12]. Since $\bar{{\bm{C}}}_{{\bm{y}}|k}={\bm{a}}{\bm{a}}^{\operatorname{H}}$ for all $k=1,\dots,K$, the computation of ${\bm{C}}_{{\bm{r}}}$ in (7) degenerates to the Gaussian case. As both functions (19) and (20) lead to the same MSE [12], they are equivalent on the discrete input domain by the uniqueness of the CME [1, Th. 1]. ∎ ### -F Additional Numerical Results Fig. 5 shows the correlation $\operatorname{\mathbb{E}}[hq^{*}]$ from (17) over the SNR for the same setting as in Fig. 2, which is vanishing in the low and high SNR regime in accordance with (18) and the performance loss in Fig. 2. Figure 5: Correlation of the RV of interest $h$ with the quantization noise $q$, cf. Corollary 1, for the univariate case $r=Q(h+n)$. Fig. 6 evaluates the same setting as in Fig. 4 but compares the MSE-optimal sequence for the noiseless case derived in Theorem 4 with the all-ones sequence ${\bm{a}}={\bm{1}}$. It can be observed that the pilot sequence from Theorem 4 outperforms the all-ones sequence in medium to high SNRs, highlighting its superiority also in the non-asymptotic SNR regime. 
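The concavity argument in the proof of Theorem 3 above is easy to confirm numerically: the second derivative (30) is negative on a grid, matches a finite-difference check, and Jensen's inequality reproduces (29) for random convex combinations. A minimal sketch:

```python
import math, random

def f(x, eta2):
    # f(x) = x / sqrt(x + eta^2), the function shown concave in Appendix -C
    return x / math.sqrt(x + eta2)

def f2(x, eta2):
    # Second derivative from (30)
    return -(x + 4 * eta2) / (4 * math.sqrt((eta2 + x)**5))

rng = random.Random(1)
eta2 = 0.5
# (i) second derivative is negative on a grid of x > 0
neg = all(f2(x, eta2) < 0 for x in [0.01 * i for i in range(1, 1000)])
# (ii) Jensen's inequality (29) for random convex combinations
ok = True
for _ in range(1000):
    w = [rng.random() for _ in range(4)]
    s = sum(w); w = [wi / s for wi in w]
    v = [10 * rng.random() + 1e-3 for _ in range(4)]
    lhs = sum(wk * f(vk, eta2) for wk, vk in zip(w, v))
    rhs = f(sum(wk * vk for wk, vk in zip(w, v)), eta2)
    ok = ok and lhs <= rhs + 1e-12
print(neg, ok)
```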
Figure 6: Comparison of the CME with the MSE-optimal and the all-ones sequence for the case ${\bm{r}}=Q({\bm{a}}h+{\bm{n}})$ with $M\in\\{1,2,5,10\\}$ and $\sigma_{\text{glob}}^{2}=1$. ## References * [1] A. Banerjee, X. Guo, and H. Wang, “On the optimality of conditional expectation as a Bregman predictor,” _IEEE Trans. Inf. Theory_ , vol. 51, no. 7, pp. 2664–2669, 2005. * [2] D. Guo, Y. Wu, S. S. Shitz, and S. Verdú, “Estimation in Gaussian noise: Properties of the minimum mean-square error,” _IEEE Trans. Inf. Theory_ , vol. 57, no. 4, pp. 2371–2385, 2011. * [3] A. Dytso and H. V. Poor, “Properties of the conditional mean estimator in Poisson noise,” in _IEEE Inf. Theory Workshop (ITW)_ , 2019, pp. 1–5. * [4] E. Akyol, K. Viswanatha, and K. Rose, “On conditions for linearity of optimal estimation,” _IEEE Trans. Inf. Theory_ , vol. 58, no. 6, pp. 3497–3508, 2012. * [5] A. Kipnis, Y. C. Eldar, and A. J. Goldsmith, “Fundamental distortion limits of analog-to-digital compression,” _IEEE Trans. Inf. Theory_ , vol. 64, no. 9, pp. 6013–6033, 2018. * [6] S. Khobahi and M. Soltanalian, “Signal recovery from 1-bit quantized noisy samples via adaptive thresholding,” in _52nd Asilomar Conf. Signals, Syst., Comput._ , 2018, pp. 1757–1761. * [7] Y. You, _Audio Coding: Theory and Applications_. Springer NY, 2010. * [8] R. E. Curry, _Estimation and Control with Quantized Measurements_. MIT Press, 1970. * [9] F. Wendler, M. Stein, A. Mezghani, and J. A. Nossek, “Quantization-loss reduction for 1-bit BOC positioning,” in _ION Int. Tech. Meeting (ITM)_ , 2013, pp. 509–518. * [10] M. Ivrlac and J. Nossek, “On MIMO channel estimation with single-bit signal-quantization,” in _ITG Workshop Smart Antennas_ , 2007. * [11] Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. L. Swindlehurst, and L. Liu, “Channel estimation and performance analysis of one-bit massive MIMO systems,” _IEEE Trans. Signal Process._ , vol. 65, no. 15, pp. 4075–4089, 2017. * [12] B. Fesl, M. Koller, and W. 
Utschick, “On the mean square error optimal estimator in one-bit quantized systems,” _IEEE Trans. Signal Process._ , vol. 71, pp. 1968–1980, 2023. * [13] M. Ding, I. Atzeni, A. Tölli, and A. L. Swindlehurst, “On the optimal MMSE channel estimation for one-bit quantized MIMO systems,” 2024, arXiv preprint: 2404.05536. * [14] O. T. Demir and E. Björnson, “The Bussgang decomposition of nonlinear systems: Basic theory and MIMO extensions [lecture notes],” _IEEE Signal Process. Mag._ , vol. 38, no. 1, pp. 131–136, 2021. * [15] Q. Wan, J. Fang, H. Duan, Z. Chen, and H. Li, “Generalized Bussgang LMMSE channel estimation for one-bit massive MIMO systems,” _IEEE Trans. Wireless Commun._ , vol. 19, no. 6, pp. 4234–4246, 2020. * [16] J. J. Bussgang, “Crosscorrelation functions of amplitude-distorted Gaussian signals,” MIT Res. Lab. Electron., Tech. Rep. 216, 1952. * [17] A. K. Fletcher, S. Rangan, V. K. Goyal, and K. Ramchandran, “Robust predictive quantization: Analysis and design via convex optimization,” _IEEE J. Sel. Topics Signal Process._ , vol. 1, no. 4, pp. 618–632, 2007. * [18] A. Mezghani and J. Nossek, “Capacity lower bound of MIMO channels with output quantization and correlated noise,” in _Int. Symp. Inf. Theory_ , 2012. * [19] E. Björnson, J. Hoydis, M. Kountouris, and M. Debbah, “Massive MIMO systems with non-ideal hardware: Energy efficiency, estimation, and capacity limits,” _IEEE Trans. Inf. Theory_ , vol. 60, no. 11, pp. 7112–7139, 2014. * [20] E. Björnson, L. Sanguinetti, and J. Hoydis, “Hardware distortion correlation has negligible impact on UL massive MIMO spectral efficiency,” _IEEE Trans. Commun._ , vol. 67, no. 2, pp. 1085–1098, 2019. * [21] P. Banelli and S. Cacopardi, “Theoretical analysis and performance of OFDM signals in nonlinear AWGN channels,” _IEEE Trans. Commun._ , vol. 48, no. 3, pp. 430–441, 2000. * [22] D. Dardari, V. Tralli, and A. Vaccari, “A theoretical characterization of nonlinear distortion effects in OFDM systems,” _IEEE Trans. 
Commun._ , vol. 48, no. 10, pp. 1755–1764, 2000. * [23] A. Fakhrizadeh Esfahani, J. Schoukens, and L. Vanbeylen, “Using the best linear approximation with varying excitation signals for nonlinear system characterization,” _IEEE Trans. Instrum. Meas._ , vol. 65, no. 5, pp. 1271–1280, 2016. * [24] A. Papoulis and S. U. Pillai, _Probability, Random Variables and Stochastic Processes_. McGraw-Hill Education, 2002. * [25] J. Van Vleck and D. Middleton, “The spectrum of clipped noise,” _Proc. IEEE_ , vol. 54, no. 1, pp. 2–19, 1966. * [26] G. Jacovitti and A. Neri, “Estimation of the autocorrelation function of complex Gaussian stationary processes by amplitude clipped signals,” _IEEE Trans. Inf. Theory_ , vol. 40, no. 1, pp. 239–245, 1994. * [27] B. Fesl, N. Turan, B. Böck, and W. Utschick, “Channel estimation for quantized systems based on conditionally Gaussian latent models,” _IEEE Trans. Signal Process._ , vol. 72, pp. 1475–1490, 2024. * [28] B. Böck, M. Baur, N. Turan, D. Semmler, and W. Utschick, “A statistical characterization of wireless channels conditioned on side information,” 2024, arXiv preprint: 2406.04282. * [29] S. Zhidkov, “Performance analysis and optimization of OFDM receiver with blanking nonlinearity in impulsive noise environment,” _IEEE Trans. Veh. Technol._ , vol. 55, no. 1, pp. 234–242, 2006. * [30] P. Banelli, “Non-linear transformations of Gaussians and Gaussian-mixtures with implications on estimation and information theory,” 2013, arXiv preprint: 1111.5950. * [31] T. T. Nguyen, H. D. Nguyen, F. Chamroukhi, and G. J. McLachlan, “Approximation by finite mixtures of continuous density functions that vanish at infinity,” _Cogent Math. Statist._ , vol. 7, no. 1, p. 1750861, 2020. * [32] B. Picinbono, “On circularity,” _IEEE Trans. Signal Process._ , vol. 42, no. 12, pp. 3473–3482, 1994. * [33] A. Genz, “Numerical computation of multivariate normal probabilities,” _J. Comput. Graph. Statist._ , vol. 1, no. 2, pp. 141–149, 1992. * [34] P. 
Virtanen _et al._ , “SciPy 1.0: Fundamental algorithms for scientific computing in Python,” _Nat. Methods_ , vol. 17, pp. 261–272, 2020. * [35] M. D. McDonnell, N. G. Stocks, C. E. M. Pearce, and D. Abbott, _Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization_. Cambridge University Press, 2008. * [36] S. P. Boyd and L. Vandenberghe, _Convex Optimization_. Cambridge University Press, 2014.
# Critical Three-Dimensional Ising Model on Spheroids from the Conformal Bootstrap Daniel Berkowitz Yale University Department of Physics <EMAIL_ADDRESS>George T. Fleming Yale University Department of Physics <EMAIL_ADDRESS> ###### Abstract We construct a conformal map from $\mathbb{R}^{3}$ to a three-dimensional spheroid, which includes $\mathbb{S}^{3}$, a double cover of the 3-ball, and $\mathbb{R}\times\mathbb{S}^{2}$ as limiting cases. Using the data of the critical three-dimensional Ising model on $\mathbb{R}^{3}$ that was computed using the conformal bootstrap method, we numerically estimate the fourth-order Binder cumulant of the critical three-dimensional $\phi^{4}$ theory on $\mathbb{S}^{3}$. We expect this estimate will enable an interesting comparison between the conformal bootstrap and future calculations of critical $\phi^{4}$ theory on $\mathbb{S}^{3}$ using the Quantum Finite Element (QFE) method. ## I INTRODUCTION The last decade has seen major advances in the general study of conformal field theories beyond the special cases of two-dimensional spacetimes or maximal supersymmetry through the widespread development of the conformal bootstrap [1]. Particularly notable is the success of the conformal bootstrap in constraining the data (scaling dimensions and OPE coefficients) of the three-dimensional critical Ising model CFT [2, 3, 4, 5], surpassing the previous best results using Markov chain Monte Carlo (MCMC) on regular cubic discretizations of a three-dimensional torus [6, 7, 8]. This has also driven the development of improved methods for constructing three-dimensional conformal blocks, the basis functions of the conformal group analogous to spherical harmonics for the rotation group [3, 9, 10]. These ingredients allow for an accurate estimate of the four-point functions [11] of the critical Ising model. 
In order to keep up with developments in the conformal bootstrap, a new approach for MCMC-based calculations optimized for the study of conformal fixed points in quantum field theories, called Quantum Finite Elements (QFE), has recently been developed [12, 13, 14, 15, 16, 17, 18, 19]. QFE is a general framework for MCMC calculations on static curved manifolds. Relevant to the study of CFTs, there is no conformal map from flat Euclidean space $\mathbb{R}^{3}$, where conformal bootstrap calculations are performed, to the torus $\mathbb{T}^{3}$, where traditional MCMC calculations on cubic grids are performed, making direct comparison in the infinite volume limit difficult for most of the CFT data except a few leading scaling dimensions determined via finite size scaling. As we will show, conformal maps from $\mathbb{R}^{3}$ to spheroids, including the special cases of the sphere $\mathbb{S}^{3}$; the double cover of the 3-ball; and the cylinder $\mathbb{R}\times\mathbb{S}^{2}$, do exist, and MCMC calculations can be performed on these manifolds using QFE. In particular, observables can be constructed in QFE to directly calculate all the CFT data that appears in the conformal block expansion: scaling dimensions and OPE coefficients. As we shall see, one class of observables where the accuracy of QFE calculations is expected to surpass that of the conformal bootstrap is in the calculation of moments of the average magnetization $\displaystyle M=\int d^{3}x\sqrt{g(x)}\phi(x),\quad$ (1) $\displaystyle m_{n}=\left\langle\int d^{3}x\sqrt{g(x)}\left(\phi(x)-M\right)^{n}\right\rangle.$ In QFE and traditional calculations on cubic lattices, moments of average magnetization can be computed very accurately due to the increased statistics of averaging over the volume. 
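As a toy illustration of why moment-based observables are statistically cheap, the snippet below estimates a fourth-order cumulant from an ensemble of magnetization samples. Note that the normalization $U_{4}=1-\langle M^{4}\rangle/(3\langle M^{2}\rangle^{2})$ used here is one common convention and is an assumption of this sketch; the paper's own definition (referred to later as (34)) is not reproduced in this excerpt. For Gaussian (free-field) magnetizations this cumulant vanishes, while for a perfectly ordered two-valued magnetization it equals $2/3$.

```python
import random

def binder_u4(samples):
    # Fourth-order Binder cumulant, common convention:
    # U4 = 1 - <M^4> / (3 <M^2>^2)  (assumed here; conventions vary)
    m2 = sum(m**2 for m in samples) / len(samples)
    m4 = sum(m**4 for m in samples) / len(samples)
    return 1.0 - m4 / (3.0 * m2**2)

rng = random.Random(0)
# Free (Gaussian) theory: cumulant ~ 0; interacting fixed points give a
# nonzero universal value.
gauss = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
print(binder_u4(gauss))
```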
We will first define the manifolds we are working with and obtain a general Weyl factor which will allow us to define a conformal map between a 3D spheroid and $\mathbb{R}^{3}$, and discuss the limiting cases of $\mathbb{S}^{3}$, $\mathbb{R}\crossproduct\mathbb{S}^{2}$, and the double cover of the 3-ball in detail. We will show how to map the approximate four-point function computed in the conformal bootstrap on $\mathbb{R}^{3}$ for the critical 3D Ising model to the four-point function on a spheroid. Then, we integrate the two-point function and apply Monte Carlo integration to the four-point function and obtain estimates for the fourth-order Binder cumulant of the critical 3D Ising model on $\mathbb{S}^{3}$. Finally, we will discuss how this result can be compared with future lattice calculations, including quantum finite elements (QFE). ## II Conformal Invariance of a 3D Spheroid ### II.1 Conformal Invariance of $\mathbb{S}^{3}$ In this section we will furnish a conformal mapping between a general 3D spheroid and $\mathbb{R}^{3}$. We will later use the resultant Weyl factor to obtain an estimate of the fourth-order Binder cumulant for the critical 3D Ising model on $\mathbb{S}^{3}$. Our spheroid can be defined as the set of points embedded in $\mathbb{R}^{4}$ which satisfies the following relation in Cartesian coordinates $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{a^{2}}+\frac{z^{2}}{a^{2}}+\frac{w^{2}}{b^{2}}=1\quad(a,b>0).$ (2) When $b\rightarrow 0$ our spheroid approaches the 3D analogue of a disc or a 2-ball, which is a 3-ball. Technically speaking, this 3-ball is two 3-balls superimposed onto each other at their boundaries. This will be discussed in more detail later. In the opposite limit, when $b\rightarrow\infty$, our spheroid can be understood to approach $\mathbb{R}\crossproduct\mathbb{S}^{2}$. In an earlier calculation, $\phi^{4}$ theory at its Wilson-Fisher fixed point was analyzed on $\mathbb{R}\crossproduct\mathbb{S}^{2}$ using the QFE [19]. 
When $b\rightarrow a$ the spheroid approaches $\mathbb{S}^{3}$, which is the next manifold we wish to study $\phi^{4}$ theory on using the QFE. A set of equations which satisfies (2) and encapsulates all of the cases we just outlined, and everything in between, is $\displaystyle x=a\sin\psi\sin\theta\cos\phi,$ (3) $\displaystyle y=a\sin\psi\sin\theta\sin\phi,$ $\displaystyle z=a\sin\psi\cos\theta,$ $\displaystyle w=b\cos\psi,$ where $\psi$ and $\theta$ range from 0 to $\pi$, and $\phi$ ranges from 0 to $2\pi$. Intuitively, this set of coordinates can be deduced by noticing that as one goes up a dimension from the circle, $\mathbb{S}^{1}$, to the sphere, $\mathbb{S}^{2}$, an extra parameter, $\theta$, is introduced which ranges from 0 to $\pi$ in the following manner $\displaystyle x_{\mathbb{S}^{2}}=\sin\theta x_{\mathbb{S}^{1}},$ (4) $\displaystyle y_{\mathbb{S}^{2}}=\sin\theta y_{\mathbb{S}^{1}},$ $\displaystyle z_{\mathbb{S}^{2}}=\cos\theta.$ For the coordinates originally associated with the lower dimensional sphere, a factor of $\sin\theta_{i}$, where $\theta_{i}$ ranges from 0 to $\pi$, is included and represents a new degree of freedom present on the higher dimensional sphere. The new independent Cartesian coordinate which differentiates the higher dimensional embedding space from the space one dimension lower is parameterized by $\cos\theta_{i}$. We see that carrying out this construction from $\mathbb{S}^{2}$, parameterized using standard spherical coordinates, to $\mathbb{S}^{3}$ results in (3). 
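One can verify directly that the parameterization (3) satisfies the spheroid constraint (2) for arbitrary $a$, $b$, and angles; a minimal check:

```python
import math, random

def spheroid_point(a, b, psi, theta, phi):
    # Embedding coordinates (3) of the 3D spheroid in R^4
    x = a * math.sin(psi) * math.sin(theta) * math.cos(phi)
    y = a * math.sin(psi) * math.sin(theta) * math.sin(phi)
    z = a * math.sin(psi) * math.cos(theta)
    w = b * math.cos(psi)
    return x, y, z, w

rng = random.Random(0)
a, b = 1.3, 0.4
for _ in range(5):
    psi, theta = rng.uniform(0, math.pi), rng.uniform(0, math.pi)
    phi = rng.uniform(0, 2 * math.pi)
    x, y, z, w = spheroid_point(a, b, psi, theta, phi)
    # Constraint (2): this combination equals 1 for every angle choice
    print((x**2 + y**2 + z**2) / a**2 + w**2 / b**2)
```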
Extending this to arbitrary dimensions results in the following parameterization for an n-sphere with radius $r$, $\displaystyle x_{1}$ $\displaystyle=r\cos\left(\theta_{1}\right)$ (5) $\displaystyle x_{2}$ $\displaystyle=r\sin\left(\theta_{1}\right)\cos\left(\theta_{2}\right)$ $\displaystyle x_{3}$ $\displaystyle=r\sin\left(\theta_{1}\right)\sin\left(\theta_{2}\right)\cos\left(\theta_{3}\right)$ $\displaystyle\vdots$ $\displaystyle x_{n-1}$ $\displaystyle=r\sin\left(\theta_{1}\right)\cdots\sin\left(\theta_{n-2}\right)\cos\left(\phi\right)$ $\displaystyle x_{n}$ $\displaystyle=r\sin\left(\theta_{1}\right)\cdots\sin\left(\theta_{n-2}\right)\sin\left(\phi\right).$ Because the n-sphere, and in turn the n-spheroid, can be parameterized in a convenient coordinate system, the results in this paper can be generalized to n dimensions. Using (3) we obtain the following metric representation for a 3D spheroid $\displaystyle ds_{sph}^{2}=$ (6) $\displaystyle\left(b^{2}\sin^{2}\psi+a^{2}\cos^{2}\psi\right)d\psi^{2}+a^{2}\sin^{2}{\psi}(d\theta^{2}+\sin^{2}{\theta}d\phi^{2}).$ Using the following change in coordinates inspired by [20], $w=\int\sqrt{b^{2}\sin^{2}\psi+a^{2}\cos^{2}\psi}d\psi$, we can rewrite (6) as $ds_{sph}^{2}=dw^{2}+f(w)^{2}(d\theta^{2}+\sin^{2}{\theta}d\phi^{2}),$ (7) where $f(w)=a\sin\psi$. This metric can be related conformally to the following metric on $\mathbb{R}^{3}$ $\begin{aligned} ds_{flat}^{2}=(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}{\theta}d\phi^{2})\end{aligned},$ (8) by introducing the functional dependency $r=e^{g(w)}$, which results in $\displaystyle ds_{flat}^{2}=$ (9) $\displaystyle e^{2g(w)}\left(\frac{dg}{dw}\right)^{2}\left(dw^{2}+\left(\frac{1}{\frac{dg}{dw}}\right)^{2}\left(d\theta^{2}+\sin^{2}{\theta}d\phi^{2}\right)\right).$ We can now set $\left(\frac{1}{\frac{dg}{dw}}\right)=a\sin\psi(w)$ (10) and obtain $g(w)=\int\frac{1}{a}\csc\psi dw$. 
Going back to the original coordinate transformation we applied to (6), we can rewrite $g(w)$ as $\displaystyle g(w)$ $\displaystyle=\int\frac{1}{a}\csc\psi\sqrt{b^{2}\sin^{2}\psi+a^{2}\cos^{2}\psi}d\psi$ (11) $\displaystyle=\int\sqrt{\frac{b^{2}}{a^{2}}+\cot^{2}\psi}d\psi.$ By doing so we recover the following metric, which is conformal to (7) $ds_{flat}^{2}=\frac{e^{2g(w)}}{a^{2}\sin^{2}\psi}\left(dw^{2}+f(w)^{2}(d\theta^{2}+\sin^{2}{\theta}d\phi^{2})\right).$ (12) By comparing (12) to (7) we can deduce that the Weyl factor of our 3D spheroid is $\Omega_{spheroid}\left(x_{i}\right)=a\sin\psi{e^{-\int\sqrt{\frac{b^{2}}{a^{2}}+\cot^{2}\psi}d\psi}}.$ (13) For the case of $\mathbb{S}^{3}$, when $b=a=1$, this reduces to $\Omega_{\mathbb{S}^{3}}\left(x_{i}\right)=2\cos^{2}\frac{\psi}{2}.$ (14) Going back to how we parameterized $r$ in (8) and setting $b=a=1$, we obtain the following mapping between a point on $\mathbb{S}^{3}$ and a point in $\mathbb{R}^{3}$ $\displaystyle z=\tan\left(\frac{\psi}{2}\right)\cos{\theta},$ (15) $\displaystyle y=\tan\left(\frac{\psi}{2}\right)\sin{\theta}\sin{\phi},$ $\displaystyle x=\tan\left(\frac{\psi}{2}\right)\sin{\theta}\cos{\phi}.$ From a geometric perspective, our conformal mapping of $\mathbb{S}^{3}$ to $\mathbb{R}^{3}$ is the exact higher dimensional analogue of the standard stereographic projection commonly performed from $\mathbb{S}^{2}$ to $\mathbb{R}^{2}$. For the general case this mapping can be accomplished by placing an $\mathbb{S}^{n}$ on $\mathbb{R}^{n}$ with its south pole centered on the origin of $\mathbb{R}^{n}$ and drawing lines from the north pole which intersect both $\mathbb{S}^{n}$ and $\mathbb{R}^{n}$. Each of these lines is oriented by a set of n angles, and its intersections with $\mathbb{S}^{n}$ and $\mathbb{R}^{n}$ provide a one-to-one mapping between $\mathbb{S}^{n}$ and $\mathbb{R}^{n}$. 
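The reduction to (14) and (15) for $b=a=1$ can be checked numerically: integrating (11) from the reference point $\psi=\pi/2$ (where $r=\tan(\pi/4)=1$ fixes the integration constant) should reproduce $r=\tan(\psi/2)$ and $\Omega_{\mathbb{S}^{3}}=2\cos^{2}(\psi/2)$. A midpoint-rule sketch:

```python
import math

def g_s3(psi, steps=100_000):
    # Midpoint-rule evaluation of g in (11) for b = a = 1, where the
    # integrand sqrt(1 + cot^2 psi) = csc psi; integrate from pi/2, where
    # the constant is fixed by r(pi/2) = tan(pi/4) = 1.
    lo = math.pi / 2
    h = (psi - lo) / steps
    return h * sum(math.sqrt(1.0 + 1.0 / math.tan(lo + (i + 0.5) * h)**2)
                   for i in range(steps))

for psi in (0.5, 1.0, 2.0, 2.5):
    gv = g_s3(psi)
    r = math.exp(gv)                      # radius of the image point in R^3
    weyl = math.sin(psi) * math.exp(-gv)  # Weyl factor (13) with a = b = 1
    print(psi, r, math.tan(psi / 2), weyl, 2 * math.cos(psi / 2)**2)
```

Each printed pair agrees to numerical precision, confirming the closed forms in (14) and (15).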
The north pole itself cannot be mapped using only a single cover because in the limiting case the line that would intersect the north pole becomes parallel to $\mathbb{R}^{n}$ and hence never intersects it. That is why $\mathbb{S}^{n}$ can be thought of as a one-point compactification of $\mathbb{R}^{n}$. A picture of this stereographic projection is provided in Fig. 1. Figure 1: Illustration of the conformal mapping between an infinite plane and $\mathbb{S}^{2}$ with the polar angle shifted in the following manner, $\theta\rightarrow\pi-\theta$, compared to what we have in (14). This image originally appeared in [20]. ### II.2 Conformal Invariance of $\mathbb{R}\crossproduct\mathbb{S}^{2}$ As we previously mentioned, in the limit $b\rightarrow\infty$, (2) gives the equation for a 3D cylinder, which is topologically equivalent to $\mathbb{R}\crossproduct\mathbb{S}^{2}$. One way of seeing this is to examine what happens to (2) as $b\rightarrow\infty$ and $-\infty<w<\infty$. In the limit $b\rightarrow\infty$, with $w$ finite, (2) reduces to $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{a^{2}}+\frac{z^{2}}{a^{2}}=1.$ (16) Despite the disappearance of $w$, the manifolds which (16) admits are still embedded in $\mathbb{R}^{4}$. Thus $w$ can be parameterized independently of x, y, and z in an arbitrary fashion. Because we wish to recover $\mathbb{R}\crossproduct\mathbb{S}^{2}$ we will set $w=t$, where $t$ ranges over $\left(-\infty,\infty\right)$, and parameterize x, y, and z using standard spherical coordinates. Doing so results in the following metric $\displaystyle ds_{3-cyl}^{2}=dx^{2}+dy^{2}+dz^{2}+dw^{2},$ (17) $\displaystyle ds_{3-cyl}^{2}=dt^{2}+a^{2}(d\theta^{2}+\sin^{2}{\theta}d\phi^{2}),$ where $a$ is the radius of $\mathbb{S}^{2}$. Using (5), this metric can be generalized to $\mathbb{R}\crossproduct\mathbb{S}^{n}$ $\displaystyle ds_{n-cyl}^{2}=dt^{2}+d\Omega^{2}_{n-1},$ (18) where $d\Omega^{2}_{n-1}$ is the metric of $\mathbb{S}^{n-1}$, which can be obtained for any n using (5). 
As the reader can verify, $dt^{2}+a^{2}(d\theta^{2}+\sin^{2}{\theta}d\phi^{2})$ is conformally related to $(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}{\theta}d\phi^{2})$ through the following Weyl factor $\displaystyle\Omega_{3-cyl}^{2}=e^{-2t/a},$ (19) generated by defining $r$ as $\displaystyle r=ae^{\frac{t}{a}}.$ (20) It should be mentioned that setting $b$ equal to a positive, real, finite number does not result in the geometry of a finite-length $\mathbb{R}\crossproduct\mathbb{S}^{2}$. Rather, the resultant geometry is a 3D ellipsoid. Only in the limit as $b\rightarrow\infty$ is $\mathbb{R}\crossproduct\mathbb{S}^{2}$ realized. Because $\mathbb{R}\crossproduct\mathbb{S}^{2}$ isn't a compact manifold, using integration to find the Binder cumulant (34) isn't trivial. To obtain an estimate of the Binder cumulant for this non-compact geometry one can perform a single-point compactification of $\mathbb{R}\crossproduct\mathbb{S}^{2}$, which results in $\mathbb{S}^{1}\crossproduct\mathbb{S}^{2}$, and define a lattice field theory on that manifold. Once a lattice field theory is defined on $\mathbb{S}^{1}\crossproduct\mathbb{S}^{2}$ one can perform a Monte Carlo simulation to compute an estimate of the Binder cumulant on that compactified geometry, as was done in [19]. When this compactification is done the manifold is defined by two radii $r_{1}$ and $r_{2}$, where $r_{1}$ denotes the radius of $\mathbb{S}^{1}$ and $r_{2}$ is the radius of $\mathbb{S}^{2}$. In the limit as $r_{1}\rightarrow\infty$, $\mathbb{S}^{1}\crossproduct\mathbb{S}^{2}$ approaches $\mathbb{R}\crossproduct\mathbb{S}^{2}$. Therefore, if one has a method, such as the QFE, which allows one to formulate lattice field theories on curved manifolds, one can study the critical 3D Ising model on $\mathbb{S}^{1}\crossproduct\mathbb{S}^{2}$ as $r_{1}\rightarrow\infty$ and observe what the Binder cumulant approaches. 
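The conformal relation for the cylinder can be verified coefficient by coefficient: with $r=ae^{t/a}$ from (20), the flat metric becomes $e^{2t/a}(dt^{2}+a^{2}d\Omega^{2})$, so one needs $(dr/dt)^{2}=e^{2t/a}$ for the radial term and $r^{2}=e^{2t/a}a^{2}$ for the angular terms, consistent with (19). A numerical spot check using a finite difference:

```python
import math

a = 1.7  # illustrative S^2 radius

def r_of_t(t):
    # Radial coordinate (20): r = a * exp(t / a)
    return a * math.exp(t / a)

for t in (-2.0, 0.0, 1.3):
    eps = 1e-6
    drdt = (r_of_t(t + eps) - r_of_t(t - eps)) / (2 * eps)  # central difference
    weyl2 = math.exp(2 * t / a)                              # e^{2t/a}, cf. (19)
    # Radial term: (dr/dt)^2 == weyl2 ; angular term: r^2 == weyl2 * a^2
    print(drdt**2, weyl2, r_of_t(t)**2, weyl2 * a**2)
```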
### II.3 Conformal Invariance of the 3-Ball Before we move on to calculating the Binder cumulant on $\mathbb{S}^{3}$ it is prudent to talk about how one would do the corresponding calculation for the 3-ball. The 3-ball is intimately related to $\mathbb{S}^{3}$. One can construct $\mathbb{S}^{3}$ by superimposing two 3-balls and defining an equivalence class which identifies all of the points which make up the shared boundary of the superimposed balls. In other words, the points which make up the boundary of the two superimposed 3-balls, which are two 2-spheres, are identified as a single point. This identification of the boundary with a single point can be realized by projecting out the boundary of this superimposed 3-ball into a higher dimensional space in which the boundary points of the 2-spheres merge at the north pole, thus resulting in a newly formed $\mathbb{S}^{3}$. Mathematically we see a hint of this construction by setting $b=0$ and obtaining the following $r$ from our earlier coordinate transformation $\displaystyle r=\lim_{b\to 0}e^{\int\sqrt{\frac{b^{2}}{a^{2}}+\cot^{2}\psi}d\psi}=\sin\psi.$ (21) As can be seen, when $b\to 0$ our $r$ doesn't cover the whole plane $\mathbb{R}^{3}$ because $r$ only ranges from 0 to 1. This is because these coordinates only map points located on the lower hemisphere and exclude points located on our 3-ball's upper hemisphere. This is an artifact of $\mathbb{S}^{3}$ being a construction of two superimposed 3-balls with their shared boundaries being glued together at a single point. The total mapping can be realized by noticing that we could have defined our functional dependency right above (9) as $r=e^{-g(w)}$. Such a functional dependency would have allowed us to obtain a different Weyl factor, $\Omega(x_{i})$, which nonetheless yields the exact same Binder cumulant for the case of $\mathbb{S}^{3}$ as the one we previously calculated. 
However, it would also result in the following radius $\displaystyle r=\lim_{b\to 0}e^{-\int\sqrt{\frac{b^{2}}{a^{2}}+\cot^{2}\psi}d\psi}=\csc\psi,$ (22) which ranges from 1 to $\infty$. Thus in order to compute the two- and four-point functions for the superimposed 3-ball one must differentiate between points on the northern hemisphere and points on the southern hemisphere because they have different radial coordinates, (21) and (22). The resulting calculation involves group averaging the four-point function over all of the different combinations of points that can be on one hemisphere and the opposite hemisphere. More information on this group averaging for the 2-ball, which can be extrapolated to the 3-ball, can be found in [20]. Preliminary results obtained using the five operators reported in table 1 of [21] and setting $\frac{b}{a}=10^{-5}$ yield $U_{4}=0.38703\pm 0.00570$, which suggests that the Binder cumulant for these two superimposed 3-balls is similar to the Binder cumulant for $\mathbb{S}^{3}$, as we show in section IV. This similarity between the Binder cumulant of the superimposed 3-ball and $\mathbb{S}^{3}$ is in accord with the similarity found in [20] between the Binder cumulant of the superimposed 2-ball (disc) and $\mathbb{S}^{2}$. For the related case of the interior of a single, non-superimposed sphere that was considered in [22], one has to take into account the boundary. This is done by studying the boundary conformal field theory (BCFT) [23, 24], which has its own scaling dimensions and operators associated with it. For the case with a free boundary on $\mathbb{S}^{2}$ the scaling dimension of the relevant operator was found to be [25, 26, 27, 28] $\Delta_{\tilde{\sigma}}=1.276(2)$. Thus, in computing the Binder cumulant of a 3-ball with a boundary one must differentiate between pairs of points which are either both on the boundary, both inside the bulk, or where one point is inside the bulk and the other is on the boundary. 
The need to distinguish the locations of points on the two aforementioned manifolds is a feature that the two superimposed 3-balls and the single 3-ball with boundary share. Using the known values of the scaling dimensions on the boundary of a 3-ball and in its interior, one can in principle use the formalism presented in this paper to compute an estimate of the fourth-order Binder cumulant. Such an estimate for the critical 3D Ising model on a 3-ball with a boundary, computed by integrating its two and four-point functions, could then be compared with the estimate obtained in [22]; this comparison would increase our understanding of how to study BCFTs via numerical simulations. In the future we plan to carry out the calculation outlined in this section in the limit $\frac{b}{a}=0$ and compare the value of the Binder cumulant obtained from direct integration to [22]. Furthermore, the QFE can in the future be extended to apply to quantum field theories formulated on curved manifolds with boundaries. ## III The Four-Point Function for the Critical 3d Ising Model The four-point function of the critical 3D Ising model, whose form is restricted by conformal symmetry, is given below $\left\langle\phi(x_{1})\phi(x_{2})\phi(x_{3})\phi(x_{4})\right\rangle_{flat}=\frac{g(u,v)}{\left|x_{1}-x_{2}\right|^{2\Delta_{\sigma}}\left|x_{3}-x_{4}\right|^{2\Delta_{\sigma}}},$ (23) where $x_{i}$ is a point in $\mathbb{R}^{3}$ and $u$ and $v$ are the following conformally invariant cross ratios $\displaystyle u=\frac{\left(x_{12}^{2}x_{34}^{2}\right)}{\left(x_{13}^{2}x_{24}^{2}\right)},$ (24) $\displaystyle v=\frac{\left(x_{32}^{2}x_{14}^{2}\right)}{\left(x_{13}^{2}x_{24}^{2}\right)}.$ In (23), $\Delta_{\sigma}$ is the scaling dimension of $\phi(x_{i})$; its value estimated by the conformal bootstrap [29] is $\Delta_{\sigma}=0.5181489(10)$. 
The numerator of (23), $g(u,v)$, can be expressed as an OPE in terms of conformal blocks $g(r,\eta)=1+\sum_{\mathscr{O}\in\sigma\times\sigma}C_{\sigma\sigma\mathscr{O}}^{2}g_{\Delta_{\mathscr{O}},\ell_{\mathscr{O}}}(r,\eta),$ (25) where $\displaystyle r=\sqrt{\frac{z\bar{z}}{\left(\sqrt{1-z}+1\right)^{2}\left(\sqrt{1-\bar{z}}+1\right)^{2}}}$ (26) $\displaystyle\eta=\frac{\frac{z}{\left(\sqrt{1-z}+1\right)^{2}}+\frac{\bar{z}}{\left(\sqrt{1-\bar{z}}+1\right)^{2}}}{2\left(\frac{z\bar{z}}{\left(\sqrt{1-z}+1\right)^{2}\left(\sqrt{1-\bar{z}}+1\right)^{2}}\right)^{\frac{1}{2}}}$ $\displaystyle u=z\bar{z}$ $\displaystyle v=(1-z)(1-\bar{z}).$ More information on the physical meaning of these coordinates ($r$, $\eta$, $z$, $\bar{z}$) can be found in [11, 3]. The sum in (25) is over the operators (excluding the unit operator) that are present in the $\phi(x_{i})\times\phi(x_{j})$ OPE, of dimension $\Delta_{\mathscr{O}}$ and spin $\ell_{\mathscr{O}}$. The scaling dimensions and spins of these operators are provided in table 2 of [30]. To evaluate (25) we use the following recursion relation, first reported in [31] $\displaystyle(4r)^{\Delta}h_{\Delta,\ell}(r,\eta)\equiv g_{\Delta,\ell}(r,\eta)$ (27) $\displaystyle h_{\Delta,\ell}(r,\eta)=h_{\ell}^{(\infty)}(r,\eta)+\sum_{k}\frac{c_{1}r^{n_{1}}}{\Delta-\Delta_{1}}h_{\Delta_{1}+n_{1},\ell_{1}}(r,\eta)$ $\displaystyle+\sum_{k}\frac{c_{2}r^{n_{2}}}{\Delta-\Delta_{2}}h_{\Delta_{2}+n_{2},\ell_{2}}(r,\eta)+\sum_{k}\frac{c_{3}r^{n_{3}}}{\Delta-\Delta_{3}}h_{\Delta_{3}+n_{3},\ell_{3}}(r,\eta)$ where $h_{\ell}^{(\infty)}(r,\eta)$, $c_{i}$, $n_{i}$, $\ell_{i}$, and $\Delta_{i}$ are described in [3]. This recursion relation converges quickly and is easy to evaluate using a computer algebra system such as Mathematica. For our purposes we evaluate $h_{\Delta,\ell}$ to 12th order in $r$, where $h_{\ell}^{(\infty)}$ is not expanded in terms of $r$. 
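A quick consistency check on the coordinates in (26): with $\bar{z}$ the complex conjugate of $z$, the map $\rho=z/(1+\sqrt{1-z})^{2}$ gives $r=|\rho|$ and $\eta=\Re\rho/|\rho|$, and it inverts as $z=4\rho/(1+\rho)^{2}$. The helper below is a hypothetical sketch (not the paper's Mathematica code):

```python
import cmath

# Radial coordinates of Eq. (26) via rho = z / (1 + sqrt(1 - z))^2,
# with r = |rho| and, for zbar = conj(z), eta = Re(rho) / |rho|.
def rho_map(z):
    return z / (1 + cmath.sqrt(1 - z)) ** 2

z = 0.3 + 0.4j
rho = rho_map(z)
r, eta = abs(rho), rho.real / abs(rho)

# Eq. (26) written out directly, using zbar = conj(z):
zb = z.conjugate()
r_direct = abs(cmath.sqrt(
    (z * zb) / ((1 + cmath.sqrt(1 - z)) ** 2 * (1 + cmath.sqrt(1 - zb)) ** 2)))
eta_direct = ((rho_map(z) + rho_map(zb))
              / (2 * cmath.sqrt(rho_map(z) * rho_map(zb)))).real

assert abs(r - r_direct) < 1e-12
assert abs(eta - eta_direct) < 1e-12
# The rho-map inverts as z = 4 rho / (1 + rho)^2:
assert abs(z - 4 * rho / (1 + rho) ** 2) < 1e-12
print(r, eta)
```

The inversion identity follows by writing $w=\sqrt{1-z}$, so $\rho=(1-w)/(1+w)$ and $4\rho/(1+\rho)^{2}=1-w^{2}=z$.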
Using the recursion relation (27) to compute $h_{\Delta,\ell}(r,\eta)$ up to $n$th order yields an $n$th-order polynomial in $r$ whose coefficients involve $h_{\ell}^{(\infty)}(r,\eta)$. The appendix contains Mathematica code that evaluates (25) as a sum over the operators present in table 1 of [5], up to any order in $r$. The code can be easily modified to include the eleven terms which appear in table 2 of [30]. ## IV Binder Cumulant Estimate Using the Weyl factor, $\Omega\left(x_{i}\right)=2\cos^{2}\left(\frac{\psi_{i}}{2}\right)$, obtained earlier, we can map the two and four-point functions of the critical 3D Ising model from $\mathbb{R}^{3}$ to $\mathbb{S}^{3}$ as shown below: $\displaystyle\left\langle\phi\left(x_{1}\right)\phi\left(x_{2}\right)\right\rangle_{g_{uv}}=\frac{1}{\Omega\left(x_{1}\right)^{\Delta_{\sigma}}}\frac{1}{\Omega\left(x_{2}\right)^{\Delta_{\sigma}}}\left\langle\phi\left(x_{1}\right)\phi\left(x_{2}\right)\right\rangle_{\text{flat }}$ (28) $\displaystyle\left\langle\phi\left(x_{1}\right)\phi\left(x_{2}\right)\phi\left(x_{3}\right)\phi\left(x_{4}\right)\right\rangle_{g_{uv}}$ $\displaystyle=\frac{1}{\Omega\left(x_{1}\right)^{\Delta_{\sigma}}}\ldots\frac{1}{\Omega\left(x_{4}\right)^{\Delta_{\sigma}}}\left\langle\phi\left(x_{1}\right)\ldots\phi\left(x_{4}\right)\right\rangle_{\text{flat }}.$ We will use these variables in (15) to construct our two and four-point functions. As can be seen below, conformal symmetry greatly restricts the form of the two-point function $\left\langle\phi\left(x_{1}\right)\phi\left(x_{2}\right)\right\rangle_{flat}=\frac{1}{x_{12}^{2\Delta_{\sigma}}},$ (29) where $x_{ij}=|x_{i}-x_{j}|$. 
The two quantities we need to find by integrating our two and four-point functions over $\mathbb{S}^{3}$, in order to obtain the fourth-order Binder cumulant, are the following magnetization densities $\begin{array}[]{l}\left\langle\sigma^{2}\right\rangle=\rho^{2}\int\mathrm{d}S_{1}\mathrm{~{}d}S_{2}\left\langle\phi\left(x_{1}\right)\phi\left(x_{2}\right)\right\rangle_{g_{uv}},\\\ \left\langle\sigma^{4}\right\rangle=\rho^{4}\int\mathrm{d}S_{1}\cdots\mathrm{d}S_{4}\left\langle\phi\left(x_{1}\right)\phi\left(x_{2}\right)\phi\left(x_{3}\right)\phi\left(x_{4}\right)\right\rangle_{g_{uv}},\end{array}$ (30) where $\rho$ is the density of the spins and $\mathrm{d}S_{i}$ represents the number of spins in an infinitesimal volume element. For $\mathbb{S}^{3}$, $\rho$ and $\mathrm{d}S_{i}$ can respectively be expressed as $\frac{1}{2\pi^{2}}$ and $\sin^{2}{\psi_{i}}\sin{\theta_{i}}d\psi_{i}d\theta_{i}d\phi_{i}$. The key to efficiently evaluating the integrals (30) is to exploit the SO(4) symmetry of $\mathbb{S}^{3}$. SO(4) has six generators, corresponding to six independent rotations: its Lie algebra splits into two copies of the Lie algebra of SO(3), reflecting the local isomorphism $SO(4)\simeq SO(3)\otimes SO(3)$, with three rotations for each SO(3) factor. We can therefore rotate $\mathbb{S}^{3}$ so that some of the angles we would normally need to integrate over are fixed, reducing the computational cost of our multidimensional Monte Carlo integration. For the two-point function one would naively evaluate a six-dimensional integral over the two points on the surface, $(\psi_{1},\theta_{1},\phi_{1})$ and $(\psi_{2},\theta_{2},\phi_{2})$. 
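As a sanity check on these expressions, $\rho=\frac{1}{2\pi^{2}}$ is exactly the inverse of the volume of the unit $\mathbb{S}^{3}$, $\int\sin^{2}\psi\,\sin\theta\,d\psi\,d\theta\,d\phi=2\pi^{2}$, so that $\rho\int\mathrm{d}S=1$. A minimal numerical sketch (an assumed helper, not from the paper):

```python
import math

# Volume of the unit 3-sphere in the (psi, theta, phi) coordinates of
# dS_i: factorized midpoint-rule integration of sin^2(psi) sin(theta).
def midpoint(f, a, b, n=10_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

vol = (midpoint(lambda p: math.sin(p) ** 2, 0, math.pi)
       * midpoint(math.sin, 0, math.pi)
       * 2 * math.pi)
rho = 1 / (2 * math.pi ** 2)
print(vol, rho * vol)  # vol is close to 2*pi^2, so rho * vol is close to 1
```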
By rotating $\mathbb{S}^{3}$ we can set the first point to $(0,0,0)$ and the second point to $(\psi_{2},0,0)$. This results in the following integral for the second-order magnetization density, $\left\langle\sigma^{2}\right\rangle=\int_{0}^{\pi}\frac{\left(2\pi^{2}\right)(4\pi)\sin^{2}\left(\psi_{2}\right)\left(\frac{1}{2\pi^{2}}\right)^{2}}{2^{0.518149}\left(2\cos^{2}\left(\frac{\psi_{2}}{2}\right)\right)^{0.518149}\left(\frac{\sin^{2}\left(\psi_{2}\right)}{\left(1+\cos\left(\psi_{2}\right)\right)^{2}}\right)^{0.518149}}\,d\psi_{2}$ (31) where the factor $2^{0.518149}$ is the Weyl factor $\Omega(x_{1})^{\Delta_{\sigma}}$ evaluated at $\psi_{1}=0$. Mathematica's NIntegrate function yields $\left\langle\sigma^{2}\right\rangle=0.847359$ (32) The four-point function can ultimately be expressed using the coordinates $(\psi_{1},\theta_{1},\phi_{1})$, $(\psi_{2},\theta_{2},\phi_{2})$, $(\psi_{3},\theta_{3},\phi_{3})$, $(\psi_{4},\theta_{4},\phi_{4})$. Using the SO(4) group we can reduce the dimensionality of the integral for $\left\langle\sigma^{4}\right\rangle$ from twelve to six by fixing the coordinates to $(0,0,0),(\psi_{2},0,0),(\psi_{3},\theta_{3},0),(\psi_{4},\theta_{4},\phi_{4})$. Using Mathematica's NIntegrate, we performed 10,000 Monte Carlo evaluations and obtained the following estimate of the fourth-order magnetization density and its associated statistical error $\left\langle\sigma^{4}\right\rangle=1.59083\pm 0.00016.$ (33) We now have all that we need to compute an estimate of the fourth-order Binder cumulant. $U_{4}=\frac{3}{2}\left(1-\frac{1}{3}\frac{\left\langle\sigma^{4}\right\rangle}{\left\langle\sigma^{2}\right\rangle^{2}}\right)$ (34) $U_{4}=0.39220\pm 0.00011.$ (35) For now we exclude sources of error originating from the uncertainty inherent in the CFT data obtained through the bootstrap. This result must be understood within the context of the OPE representation of the four-point function. 
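The one-dimensional integral (31) and the propagation of the $\left\langle\sigma^{4}\right\rangle$ error into (35) are easy to reproduce outside Mathematica. The sketch below (stdlib Python with a midpoint rule in place of NIntegrate; an assumed re-implementation, not the authors' code) recovers (32), (34), and (35):

```python
import math

DELTA = 0.5181489  # bootstrap scaling dimension of sigma

def integrand(psi):
    # Eq. (31): Weyl-mapped two-point function on S^3, with point 1
    # rotated to psi_1 = 0 (so Omega(x_1) = 2) and point 2 at (psi, 0, 0).
    num = (2 * math.pi ** 2) * (4 * math.pi) * math.sin(psi) ** 2 / (2 * math.pi ** 2) ** 2
    den = (2 ** DELTA) * (2 * math.cos(psi / 2) ** 2) ** DELTA \
        * (math.sin(psi) ** 2 / (1 + math.cos(psi)) ** 2) ** DELTA
    return num / den

# Midpoint rule avoids the 0/0 endpoints of the raw integrand.
n = 200_000
h = math.pi / n
sigma2 = h * sum(integrand((i + 0.5) * h) for i in range(n))

# Eq. (34): Binder cumulant from the quoted <sigma^4> = 1.59083(16).
sigma4, d_sigma4 = 1.59083, 0.00016
U4 = 1.5 * (1 - sigma4 / (3 * sigma2 ** 2))
dU4 = 0.5 * d_sigma4 / sigma2 ** 2  # error from <sigma^4> alone
print(f"{sigma2:.6f}  {U4:.5f} +/- {dU4:.5f}")  # compare Eqs. (32) and (35)
```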
Infinitely many operators of varying scaling dimension and spin must be summed to obtain an exact expression for the four-point function of the critical 3D Ising model. The finitely many operators whose CFT data have been obtained from the bootstrap are the leading-order operators, which contribute the most to the four-point function. However, because we only took into account data for eleven of those operators, a systematic error is present in our calculation, since we cannot integrate the exact four-point function. The remaining operators possess higher spin and/or scaling dimension, so their inclusion would allow us to compute the four-point function more accurately when the points are very close to each other. The range of this systematic error can be estimated by taking the difference between the Binder cumulant computed using the eleven operators listed in table 2 of [30] and the cumulant obtained when only ten of the operators are used. There is no definitive answer as to which ten operators one should include, so we performed this estimate using two similar but different methodologies. The first methodology computes a sequence of Binder cumulants as operators are added to the OPE in order of increasing scaling dimension: we first compute the Binder cumulant including only the operator with the lowest scaling dimension, $\epsilon$, in the OPE representation of the four-point function, then include the operator with the next lowest scaling dimension, $T_{\mu,\nu}$, and so on, until ten of the eleven operators are included. The second methodology is the same except that operators are included sequentially in order of increasing spin. 
In terms of increasing scaling dimension and spin, one respectively obtains the following Binder cumulants and potential estimates of the systematic error $\displaystyle U_{Scaling}=0.39216\pm 0.00011$ (36) $\displaystyle\Delta U_{4-Scaling}=U_{4}-U_{Scaling}=0.00004,$ $\displaystyle U_{Spin}=0.39165\pm 0.00011$ (37) $\displaystyle\Delta U_{4-Spin}=U_{4}-U_{Spin}=0.00141.$ One way to interpret the magnitude of the systematic errors obtained using bootstrap data is to compare them to the magnitude of the systematic error generated by the analogous QFE calculation. A direct comparison of this nature cannot be made at the moment because we have not yet applied the QFE to $\phi^{4}$ theory on $\mathbb{S}^{3}$. However, it is reasonable to expect that the relative error of that future calculation will be similar to the relative error of the analogous calculation [32, 33] on $\mathbb{S}^{2}$, which has already been done. Below are the statistical (58) and systematic (90) errors for an estimate of the fourth-order Binder cumulant of $\phi^{4}$ theory at its Wilson-Fisher conformal fixed point on $\mathbb{S}^{2}$, computed using the QFE (Monte Carlo values) [32, 33] and direct integration (analytic CFT values): $\begin{array}[]{ll}\text{ Monte Carlo Values: }&U_{4,cr}=0.85020(58)(90)\\\ \text{ Analytic CFT Values: }&U_{4}^{*}=0.8510207(63).\end{array}$ (38) If we compute the relative error of the above QFE result we obtain $\delta U_{4,cr}=\frac{9\times 10^{-4}}{0.85020}\approx 10^{-3}.$ (39) This is a reasonable estimate of the systematic error that we expect our QFE calculation on $\mathbb{S}^{3}$ to yield. Therefore, to interpret the magnitude of the relative error that the bootstrap presently yields for the fourth-order Binder cumulant of the critical 3D Ising model, we should compare it to (39). 
$\delta U_{4,scaling-dim}\approx 9\times 10^{-5},$ (40) $\delta U_{4,spin}\approx 3.6\times 10^{-3}.$ (41) If the systematic error in our calculation is closer to (40), that would suggest that the current bootstrap CFT data suffice to compute an estimate of the four-point function that is accurate relative to the implementation of the QFE used in [18]. However, if the systematic error is closer to (41), that would indicate that the current bootstrap results cannot match the accuracy of the QFE and that additional data on higher-order operators is needed for the accuracy of the two calculations to agree. To demonstrate the convergence of the Binder cumulant as terms are added to the four-point function, figure 2 shows the Binder cumulant as a function of the included operators for both methodologies. We used Monte Carlo integration with 1,000 iterations to compute each Binder cumulant. We are confident that our range is representative of the systematic error because the results visibly converge to a definitive value as operators are included in the OPE. The difference between successive cumulants tends to shrink as more operators are included, so including still more operators would further reduce the systematic error. Thus, to compute a more accurate estimate of the fourth-order Binder cumulant by integration, we need additional bootstrap results for higher-order operators and more Monte Carlo iterations. Figure 2b shows an interesting phenomenon. Excluding the case where only the operator with both the lowest scaling dimension and spin is included, the inclusion of an operator of higher spin causes the value of $U_{4}$ to jump. When the highest-spin operator included in the four-point function has spin 2, the value of $U_{4}$ varies little as additional spin-2 operators are included. 
It is only when spin-4 operators are included that we see such a jump, and again the value of $U_{4}$ varies very little as additional spin-4 operators are included. We see a similar jump when we include a spin-6 operator. The jumps, however, decrease in magnitude as operators of higher and higher spin are included. This suggests that the value of the Binder cumulant approaches a definitive number as we increase the number of operators in the OPE representation of the four-point function. The origin of this phenomenon deserves further investigation. As a check that our procedure for evaluating the four-point function on $\mathbb{S}^{3}$ is correct, we calculated the Binder cumulant for the free theory. The correlation functions of a free CFT are given in [34], and its Binder cumulant should be zero. Using Monte Carlo integration, 10,000 iterations, and accuracy goal 15, we calculated $U_{4}=-1.9176\times 10^{-6}\pm 4.7357\times 10^{-5}.$ (42) Our result is comfortably consistent with the expected value of 0 for the Binder cumulant. We hope to check our results for the Binder cumulant of the critical 3D Ising model on $\mathbb{S}^{3}$ using the QFE in the near future. ## V Concluding Remarks Using the data of the critical 3D Ising model computed using the conformal bootstrap method, we integrated the approximate two and four-point functions to obtain an estimate of the fourth-order Binder cumulant. We also showed how this approach can be used to estimate the Binder cumulants of the 3-ball and other 3D spheroids. Our approach extends the work of Deng and Blote [20] to three dimensions, and we showed how it can be extended further to higher-dimensional spheroids. The immediate application of our result is to compare this estimate of the Binder cumulant with one computed in an upcoming calculation of $\phi^{4}$ theory on $\mathbb{S}^{3}$ using quantum finite elements (QFE). 
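The free-theory check has a simple explanation: a free field is Gaussian, so Wick's theorem gives $\langle\sigma^{4}\rangle=3\langle\sigma^{2}\rangle^{2}$ and (34) vanishes identically. A toy Monte Carlo illustration of this mechanism (a hypothetical stand-in, not the actual integration on $\mathbb{S}^{3}$):

```python
import random

# For any zero-mean Gaussian variable, Wick's theorem gives
# <x^4> = 3 <x^2>^2, so U4 = (3/2)(1 - <x^4>/(3 <x^2>^2)) -> 0.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
m2 = sum(x ** 2 for x in samples) / len(samples)
m4 = sum(x ** 4 for x in samples) / len(samples)
U4 = 1.5 * (1 - m4 / (3 * m2 ** 2))
print(U4)  # statistically consistent with zero
```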
A favorable comparison of the two methods would give us further confidence that the QFE is a correct framework for computing non-perturbative quantum field theories on curved manifolds. (a) Binder cumulant plot in order of scaling dimension (b) Binder cumulant plot in order of increasing spin Figure 2: Plots of the Binder cumulant as operators are included in the OPE of the four-point function in order of their scaling dimension and spin. We count the unit operator as the first term included. In the scaling-dimension plot we start with the 3rd operator. The statistical error bars are not visible in the spin plot because of the large difference between the values of the Binder cumulant when only spin-0 operators are included and when operators of spin $\geq 2$ are included. The statistical errors for all of the points in the spin plot are $\approx 0.0004$. ###### Acknowledgements. Both authors thank Richard Brower for his immense contributions towards the development of the QFE. In addition we thank him for the intellectually stimulating conversations that led us down the road to the calculation presented in this manuscript. We acknowledge support from the United States Department of Energy through grant number DE-SC0019061. ## Appendix A Mathematica Code Using the following code and the CFT data reported for the five operators included in table 1 of [5], we performed 100,000 Monte Carlo evaluations with our accuracy goal set to 5 and calculated $\left\langle\sigma^{4}\right\rangle$ to be $1.591463$ with an error of $0.000050$. This allows us to obtain the following estimate of the fourth-order Binder cumulant $U_{4}=0.391765\pm 0.000035.$ (43) The code below can be easily modified to include the additional CFT data [30] that we used to compute (35). This code defines the $h_{\Delta,\ell}(r,\eta)$ which we wish to calculate through the recursion relation given in (27). 
The only input for the user is ”n”, which is the order one wishes to compute the recursion relation up to. H[\[CapitalDelta]\[CapitalDelta]_, LL_] := {n = 12;, recursion[a_] := Normal[Series[a /. hh -> h, {r, 0, n}]], Nest[recursion, {h[\[CapitalDelta]_, L_] := ((LegendreP[L, \[Eta]]/( Sqrt[1 - rr^2] Sqrt[-4 \[Eta]^2 rr^2 + (1 + rr^2)^2]) + Sum[((-(( 2^(1 - 4 k) k ((2 k)!)^2 Pochhammer[1 + L, 2 k])/((k!)^4 Pochhammer[1/2 + L, 2 k])))*(r)^(2* k)/(\[CapitalDelta] - (1 - L - 2*k)))* hh[1 - L, L + 2*k], {k, 1, n/2}] + Sum[((-(( k (1/2 - k + L) Pochhammer[1/2, k]^2 Pochhammer[ 1/4 (3 - 2 k + 2 L), k]^2)/((1/2 + k + L) (k!)^2 Pochhammer[ 1/4 (1 - 2 k + 2 L), k]^2)))*(r)^(2* k)/(\[CapitalDelta] - ((3/2) - k)))* hh[(3/2) + k, L], {k, 1, n/2}] + Sum[((-(( 2^(1 - 4 k) k ((2 k)!)^2 Pochhammer[1 - 2 k + L, 2 k])/((k!)^4 Pochhammer[3/2 - 2 k + L, 2 k])))*(r)^(2* k)/(\[CapitalDelta] - (2 + L - 2*k)))* hh[2 + L, L - 2*k], {k, 1, L/2}])), \[CapitalDelta] = \ \[CapitalDelta]\[CapitalDelta];, L = LL;, (((LegendreP[L, \[Eta]]/( Sqrt[1 - rr^2] Sqrt[-4 \[Eta]^2 rr^2 + (1 + rr^2)^2]) + Sum[((-(( 2^(1 - 4 k) k ((2 k)!)^2 Pochhammer[1 + L, 2*k])/((k!)^4 Pochhammer[1/2 + L, 2*k])))*(r)^(2* k)/(\[CapitalDelta] - (1 - L - 2*k)))* hh[1 - L, L + 2*k], {k, 1, n/2}] + Sum[((-(( k (1/2 - k + L) Pochhammer[1/2, k]^2 Pochhammer[ 1/4 (3 - 2 k + 2 L), k]^2)/((1/2 + k + L) (k!)^2 Pochhammer[ 1/4 (1 - 2 k + 2 L), k]^2)))*(r)^(2* k)/(\[CapitalDelta] - ((3/2) - k)))* hh[(3/2) + k, L], {k, 1, n/2}] + Sum[((-(( 2^(1 - 4 k) k ((2 k)!)^2 Pochhammer[1 - 2 k + L, 2 k])/((k!)^4 Pochhammer[3/2 - 2 k + L, 2 k])))*(r)^(2* k)/(\[CapitalDelta] - (2 + L - 2*k)))* hh[2 + L, L - 2*k], {k, 1, L/2}])))}[[4]], n/2]}[[3]] This code is the numerator that appears in the four-point function for the critical 3D Ising model and is computed in accordance with (25) G = (1 + (H[1.412625, 0]*(((4*r)^(1.412625))*(1.0518537)^2)) + (H[ 3.82966, 0] (((4*r)^(3.82966))*(0.053029)^2)) + (H[3, 2]*(((4*r)^(3)) 
((0.5181489/Sqrt[0.946539])^2))) + (H[5.509, 2] (((4*r)^(5.509))*(0.0172)^2)) + (H[5.02274, 4] (((4*r)^(5.02274))*(0.1319)^2))); This last piece of code plots the four-point function, showing that our computation of the four-point function agrees with [11]. {FourPointFunction = (G/(1 + Abs[z]^1.0362978 + (Abs[z]^1.0362978/ Abs[1 - z])^1.0362978) /. rr -> r /. r -> Abs[ z/(1 + Sqrt[1 - z])^2] /. \[Eta] -> (z/(1 + Sqrt[1 - z])^2 + Conjugate[z]/(1 + Sqrt[1 - Conjugate[z]])^2)/(2* Abs[z/(1 + Sqrt[1 - z])^2]) /. z -> x + I*y);, Plot3D[FourPointFunction, {x, -1, .5}, {y, -1, 1}, RegionFunction -> Function[{x, y}, Sqrt[x^2 + y^2] < 1 && x < 1/2], PlotRange -> All]}[[2]] ## References * Zamolodchikov and Zamolodchikov [1996] A. Zamolodchikov and A. Zamolodchikov, Conformal bootstrap in liouville field theory, Nuclear Physics B 477, 577 (1996). * El-Showk _et al._ [2012] S. El-Showk, M. F. Paulos, D. Poland, S. Rychkov, D. Simmons-Duffin, and A. Vichi, Solving the 3d ising model with the conformal bootstrap, Physical Review D 86, 025022 (2012). * Kos _et al._ [2014a] F. Kos, D. Poland, and D. Simmons-Duffin, Bootstrapping mixed correlators in the 3d ising model, Journal of High Energy Physics 2014, 109 (2014a). * Kos _et al._ [2016a] F. Kos, D. Poland, D. Simmons-Duffin, and A. Vichi, Precision Islands in the Ising and $O(N)$ Models, JHEP 08, 036, arXiv:1603.04436 [hep-th] . * Komargodski and Simmons-Duffin [2017a] Z. Komargodski and D. Simmons-Duffin, The random-bond ising model in 2.01 and 3 dimensions, Journal of Physics A: Mathematical and Theoretical 50, 154001 (2017a). * Hasenbusch [2010] M. Hasenbusch, Finite size scaling study of lattice models in the three-dimensional ising universality class, Physical Review B 82, 10.1103/physrevb.82.174433 (2010). * Caselle _et al._ [2015] M. Caselle, G. Costagliola, and N. Magnoli, Numerical determination of the operator-product-expansion coefficients in the 3D Ising model from off-critical correlators, Phys. Rev. 
D 91, 061901 (2015), arXiv:1501.04065 [hep-th] . * Costagliola [2016] G. Costagliola, Operator product expansion coefficients of the 3D Ising model with a trapping potential, Phys. Rev. D 93, 066008 (2016), arXiv:1511.02921 [hep-th] . * Penedones _et al._ [2016] J. Penedones, E. Trevisani, and M. Yamazaki, Recursion relations for conformal blocks, Journal of High Energy Physics 2016, 1 (2016). * Hogervorst [2016] M. Hogervorst, Dimensional reduction for conformal blocks, Journal of High Energy Physics 2016, 1 (2016). * Rychkov _et al._ [2017] S. Rychkov, D. Simmons-Duffin, and B. Zan, Non-gaussianity of the critical 3d ising model, SciPost Physics 2, 1 (2017). * Brower _et al._ [2013] R. C. Brower, G. T. Fleming, and H. Neuberger, Lattice Radial Quantization: 3D Ising, Phys. Lett. B 721, 299 (2013), arXiv:1212.6190 [hep-lat] . * Brower _et al._ [2012] R. C. Brower, G. T. Fleming, and H. Neuberger, Radial Quantization for Conformal Field Theories on the Lattice, PoS LATTICE2012, 061 (2012), arXiv:1212.1757 [hep-lat] . * Brower _et al._ [2014] R. C. Brower, M. Cheng, and G. T. Fleming, Improved Lattice Radial Quantization, PoS LATTICE2013, 335 (2014), arXiv:1407.7597 [hep-lat] . * Brower _et al._ [2015] R. C. Brower, M. Cheng, and G. T. Fleming, Quantum Finite Elements: 2D Ising CFT on a Spherical Manifold, PoS LATTICE2014, 318 (2015). * Brower _et al._ [2016] R. C. Brower, G. Fleming, A. Gasbarro, T. Raben, C.-I. Tan, and E. Weinberg, Quantum Finite Elements for Lattice Field Theory, PoS LATTICE2015, 296 (2016), arXiv:1601.01367 [hep-lat] . * Brower _et al._ [2017] R. C. Brower, E. S. Weinberg, G. T. Fleming, A. D. Gasbarro, T. G. Raben, and C.-I. Tan, Lattice Dirac Fermions on a Simplicial Riemannian Manifold, Phys. Rev. D 95, 114510 (2017), arXiv:1610.08587 [hep-lat] . * Brower _et al._ [2018] R. C. Brower, M. Cheng, E. S. Weinberg, G. T. Fleming, A. D. Gasbarro, T. G. Raben, and C.-I. 
Tan, Lattice $\phi^{4}$ field theory on Riemann manifolds: Numerical tests for the 2-d Ising CFT on $\mathbb{S}^{2}$, Phys. Rev. D 98, 014502 (2018), arXiv:1803.08512 [hep-lat] . * Brower _et al._ [2020] R. C. Brower, G. T. Fleming, A. D. Gasbarro, D. Howarth, T. G. Raben, C.-I. Tan, and E. S. Weinberg, Radial lattice quantization of 3d $\phi^{4}$ field theory, arXiv preprint arXiv:2006.15636 [hep-lat] (2020). * Deng and Blote [2003] Y. Deng and H. W. Blote, Conformal invariance and the ising model on a spheroid, Physical Review E 67, 036107 (2003). * Komargodski and Simmons-Duffin [2017b] Z. Komargodski and D. Simmons-Duffin, The Random-Bond Ising Model in 2.01 and 3 Dimensions, J. Phys. A 50, 154001 (2017b), arXiv:1603.04444 [hep-th] . * Cosme _et al._ [2015] C. Cosme, J. V. P. Lopes, and J. Penedones, Conformal symmetry of the critical 3d ising model inside a sphere, Journal of High Energy Physics 2015, 1 (2015). * Cardy [1984] J. L. Cardy, Conformal invariance and surface critical behavior, Nuclear Physics B 240, 514 (1984). * Cardy [1996] J. Cardy, _Scaling and renormalization in statistical physics_ , Vol. 5 (Cambridge university press, 1996). * Diehl and Shpot [1998] H. Diehl and M. Shpot, Massive field-theory approach to surface critical behavior in three-dimensional systems, Nuclear Physics B 528, 595 (1998). * Deng _et al._ [2005] Y. Deng, H. W. Blöte, and M. Nightingale, Surface and bulk transitions in three-dimensional o (n) models, Physical Review E 72, 016128 (2005). * Hasenbusch [2011] M. Hasenbusch, Thermodynamic casimir force: A monte carlo study of the crossover between the ordinary and the normal surface universality class, Physical Review B 83, 134425 (2011). * Gliozzi _et al._ [2015] F. Gliozzi, P. Liendo, M. Meineri, and A. Rago, Boundary and interface cfts from the conformal bootstrap, Journal of High Energy Physics 2015, 36 (2015). * Kos _et al._ [2016b] F. Kos, D. Poland, D. Simmons-Duffin, and A. 
Vichi, Precision islands in the Ising and O(N) models, Journal of High Energy Physics 2016, 1 (2016b). * Simmons-Duffin [2017] D. Simmons-Duffin, The Lightcone Bootstrap and the Spectrum of the 3d Ising CFT, JHEP 03, 086, arXiv:1612.08471 [hep-th] . * Kos _et al._ [2014b] F. Kos, D. Poland, and D. Simmons-Duffin, Bootstrapping mixed correlators in the 3d Ising model, Journal of High Energy Physics 2014, 109 (2014b). * Mohamed _et al._ [2018] M. S. Mohamed, A. N. Hirani, and R. Samtaney, Numerical convergence of discrete exterior calculus on arbitrary surface meshes, International Journal for Computational Methods in Engineering Science and Mechanics 19, 194 (2018). * Gasbarro [2018] A. D. Gasbarro, _Studies of Conformal Behavior in Strongly Interacting Quantum Field Theories_, Ph.D. thesis, Yale University (2018). * Guerrieri _et al._ [2016] A. L. Guerrieri, A. C. Petkou, and C. Wen, The free $\sigma$ CFTs, Journal of High Energy Physics 2016, 1 (2016).
literatures beforehand in such a comprehensive way. The main results of this chapter play a key role in proving Theorem 4.1 and its extension in §9. In fact, we generalize the approach developed for simple saddles in [7] (related to the logarithmic singularity type) to the multi-saddle type. Let $M^{\prime}\subset M$ be a minimal component of a locally Hamiltonian flow $\psi_{\mathbb{R}}$ associated with a closed 1-form $\eta$. Let $I\subset M^{\prime}$ be a transversal curve equipped with a standard parametrization. Recall that a parametrization $\gamma:[a,b]\to M$ of a curve is _standard_ if $\eta(d\gamma)=1$. In the standard coordinates, the first return map $T:I\to I$ is an IET. For every saddle $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$ of multiplicity $m=m_{\sigma}\geq 2$ let $(x,y)$ be a singular chart in a neighborhood $U_{\sigma}$ of $\sigma$. Then the corresponding local Hamiltonian is of the form $H(x,y)=\Im(x+iy)^{m}$. If the $\psi_{\mathbb{R}}$-invariant area form is $\omega=V(x,y)dx\wedge dy$, then the corresponding local Hamiltonian equation in $U_{\sigma}$ is of the form $\frac{dx}{dt}=\frac{\frac{\partial H}{\partial y}(x,y)}{V(x,y)}=\frac{m\Re(x+iy)^{m-1}}{V(x,y)}\quad\text{and}\quad\frac{dy}{dt}=-\frac{\frac{\partial H}{\partial x}(x,y)}{V(x,y)}=-\frac{m\Im(x+iy)^{m-1}}{V(x,y)},$ so (8.1) $X(x,y)=X_{1}(x,y)+iX_{2}(x,y)=\frac{m\overline{(x+iy)^{m-1}}}{V(x,y)}$ and $\eta=m\Im(x+iy)^{m-1}\,dx+m\Re(x+iy)^{m-1}\,dy.$ Therefore, a $C^{1}$-curve $\gamma:[a,b]\to U_{\sigma}$ is standard if and only if (8.2) $\displaystyle\begin{split}1=\eta_{\gamma(t)}\gamma^{\prime}(t)&=m\Im(\gamma(t))^{m-1}\Re\gamma^{\prime}(t)+m\Re(\gamma(t))^{m-1}\Im\gamma^{\prime}(t)\\\ &=\Im\big{(}m(\gamma(t))^{m-1}\gamma^{\prime}(t)\big{)}=\Im\left(\frac{d}{dt}(\gamma(t))^{m}\right).\end{split}$ For every $f\in C^{m}(M)$ and any $\alpha=(\alpha_{1},\alpha_{2})\in{\mathbb{Z}}_{\geq 0}^{2}$ with $|\alpha|=\alpha_{1}+\alpha_{2}\leq m$ let 
$\partial_{\sigma}^{\alpha}(f)=\frac{\partial^{|\alpha|}(f\cdot V)}{\partial^{\alpha_{1}}x\partial^{\alpha_{2}}y}(0,0)$. ###### Lemma 8.1. For every $f\in C^{m}(M)$ and any $\alpha\in{\mathbb{Z}}_{\geq 0}^{2}$ with $|\alpha|\leq m-2$ we have $\partial_{\sigma}^{\alpha}(f)=\partial_{\sigma}^{\alpha}(f\circ\psi_{t})\text{ for every }t\in{\mathbb{R}}.$ ###### Proof. First note that for every $(x,y)\in U_{\sigma}\cap\psi_{-t}(U_{\sigma})$ we have $\displaystyle\frac{d}{dt}((f\cdot V)\circ\psi_{t})(x,y)$ $\displaystyle=\frac{\frac{\partial(f\cdot V)}{\partial x}(\psi_{t}(x,y))}{V(\psi_{t}(x,y))}(V\cdot X_{1})(\psi_{t}(x,y))$ $\displaystyle\quad+\frac{\frac{\partial(f\cdot V)}{\partial y}(\psi_{t}(x,y))}{V(\psi_{t}(x,y))}(V\cdot X_{2})(\psi_{t}(x,y)).$ Therefore, by induction $\displaystyle\frac{d}{dt}\frac{\partial^{|\alpha|}}{\partial^{\alpha_{1}}x\partial^{\alpha_{2}}y}((f\cdot V)\circ\psi_{t})(x,y)=\frac{\partial^{|\alpha|}}{\partial^{\alpha_{1}}x\partial^{\alpha_{2}}y}\frac{d}{dt}((f\cdot V)\circ\psi_{t})(x,y)$ $\displaystyle=\sum_{|\beta|\leq|\alpha|}W_{\beta,1}(t,x,y)\frac{\partial^{|\beta|}}{\partial^{\beta_{1}}x\partial^{\beta_{2}}y}(V\cdot X_{1})(\psi_{t}(x,y))$ $\displaystyle\qquad+\sum_{|\beta|\leq|\alpha|}W_{\beta,2}(t,x,y)\frac{\partial^{|\beta|}}{\partial^{\beta_{1}}x\partial^{\beta_{2}}y}(V\cdot X_{2})(\psi_{t}(x,y)).$ As $V\cdot X_{1}$ and $V\cdot X_{2}$ are homogeneous polynomials of degree $m-1$, we have $\frac{\partial^{|\beta|}}{\partial^{\beta_{1}}x\partial^{\beta_{2}}y}(V\cdot X_{1})(0,0)=\frac{\partial^{|\beta|}}{\partial^{\beta_{1}}x\partial^{\beta_{2}}y}(V\cdot X_{2})(0,0)=0\text{ if }|\beta|\leq m-2.$ It follows that $\frac{d}{dt}\frac{\partial^{|\alpha|}}{\partial^{\alpha_{1}}x\partial^{\alpha_{2}}y}((f\cdot V)\circ\psi_{t})(0,0)=0\text{ for all }t\in{\mathbb{R}}\text{ and }|\alpha|\leq m-2.$ Hence $\partial_{\sigma}^{\alpha}(f\circ\psi_{t})=\frac{\partial^{|\alpha|}}{\partial^{\alpha_{1}}x\partial^{\alpha_{2}}y}((f\cdot 
V)\circ\psi_{t})(0,0)=\frac{\partial^{|\alpha|}}{\partial^{\alpha_{1}}x\partial^{\alpha_{2}}y}(f\cdot V)(0,0)=\partial_{\sigma}^{\alpha}(f).$ ∎ Let $G_{0}:{\mathbb{C}}\to{\mathbb{C}}$ be the principal $m$-th root map, i.e. $G_{0}(re^{is})=r^{1/m}e^{is/m}$ if $s\in[0,2\pi)$, and let $\omega\in{\mathbb{C}}$ be the principal $2m$-th root of unity. ###### Definition 5. For every $\varepsilon>0$ denote by $D_{\varepsilon}$ the pre-image of the square $[-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]$ by the map $z\mapsto z^{m}$. Given a neighborhood $U_{\sigma}$ of $\sigma$, choose $\varepsilon>0$ such that $D_{\varepsilon}=D_{\sigma,\varepsilon}\subset U_{\sigma}$. Let us consider four curves that parametrize some incoming and outgoing segments of the boundary of $D_{\varepsilon}$: $\gamma_{+}^{in},\gamma_{+}^{out}:(0,\varepsilon)\to\partial D_{\varepsilon}$, $\gamma_{-}^{in},\gamma_{-}^{out}:(-\varepsilon,0)\to\partial D_{\varepsilon}$ are given by $\gamma_{\pm}^{in}(s)=G_{0}(-\varepsilon+is),\quad\gamma_{\pm}^{out}(s)=G_{0}(\varepsilon+is).$ For every interval $J\subset[0,2\pi)$ denote by $\mathcal{S}(J)$ the corresponding angular sector $\\{z\in{\mathbb{C}}:\operatorname{Arg}(z)\in J\\}$. ###### Lemma 8.2. The following statements hold: (i) The maps $\gamma_{\pm}^{in}$/$\gamma_{\pm}^{out}$ are standard parametrizations of incoming/outgoing segments of $D_{\varepsilon}\cap\mathcal{S}([0,2\pi/m))$ for the flow $\psi_{\mathbb{R}}$. (ii) The orbit segments entering $D_{\varepsilon}$ at $\gamma_{\pm}^{in}(s)$ leave it at $\gamma_{\pm}^{out}(s)$. Denote by $\tau(s)$ the time spent by this orbit in the set $D_{\varepsilon}$. Then (iii) for every $f\in C^{m}(M)$ we have (8.3) $\int_{0}^{\tau(s)}f(\psi_{t}(\gamma_{\pm}^{in}(s)))\,dt=\frac{1}{m^{2}}\int_{-\varepsilon}^{\varepsilon}\frac{(f\cdot V)(G_{0}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du.$ ###### Proof. 
As $\Im(\gamma_{\pm}^{in}(s)^{m})=\Im(G_{0}(-\varepsilon+is)^{m})=s,\quad\Im(\gamma_{\pm}^{out}(s)^{m})=\Im(G_{0}(\varepsilon+is)^{m})=s,$ in view of (8.2), the parametrizations $\gamma_{\pm}^{in}$, $\gamma_{\pm}^{out}$ are standard. Since the map $z\mapsto z^{m}$ is a bijection between $D_{\varepsilon}\cap\mathcal{S}([0,2\pi/m))$ and $[-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]$, and $G_{0}$ is its inverse, let us consider a local flow $\tilde{\psi}_{\mathbb{R}}$ on $[-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]$ conjugated to the flow $\psi_{\mathbb{R}}$ restricted to $D_{\varepsilon}\cap\mathcal{S}([0,2\pi/m))$, i.e. $\tilde{\psi}_{t}(z)=\psi_{t}(G_{0}(z))^{m}$. By (8.1), $\displaystyle\frac{d}{dt}\tilde{\psi}_{t}(z)$ $\displaystyle=m\,\psi_{t}(G_{0}(z))^{m-1}\frac{d}{dt}\psi_{t}(G_{0}(z))=m\,\psi_{t}(G_{0}(z))^{m-1}X(\psi_{t}(G_{0}(z)))$ $\displaystyle=m^{2}\frac{|\psi_{t}(G_{0}(z))|^{2(m-1)}}{V(\psi_{t}(G_{0}(z)))}=m^{2}\frac{|\tilde{\psi}_{t}(z)|^{\frac{2(m-1)}{m}}}{V\circ G_{0}(\tilde{\psi}_{t}(z))}.$ Hence $\frac{d}{dt}\Re\tilde{\psi}_{t}(z)=m^{2}\frac{|\tilde{\psi}_{t}(z)|^{\frac{2(m-1)}{m}}}{V\circ G_{0}(\tilde{\psi}_{t}(z))}>0\text{ and }\frac{d}{dt}\Im\tilde{\psi}_{t}(z)=0.$ It follows that the interval $\\{(-\varepsilon,s):s\in(-\varepsilon,\varepsilon)\\}$ is the incoming and $\\{(\varepsilon,s):s\in(-\varepsilon,\varepsilon)\\}$ is the outgoing segment of $[-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]$ for the local flow $\tilde{\psi}_{\mathbb{R}}$. Moreover, the orbit segments entering $[-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]$ at $(-\varepsilon,s)$ leave it at $(\varepsilon,s)$. Passing via $G_{0}$ to the flow $\psi_{\mathbb{R}}$, we obtain the first claim of the lemma. Recall that $\tau(s)$ is the time spent by $\tilde{\psi}_{\mathbb{R}}$-orbit starting at $(-\varepsilon,s)$ in the set $[-\varepsilon,\varepsilon]\times[-\varepsilon,\varepsilon]$. 
Then $\displaystyle\int_{0}^{\tau(s)}f(\psi_{t}(\gamma_{\pm}^{in}(s)))\,dt$ $\displaystyle=\int_{0}^{\tau(s)}f\circ G_{0}\big{(}\tilde{\psi}_{t}(-\varepsilon,s)\big{)}\,dt$ $\displaystyle=\int_{0}^{\tau(s)}f\circ G_{0}\big{(}\Re\tilde{\psi}_{t}(-\varepsilon,s),s\big{)}\,dt.$ Next we integrate by substituting $u(t)=\Re\tilde{\psi}_{t}(-\varepsilon,s)$. As $-\varepsilon=\Re\tilde{\psi}_{0}(-\varepsilon,s),\quad\varepsilon=\Re\tilde{\psi}_{\tau(s)}(-\varepsilon,s)$ and $\displaystyle\frac{du}{dt}$ $\displaystyle=\frac{d}{dt}\Re\tilde{\psi}_{t}(-\varepsilon,s)=m^{2}\frac{|\tilde{\psi}_{t}(-\varepsilon,s)|^{\frac{2(m-1)}{m}}}{V\circ G_{0}(\tilde{\psi}_{t}(-\varepsilon,s))}$ $\displaystyle=m^{2}\frac{((\Re\tilde{\psi}_{t}(-\varepsilon,s))^{2}+s^{2})^{\frac{m-1}{m}}}{V\circ G_{0}(\Re\tilde{\psi}_{t}(-\varepsilon,s),s)}=m^{2}\frac{(u^{2}+s^{2})^{\frac{m-1}{m}}}{V\circ G_{0}(u,s)},$ by change of variables, we have $\displaystyle\int_{0}^{\tau(s)}f\circ G_{0}\big{(}\Re\tilde{\psi}_{t}(-\varepsilon,s),s\big{)}\,dt$ $\displaystyle=\int_{-\varepsilon}^{\varepsilon}f\circ G_{0}(u,s)\frac{V\circ G_{0}(u,s)}{m^{2}(u^{2}+s^{2})^{\frac{m-1}{m}}}du$ $\displaystyle=\frac{1}{m^{2}}\int_{-\varepsilon}^{\varepsilon}\frac{(f\cdot V)(G_{0}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du.$ This gives (8.3). ∎ ###### Remark 8.3. Lemma 8.2 describes incoming and outgoing segments on the boundary of $D_{\varepsilon}$ but only in the angular sector $\mathcal{S}([0,2\pi/m))$. The same arguments apply to the flow $\psi_{\mathbb{R}}$ restricted to $\mathcal{S}([2\pi k/m,2\pi(k+1)/m))$ for $0\leq k<m$. As $\omega\in{\mathbb{C}}$ is the principal $2m$-th root of unity, the incoming/outgoing segments of $D_{\varepsilon}\cap\mathcal{S}([2\pi k/m,2\pi(k+1)/m))$ are given by $\omega^{2k}\gamma_{\pm}^{in}$ and $\omega^{2k}\gamma_{\pm}^{out}$ respectively. 
Moreover, if $\tau_{k}(s)$ is the time spent by ${\psi}_{\mathbb{R}}$-orbit starting at $\omega^{2k}\gamma_{\pm}^{in}(s)$ in the set $D_{\varepsilon}$, then (8.4) $\varphi_{f}^{\sigma,k}(s):=\int_{0}^{\tau_{k}(s)}f(\psi_{t}(\omega^{2k}\gamma_{\pm}^{in}(s)))\,dt=\frac{1}{m^{2}}\int_{-\varepsilon}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k}G_{0}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du.$ Note that for $(u,s)\in{\mathbb{R}}^{2}_{\geq 0}$ we have $G_{0}(-u,-s)=\omega G_{0}(u,s),\ G_{0}(u,-s)=\omega^{2}\overline{G_{0}}(u,s),\ G_{0}(-u,s)=\omega\overline{G_{0}}(u,s).$ It follows that for every $s\in(0,\varepsilon)$ we have $\displaystyle m^{2}\varphi_{f}^{\sigma,k}(s)$ $\displaystyle=\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k}G_{0}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du+\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k}G_{0}(-u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du$ $\displaystyle=\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k}G_{0}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du+\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k+1}\overline{G_{0}}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du$ and $\displaystyle m^{2}\varphi_{f}^{\sigma,k}(-s)$ $\displaystyle=\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k}G_{0}(u,-s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du+\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k}G_{0}(-u,-s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du$ $\displaystyle=\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k+2}\overline{G_{0}}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du+\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega^{2k+1}{G_{0}}(u,s))}{(u^{2}+s^{2})^{\frac{m-1}{m}}}du.$ ### 8.1. Singularities of $\varphi_{f}^{\sigma,k}$ The purpose of this section is to understand the type of singularity of functions $\varphi_{f}^{\sigma,k}$. These functions are responsible for reading the singularities of $\varphi_{f}$ and provide the tools to prove Theorem 9.1 in §9. 
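The reflection identities for the principal root map $G_{0}$ recorded above ($G_{0}(-u,-s)=\omega G_{0}(u,s)$ and its companions) can be checked numerically. A minimal Python sketch, where the degree $m=3$ and the sample point $(u,s)=(0.7,0.4)$ are arbitrary illustrative choices:

```python
import cmath
import math

m = 3                                # arbitrary degree choice for the check
omega = cmath.exp(1j * math.pi / m)  # principal 2m-th root of unity

def G0(u, s):
    # Principal m-th root: G0(r e^{i t}) = r^(1/m) e^(i t/m) with t in [0, 2*pi).
    z = complex(u, s)
    r = abs(z)
    t = cmath.phase(z) % (2 * math.pi)
    return r ** (1 / m) * cmath.exp(1j * t / m)

u, s = 0.7, 0.4                      # arbitrary point with u, s > 0
assert abs(G0(u, s) ** m - complex(u, s)) < 1e-12            # inverse of z -> z^m
assert abs(G0(-u, -s) - omega * G0(u, s)) < 1e-12
assert abs(G0(u, -s) - omega ** 2 * G0(u, s).conjugate()) < 1e-12
assert abs(G0(-u, s) - omega * G0(u, s).conjugate()) < 1e-12
```

The same check passes for any $m\geq 2$ and any point with $u,s>0$, since it relies only on the branch convention $\operatorname{Arg}\in[0,2\pi)$.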
For every $m\geq 2$ let $G:{\mathbb{R}}^{2}_{\geq 0}\to{\mathbb{C}}$ be a continuous inverse of one of the maps $z\mapsto z^{m}$, $z\mapsto\overline{z}^{m}$, $z\mapsto-z^{m}$ or $z\mapsto-\overline{z}^{m}$. Then $G$ is homogenous of degree $1/m$ and analytic on ${\mathbb{R}}^{2}_{>0}$. If $G_{0}:{\mathbb{R}}^{2}_{\geq 0}\to{\mathbb{C}}$ is the principal $m$-th root and $\omega$ is the principal $2m$-th root of unity, then $G$ is either $\omega^{l}G_{0}\text{ or }\omega^{l}\overline{G}_{0}\text{ for some }0\leq l<2m.$ Let $f:{D}\to{\mathbb{R}}$ be a bounded Borel map where $D$ is the pre-image of the square $[-1,1]\times[-1,1]$ by the map $z\mapsto z^{m}$. For every $a\geq 1/2$ let us consider the map $\varphi=\varphi_{f,a}:(0,1]\to{\mathbb{R}}$ given by (8.5) $\varphi(s)=\int_{0}^{1}\frac{f(G(u,s))}{(u^{2}+s^{2})^{a}}\,du.$ ###### Remark 8.4. Note that for every $\varepsilon>0$ we have $\displaystyle\varphi_{f,a,\varepsilon}(s)$ $\displaystyle=\int_{0}^{\varepsilon}\frac{f(G(u,s))}{(u^{2}+s^{2})^{a}}\,du=\frac{1}{\varepsilon}\int_{0}^{1}\frac{f(G(u/\varepsilon,s))}{((u/\varepsilon)^{2}+s^{2})^{a}}\,du$ $\displaystyle=\varepsilon^{2a-1}\int_{0}^{1}\frac{f(\varepsilon^{-1/m}G(u,\varepsilon s))}{(u^{2}+(\varepsilon s)^{2})^{a}}\,du=\varepsilon^{2a-1}\varphi_{f\circ\varepsilon^{-1/m},a}(\varepsilon s).$ Therefore $s^{2a}\varphi^{\prime}_{f,a,\varepsilon}(s)=(\varepsilon s)^{2a}\varphi^{\prime}_{f\circ\varepsilon^{-1/m},a}(\varepsilon s).$ Notice that $s^{2a-1}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a}}\,du=\int_{0}^{1}\frac{1}{((\frac{u}{s})^{2}+1)^{a}}\frac{du}{s}=\int_{0}^{1/s}\frac{dx}{(x^{2}+1)^{a}}.$ Let us recall the definitions of Gamma function $\Gamma(z)$ and Beta function $B(x,y)$ $B(x,y):=\int_{0}^{1}t^{x-1}{(1-t)}^{y-1}dt,\quad\Gamma(z):=\int_{0}^{\infty}x^{z-1}e^{-x}dx,$ and let us denote 
$\Gamma_{a}:=\int_{0}^{+\infty}\frac{dx}{(x^{2}+1)^{a}}=\frac{1}{2}B(\frac{1}{2},a-\frac{1}{2})=\frac{1}{2}\frac{\Gamma(\frac{1}{2})\Gamma(a-\frac{1}{2})}{\Gamma(a)}=\frac{\sqrt{\pi}}{2}\frac{\Gamma(a-\frac{1}{2})}{\Gamma(a)}.$ Then, for every $a>1/2$, (8.6) $\displaystyle s^{2a-1}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a}}\,du\leq\Gamma_{a}\text{ for all }s\in(0,1],\text{ and}$ (8.7) $\displaystyle\lim_{s\to 0}s^{2a-1}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a}}\,du=\Gamma_{a}.$ If $a=1/2$ then (8.8) $\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a}}\,du=\int_{0}^{1/s}\frac{dx}{\sqrt{x^{2}+1}}=\log\Big{(}\frac{1}{s}+\sqrt{\frac{1}{s^{2}}+1}\Big{)}\leq\log\frac{3}{s}\leq 2+|\log s|.$ In view of (8.6) and (8.8), for every $s\in(0,1]$, (8.9) $\displaystyle\begin{aligned} s^{2a-1}\varphi_{|f|,a}(s)&\leq\|f\|_{\sup}\Gamma_{a}\ &\text{ if }\ a>1/2,\\\ \varphi_{|f|,a}(s)&\leq\|f\|_{\sup}(2+|\log s|)\ &\text{ if }\ a=1/2.\end{aligned}$ In the following lemmas, we bound $\varphi^{\prime}$ from above in terms of the $C^{k}$-norms of the function $f$ and the order of vanishing of $f$ at the saddle. ###### Lemma 8.5. Suppose that $f:D\to{\mathbb{R}}$ is a $C^{1}$-map. For every $1/2\leq a\leq 1$ we have $|s^{2a}\varphi^{\prime}(s)|\leq 2\|f\|_{C^{1}}\Gamma_{a+\frac{m-1}{2m}}.$ Moreover, $\lim_{s\to 0}s^{2a}\varphi^{\prime}(s)=-2af(0,0)\Gamma_{a+1}.$ ###### Proof.
First note that $\varphi$ is a $C^{1}$-function on $(0,1]$ with (8.10) $\displaystyle\begin{split}\varphi^{\prime}(s)&=\int_{0}^{1}\frac{\frac{\partial f}{\partial x}(G(u,s))\frac{\partial G_{1}}{\partial s}(u,s)+\frac{\partial f}{\partial y}(G(u,s))\frac{\partial G_{2}}{\partial s}(u,s)}{(u^{2}+s^{2})^{a}}\,du\\\ &\quad-2a\int_{0}^{1}\frac{sf(G(u,s))}{(u^{2}+s^{2})^{a+1}}\,du.\end{split}$ As $(G_{1}+iG_{2})^{m}=\pm u\pm is$, we have (8.11) $m(G_{1}+iG_{2})^{m-1}\Big{(}\frac{\partial G_{1}}{\partial s}+i\frac{\partial G_{2}}{\partial s}\Big{)}=\pm i.$ Hence $\Big{|}\frac{\partial G_{1}}{\partial s}+i\frac{\partial G_{2}}{\partial s}\Big{|}=\frac{1}{m|G_{1}+iG_{2}|^{m-1}}=\frac{1}{m(u^{2}+s^{2})^{\frac{m-1}{2m}}}.$ It follows that (8.12) $\displaystyle\begin{split}\Big{|}&\int_{0}^{1}\frac{\frac{\partial f}{\partial x}(G(u,s))\frac{\partial G_{1}}{\partial s}(u,s)+\frac{\partial f}{\partial y}(G(u,s))\frac{\partial G_{2}}{\partial s}(u,s)}{(u^{2}+s^{2})^{a}}\,du\Big{|}\\\ &\leq\frac{\|f^{\prime}\|_{C^{0}}}{m}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a+\frac{m-1}{2m}}}\,du\leq\frac{\|f^{\prime}\|_{C^{0}}}{m}\frac{\Gamma_{a+\frac{m-1}{2m}}}{s^{2a-\frac{1}{m}}}\leq\frac{\|f^{\prime}\|_{C^{0}}}{m}\frac{\Gamma_{a+\frac{m-1}{2m}}}{s^{2a}}\end{split}$ and $\Big{|}\int_{0}^{1}\frac{sf(G(u,s))}{(u^{2}+s^{2})^{a+1}}\,du\Big{|}\leq\|f\|_{C^{0}}\int_{0}^{1}\frac{s}{(u^{2}+s^{2})^{a+1}}\,du\leq{\|f\|_{C^{0}}}\frac{\Gamma_{a+1}}{s^{2a}}.$ It follows that $|\varphi^{\prime}(s)|\leq\Big{(}2a\|f\|_{C^{0}}\Gamma_{a+1}+\frac{\|f^{\prime}\|_{C^{0}}}{m}\Gamma_{a+\frac{m-1}{2m}}\Big{)}\frac{1}{s^{2a}}\leq 2\|f\|_{C^{1}}\Gamma_{a+\frac{m-1}{2m}}\frac{1}{s^{2a}}.$ Since $f$ is of class $C^{1}$, we have $|f(G(u,s))-f(0,0)|\leq\|f\|_{C^{1}}\|G(u,s)\|\leq\|f\|_{C^{1}}(u^{2}+s^{2})^{\frac{1}{2m}}.$ Moreover, by (8.6), $\int_{0}^{1}\frac{(u^{2}+s^{2})^{\frac{1}{2m}}}{(u^{2}+s^{2})^{a+1}}\,du\leq\frac{\Gamma_{a+1-\frac{1}{2m}}}{s^{2a+1-\frac{1}{m}}}.$ Therefore, in view of (8.10), (8.12), we have 
$\displaystyle\Big{|}$ $\displaystyle s^{2a}\varphi^{\prime}(s)+2af(0,0)s^{2a+1}\int_{0}^{1}\frac{du}{(u^{2}+s^{2})^{a+1}}\Big{|}$ $\displaystyle\leq s^{2a}\left|\int_{0}^{1}\frac{\frac{\partial f}{\partial x}(G(u,s))\frac{\partial G_{1}}{\partial s}(u,s)+\frac{\partial f}{\partial y}(G(u,s))\frac{\partial G_{2}}{\partial s}(u,s)}{(u^{2}+s^{2})^{a}}\,du\right|$ $\displaystyle\quad+2as^{2a+1}\int_{0}^{1}\frac{|f(0,0)-f(G(u,s))|}{(u^{2}+s^{2})^{a+1}}du$ $\displaystyle\leq s^{2a}\|f^{\prime}\|_{C^{0}}\frac{\Gamma_{a+\frac{m-1}{2m}}}{s^{2a-\frac{1}{m}}}+2as^{2a+1}\|f\|_{C^{1}}\int_{0}^{1}\frac{(u^{2}+s^{2})^{\frac{1}{2m}}}{(u^{2}+s^{2})^{a+1}}\,du$ $\displaystyle\leq\|f\|_{C^{1}}\Gamma_{a+\frac{m-1}{2m}}s^{\frac{1}{m}}+2as^{2a+1}\|f\|_{C^{1}}\frac{\Gamma_{a+1-\frac{1}{2m}}}{s^{2a+1-\frac{1}{m}}}=O(s^{\frac{1}{m}}).$ Hence $\lim_{s\to 0}s^{2a}\varphi^{\prime}(s)=-\lim_{s\to 0}2af(0,0)s^{2a+1}\int_{0}^{1}\frac{du}{(u^{2}+s^{2})^{a+1}}=-2af(0,0)\Gamma_{a+1}.$ ∎ ###### Lemma 8.6. Assume that $f:D\to{\mathbb{R}}$ is a $C^{m}$-map, $1/2\leq a\leq 1$ and let $k$ be a natural number such that $k\leq m(2a-1)$. Suppose that $f^{(j)}(0,0)=0$ for $0\leq j<k$. Then $|s^{2a-\frac{k}{m}}\varphi^{\prime}(s)|\leq\frac{\|f\|_{C^{k}}\Gamma_{1}}{(k-1)!}\text{ for all }s\in(0,1].$ Moreover, if $k<m(2a-1)$ then $s^{2a-\frac{k}{m}-1}\varphi_{|f|,a}(s)\leq\frac{\|f\|_{C^{k}}\Gamma_{{a-\frac{k}{2m}}}}{k!}\text{ for all }s\in(0,1]$ and if $k=m(2a-1)$, then $\varphi_{|f|,a}(s)\leq\frac{\|f\|_{C^{k}}}{k!}(2+|\log s|)\text{ for all }s\in(0,1].$ ###### Proof.
By assumption, $\displaystyle|f(G(u,s))|$ $\displaystyle\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}\|G(u,s)\|^{k}\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}(u^{2}+s^{2})^{\frac{k}{2m}},$ $\displaystyle\left|\frac{\partial f}{\partial x}(G(u,s))\right|$ $\displaystyle\leq\frac{\|(\partial f/\partial x)^{(k-1)}\|_{C^{0}}}{(k-1)!}\|G(u,s)\|^{k-1}\leq\frac{\|(\partial f/\partial x)^{(k-1)}\|_{C^{0}}}{(k-1)!}(u^{2}+s^{2})^{\frac{k-1}{2m}},$ $\displaystyle\left|\frac{\partial f}{\partial y}(G(u,s))\right|$ $\displaystyle\leq\frac{\|(\partial f/\partial y)^{(k-1)}\|_{C^{0}}}{(k-1)!}\|G(u,s)\|^{k-1}\leq\frac{\|(\partial f/\partial y)^{(k-1)}\|_{C^{0}}}{(k-1)!}(u^{2}+s^{2})^{\frac{k-1}{2m}}.$ Therefore $\varphi_{|f|,a}(s)\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}\int_{0}^{1}\frac{(u^{2}+s^{2})^{\frac{k}{2m}}}{(u^{2}+s^{2})^{a}}du=\frac{\|f^{(k)}\|_{C^{0}}}{k!}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a-\frac{k}{2m}}}du$ and, by (8.10), $\displaystyle|\varphi^{\prime}(s)|$ $\displaystyle\leq\frac{\|f^{(k)}\|_{C^{0}}}{m(k-1)!}\int_{0}^{1}\frac{(u^{2}+s^{2})^{\frac{k-1}{2m}}}{(u^{2}+s^{2})^{\frac{m-1}{2m}}(u^{2}+s^{2})^{a}}du$ $\displaystyle\quad+2as\frac{\|f^{(k)}\|_{C^{0}}}{k!}\int_{0}^{1}\frac{(u^{2}+s^{2})^{\frac{k}{2m}}}{(u^{2}+s^{2})^{a+1}}du$ $\displaystyle\leq\frac{\|f^{(k)}\|_{C^{0}}}{m(k-1)!}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a+\frac{m-k}{2m}}}du+2as\frac{\|f^{(k)}\|_{C^{0}}}{k!}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a+1-\frac{k}{2m}}}du.$ As $k\leq m(2a-1)$, we have $a+1-\frac{k}{2m}>a+\frac{m-k}{2m}=1+\frac{(2a-1)m-k}{2m}\geq 1$. By (8.6), this gives $\displaystyle|\varphi^{\prime}(s)|\leq\frac{\|f^{(k)}\|_{C^{0}}}{m(k-1)!}\frac{\Gamma_{a+\frac{m-k}{2m}}}{s^{2a-\frac{k}{m}}}+2as\frac{\|f^{(k)}\|_{C^{0}}}{k!}\frac{\Gamma_{a+1-\frac{k}{2m}}}{s^{2a+1-\frac{k}{m}}}\leq\frac{\|f^{(k)}\|_{C^{0}}}{(k-1)!}\frac{\Gamma_{1}}{s^{2a-\frac{k}{m}}}.$ If additionally $k<m(2a-1)$ then $a-\frac{k}{2m}>\frac{1}{2}$. 
By (8.6) again, $\displaystyle\varphi_{|f|,a}(s)\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a-\frac{k}{2m}}}du\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}\frac{\Gamma_{a-\frac{k}{2m}}}{s^{2a-1-\frac{k}{m}}}.$ On the other hand, if $k=m(2a-1)$ then $a-\tfrac{k}{2m}=\tfrac{1}{2}$ and, by (8.8), $\displaystyle\varphi_{|f|,a}(s)\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a-\frac{k}{2m}}}du\leq\frac{\|f^{(k)}\|_{C^{0}}}{k!}(2+|\log s|).$ ∎ ### 8.2. Quantities $C^{\pm}_{\alpha}(\varphi_{f})$ In this section, we develop some tools that help compute the non-vanishing quantities $C^{\pm}_{\alpha}(\varphi_{f})$. ###### Definition 6. For every real $\beta$ let $C_{\beta}({\mathbb{R}}_{>0}^{2})$ be the space of continuous homogenous functions $H:{\mathbb{R}}_{>0}\times{\mathbb{R}}_{>0}\to{\mathbb{C}}$ of degree $\beta$ such that $H(u,s)=O(\|(u,s)\|^{\beta})$. For every real $a$ such that $2a-\beta>1$ and $H\in C_{\beta}({\mathbb{R}}_{>0}^{2})$ let $\Gamma_{a}(H):=\int_{0}^{+\infty}\frac{H(x,1)}{(x^{2}+1)^{a}}\,dx.$ As ${H(x,1)}/{(x^{2}+1)^{a}}=O(1/(1+x)^{2a-\beta})$, the quantity $\Gamma_{a}(H)$ is well-defined and (8.13) $s^{2a-\beta-1}\int_{0}^{1}\frac{H(u,s)}{(u^{2}+s^{2})^{a}}\,du=\int_{0}^{1}\frac{H(u/s,1)}{((u/s)^{2}+1)^{a}}\,\frac{du}{s}=\int_{0}^{1/s}\frac{H(x,1)}{(x^{2}+1)^{a}}\,dx\to\Gamma_{a}(H).$ Note that if $H(u,s)=\tilde{H}(u,s)/\|(u,s)\|^{\max\\{-\beta,0\\}}$ and $\tilde{H}:{\mathbb{R}}_{\geq 0}\times{\mathbb{R}}_{\geq 0}\to{\mathbb{C}}$ is continuous homogenous of degree $\max\\{\beta,0\\}$ then $H\in C_{\beta}({\mathbb{R}}_{>0}^{2})$. ###### Lemma 8.7. Assume that $2a-\beta>1$, $H\in C_{\beta}({\mathbb{R}}_{>0}^{2})$ is of class $C^{1}$ and $\tfrac{\partial H}{\partial y}\in C_{\beta-1}({\mathbb{R}}_{>0}^{2})$. Then (8.14) $\displaystyle\Gamma_{a}(\tfrac{\partial H}{\partial y})=2a\Gamma_{a+1}(H)-(2a-1-\beta)\Gamma_{a}(H).$ ###### Proof. 
Note that for every $y>0$, $\Gamma_{a}(H)=\int_{0}^{+\infty}\frac{H(x,1)}{(x^{2}+1)^{a}}\,dx=\int_{0}^{+\infty}\frac{H(x/y,1)}{(x^{2}/y^{2}+1)^{a}}\,\frac{dx}{y}=y^{2a-1-\beta}\int_{0}^{+\infty}\frac{H(x,y)}{(x^{2}+y^{2})^{a}}\,dx.$ As $H\in C_{\beta}({\mathbb{R}}_{>0}^{2})$ and $\tfrac{\partial H}{\partial y}\in C_{\beta-1}({\mathbb{R}}_{>0}^{2})$, by differentiating with respect to $y$, we get $\displaystyle-(2a-1-\beta)y^{\beta-2a}\Gamma_{a}(H)$ $\displaystyle=\frac{d}{dy}y^{1+\beta-2a}\Gamma_{a}(H)=\frac{d}{dy}\int_{0}^{+\infty}\frac{H(x,y)}{(x^{2}+y^{2})^{a}}\,dx$ $\displaystyle=\int_{0}^{+\infty}\frac{\frac{\partial H}{\partial y}(x,y)}{(x^{2}+y^{2})^{a}}\,dx-2ay\int_{0}^{+\infty}\frac{H(x,y)}{(x^{2}+y^{2})^{a+1}}\,dx.$ Taking $y=1,$ this yields (8.14). ∎ Recall that, by Lemma 8.5, for any $C^{1}$-map $f:D\to{\mathbb{R}}$ with $f(0,0)\neq 0$ and for every $1/2\leq a\leq 1$ we have $\lim_{s\to 0}s^{2a}\varphi^{\prime}(s)=-2af(0,0)\Gamma_{a+1}.$ In the next preliminary lemmas, we find the precise asymptotics of $\varphi^{\prime}$ at zero, also in the case when the function $f$ and some of its derivatives vanish at the saddle. This plays a crucial role in calculating the quantity $C^{\pm}_{\alpha}(\varphi_{f})$ explicitly in §9. ###### Lemma 8.8. Let $k$ be an integer such that $0\leq k\leq m(2a-1)$. Suppose that $f:D\to{\mathbb{R}}$ is of class $C^{k+1}$ and $f^{(j)}(0,0)=0$ for $0\leq j<k$. Then (8.15) $\lim_{s\to 0}s^{2a-\frac{k}{m}}\varphi^{\prime}(s)=\sum_{j=0}^{k}\binom{k}{j}\frac{\partial^{k}f}{\partial x^{j}\partial y^{k-j}}(0,0)\Gamma_{a}^{k,j}(G),$ where $\displaystyle\Gamma_{a}^{k,j}(G)=-\frac{2a-1-\frac{k}{m}}{k}\Gamma_{a}(G_{1}^{j}G_{2}^{k-j})-2a\frac{k-1}{k}\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j})\text{ if }k\geq 1$ and $\Gamma_{a}^{0,0}(G)=-2a\Gamma_{a+1}$. ###### Proof. If $k=0$, then (8.15) follows directly from Lemma 8.5. Assume that $k\geq 1$.
Then, by assumptions, $\displaystyle f(G(u,s))$ $\displaystyle=\sum_{j=0}^{k}\binom{k}{j}\frac{\partial^{k}f}{\partial x^{j}\partial y^{k-j}}(0,0)G_{1}^{j}(u,s)G_{2}^{k-j}(u,s)+O(\|G(u,s)\|^{k+1})$ $\displaystyle=\sum_{j=0}^{k}\binom{k}{j}\frac{\partial^{k}f}{\partial x^{j}\partial y^{k-j}}(0,0)G_{1}^{j}(u,s)G_{2}^{k-j}(u,s)+O((u^{2}+s^{2})^{\frac{k+1}{2m}}).$ Moreover, $\displaystyle\frac{\partial f}{\partial x}(G(u,s))$ $\displaystyle=\sum_{j=0}^{k-1}\binom{k-1}{j}\frac{\partial^{k}f}{\partial x^{j+1}\partial y^{k-1-j}}(0,0)G_{1}^{j}(u,s)G_{2}^{k-1-j}(u,s)+O((u^{2}+s^{2})^{\frac{k}{2m}}),$ $\displaystyle\frac{\partial f}{\partial y}(G(u,s))$ $\displaystyle=\sum_{j=0}^{k-1}\binom{k-1}{j}\frac{\partial^{k}f}{\partial x^{j}\partial y^{k-j}}(0,0)G_{1}^{j}(u,s)G_{2}^{k-1-j}(u,s)+O((u^{2}+s^{2})^{\frac{k}{2m}}).$ As $\|G(u,s)\|=\|(u,s)\|^{\frac{1}{m}}=(u^{2}+s^{2})^{\frac{1}{2m}}$ and $\|\frac{\partial G}{\partial s}(u,s)\|=\frac{\|(u,s)\|^{-\frac{m-1}{m}}}{m}=\frac{(u^{2}+s^{2})^{-\frac{m-1}{2m}}}{m}$, it follows that (8.16) $\displaystyle\begin{split}\frac{d}{ds}\frac{f(G(u,s))}{(u^{2}+s^{2})^{a}}&=\frac{\frac{\partial f}{\partial x}(G(u,s))\frac{\partial G_{1}}{\partial s}(u,s)+\frac{\partial f}{\partial y}(G(u,s))\frac{\partial G_{2}}{\partial s}(u,s)}{(u^{2}+s^{2})^{a}}-2a\frac{sf(G(u,s))}{(u^{2}+s^{2})^{a+1}}\\\ &=\sum_{j=0}^{k}\binom{k}{j}\frac{\partial^{k}f}{\partial x^{j}\partial y^{k-j}}(0,0)\frac{\big{(}\frac{j}{k}G_{1}^{j-1}G_{2}^{k-j}\frac{\partial G_{1}}{\partial s}+\frac{k-j}{k}G_{1}^{j}G_{2}^{k-1-j}\frac{\partial G_{2}}{\partial s}\big{)}(u,s)}{(u^{2}+s^{2})^{a}}\\\ &\quad-2as\sum_{j=0}^{k}\binom{k}{j}\frac{\partial^{k}f}{\partial x^{j}\partial y^{k-j}}(0,0)\frac{(G_{1}^{j}G_{2}^{k-j})(u,s)}{(u^{2}+s^{2})^{a+1}}\\\ &\quad+O\Big{(}\frac{1}{(u^{2}+s^{2})^{a+\frac{m-k-1}{2m}}}\Big{)}+O\Big{(}\frac{s}{(u^{2}+s^{2})^{a+1-\frac{k+1}{2m}}}\Big{)}.\end{split}$ Since $G_{1}^{j-1}G_{2}^{k-j}\frac{\partial G_{1}}{\partial s}$, $G_{1}^{j}G_{2}^{k-j-1}\frac{\partial 
G_{2}}{\partial s}$ are homogenous of degree $\frac{k-m}{m}<2a-1$ and $G_{1}^{j}G_{2}^{k-j}$ is homogenous of degree $\frac{k}{m}<2(a+1)-1$, by (8.13) we have (8.17) $\displaystyle\begin{split}\lim_{s\to 0}s^{2a-\frac{k}{m}}\int_{0}^{1}\frac{(G_{1}^{j-1}G_{2}^{k-j}\frac{\partial G_{1}}{\partial s})(u,s)}{(u^{2}+s^{2})^{a}}\,du=\Gamma_{a}(G_{1}^{j-1}G_{2}^{k-j}\tfrac{\partial G_{1}}{\partial s}),\\\ \lim_{s\to 0}s^{2a-\frac{k}{m}}\int_{0}^{1}\frac{(G_{1}^{j}G_{2}^{k-j-1}\frac{\partial G_{2}}{\partial s})(u,s)}{(u^{2}+s^{2})^{a}}\,du=\Gamma_{a}(G_{1}^{j}G_{2}^{k-j-1}\tfrac{\partial G_{2}}{\partial s}),\\\ \lim_{s\to 0}s^{2a-\frac{k}{m}}\int_{0}^{1}\frac{s(G_{1}^{j}G_{2}^{k-j})(u,s)}{(u^{2}+s^{2})^{a+1}}\,du=\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j}).\end{split}$ Furthermore, $\displaystyle\lim_{s\to 0}s^{2a-\frac{k+1}{m}}\int_{0}^{1}\frac{1}{(u^{2}+s^{2})^{a+\frac{m-k-1}{2m}}}\,du=\Gamma_{a+\frac{m-k-1}{2m}},$ $\displaystyle\lim_{s\to 0}s^{2a-\frac{k+1}{m}}\int_{0}^{1}\frac{s}{(u^{2}+s^{2})^{a+1-\frac{k+1}{2m}}}=\Gamma_{a+1-\frac{k+1}{2m}}.$ In view of (8.16) and (8.17), this gives the statement (8.15) with $\Gamma_{a}^{k,j}(G)=\frac{j}{k}\Gamma_{a}(G_{1}^{j-1}G_{2}^{k-j}\tfrac{\partial G_{1}}{\partial s})+\frac{k-j}{k}\Gamma_{a}(G_{1}^{j}G_{2}^{k-1-j}\tfrac{\partial G_{2}}{\partial s})-2a\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j}).$ In view of (8.14) applied to $H=G_{1}^{j}G_{2}^{k-j}$ (which is homogenous of degree $k/m$), we get $\displaystyle\Gamma_{a}^{k,j}(G)$ $\displaystyle=\frac{1}{k}\Gamma_{a}\big{(}\tfrac{\partial}{\partial s}(G_{1}^{j}G_{2}^{k-j})\big{)}-2a\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j})$ $\displaystyle=\frac{2a}{k}\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j})-\frac{2a-1-\frac{k}{m}}{k}\Gamma_{a}(G_{1}^{j}G_{2}^{k-j})-2a\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j})$ $\displaystyle=-\frac{2a-1-\frac{k}{m}}{k}\Gamma_{a}(G_{1}^{j}G_{2}^{k-j})-2a\frac{k-1}{k}\Gamma_{a+1}(G_{1}^{j}G_{2}^{k-j}).$ ∎ Let $G_{0}:{\mathbb{R}}^{2}_{\geq 0}\to{\mathbb{C}}$ be the principal branch of the $m$-th root and 
let $\omega$ and $\omega_{0}$ be the principal $2m$-th and $4m$-th root of unity respectively. ###### Lemma 8.9. Let $1/2<a\leq(m-1)/m$, let $1\leq k<(2a-1)m$ and let $a_{0},a_{1},\ldots,a_{k}$ be real numbers not all equal to $0$. Then there exists $0\leq l<2m$ such that $\sum_{j=0}^{k}a_{j}\big{(}\Gamma_{a}^{k,j}(\omega^{l}G_{0})+\Gamma_{a}^{k,j}(\omega^{l+1}\overline{G_{0}})\big{)}\neq 0.$ ###### Proof. Let $\mathfrak{G}:C_{k/m}({\mathbb{R}}_{>0}^{2})\to{\mathbb{C}}$ be the linear operator given by $\mathfrak{G}(H):=-\frac{2a-1-\frac{k}{m}}{k}\Gamma_{a}(H)-2a\frac{k-1}{k}\Gamma_{a+1}(H).$ Then $\overline{\mathfrak{G}(H)}=\mathfrak{G}(\overline{H})$ and $\Gamma_{a}^{k,j}(G)=\mathfrak{G}(G_{1}^{j}G_{2}^{k-j})$. Suppose that, contrary to our claim, (8.18) $\sum_{j=0}^{k}a_{j}\big{(}\Gamma_{a}^{k,j}(\omega^{l}G_{0})+\Gamma_{a}^{k,j}(\omega^{l+1}\overline{G_{0}})\big{)}=0\text{ for every }0\leq l<2m.$ Denote by ${\mathbb{R}}_{k}[x,y]$ the linear space of homogenous polynomials of degree $k$. The space ${\mathbb{R}}_{k}[x,y]$ coincides with the subspace ${\mathbb{C}}_{k,{\mathbb{R}}}[z,\overline{z}]$ of complex homogenous polynomials ${\mathbb{C}}_{k}[z,\overline{z}]$ of the form $\sum_{j=0}^{k}c_{j}z^{j}\overline{z}^{k-j}$ such that $\overline{c}_{j}=c_{k-j}$ for $0\leq j\leq k$. For every $P\in{\mathbb{R}}_{k}[x,y]$ denote by $\widehat{P}\in{\mathbb{C}}_{k,{\mathbb{R}}}[z,\overline{z}]$ the unique polynomial such that $\widehat{P}(z,\overline{z})=P(x,y)$. As $Q(x,y)=\sum_{j=0}^{k}a_{j}x^{j}y^{k-j}\in{\mathbb{R}}_{k}[x,y]$ is non-zero by assumption, the corresponding polynomial $\widehat{Q}(z,\overline{z})=\sum_{j=0}^{k}c_{j}z^{j}\overline{z}^{k-j}$ is also non-zero.
Note that (8.19) $\displaystyle\begin{split}\sum_{j=0}^{k}a_{j}\Gamma_{a}^{k,j}(G)&=\sum_{j=0}^{k}a_{j}\mathfrak{G}(G_{1}^{j}G_{2}^{k-j})={\mathfrak{G}}(Q(G_{1},G_{2}))\\\ &=\mathfrak{G}(\widehat{Q}(G,\overline{G}))=\sum_{j=0}^{k}c_{j}\mathfrak{G}(G^{j}\overline{G}^{k-j}).\end{split}$ As $k<m(2a-1)\leq m-2$, in view of (8.18) and (8.19), for every $0\leq l\leq 2k$ we have $\displaystyle 0$ $\displaystyle=\sum_{j=0}^{k}c_{j}\Big{(}\mathfrak{G}\big{(}(\omega^{l}G_{0})^{j}(\overline{\omega^{l}G_{0}})^{k-j}\big{)}+\mathfrak{G}\big{(}(\omega^{l+1}\overline{G_{0}})^{j}(\overline{\omega^{l+1}\overline{G_{0}}})^{k-j}\big{)}\Big{)}$ $\displaystyle=\omega^{-kl}\sum_{j=0}^{k}\omega^{2jl}c_{j}\big{(}\mathfrak{G}(G_{0}^{j}\overline{G_{0}}^{k-j})+\omega^{2j-k}\mathfrak{G}(\overline{G_{0}}^{j}{G_{0}}^{k-j})\big{)}.$ Let us consider the matrix $\Omega_{k}=[\omega^{2lj}]_{0\leq l,j\leq k}\in M_{(k+1)\times(k+1)}({\mathbb{C}})$. As $k<m$, by the Vandermonde determinant, $\det\Omega_{k}=\prod_{0\leq i<j\leq k}(\omega^{2j}-\omega^{2i})\neq 0.$ This gives $c_{j}\Big{(}\mathfrak{G}\big{(}G_{0}^{j}\overline{G_{0}}^{k-j}\big{)}+\omega^{2j-k}\overline{\mathfrak{G}\big{(}{G^{j}_{0}}\overline{G_{0}}^{k-j}\big{)}}\Big{)}=0\text{ for all }0\leq j\leq k.$ As $c_{k-j}=\overline{c_{j}}$ and $\widehat{Q}$ is non-zero, there exists $0\leq j\leq k/2$ such that $c_{j}\neq 0$. 
Then $\displaystyle 0$ $\displaystyle=\omega_{0}^{k-2j}\Big{(}\mathfrak{G}\big{(}G_{0}^{j}\overline{G_{0}}^{k-j}\big{)}+\omega^{2j-k}\overline{\mathfrak{G}\big{(}{G^{j}_{0}}\overline{G_{0}}^{k-j}\big{)}}\Big{)}$ $\displaystyle=\mathfrak{G}\big{(}(\omega^{-1}_{0}G_{0})^{j}\overline{(\omega^{-1}_{0}G_{0})}^{k-j}\big{)}+\overline{\mathfrak{G}\big{(}{(\omega^{-1}_{0}G_{0})}^{j}\overline{(\omega^{-1}_{0}G_{0})}^{k-j}\big{)}}.$ Hence $0=\Re\mathfrak{G}\big{(}(\omega^{-1}_{0}G_{0})^{j}\overline{(\omega^{-1}_{0}G_{0})}^{k-j}\big{)}=\mathfrak{G}\big{(}|G_{0}|^{2j}\Re((\omega_{0}\overline{G_{0}})^{k-2j})\big{)}.$ Since $G_{0}$ is the principal $m$-th root, for all $u,s>0$ we have $\operatorname{Arg}G_{0}(u,s)\in(0,\tfrac{\pi}{2m})$. Hence $\operatorname{Arg}(\omega_{0}\overline{G_{0}(u,s)})\in(0,\tfrac{\pi}{2m})\text{ and }\operatorname{Arg}(\omega_{0}\overline{G_{0}(u,s)})^{k-2j}\in(0,(k-2j)\tfrac{\pi}{2m})\subset(0,\tfrac{\pi}{2}),$ so $\Re((\omega_{0}\overline{G_{0}}(u,s))^{k-2j})>0$. As $2a-1-\frac{k}{m}>0$, by the definition of $\mathfrak{G}$ we have $\mathfrak{G}\big{(}|G_{0}|^{2j}\Re((\omega_{0}\overline{G_{0}})^{k-2j})\big{)}<0,$ which is a contradiction. ∎ ## 9\. Global properties of the operator $f\mapsto\varphi_{f}$ and correcting operators In this section, we use the results of the previous section to prove an extended version of Theorem 4.1, which is Theorem 9.1. For every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$, let $G_{\sigma}:{\mathbb{C}}\to{\mathbb{C}}$ be the principal $m_{\sigma}$-th root map and let $\omega_{\sigma}$ be the principal $2m_{\sigma}$-th root of unity. For every $0\leq k\leq m_{\sigma}-2$, recall that (9.1) $a(\sigma)=\frac{m_{\sigma}-2}{m_{\sigma}},\quad b(\sigma,k)=\frac{m_{\sigma}-2-k}{m_{\sigma}}.$ Denote by $C^{m}_{\sigma,k}(M)$ the space of maps $f\in C^{m}(M)$ which vanish on $\bigcup_{\sigma^{\prime}\in\mathrm{Fix}(\psi_{\mathbb{R}})\setminus\\{\sigma\\}}U_{\sigma^{\prime}}$ and satisfy $f^{(j)}(\sigma)=0$ for all $0\leq j<k$.
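The constants $\Gamma_{a}$ from §8, which enter the explicit limit values computed below, have the closed Beta/Gamma form $\Gamma_{a}=\frac{\sqrt{\pi}}{2}\Gamma(a-\frac{1}{2})/\Gamma(a)$, and this can be sanity-checked numerically. A minimal Python sketch, where the choice $m_{\sigma}=4$ (hence $a=(m_{\sigma}-1)/m_{\sigma}=3/4$) and the midpoint discretization are illustrative assumptions:

```python
import math

m_sigma = 4
a = (m_sigma - 1) / m_sigma           # the exponent used in Section 9; here a = 3/4

# Closed form from Section 8:  Gamma_a = (sqrt(pi)/2) * Gamma(a - 1/2) / Gamma(a).
closed_form = 0.5 * math.sqrt(math.pi) * math.gamma(a - 0.5) / math.gamma(a)

# Direct value of Gamma_a = int_0^infty dx / (x^2 + 1)^a: the substitution
# x = tan(theta) turns it into int_0^{pi/2} cos(theta)^(2a-2) dtheta, and the
# midpoint rule avoids the integrable endpoint singularity at theta = pi/2.
N = 200_000
h = (math.pi / 2) / N
numeric = h * sum(math.cos((i + 0.5) * h) ** (2 * a - 2) for i in range(N))

assert abs(numeric - closed_form) < 1e-2
```

The midpoint rule converges slowly near the integrable singularity at $\theta=\pi/2$, so the tolerance is deliberately loose.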
###### Theorem 9.1. The following statements hold: (i) For every $f\in C^{m}(M)$ we have $\varphi_{f}\in\operatorname{P_{a}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and $\varphi_{|f|}\in\operatorname{\widehat{P}_{a}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$, where $a=\frac{m-2}{m}$. Moreover, the operator $f\in C^{m}(M)\mapsto\varphi_{f}\in\operatorname{P_{a}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ is bounded. More precisely, there exists $C>0$ such that $\|\varphi_{f}\|_{a}\leq C\|f\|_{C^{1}}$ for every $f\in C^{m}(M)$. (ii) For every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})\cap M^{\prime}$ and $0\leq k\leq m_{\sigma}-2$, there exists $C_{\sigma,k}>0$ such that for every $f\in C^{m}_{\sigma,k}(M)$ we have $\varphi_{f}\in\operatorname{P_{b(\sigma,k)}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$, $\|\varphi_{f}\|_{b(\sigma,k)}\leq C_{\sigma,k}\|f\|_{C^{k+1}}$ and $\varphi_{|f|}\in\operatorname{\widehat{P}_{b(\sigma,k)}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$. (iii) Moreover, if additionally $\psi_{\mathbb{R}}$ is minimal on $M$ (i.e. $M^{\prime}=M$), then for every $f\in C^{m}_{\sigma,k}(M)$ and for every $\alpha\in\mathcal{A}$ the quantity $C^{\pm}_{\alpha}(\varphi_{f})$ is zero or is of the form (9.2) $-\frac{1}{m^{2}_{\sigma}}\sum_{j=0}^{k}\binom{k}{j}\partial_{\sigma}^{(j,k-j)}(f)\Big{(}\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{l}G_{\sigma}\big{)}+\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{l+1}\overline{G_{\sigma}}\big{)}\Big{)}$ for some $0\leq l<2m_{\sigma}$. On the other hand, for every $0\leq l<2m_{\sigma}$, there exists $\alpha\in\mathcal{A}$ such that $C^{\pm}_{\alpha}(\varphi_{f})$ is of the form (9.2). ###### Proof. Without loss of generality we can assume that $\psi_{\mathbb{R}}$ is minimal on $M$. The proof of (i) and (ii) in the general case proceeds in the same way up to some complications in notation. 
Choose $\varepsilon>0$ such that $D_{\sigma,\varepsilon}\subset U_{\sigma}$ for any $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$, where $D_{\sigma,\varepsilon}$ is a closed neighborhood of $\sigma$ defined in §8. Denote by $g:I\to{\mathbb{R}}_{>0}\cup\\{+\infty\\}$ the first return time map. Since the flow $\psi_{\mathbb{R}}$ is smooth and $f$ is of class $C^{m}$, both $g$ and $\varphi_{f}$ belong to $C^{1}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$. Moreover, for every $x\in\bigcup_{\alpha\in\mathcal{A}}IntI_{\alpha}$ we have $|\varphi_{f}^{\prime}(x)|\leq|g^{\prime}(x)|\|f\|_{C^{0}}+\|f^{\prime}\|_{C^{0}}\int_{0}^{g(x)}\Big{\|}\frac{d\psi_{t}(x)}{dx}\Big{\|}dt.$ If additionally $\operatorname{dist}(x,End(T))\geq\varepsilon$ then $|\varphi_{f}^{\prime}(x)|\leq C_{\varepsilon}\|f\|_{C^{1}}$, where $C_{\varepsilon}:=\max\Big{\\{}|g^{\prime}(x)|+\int_{0}^{g(x)}\Big{\|}\frac{d\psi_{t}(x)}{dx}\Big{\|}dt:x\in I,\operatorname{dist}(x,End(T))\geq\varepsilon\Big{\\}}<+\infty.$ Let $e\in End(T)$ and suppose that $e$ is the first backward intersection of a separatrix incoming to a fixed point $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$. For every $x\in(e-\varepsilon,e)\cup(e,e+\varepsilon)$, let $0<\tau_{1}(x)<\tau_{2}(x)<g(x)$ be the entrance time ($\tau_{1}(x)$) into and the exit time ($\tau_{2}(x)$) from $D_{\sigma,\varepsilon}$ of the orbit segment $\\{\psi_{t}x:0\leq t\leq g(x)\\}$. Let us consider $\varphi_{f}^{1},\varphi_{f}^{2}:(e-\varepsilon,e)\cup(e,e+\varepsilon)\to{\mathbb{R}}$ given by $\varphi_{f}^{1}(x)=\int_{\tau_{1}(x)}^{\tau_{2}(x)}f(\psi_{t}x)\,dt,\quad\varphi_{f}^{2}(x)=\int_{0}^{\tau_{1}(x)}f(\psi_{t}x)\,dt+\int_{\tau_{2}(x)}^{g(x)}f(\psi_{t}x)\,dt.$ Of course, $\varphi_{f}(x)=\varphi_{f}^{1}(x)+\varphi_{f}^{2}(x)$ for every $x\in(e-\varepsilon,e)\cup(e,e+\varepsilon)$.
In view of Lemma 8.2 and Remark 8.3, there exists $0\leq l<m_{\sigma}$ such that for every $s\in(0,\varepsilon)$ we have $\displaystyle m_{\sigma}^{2}\varphi_{f}^{1}(e+s)$ $\displaystyle=\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega_{\sigma}^{2l}G_{\sigma}(u,s))}{(u^{2}+s^{2})^{\frac{m_{\sigma}-1}{m_{\sigma}}}}du+\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega_{\sigma}^{2l+1}\overline{G_{\sigma}}(u,s))}{(u^{2}+s^{2})^{\frac{m_{\sigma}-1}{m_{\sigma}}}}du,$ $\displaystyle m_{\sigma}^{2}\varphi_{f}^{1}(e-s)$ $\displaystyle=\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega_{\sigma}^{2l+1}{G_{\sigma}}(u,s))}{(u^{2}+s^{2})^{\frac{m_{\sigma}-1}{m_{\sigma}}}}du+\int_{0}^{\varepsilon}\frac{(f\cdot V)(\omega_{\sigma}^{2l+2}\overline{G_{\sigma}}(u,s))}{(u^{2}+s^{2})^{\frac{m_{\sigma}-1}{m_{\sigma}}}}du.$ Note that $2\big{(}\tfrac{m_{\sigma}-1}{m_{\sigma}}\big{)}-1=a(\sigma)$. In view of (8.9), Lemma 8.5 and Remark 8.4, it follows that for every $x\in(e-\varepsilon,e)\cup(e,e+\varepsilon)$ we have (9.3) $\displaystyle|x-e|^{a(\sigma)}\varphi_{|f|}^{1}(x)$ $\displaystyle\leq\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}\|f\cdot V\|_{C^{0}}\text{ if }m_{\sigma}>2,$ (9.4) $\displaystyle\varphi_{|f|}^{1}(x)$ $\displaystyle\leq 2\|f\cdot V\|_{C^{0}}(2+|\log(\varepsilon s)|)\text{ if }m_{\sigma}=2,$ (9.5) $\displaystyle|x-e|^{a(\sigma)+1}|(\varphi_{f}^{1})^{\prime}(x)|$ $\displaystyle\leq\frac{4\varepsilon^{-1/m_{\sigma}}\Gamma_{\frac{3(m_{\sigma}-1)}{2m_{\sigma}}}}{m^{2}_{\sigma}}\|f\cdot V\|_{C^{1}}\leq\frac{\varepsilon^{-1/m}}{m_{\sigma}^{2}}\Gamma_{3/4}\|V\|_{C^{1}}\|f\|_{C^{1}},$ (9.6) $\displaystyle\lim_{x\to e^{\pm}}|x-e|^{a(\sigma)+1}(\varphi_{f}^{1})^{\prime}(x)$ $\displaystyle=\mp\frac{2(a(\sigma)+1)}{m_{\sigma}^{2}}f(\sigma)V(\sigma)\Gamma_{\frac{2m_{\sigma}-1}{m_{\sigma}}}.$ If additionally $f^{(j)}(\sigma)=0$ for all $0\leq j<k$ ($1\leq k\leq m_{\sigma}-2$), then by Lemma 8.6, we have (9.7) $\displaystyle|x-e|^{a(\sigma)-\frac{k}{m_{\sigma}}+1}|(\varphi_{f}^{1})^{\prime}(x)|$ 
$\displaystyle\leq\frac{2\Gamma_{1}}{m_{\sigma}^{2}(k-1)!}\|f\cdot V\|_{C^{k}}\leq\frac{2\Gamma_{1}}{m_{\sigma}^{2}}\|V\|_{C^{k}}\|f\|_{C^{k}},$ (9.8) $\displaystyle|x-e|^{a(\sigma)-\frac{k}{m_{\sigma}}}\varphi_{|f|}^{1}(x)$ $\displaystyle\leq\frac{2\Gamma_{\frac{2m_{\sigma}-1-k}{2m_{\sigma}}}}{m_{\sigma}^{2}k!}\|f\cdot V\|_{C^{k}},$ and, by Lemma 8.8, (9.9) $\displaystyle\begin{split}\lim_{x\to e^{+}}&(x-e)^{a(\sigma)-\frac{k}{m_{\sigma}}+1}(\varphi_{f}^{1})^{\prime}(x)\\\ &=\frac{1}{m^{2}_{\sigma}}\sum_{j=0}^{k}\binom{k}{j}\partial_{\sigma}^{(j,k-j)}(f)\Big{(}\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l}G_{\sigma}\big{)}+\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l+1}\overline{G_{\sigma}}\big{)}\Big{)},\end{split}$ (9.10) $\displaystyle\begin{split}\lim_{x\to e^{-}}&(e-x)^{a(\sigma)-\frac{k}{m_{\sigma}}+1}(\varphi_{f}^{1})^{\prime}(x)\\\ &=-\frac{1}{m^{2}_{\sigma}}\sum_{j=0}^{k}\binom{k}{j}\partial_{\sigma}^{(j,k-j)}(f)\Big{(}\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l+1}G_{\sigma}\big{)}+\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l+2}\overline{G_{\sigma}}\big{)}\Big{)}.\end{split}$ Since $\tau_{1}$ and $g-\tau_{2}$ can be $C^{1}$-extended to the intervals $[e-\varepsilon,e]$ and $[e,e+\varepsilon]$, for every $x\in(e-\varepsilon,e)\cup(e,e+\varepsilon)$ we have $|(\varphi_{f}^{2})^{\prime}(x)|\leq C_{\sigma,\varepsilon}\|f\|_{C^{1}}$, where $\displaystyle C_{\sigma,\varepsilon}:$ $\displaystyle=\max\Big{\\{}|\tau_{1}^{\prime}(x)|+\int_{0}^{\tau_{1}(x)}\Big{\|}\frac{d\psi_{t}(x)}{dx}\Big{\|}dt:0<|x-e|<\varepsilon\Big{\\}}$ $\displaystyle\quad+\max\Big{\\{}|(g-\tau_{2})^{\prime}(x)|+\int_{0}^{g(x)-\tau_{2}(x)}\Big{\|}\frac{d\psi_{-t}(Tx)}{dx}\Big{\|}dt:0<|x-e|<\varepsilon\Big{\\}}<+\infty.$ As $a(\sigma)+1=\frac{2(m_{\sigma}-1)}{m_{\sigma}}\leq\frac{2(m-1)}{m}=a+1$ for every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$, in view of (9.3)-(9.6), it follows 
that $\varphi_{f}\in\operatorname{P_{a}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and $\varphi_{|f|}\in\operatorname{\widehat{P}_{a}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ for every $f\in C^{m}(M)$ and $p_{a}(\varphi_{f})\leq\Big{(}\sum_{\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})}\big{(}\varepsilon^{-1/m}\Gamma_{3/4}\|V\|_{C^{1}}+m_{\sigma}\varepsilon^{a+1}C_{\sigma,\varepsilon}\big{)}+|I|^{a+1}C_{\varepsilon}\Big{)}\|f\|_{C^{1}}.$ As $\|\varphi_{f}\|_{L^{1}}\leq\|f\|_{L^{1}}\leq\mu(M)\|f\|_{C^{0}}$, there exists $C>0$ such that $\|\varphi_{f}\|_{a}\leq C\|f\|_{C^{1}}$ for every $f\in C^{m}(M)$. Since $a(\sigma)-\frac{k}{m_{\sigma}}=b(\sigma,k)$, applying similar arguments for functions $f\in C^{m}_{\sigma,k}(M)$ and using (9.7)-(9.10) (instead of (9.3)-(9.6)), we obtain $\varphi_{f}\in\operatorname{P_{b(\sigma,k)}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$, $\varphi_{|f|}\in\operatorname{\widehat{P}_{b(\sigma,k)}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and the existence of $C_{\sigma,k}>0$ such that $\|\varphi_{f}\|_{b(\sigma,k)}\leq C_{\sigma,k}\|f\|_{C^{k}}$ for every $f\in C^{m}_{\sigma,k}(M)$. 
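As a side check (not part of the proof), the exponent bookkeeping above can be verified mechanically: from $a(\sigma)+1=\tfrac{2(m_{\sigma}-1)}{m_{\sigma}}$ we get $a(\sigma)=\tfrac{m_{\sigma}-2}{m_{\sigma}}$, the identity $2\big{(}\tfrac{m_{\sigma}-1}{m_{\sigma}}\big{)}-1=a(\sigma)$ follows, and $b(\sigma,k)=a(\sigma)-\tfrac{k}{m_{\sigma}}$ decreases to $0$ at $k=m_{\sigma}-2$, matching the space $\operatorname{P_{0}G}$ used later for maximally flat observables. A quick sketch in exact arithmetic:

```python
from fractions import Fraction

def a(m):
    # a(sigma) = (m-2)/m, equivalent to a(sigma) + 1 = 2(m-1)/m
    return Fraction(m - 2, m)

def b(m, k):
    # b(sigma, k) = a(sigma) - k/m
    return a(m) - Fraction(k, m)

for m in range(2, 20):
    # identity used above: 2*(m-1)/m - 1 = a(sigma)
    assert 2 * Fraction(m - 1, m) - 1 == a(m)
    # a(sigma) + 1 = 2(m-1)/m, compared with the maximal a + 1 = 2(m-1)/m
    assert a(m) + 1 == Fraction(2 * (m - 1), m)
    for k in range(0, m - 1):
        assert b(m, k) == Fraction(m - 2 - k, m)
    # at maximal flatness k = m-2 the exponent vanishes
    assert b(m, m - 2) == 0
print("exponent identities verified")
```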
Moreover, (9.9) applied to $e=l_{\alpha}$ and (9.10) applied to $e=r_{\alpha}$, combined with the inequality $|(\varphi_{f}^{2})^{\prime}(x)|\leq C_{\sigma,\varepsilon}\|f\|_{C^{1}}$, yield either $\displaystyle C^{+}_{\alpha}(\varphi_{f})$ $\displaystyle=-\frac{1}{m^{2}_{\sigma}}\sum_{j=0}^{k}\binom{k}{j}\partial_{\sigma}^{(j,k-j)}(f)\Big{(}\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l}G_{\sigma}\big{)}+\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l+1}\overline{G_{\sigma}}\big{)}\Big{)},$ $\displaystyle C^{-}_{\alpha}(\varphi_{f})$ $\displaystyle=-\frac{1}{m^{2}_{\sigma}}\sum_{j=0}^{k}\binom{k}{j}\partial_{\sigma}^{(j,k-j)}(f)\Big{(}\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l+1}G_{\sigma}\big{)}+\Gamma_{\frac{m_{\sigma}-1}{m_{\sigma}}}^{k,j}\big{(}\omega_{\sigma}^{2l+2}\overline{G_{\sigma}}\big{)}\Big{)}$ or $C^{\pm}_{\alpha}(\varphi_{f})=0$ whenever the forward semi-orbit of $e$ returns to $I$ for the first time to one of its ends without visiting singular points. The latter option appears exactly twice. On the other hand, since every incoming separatrix crosses $I$, it follows that every number of the form (9.2) is obtained as $C^{\pm}_{\alpha}(\varphi_{f})$ for some $\alpha$. ∎ ### 9.1. Correcting operators for observables Let us consider the basis $h_{1},\ldots,h_{g}$ of $U_{g+1}\subset H(\pi^{(0)})$, defined in Remark 3.1, such that $\lim_{k\rightarrow\infty}\frac{1}{k}\left\|Q(k)h_{i}\right\|=\lambda_{i}$ for $1\leq i\leq g$. Given $0\leq b<1$, we choose $2\leq j\leq g+1$ such that $\lambda_{j}\leq\lambda_{1}b<\lambda_{j-1}$. 
Since $h_{1},\ldots,h_{j-1}$ is a basis of $U_{j}$ (see also Remark 3.1) and the correction operator $\mathfrak{h}_{j}:\operatorname{P_{b}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to U_{j}$ (defined in the proof of Theorem 6.1) is bounded, for every $1\leq i<j$ there exists a bounded operator $d_{b,i}:\operatorname{P_{b}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\rightarrow{\mathbb{R}}$ such that (9.11) $\mathfrak{h}_{j}(\varphi)=\sum_{i=1}^{j-1}d_{b,i}(\varphi)h_{i}\quad\text{for every }\varphi\in\operatorname{P_{b}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}).$ By Lemma 7.4 in [5], for every $h_{i}\in U_{g+1}\subset H(\pi^{(0)})$ ($1\leq i\leq g$) there exists $f_{i}\in C^{\infty}(M)$ such that $\varphi_{f_{i}}=h_{i}$ and $f_{i}$ vanishes on $\bigcup_{\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})}U_{\sigma}$. Finally, for every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})\cap M^{\prime}$ and any $0\leq k\leq m_{\sigma}-2$ we define a _correcting operator_ $\mathfrak{R}_{\sigma,k}:C^{m}_{\sigma,k}\to C^{m}_{\sigma,k}$ given by (9.12) $\mathfrak{R}_{\sigma,k}(\xi)=\xi-\sum_{i=1}^{j-1}d_{b(\sigma,k),i}(\varphi_{\xi})f_{i}.$ The correcting operator does not change the observable near any of the fixed points, but it removes the influence of the Lyapunov exponents of the K-Z cocycle on the asymptotics of Birkhoff integrals. ###### Proposition 9.2. Let $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})\cap M^{\prime}$ and $0\leq k\leq m_{\sigma}-2$. Then for every $\xi\in C^{m}_{\sigma,k}$ we have $\displaystyle\limsup_{T\rightarrow\infty}\frac{\log{\big{|}\int_{0}^{T}\mathfrak{R}_{\sigma,k}(\xi)(\psi_{t}x)\,dt\big{|}}}{\log T}\leq b(\sigma,k)\text{ for a.e. }x\in M^{\prime};$ $\displaystyle\limsup_{T\rightarrow\infty}\frac{\log{\big{|}\int_{0}^{T}\mathfrak{R}_{\sigma,k}(\xi)\circ\psi_{t}\,dt\big{|}}_{L^{1}(M^{\prime})}}{\log T}\leq b(\sigma,k).$ ###### Proof. 
In view of Theorem 9.1, $\varphi_{\mathfrak{R}_{\sigma,k}(\xi)}\in\operatorname{P_{b}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and $\varphi_{|\mathfrak{R}_{\sigma,k}(\xi)|}\in{\operatorname{\widehat{P}_{b}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ for $b=b(\sigma,k)$. Therefore, by the definition of the correcting operator, $\displaystyle\mathfrak{h}_{j}(\varphi_{\mathfrak{R}_{\sigma,k}(\xi)})$ $\displaystyle=\mathfrak{h}_{j}(\varphi_{\xi})-\sum_{i=1}^{j-1}{d_{b,i}(\varphi_{\xi})}\mathfrak{h}_{j}(\varphi_{f_{i}})=\mathfrak{h}_{j}(\varphi_{\xi})-\sum_{i=1}^{j-1}{d_{b,i}(\varphi_{\xi})}\mathfrak{h}_{j}(h_{i})$ $\displaystyle=\mathfrak{h}_{j}(\varphi_{\xi})-\sum_{i=1}^{j-1}d_{b,i}(\varphi_{\xi})h_{i}=0.$ Hence, by Corollary 6.2, for every $\tau>0$ we have $\|\mathcal{M}^{(k)}(S(k)\varphi_{\mathfrak{R}_{\sigma,k}(\xi)})\|=O(e^{(\lambda_{1}b+\tau)k})$. Finally, Theorems 7.8 and 7.10, applied to $g=\varphi_{1}$ ($a=(m-2)/2$) and $f=\mathfrak{R}_{\sigma,k}(\xi)$, complete the proof. ∎ Note that, in view of (1.11), the inequalities are optimal whenever $\psi_{\mathbb{R}}$ on $M$ is minimal. Hence the correction provided by the operator $\mathfrak{R}_{\sigma,k}$ is optimal. ## 10\. Complete power deviation spectrum of Birkhoff integrals In this section, by combining the previous results, we prove the full deviation spectrum of Birkhoff integrals for locally Hamiltonian flows. ###### Proof of Theorem 1.1. The proof splits into five parts. 
_Part I: Deviations around fixed points._ For every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$ and $\alpha\in{\mathbb{Z}}_{\geq 0}\times{\mathbb{Z}}_{\geq 0}$ with $|\alpha|<m_{\sigma}-2$, choose a map $\bar{\xi}_{\sigma}^{\alpha}\in C^{m}(M)$ supported on the neighborhood $U_{\sigma}$ of the fixed point $\sigma$ so that $\partial_{\sigma}^{\beta}(\bar{\xi}_{\sigma}^{\alpha})=\delta_{\alpha\beta}$ for all $\beta\in{\mathbb{Z}}_{\geq 0}\times{\mathbb{Z}}_{\geq 0}$ with $|\beta|\leq m$, where $\delta_{\alpha\beta}$ is the Kronecker delta, i.e. $\delta_{\alpha\beta}=1$ if $\alpha=\beta$ and $\delta_{\alpha\beta}=0$ if $\alpha\neq\beta$. By definition, $\bar{\xi}_{\sigma}^{\alpha}\in C_{\sigma,|\alpha|}^{m}(M)$. Let $\xi_{\sigma}^{\alpha}:=\mathfrak{R}_{\sigma,|\alpha|}(\bar{\xi}_{\sigma}^{\alpha})\in C_{\sigma,|\alpha|}^{m}(M)$ and $c_{\sigma,\alpha}(T,x):=\int_{0}^{T}\xi_{\sigma}^{\alpha}(\psi_{t}(x))dt$. Then, in view of Proposition 9.2 applied to $\xi=\bar{\xi}_{\sigma}^{\alpha}$, for every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})\cap M^{\prime}$ and $\alpha\in{\mathbb{Z}}_{\geq 0}\times{\mathbb{Z}}_{\geq 0}$ with $|\alpha|<m_{\sigma}-2$ we have (1.5) and (1.6). Recall that, by Lemma 8.1, the corresponding distribution $\partial_{\sigma}^{\alpha}$ is bounded and $\psi_{\mathbb{R}}$-invariant. _Part II: Construction of the remainder._ Let us consider $f_{r}\in C^{m}(M)$ given by (10.1) $f=\sum_{\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})}\sum_{\begin{subarray}{c}\alpha\in{\mathbb{Z}}^{2}_{\geq 0}\\\ |\alpha|<m_{\sigma}-2\end{subarray}}\partial^{\alpha}_{\sigma}(f)\xi_{\sigma}^{\alpha}+f_{r}.$ In view of Theorem 9.1, we have $\varphi_{f_{r}}\in\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$. 
Indeed, let $\\{\chi_{\sigma}:\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})\\}\subset C^{m}(M)$ be a partition of unity such that for any pair of fixed points $(\sigma,\sigma^{\prime})$ we have $\chi_{\sigma}(x)=\delta_{\sigma\sigma^{\prime}}$ for all $x\in U_{\sigma^{\prime}}$. Then $\displaystyle f_{r}=\sum_{\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})}\Big{(}f\cdot\chi_{\sigma}-\sum_{|\alpha|<m_{\sigma}-2}\partial^{\alpha}_{\sigma}(f)\xi_{\sigma}^{\alpha}\Big{)}$ and $f_{\sigma}:=f\cdot\chi_{\sigma}-\sum_{|\alpha|<m_{\sigma}-2}\partial^{\alpha}_{\sigma}(f)\xi_{\sigma}^{\alpha}$ vanishes on $\bigcup_{\sigma^{\prime}\in\mathrm{Fix}(\psi_{\mathbb{R}})\setminus\\{\sigma\\}}U_{\sigma^{\prime}}$ and for every $\beta\in{\mathbb{Z}}^{2}_{\geq 0}$ with $|\beta|<m_{\sigma}-2$ we have $\partial_{\sigma}^{\beta}(f_{\sigma})=\partial_{\sigma}^{\beta}(f\cdot\chi_{\sigma})-\sum_{|\alpha|<m_{\sigma}-2}\partial^{\alpha}_{\sigma}(f)\partial_{\sigma}^{\beta}(\xi_{\sigma}^{\alpha})=\partial_{\sigma}^{\beta}(f)-\sum_{|\alpha|<m_{\sigma}-2}\delta_{\alpha\beta}\partial^{\alpha}_{\sigma}(f)=0.$ Therefore, $(f_{\sigma})^{(l)}(\sigma)=0$ for all $0\leq l<m_{\sigma}-2$. As $f_{\sigma}\in C^{m}_{\sigma,m_{\sigma}-2}$, in view of Theorem 9.1, it follows that for every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})\cap M^{\prime}$ we have $\varphi_{f_{\sigma}}\in\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and $\|\varphi_{f_{\sigma}}\|_{0}\leq C\|f_{\sigma}\|_{C^{m_{\sigma}-1}}\leq C\|f\|_{C^{m_{\sigma}-1}}\|\chi_{\sigma}\|_{C^{m_{\sigma}-1}}+C\sum_{|\alpha|<m_{\sigma}-2}|\partial^{\alpha}_{\sigma}(f)|\|\xi_{\sigma}^{\alpha}\|_{C^{m_{\sigma}-1}}.$ By definition, for every $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$ and $\alpha\in{\mathbb{Z}}_{\geq 0}^{2}$ there exists $C_{\sigma,\alpha}>0$ such that $|\partial^{\alpha}_{\sigma}(f)|\leq C_{\sigma,\alpha}\|f\|_{C^{|\alpha|}}$ for every $f\in C^{m}(M)$. 
It follows that there exists another $C>0$ such that $\|\varphi_{f_{r}}\|_{0}\leq C\|f\|_{C^{m-1}}$ for every $f\in C^{m}(M)$. _Part III: Deviation of the remainder $f_{r}$._ Applying Theorem 6.1 to $a=0$, we have a bounded (correction) operator $\mathfrak{h}_{g+1}:\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\rightarrow U_{g+1}\subset H(\pi^{(0)})$ such that ${\mathfrak{h}_{g+1}}(h)=h$ for every $h\in U_{g+1}$. Let us consider bounded operators ${d_{i}}:\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\rightarrow{\mathbb{R}}$ for $1\leq i\leq g$ such that (10.2) $\mathfrak{h}_{g+1}(\varphi)=\sum_{i=1}^{g}{d_{i}}(\varphi)h_{i}\quad\text{for every }\varphi\in\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}).$ Let $D_{i}:C^{m}(M)\rightarrow{\mathbb{R}}$, $1\leq i\leq g$, be operators given by $D_{i}({f})=d_{i}(\varphi_{f_{r}}),\text{ for }f\in C^{m}(M).$ Since $C^{m}(M)\ni f\mapsto\varphi_{f_{r}}\in\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and ${d_{i}}:\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\rightarrow{\mathbb{R}}$ are bounded linear operators, the operators $D_{i}$ are also bounded. Recall that we have $f_{i}\in C^{\infty}(M)$ such that $\varphi_{f_{i}}=h_{i}$ for $1\leq i\leq g$. Let us consider $f_{e}\in C^{m}(M)$ given by (10.3) $f_{r}=\sum_{i=1}^{g}D_{i}(f)f_{i}+f_{e}.$ For every $1\leq i\leq g$ let $u_{i}(T,x):=\int_{0}^{T}f_{i}(\psi_{t}(x))dt$. As $\lim_{k\rightarrow\infty}\frac{1}{k}\left\|Q(k)h_{i}\right\|=\lambda_{i}$ for $1\leq i\leq g$, in view of Proposition 7.12, we have (1.7) and (1.8) with $\nu_{i}=\lambda_{i}/\lambda_{1}$. 
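The mechanism behind (10.2)-(10.3), subtracting the $U_{g+1}$-component coordinate by coordinate, is, in a finite-dimensional caricature, just removal of the component along a fixed direction. The following toy sketch is purely illustrative (the coefficient `d` mimics a single $d_{i}(\varphi)$, and the actual operator acts on a function space and is not an orthogonal projection; only the cancellation pattern is the same):

```python
# Toy one-dimensional model of the correction in Part III (illustrative only):
# h plays the role of a single h_i; subtracting d*h removes the h-component,
# mirroring the vanishing of the corrected element under the projection.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def correct(phi, h):
    """Remove the h-component of phi (finite-dimensional caricature of f_r -> f_e)."""
    d = dot(phi, h) / dot(h, h)
    return d, [p - d * x for p, x in zip(phi, h)]

h = [1.0, 2.0, -1.0]
phi = [3.0, 0.5, 1.0]
d, phi_e = correct(phi, h)
assert dot(phi_e, h) == 0.0                # corrected element: no h-component
assert all(p == e + d * x for p, e, x in zip(phi, phi_e, h))
```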
By the definition of $f_{e}$, we have $\displaystyle\varphi_{f_{e}}=\varphi_{f_{r}}-\sum_{i=1}^{g}D_{i}(f)\varphi_{f_{i}}=\varphi_{f_{r}}-\sum_{i=1}^{g}D_{i}(f)h_{i}.$ As $\varphi_{f_{r}}\in\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$, we have $\varphi_{f_{e}}\in\operatorname{P_{0}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ and $\displaystyle\mathfrak{h}_{g+1}(\varphi_{f_{e}})$ $\displaystyle=\mathfrak{h}_{g+1}(\varphi_{f_{r}})-\sum_{i=1}^{g}D_{i}(f)\mathfrak{h}_{g+1}(h_{i})=\mathfrak{h}_{g+1}(\varphi_{f_{r}})-\sum_{i=1}^{g}{d_{i}}(\varphi_{f_{r}})h_{i}=0.$ By Corollary 6.2, it follows that $\|\mathcal{M}^{(k)}(S(k)\varphi_{f_{e}})\|=O(e^{\tau k})\text{ for every }\tau>0.$ Let $err(f,T,x)=\int_{0}^{T}f_{e}(\psi_{t}(x))dt$. If $f_{e}\neq 0$ then, in view of Proposition 7.13 and Remark 7.14, this gives (1.9) and (1.10). By (10.1) and (10.3), we have (10.4) $f=\sum_{\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})}\sum_{\begin{subarray}{c}\alpha\in{\mathbb{Z}}^{2}_{\geq 0}\\\ |\alpha|<m_{\sigma}-2\end{subarray}}\partial^{\alpha}_{\sigma}(f)\xi_{\sigma}^{\alpha}+\sum_{i=1}^{g}D_{i}(f)f_{i}+f_{e}.$ Passing to the Birkhoff integrals, we obtain (1.4). _Part IV: Invariance of distributions._ We need to show that the distributions $D_{i}$ for $1\leq i\leq g$ are $\psi_{\mathbb{R}}$-invariant. 
By (10.4), for every $s\in{\mathbb{R}}$ we have $f\circ\psi_{s}=\sum_{\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})}\sum_{\begin{subarray}{c}\alpha\in{\mathbb{Z}}^{2}_{\geq 0}\\\ |\alpha|<m_{\sigma}-2\end{subarray}}\partial^{\alpha}_{\sigma}(f\circ\psi_{s})\xi_{\sigma}^{\alpha}+\sum_{i=1}^{g}D_{i}(f\circ\psi_{s})f_{i}+(f\circ\psi_{s})_{e}.$ Since $\partial^{\alpha}_{\sigma}(f\circ\psi_{s})=\partial^{\alpha}_{\sigma}(f)$ (see Lemma 8.1), it follows that $\bar{f}:=\sum_{i=1}^{g}D_{i}(f\circ\psi_{s}-f)f_{i}=f\circ\psi_{s}-f+f_{e}-(f\circ\psi_{s})_{e}.$ Note that for any $T>0$ we have $\left|\int_{0}^{T}(f\circ\psi_{s}-f)(\psi_{t}x)\,dt\right|\leq\int_{0}^{s}|f(\psi_{t}x)|\,dt+\int_{T}^{T+s}|f(\psi_{t}x)|\,dt\leq 2s\|f\|_{C^{0}}.$ In view of (1.9), it follows that (10.5) $\limsup_{T\to+\infty}\frac{\log\left|\int_{0}^{T}\bar{f}(\psi_{t}x)\,dt\right|}{\log T}\leq 0\text{ for a.e. }x\in M^{\prime}.$ On the other hand, $\varphi_{\bar{f}}=\sum_{i=1}^{g}D_{i}(f\circ\psi_{s}-f)\varphi_{f_{i}}=\sum_{i=1}^{g}D_{i}(f\circ\psi_{s}-f)h_{i}\in U_{g+1}.$ Suppose that, contrary to our claim, $D_{i}(f\circ\psi_{s}-f)\neq 0$ for some $1\leq i\leq g$. As $h_{1},\ldots,h_{g}$ are linearly independent, $h:=\varphi_{\bar{f}}=\sum_{i=1}^{g}D_{i}(f\circ\psi_{s}-f)h_{i}\neq 0.$ In view of (3.4), it follows that $\lambda(h):=\lim_{k\to\infty}\frac{\log\|Q(k)h\|}{k}\geq\lambda_{g}>0.$ By Proposition 7.12, this yields $\limsup_{T\to+\infty}\frac{\log\left|\int_{0}^{T}\bar{f}(\psi_{t}x)\,dt\right|}{\log T}=\frac{\lambda(h)}{\lambda_{1}}>0\text{ for a.e. }x\in M^{\prime},$ contrary to (10.5). Consequently, $D_{i}(f\circ\psi_{s})=D_{i}(f)$ for all $1\leq i\leq g$ and $s\in{\mathbb{R}}$. _Part V: Lower bounds._ Suppose that $M^{\prime}=M$. Let us consider any $f\in C^{m}_{\sigma,l}$ with $f^{(l)}(\sigma)\neq 0$ for $0\leq l<m_{\sigma}-2$ and $\sigma\in\mathrm{Fix}(\psi_{\mathbb{R}})$. Then $\varphi=\varphi_{f}\in{\operatorname{P_{b}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})$ with $b=b(\sigma,l)>0$. 
The purpose of this part is to show (1.11). In view of Theorem 9.1 (the last sentence) combined with Lemma 8.9 applied to $a=\frac{m_{\sigma}-1}{m_{\sigma}}$ and $a_{i}=\partial^{(i,l-i)}_{\sigma}(f)$ for $0\leq i\leq l$ (at least one of them is non-zero, since $f^{(l)}(\sigma)\neq 0$), there exists $\alpha\in\mathcal{A}$ such that $C^{+}_{\alpha}(\varphi_{f})\neq 0$ or $C^{-}_{\alpha}(\varphi_{f})\neq 0$. We focus only on the case $C^{+}_{\alpha}(\varphi_{f})\neq 0$; in the other case, the proof runs similarly. For every $k\geq 1$ let us consider the interval $\big{(}l^{(k)}_{\alpha},l^{(k)}_{\alpha}+\varepsilon|I^{(k)}_{\alpha}|\big{]}$ with $\varepsilon:=\left(\frac{|C^{+}_{\alpha}|}{2^{\frac{4+4b}{b}}\kappa^{1+b}d\zeta(1+b)p_{b}(\varphi)}\right)^{1/(1+b)}.$ As $d,\kappa,\zeta(b+1)\geq 1$, by (4.1), we have $16\varepsilon^{b}\leq 1$. In view of Proposition 5.5, for $k$ large enough ($|I^{(k)}|\leq\delta$) and for every $x\in\big{(}l^{(k)}_{\alpha},l^{(k)}_{\alpha}+\varepsilon|I^{(k)}_{\alpha}|\big{]}$, we have $\big{|}(x-l^{(k)}_{\alpha})^{1+b}(S(k)\varphi)^{\prime}(x)\big{|}\geq\frac{|C^{+}_{\alpha}|}{2}-2^{2+b}\kappa^{1+b}d\zeta(1+b)p_{b}(\varphi)\left(\frac{\varepsilon|I^{(k)}_{\alpha}|}{|I^{(k)}_{\alpha}|}\right)^{1+b}\geq\frac{|C^{+}_{\alpha}|}{4}>0.$ In view of Lemma 4.7, there exists an interval $\widehat{J}\subset\big{(}l^{(k)}_{\alpha},l^{(k)}_{\alpha}+\varepsilon|I_{\alpha}^{(k)}|\big{]}$ such that (10.6) $|\widehat{J}|\geq\frac{\varepsilon|I^{(k)}_{\alpha}|}{4}\text{ and }\ |(S(k)\varphi)(x)|\geq\frac{|C^{+}_{\alpha}|}{4(\varepsilon|I^{(k)}_{\alpha}|)^{b}}\geq\frac{|C^{+}_{\alpha}|}{|I^{(k)}_{\alpha}|^{b}}\text{ for }x\in\widehat{J}.$ Finally, we can choose an interval $J^{(k)}\subset\widehat{J}$ such that (10.7) $\displaystyle|J^{(k)}|\geq\varepsilon|I^{(k)}_{\alpha}|/(2^{4}\kappa);$ (10.8) $\displaystyle\operatorname{dist}(J^{(k)},End(T^{(k)}))\geq\varepsilon|I^{(k)}_{\alpha}|/2^{4};$ (10.9) 
$\displaystyle\operatorname{dist}((T^{(k)})^{-1}J^{(k)},End(T^{(k)}))\geq\varepsilon|I^{(k)}_{\alpha}|/(2^{4}\kappa).$ Let us consider the set $\displaystyle B_{k}:$ $\displaystyle=\\{T^{g}_{t}(x,0):x\in(T^{(k)})^{-1}J^{(k)},0\leq t<(S(k)g)(x)\\}$ $\displaystyle=\\{T^{g}_{t}(x,0):x\in J^{(k)},-(S(k)g)((T^{(k)})^{-1}x)\leq t<0\\}.$ As $g\geq\underline{g}>0$, by (10.6), (10.7) and (3.12), we have $Leb(B_{k})=\int_{(T^{(k)})^{-1}J^{(k)}}S(k)g(x)\,dx\geq\underline{g}|J^{(k)}|\min_{\beta\in\mathcal{A}}Q_{\beta}(k)\geq\frac{\delta\varepsilon\underline{g}}{2^{4}\kappa^{2}}|I|.$ For every $(x^{\prime},r^{\prime})=T^{g}_{r}(x,0)\in B_{k}$ let $\displaystyle\tau_{k}^{0}$ $\displaystyle=\tau_{k}^{0}(x^{\prime},r^{\prime}):=(S(k)g)(x)-r\text{ and }$ $\displaystyle\tau_{k}^{1}$ $\displaystyle=\tau_{k}^{1}(x^{\prime},r^{\prime}):=(S(k)g)(x)+(S(k)g)(T^{(k)}x)-r.$ Then $T^{g}_{\tau_{k}^{0}}(x^{\prime},r^{\prime})=(T^{(k)}x,0)$ and $\displaystyle\int_{0}^{\tau_{k}^{1}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt-\int_{0}^{\tau_{k}^{0}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt$ $\displaystyle=\int_{0}^{(S(k)g)(T^{(k)}x)}f(T^{g}_{t}(T^{(k)}x,0))\,dt$ $\displaystyle=(S(k)\varphi_{f})(T^{(k)}x).$ As $x\in(T^{(k)})^{-1}J^{(k)}$, by (10.6), we have $|(S(k)\varphi_{f})(T^{(k)}x)|\geq{|C^{+}_{\alpha}|}/|I^{(k)}_{\alpha}|^{b}$. 
It follows that (10.10) $\max\Big{\\{}\Big{|}\int_{0}^{\tau_{k}^{1}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt\Big{|},\Big{|}\int_{0}^{\tau_{k}^{0}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt\Big{|}\Big{\\}}\geq\frac{|C^{+}_{\alpha}|}{2|I^{(k)}_{\alpha}|^{b}}.$ Choose $\tau_{k}=\tau_{k}(x^{\prime},r^{\prime})$ among $\tau_{k}^{0}$ and $\tau_{k}^{1}$ such that $\left|\int_{0}^{\tau_{k}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt\right|=\max\Big{\\{}\Big{|}\int_{0}^{\tau_{k}^{1}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt\Big{|},\Big{|}\int_{0}^{\tau_{k}^{0}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt\Big{|}\Big{\\}}.$ As $x\in(T^{(k)})^{-1}J^{(k)}$, in view of (10.8), (10.9) and (4.14), we have $\displaystyle|(S(k)g)((T^{(k)})^{-1}x)|$ $\displaystyle\leq\|\mathcal{M}^{(k)}(S(k)g)\|+p_{a}(S(k)g)O(|I^{(k)}|^{-a})$ $\displaystyle|(S(k)g)(x)|$ $\displaystyle\leq\|\mathcal{M}^{(k)}(S(k)g)\|+p_{a}(S(k)g)O(|I^{(k)}|^{-a}).$ Moreover, by (6.6), (5.1) and (5.10), we have $\|\mathcal{M}^{(k)}(S(k)g)\|\leq\frac{2\kappa}{|I^{(k)}|}\|S(k)g\|_{L^{1}(I^{(k)})}\leq\frac{2\kappa\|g\|_{L^{1}(I^{(0)})}}{|I^{(k)}|},\quad p_{a}(S(k)g)\leq O(p_{a}(g)).$ Therefore, $\displaystyle|(S(k)g)((T^{(k)})^{-1}x)|\leq O(|I^{(k)}|^{-1}),\quad|(S(k)g)(x)|\leq O(|I^{(k)}|^{-1}).$ Hence there exists $C>0$ such that for every $k\geq 1$ and $(x^{\prime},r^{\prime})\in B_{k}$ we have $\tau_{k}(x^{\prime},r^{\prime})\leq C|I^{(k)}|^{-1}$. In view of (10.10), it follows that for every $(x^{\prime},r^{\prime})\in B_{k}$ we have $\frac{\log|\int_{0}^{\tau_{k}}f(T^{g}_{t}(x^{\prime},r^{\prime}))\,dt|}{\log\tau_{k}}\geq\frac{\log(|C^{+}_{\alpha}||I^{(k)}|^{-b}/2)}{\log(C|I^{(k)}|^{-1})}.$ Since $(B_{k})_{k\geq 1}$ is a sequence of asymptotically invariant sets (i.e. for every $t\in{\mathbb{R}}$ we have $Leb(B_{k}\triangle T^{g}_{t}B_{k})\to 0$ as $k\to\infty$) and their measures are bounded away from zero, by the ergodicity of the flow, a.e. $(x,r)\in I^{g}$ belongs to $B_{k}$ for infinitely many $k$. It follows that for a.e. 
$(x,r)\in I^{g}$ we have $\displaystyle\limsup_{T\to+\infty}\frac{\log|\int_{0}^{T}f(T^{g}_{t}(x,r))\,dt|}{\log T}$ $\displaystyle\geq\limsup_{k\to+\infty}\frac{\log|\int_{0}^{\tau_{k}}f(T^{g}_{t}(x,r))\,dt|}{\log\tau_{k}}$ $\displaystyle\geq\lim_{k\to+\infty}\frac{\log(|C^{+}_{\alpha}||I^{(k)}|^{-b}/2)}{\log(C|I^{(k)}|^{-1})}=b.$ Finally, (1.12) follows directly from (1.5) and (1.11), since $c_{\sigma,\alpha}(T,x)=\int_{0}^{T}\xi_{\sigma}^{\alpha}(\psi_{t}(x))dt$ and $\xi_{\sigma}^{\alpha}\in C_{\sigma,|\alpha|}^{m}(M)$ with $\partial_{\sigma}^{\alpha}(\xi_{\sigma}^{\alpha})=1$. ∎ ## Acknowledgements M.K. would like to thank the Center of Excellence “Dynamics, mathematical analysis and artificial intelligence” at the Nicolaus Copernicus University in Toruń for hospitality during his post-doc grant. Research was partially supported by the Narodowe Centrum Nauki Grant 2017/27/B/ST1/00078. ## Appendix A Proof of Theorem 3.2 We review the natural extension of the Rauzy-Veech induction and prove that the set of IETs satisfying the condition FDC has full measure. ### A.1. 
Extension of the Rauzy-Veech induction Let $\mathcal{G}\subset\mathcal{S}^{0}_{\mathcal{A}}$ be a Rauzy class and set $\Delta^{\mathcal{A}}:=\\{\lambda\in{\mathbb{R}}_{>0}^{\mathcal{A}}:|\lambda|=1\\}.$ Let ${\mathcal{R}}:\mathcal{G}\times{\mathbb{R}}_{>0}^{\mathcal{A}}\rightarrow\mathcal{G}\times{\mathbb{R}}_{>0}^{\mathcal{A}}$ be the standard Rauzy-Veech map defined in §2.4 by $\mathcal{R}(\pi,\lambda)=(\widetilde{\pi},\widetilde{\lambda}),\text{ where }\widetilde{\lambda}=A^{-1}(\pi,\lambda)\lambda\text{ and $\widetilde{\pi}$ is given by \eqref{def:pi}.}$ Then we define the (normalized) Rauzy-Veech renormalization $\widetilde{\mathcal{R}}:\mathcal{G}\times\Delta^{\mathcal{A}}\rightarrow\mathcal{G}\times\Delta^{\mathcal{A}},\quad\widetilde{\mathcal{R}}(\pi,\lambda)=(\tilde{\pi},\tilde{\lambda}/|\tilde{\lambda}|).$ By Veech [27], there exists an $\widetilde{\mathcal{R}}$-invariant ergodic recurrent measure $\mu_{\mathcal{G}}$ which is equivalent to the product of the counting measure on $\mathcal{G}$ and the Lebesgue measure on $\Delta^{\mathcal{A}}$. For every $\pi\in\mathcal{S}^{0}_{\mathcal{A}}$, let $\Theta_{\pi}:=\Big{\\{}\tau\in{\mathbb{R}}^{\mathcal{A}}:\sum_{\pi_{0}(\alpha)\leq k}\tau_{\alpha}>0,\sum_{\pi_{1}(\alpha)\leq k}\tau_{\alpha}<0\text{ for }1\leq k<d\Big{\\}}$ and let $X(\mathcal{G}):=\bigcup_{\pi\in\mathcal{G}}\left\\{(\pi,\lambda,\tau)\in\\{\pi\\}\times\Delta^{\mathcal{A}}\times\Theta_{\pi}:\langle\lambda,\Omega_{\pi}\tau\rangle=1\right\\}.$ Then the natural (invertible) extension of $\widetilde{\mathcal{R}}$ is of the form $\widehat{\mathcal{R}}:X(\mathcal{G})\to X(\mathcal{G}),\quad\widehat{\mathcal{R}}(\pi,\lambda,\tau)=\left(\tilde{\pi},\frac{A^{-1}(\pi,\lambda)\lambda}{|A^{-1}(\pi,\lambda)\lambda|},|A^{-1}(\pi,\lambda)\lambda|A^{-1}(\pi,\lambda)\tau\right).$ The natural extension, constructed by Veech in [27], of the measure $\mu_{\mathcal{G}}$ on $X(\mathcal{G})$ is denoted by $\widehat{\mu}_{\mathcal{G}}$. 
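For readers less familiar with the induction, one unnormalized Rauzy-Veech step compares the rightmost intervals of the two rows, subtracts the shorter length from the longer, and rearranges the losing row. A schematic sketch in the standard labeled conventions (illustrative only; the matrix conventions of §2.4 may differ in normalization):

```python
from fractions import Fraction

def rauzy_step(pi0, pi1, lam):
    """One unnormalized Rauzy-Veech step on a labeled IET (schematic sketch).

    pi0, pi1 : dicts label -> position in {1, ..., d} (top/bottom orders),
    lam      : dict label -> positive length.
    Returns new (pi0, pi1, lam); raises on a connection (equal last lengths).
    """
    pi0, pi1, lam = dict(pi0), dict(pi1), dict(lam)
    top = max(pi0, key=pi0.get)            # label of the last top interval
    bot = max(pi1, key=pi1.get)            # label of the last bottom interval
    if lam[top] == lam[bot]:
        raise ValueError("connection: induction undefined")
    if lam[top] > lam[bot]:                # "top" type: top interval wins
        lam[top] -= lam[bot]
        k = pi1[top]                       # reinsert `bot` right after `top` below
        for a in pi1:
            if pi1[a] > k:
                pi1[a] += 1
        pi1[bot] = k + 1
    else:                                  # "bottom" type: symmetric move
        lam[bot] -= lam[top]
        k = pi0[bot]                       # reinsert `top` right after `bot` on top
        for a in pi0:
            if pi0[a] > k:
                pi0[a] += 1
        pi0[top] = k + 1
    return pi0, pi1, lam

# genus-2 example: pi = (A B C D / D C B A)
pi0 = {"A": 1, "B": 2, "C": 3, "D": 4}
pi1 = {"D": 1, "C": 2, "B": 3, "A": 4}
lam = {"A": Fraction(4, 10), "B": Fraction(3, 10),
       "C": Fraction(2, 10), "D": Fraction(1, 10)}
pi0, pi1, lam = rauzy_step(pi0, pi1, lam)
assert pi0 == {"A": 1, "D": 2, "B": 3, "C": 4}   # bottom type: D moved after A
assert lam["A"] == Fraction(3, 10)
```

For $d=2$ the step reduces to the subtractive Euclidean algorithm on the two lengths, which is why the renormalization is a higher-dimensional analogue of the continued fraction map.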
Then $\widehat{\mu}_{\mathcal{G}}$ is $\widehat{\mathcal{R}}$-invariant and $\widehat{\mathcal{R}}$ is ergodic and recurrent with respect to $\widehat{\mu}_{\mathcal{G}}$. We extend the map $A:\mathcal{G}\times\Delta^{\mathcal{A}}\rightarrow SL_{\mathcal{A}}({\mathbb{Z}})$ defined in §2.4 to $\widehat{A}:X(\mathcal{G})\rightarrow SL_{\mathcal{A}}({\mathbb{Z}})$ given by $\widehat{A}(\pi,\lambda,\tau)={A}(\pi,\lambda)$. Let us consider the extended cocycle $\widehat{A}:{\mathbb{Z}}\times X(\mathcal{G})\rightarrow SL_{\mathcal{A}}({\mathbb{Z}})$ $\widehat{A}^{(n)}(\pi,\lambda,\tau)=\begin{cases}\widehat{A}(\pi,\lambda,\tau)\cdot\widehat{A}(\widehat{\mathcal{R}}(\pi,\lambda,\tau))\cdots\widehat{A}(\widehat{\mathcal{R}}^{n-1}(\pi,\lambda,\tau))&\text{ if }n\geq 0\\\ \widehat{A}(\widehat{\mathcal{R}}^{-1}(\pi,\lambda,\tau))\cdot\widehat{A}(\widehat{\mathcal{R}}^{-2}(\pi,\lambda,\tau))\cdots\widehat{A}(\widehat{\mathcal{R}}^{-n}(\pi,\lambda,\tau))&\text{ if }n<0.\end{cases}$ Then (A.1) $\widehat{A}^{(n)}(\pi,\lambda,\tau)={A}^{(n)}(\pi,\lambda)\text{ if }n\geq 0.$ Let $Y\subset X(\mathcal{G})$ be a subset such that $0<\widehat{\mu}_{\mathcal{G}}(Y)<\infty$. For a.e. $(\pi,\lambda,\tau)\in Y$, let $r(\pi,\lambda,\tau)\geq 1$ be the first return time of $(\pi,\lambda,\tau)$ to $Y$ under the map $\widehat{\mathcal{R}}$. Denote by $\widehat{\mathcal{R}}_{Y}:Y\rightarrow Y$ the induced map and by $\widehat{{A}}_{Y}:Y\rightarrow SL_{\mathcal{A}}({\mathbb{Z}})$ the induced cocycle $\widehat{\mathcal{R}}_{Y}(\pi,\lambda,\tau)=\widehat{\mathcal{R}}^{r(\pi,\lambda,\tau)}(\pi,\lambda,\tau),\quad\widehat{{A}}_{Y}(\pi,\lambda,\tau)=\widehat{{A}}^{(r(\pi,\lambda,\tau))}(\pi,\lambda,\tau)$ for a.e. $(\pi,\lambda,\tau)\in Y$. Let $\widehat{\mu}_{Y}$ be the restriction of $\widehat{\mu}_{\mathcal{G}}$ to $Y$. ### A.2. Oseledets splitting Assume that $\log\|\widehat{A}_{Y}\|$ and $\log\|\widehat{A}^{-1}_{Y}\|$ are $\widehat{\mu}_{Y}$-integrable. 
By the Oseledets multiplicative theorem, the symplecticity of $\widehat{A}_{Y}$ (see [31]) and the simplicity of spectrum (see [1]), there exist Lyapunov exponents $\lambda_{1}>\ldots>\lambda_{g}>\lambda_{g+1}=0$ such that for $\widehat{\mu}_{Y}$-a.e. $(\pi,\lambda,\tau)\in Y$ we have a splitting ${\mathbb{R}}^{\mathcal{A}}=\bigoplus_{1\leq i\leq g+1}F_{i}(\pi,\lambda,\tau)\oplus\bigoplus_{1\leq i\leq g}F_{-i}(\pi,\lambda,\tau),$ for which (A.2) $\displaystyle\lim_{n\to\pm\infty}\frac{1}{n}\log\|\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{F_{i}(\pi,\lambda,\tau)}\|=\lambda_{i}\ \text{ if }1\leq i\leq g+1$ (A.3) $\displaystyle\lim_{n\to\pm\infty}\frac{1}{n}\log\|\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{F_{-i}(\pi,\lambda,\tau)}\|=-\lambda_{i}\ \text{ if }1\leq i\leq g$ (A.4) $\displaystyle\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}F_{i}(\pi,\lambda,\tau)=F_{i}(\widehat{\mathcal{R}}^{n}_{Y}(\pi,\lambda,\tau))$ for all $i\in\\{-g,\ldots,-1,1,\ldots,g+1\\}$ and $n\in{\mathbb{Z}}$ and $\dim F_{\pm i}(\pi,\lambda,\tau)=1\quad\text{for}\quad i=1,\ldots,g.$ Moreover, for every partition $\\{I_{1},I_{2}\\}$ of the set $\\{-g,\ldots,-1,1,\ldots,g+1\\}$ we have (A.5) $\lim_{n\to\pm\infty}\frac{1}{n}\log\Big{|}\sin\angle\Big{(}\bigoplus_{i\in I_{1}}F_{i}(\pi,\lambda,\tau),\bigoplus_{i\in I_{2}}F_{i}(\pi,\lambda,\tau)\Big{)}\Big{|}=0,$ and (A.6) $H(\pi):=\bigoplus_{i\neq g+1}F_{i}(\pi,\lambda,\tau).$ For every $1\leq j\leq g+1$ let $\displaystyle E_{j}(\pi,\lambda,\tau)$ $\displaystyle:=\bigoplus_{j\leq i\leq g+1}F_{i}(\pi,\lambda,\tau)\oplus\bigoplus_{1\leq i\leq g}F_{-i}(\pi,\lambda,\tau)$ $\displaystyle U_{j}(\pi,\lambda,\tau)$ $\displaystyle:=\bigoplus_{1\leq i<j}F_{i}(\pi,\lambda,\tau)\subset H(\pi).$ Then $E_{j}(\pi,\lambda,\tau)\oplus U_{j}(\pi,\lambda,\tau)={\mathbb{R}}^{\mathcal{A}}.$ By (A.4), for every $n\in{\mathbb{Z}}$ we have 
$\displaystyle\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}E_{j}(\pi,\lambda,\tau)=E_{j}(\widehat{\mathcal{R}}^{n}_{Y}(\pi,\lambda,\tau)),\ \widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}U_{j}(\pi,\lambda,\tau)=U_{j}(\widehat{\mathcal{R}}^{n}_{Y}(\pi,\lambda,\tau)).$ In view of (A.2), (A.3) and (A.5), for every $1\leq j\leq g+1$ we have (A.7) $\displaystyle\lim_{n\to+\infty}\frac{1}{n}\log\|\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{E_{j}(\pi,\lambda,\tau)}\|=\lambda_{j}$ (A.8) $\displaystyle\lim_{n\to+\infty}\frac{1}{n}\log\|\widehat{A}^{(-n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{U_{j}(\pi,\lambda,\tau)}\|=-\lambda_{j-1}\text{ if }j\geq 2$ (A.9) $\displaystyle\lim_{n\to\pm\infty}\frac{1}{n}\log\left|\sin\angle\left(E_{j}(\widehat{\mathcal{R}}^{n}_{Y}(\pi,\lambda,\tau)),U_{j}(\widehat{\mathcal{R}}^{n}_{Y}(\pi,\lambda,\tau))\right)\right|=0\text{ if }j\geq 2.$ ### A.3. Proof of Theorem 3.2 The arguments used in the proof run similarly to those used to prove Theorem 3.8 in [8]. We will omit some repetitive arguments. ###### Proof of Theorem 3.2. Let us consider a subset $Y\subset X(\mathcal{G})$ which satisfies the assumptions below: * $(i)$ the projection $\underline{Y}$ of $Y$ on $\mathcal{G}\times\Lambda^{\mathcal{A}}$ is precompact with respect to the Hilbert metric; * $(ii)$ there exists $0<\delta<1$ such that for every $(\pi,\lambda,\tau)\in Y$ we have $\min\Big{\\{}\Big{\\{}\sum_{\pi_{0}(\alpha)\leq k}\tau_{\alpha}:1\leq k<d\Big{\\}}\cup\\{(\Omega_{\pi}(\tau))_{\alpha}:\alpha\in\mathcal{A}\\}\Big{\\}}>\delta\max\\{(\Omega_{\pi}(\tau))_{\alpha}:\alpha\in\mathcal{A}\\};$ * $(iii)$ $\widehat{\mu}_{Y}$ is finite; * $(iv)$ the functions $\log\|\widehat{A}_{Y}\|$ and $\log\|\widehat{A}_{Y}^{-1}\|$ are $\widehat{\mu}_{Y}$-integrable. 
_Acceleration._ In view of (A.8) and (A.7), for every $\tau>0$ the maps $\displaystyle Y\ni(\pi,\lambda,\tau)\mapsto\sup_{n\geq 0}e^{-(\lambda_{j}-\tau)n}\|\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{E_{j}(\pi,\lambda,\tau)}\|\in{\mathbb{R}}\text{ for }1\leq j\leq g+1,$ $\displaystyle Y\ni(\pi,\lambda,\tau)\mapsto\sup_{n\geq 0}e^{(\lambda_{j-1}+\tau)n}\|\widehat{A}^{(-n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{U_{j}(\pi,\lambda,\tau)}\|\in{\mathbb{R}}\text{ for }2\leq j\leq g+1$ are a.e. defined and measurable. Therefore, there exists a closed subset $K\subset Y$ with $\widehat{\mu}_{Y}(K)/\widehat{\mu}_{Y}(Y)>1-\tau/2$ and a constant $C>0$ such that if $(\pi,\lambda,\tau)\in K$ then for every $n\geq 0$ we have (A.10) $\displaystyle\|\widehat{A}^{(n)}_{Y}(\pi,\lambda,\tau)^{t}\upharpoonright_{E_{j}(\pi,\lambda,\tau)}\|\leq Ce^{(\lambda_{j}+\tau)n}\text{ for }1\leq j\leq g+1,$ (A.11) $\displaystyle\|(\widehat{A}^{(n)}_{Y}(\mathcal{R}_{Y}^{-n}(\pi,\lambda,\tau))^{t}\upharpoonright_{U_{j}(\pi,\lambda,\tau)})^{-1}\|\leq Ce^{(-\lambda_{j-1}+\tau)n}\text{ for }2\leq j\leq g+1.$ Let $\widehat{\mathcal{R}}_{K}:K\to K$ be the induced map and let $\widehat{A}_{K}:K\to SL_{\mathcal{A}}({\mathbb{Z}})$ be the induced cocycle, i.e. $\widehat{\mathcal{R}}_{K}(\pi,\lambda,\tau)=\widehat{\mathcal{R}}^{r_{K}(\pi,\lambda,\tau)}_{Y}(\pi,\lambda,\tau),$ where $r_{K}(\pi,\lambda,\tau)\geq 1$ is the first return time of $(\pi,\lambda,\tau)\in K$ to $K$ for the map $\widehat{\mathcal{R}}_{Y}$ and $\widehat{A}_{K}^{(n)}=\widehat{A}_{Y}^{(r_{K}^{(n)})}\text{ for every }n\geq 0,$ where $r_{K}^{(n)}:=\sum_{0\leq i<n}r_{K}\circ\widehat{\mathcal{R}}_{K}^{i}$ for every $n\geq 0$. Then (A.12) $\frac{r_{K}^{(n)}(\pi,\lambda,\tau)}{n}\to\frac{\widehat{\mu}_{Y}(Y)}{\widehat{\mu}_{Y}(K)}\in(1,1+\tau)\text{ for a.e. 
}(\pi,\lambda,\tau)\in K.$ In view of (A.10) and (A.11), for every $(\pi,\lambda,\tau)\in K$, (A.13) $\displaystyle\|\widehat{A}^{(n)}_{K}(\pi,\lambda,\tau)^{t}\upharpoonright_{E_{j}(\pi,\lambda,\tau)}\|\leq Ce^{(\lambda_{j}+\tau)r_{K}^{(n)}(\pi,\lambda,\tau)}\text{ for }1\leq j\leq g+1,$ (A.14) $\displaystyle\|(\widehat{A}^{(n)}_{K}(\mathcal{R}_{K}^{-n}(\pi,\lambda,\tau))^{t}\upharpoonright_{U_{j}(\pi,\lambda,\tau)})^{-1}\|\leq Ce^{(-\lambda_{j-1}+\tau)r_{K}^{(n)}(\mathcal{R}_{K}^{-n}(\pi,\lambda,\tau))}$ for $2\leq j\leq g+1$. Moreover, for a.e. $(\pi,\lambda,\tau)\in K$, (A.15) $\lim_{n\to+\infty}\frac{1}{n}\log\|\widehat{A}^{(n)}_{K}(\pi,\lambda,\tau)\|=\lambda_{1}\frac{\widehat{\mu}_{Y}(Y)}{\widehat{\mu}_{Y}(K)}\in(\lambda_{1},\lambda_{1}(1+\tau)).$ Since the maps $\log\|\widehat{A}_{K}\|$ and $\log\|\widehat{A}_{K}^{-1}\|$ are $\widehat{\mu}_{K}$-integrable, for a.e. $(\pi,\lambda,\tau)\in K$, (A.16) $\lim_{n\to+\infty}\frac{1}{n}\log\|\widehat{A}_{K}(\widehat{\mathcal{R}}_{K}^{n}(\pi,\lambda,\tau))\|=0.$ By the ergodicity of $\widehat{\mathcal{R}}:X(\mathcal{G})\to X(\mathcal{G})$, for a.e. $(\pi,\lambda,\tau)\in X(\mathcal{G})$ (A.17) $\displaystyle\begin{split}&\text{there exists }n_{1}\geq 0\text{ such that }\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)\in K\\\ &\text{and $\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)$ satisfies (A.9), (A.12), (A.15) and (A.16)}.\end{split}$ By a Fubini argument, there exists a measurable subset $\Xi\subset\mathcal{G}\times\Lambda^{\mathcal{A}}$ such that $\mu_{\mathcal{G}}(\mathcal{G}\times\Lambda^{\mathcal{A}}\setminus\Xi)=0$ and for every $(\pi,\lambda)\in\Xi$, there exists $\tau\in\Theta_{\pi}$ such that $(\pi,\lambda,\tau)\in X(\mathcal{G})$ satisfies (A.17). _Full measure._ We now show that every $(\pi,\lambda)\in\Xi$ satisfies the FDC. Suppose that $(\pi,\lambda)\in\Xi$ and $(\pi,\lambda,\tau)\in X(\mathcal{G})$ satisfies (A.17). 
Then the accelerating sequence $(n_{k})_{k\geq 0}$ required by Definition 2 is defined as follows: * • $n_{0}=0$; * • for $k\geq 1$ we take $n_{k}$ so that $\widehat{\mathcal{R}}^{n_{k}}(\pi,\lambda,\tau)=\widehat{\mathcal{R}}_{K}^{k-1}\widehat{\mathcal{R}}^{n_{1}(\pi,\lambda,\tau)}(\pi,\lambda,\tau)$. Since $(\pi,\lambda,\tau)$ is Oseledets generic, $(\pi,\lambda)$ is also Oseledets generic with the Oseledets filtration $\displaystyle\\{0\\}=E_{0}(\pi,\lambda)\subset E_{-1}(\pi,\lambda)\subset\ldots\subset E_{-g}(\pi,\lambda)\subset E_{cs}(\pi,\lambda)$ $\displaystyle=E_{g+1}(\pi,\lambda)\subset E_{g}(\pi,\lambda)\subset\ldots\subset E_{1}(\pi,\lambda)=\Gamma$ given by $E_{j}(\pi,\lambda):=E_{j}(\pi,\lambda,\tau)\text{ for }j=-1,-2,\ldots,-g,g+1,g,\ldots,2,1.$ We can define a complementary filtration $\\{0\\}=U_{1}\subset U_{2}\subset\ldots\subset U_{g}\subset U_{g+1}$ by $U_{j}=U_{j}(\pi,\lambda,\tau)\text{ for }1\leq j\leq g+1.$ Then for every $k\geq 1$, $E_{j}^{(k)}=E_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))\text{ and }U_{j}^{(k)}=U_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau))).$ By the definition of $Q$ and (A.1), $Q(k,l)=\widehat{A}_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))^{t}$ for $1\leq k\leq l$, so $\displaystyle\|Q|_{E_{j}}(k,l)\|$ $\displaystyle=\big{\|}\widehat{A}_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))^{t}\upharpoonright_{E_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))}\big{\|}$ $\displaystyle\|Q|_{U_{j}}(k,l)^{-1}\|$ $\displaystyle=\big{\|}(\widehat{A}_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))^{t}\upharpoonright_{U_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))})^{-1}\big{\|}$ 
$\displaystyle=\big{\|}(\widehat{A}_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{-(l-k)}(\widehat{\mathcal{R}}_{K}^{l-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))^{t}\upharpoonright_{U_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))})^{-1}\big{\|}.$ Since $\widehat{\mathcal{R}}_{K}^{k-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau),\widehat{\mathcal{R}}_{K}^{l-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)\in K$ for every $1\leq k\leq l$, by (A.13) and (A.14), we have $\displaystyle\|Q|_{E_{j}}(k,l)\|\leq Ce^{(\lambda_{j}+\tau)r_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{k-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau))}\text{ for }1\leq j\leq g+1,$ $\displaystyle\|Q|_{U_{j}}(k,l)^{-1}\|\leq Ce^{(-\lambda_{j-1}+\tau)r_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{k-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau))}\text{ for }2\leq j\leq g+1.$ Let us consider a sequence $(r_{n})_{n\geq 0}$ given by $r_{0}=1$ and for $n\geq 1$, $r_{n}=r_{K}(\widehat{\mathcal{R}}_{K}^{n-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)).$ Then for all $1\leq k\leq l$ we have $r(k,l)=r_{K}^{(l-k)}(\widehat{\mathcal{R}}_{K}^{k-1}\circ\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau))$, so $\|Q|_{E_{j}}(k,l)\|\leq Ce^{(\lambda_{j}+\tau)r(k,l)}\text{ and }\|Q|_{U_{j}}(k,l)^{-1}\|\leq Ce^{(-\lambda_{j-1}+\tau)r(k,l)}.$ Both inequalities extend to the case $k=0$, provided the constant $C$ is additionally multiplied by $\max\\{\|Q(0,1)\|,e^{\lambda_{1}}\|Q(0,1)^{-1}\|\\}$. This gives (3.6) and (3.7). 
_Return time estimate._ Since $\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)\in K$ satisfies (A.12), (A.15) and (A.16), we also have (A.18) $\displaystyle\frac{r(0,n)}{n}=\frac{1+r_{K}^{(n-1)}(\mathcal{R}^{n_{1}}(\pi,\lambda,\tau))}{n}\to\frac{\widehat{\mu}_{Y}(Y)}{\widehat{\mu}_{Y}(K)}\in(1,1+\tau),$ $\displaystyle\frac{1}{n}\log\|Z(n+1)\|=\frac{1}{n}\log\|\widehat{A}_{K}(\widehat{\mathcal{R}}_{K}^{n-1}\circ\mathcal{R}^{n_{1}}(\pi,\lambda,\tau))\|\to 0,$ $\displaystyle\frac{1}{n}\log\|Q(1,n)\|=\frac{1}{n}\log\|\widehat{A}^{(n-1)}_{K}(\mathcal{R}^{n_{1}}(\pi,\lambda,\tau))\|\to\lambda_{1}\frac{\widehat{\mu}_{Y}(Y)}{\widehat{\mu}_{Y}(K)}\in(\lambda_{1},\lambda_{1}(1+\tau)).$ As $\|Q(0,1)^{-1}\|^{-1}\|Q(1,n)\|\leq\|Q(0,n)\|\leq\|Q(0,1)\|\|Q(1,n)\|$, this gives (A.19) $\frac{1}{n}\log\|Q(n)\|\to\lambda_{1}\frac{\widehat{\mu}_{Y}(Y)}{\widehat{\mu}_{Y}(K)}\in(\lambda_{1},\lambda_{1}(1+\tau)).$ The above convergences lead directly to (3.5), (3.8) and (3.9). _Angle estimate._ Since, $\displaystyle\frac{\log\left|\sin\angle\left(E_{j}^{(k)},U_{j}^{(k)}\right)\right|}{\log\|Q(k)\|}=$ $\displaystyle\frac{\log\left|\sin\angle\left(E_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau))),U_{j}(\widehat{\mathcal{R}}_{K}^{k-1}(\widehat{\mathcal{R}}^{n_{1}}(\pi,\lambda,\tau)))\right)\right|}{r_{K}^{(k-1)}(\mathcal{R}^{n_{1}}(\pi,\lambda,\tau))}$ $\displaystyle\cdot\frac{r_{K}^{(k-1)}(\mathcal{R}^{n_{1}}(\pi,\lambda,\tau))}{k}\frac{k}{\log\|Q(k)\|},$ in view of (A.9), (A.18) and (A.19), $\lim_{k\to\infty}\frac{\log\left|\sin\angle\left(E_{j}^{(k)},U_{j}^{(k)}\right)\right|}{\log\|Q(k)\|}=\frac{0}{\lambda_{1}}=0>-\tau.$ This leads to (3.11). _Rokhlin tower condition._ Since $\widehat{\mathcal{R}}^{n_{k}}(\pi,\lambda,\tau)\in Y$ and the set $Y\subset X(\mathcal{G})$ is chosen to satisfy the conditions $(i)$ and $(ii)$, by the proof of Lemma 3.6 in [8], $(n_{k})_{k\geq 0}$ is a Rokhlin-balanced accelerating sequence. 
Thus $(n_{k})_{k\geq 0}$ satisfies (3.10) and (RT). ∎ ## References * [1] A. Avila, M. Viana, _Simplicity of Lyapunov spectra: proof of the Zorich-Kontsevich conjecture_ , Acta Math. 198 (2007), 1-56. * [2] A. Bufetov, _Limit theorems for translation flows_ , Ann. of Math. (2) 179 (2014), 431-499. * [3] J. Chaika, K. Frączek, A. Kanigowski, C. Ulcigrai, _Singularity of the spectrum for smooth area-preserving flows in genus two and translation surfaces well approximated by cylinders_ , Comm. Math. Phys. 381 (2021), 1369-1407. * [4] J. Chaika, A. Wright, _A smooth mixing flow on a surface with nondegenerate fixed points_ , J. Amer. Math. Soc. 32 (2019), 81-117. * [5] J.-P. Conze, K. Frączek, _Cocycles over interval exchange transformations and multivalued Hamiltonian flows_ , Adv. Math. 226 (2011), 4373-4428. * [6] B. Fayad, G. Forni, A. Kanigowski, _Lebesgue spectrum of countable multiplicity for conservative flows on the torus_ , J. Amer. Math. Soc. 34 (2021), 747-813. * [7] K. Frączek, C. Ulcigrai, _Ergodic properties of infinite extensions of area-preserving flows_ , Math. Ann. 354 (2012), 1289-1367. * [8] by same author, _On the asymptotic growth of Birkhoff integrals for locally Hamiltonian flows and ergodicity of their extensions_ , preprint https://arxiv.org/abs/2112.05939. * [9] G. Forni, _Solutions of the cohomological equation for area-preserving flows on compact surfaces of higher genus_ , Ann. of Math. (2) 146 (1997), 295-344. * [10] by same author, _Deviation of ergodic averages for area-preserving flows on surfaces of higher genus_ , Ann. of Math. (2) 155 (2002), 1-103. * [11] by same author, _Sobolev regularity of solutions of the cohomological equation_ , Ergodic Theory Dynam. Systems 41 (2021), 685-789. * [12] S. Ghazouani, C. Ulcigrai, _A priori bounds for GIETs, affine shadows and rigidity of foliations in genus 2_ , preprint https://arxiv.org/abs/2106.03529. * [13] A.B. Katok, _Invariant measures of flows on orientable surfaces_ , (Russian) Dokl. 
Akad. Nauk SSSR 211 (1973), 775-778. * [14] M. Keane, _Interval exchange transformations_ , Math. Z. 141 (1975), 25-31. * [15] A.V. Kochergin, _Mixing in special flows over a rearrangement of segments and in smooth flows on surfaces_ , (Russian) Mat. Sb. (N.S.) 96(138) (1975), 471-502. * [16] M. Kontsevich, _Lyapunov exponents and Hodge theory_ , The mathematical beauty of physics (Saclay, 1996), 318-332, Adv. Ser. Math. Phys., 24, World Sci. Publ., River Edge, NJ, 1997. * [17] M. Kontsevich, A. Zorich, _Lyapunov exponents and Hodge theory_ , preprint https://arxiv.org/abs/hep-th/9701164. * [18] S. Marmi, P. Moussa, J.-C. Yoccoz, _The cohomological equation for Roth-type interval exchange maps_ , J. Amer. Math. Soc. 18 (2005), 823-872. * [19] S. Marmi, J.-C. Yoccoz, _Hölder regularity of the solutions of the cohomological equation for Roth type interval exchange maps_ , Comm. Math. Phys. 344 (2016), 117-139. * [20] G. Rauzy, _Échanges d’intervalles et transformations induites_ , Acta Arith. 34 (1979), 315-328. * [21] D. Ravotti, _Quantitative mixing for locally Hamiltonian flows with saddle loops on compact surfaces_ , Ann. Henri Poincaré 18 (2017), 3815-3861. * [22] D. Scheglov, _Absence of mixing for smooth flows on genus two surfaces_ , J. Mod. Dyn. 3 (2009), 13-34. * [23] C. Ulcigrai, _Mixing of asymmetric logarithmic suspension flows over interval exchange transformations_ , Ergodic Theory Dynam. Systems 27 (2007), 991-1035. * [24] by same author, _Weak mixing for logarithmic flows over interval exchange transformations_ , J. Mod. Dyn. 3 (2009), 35-49. * [25] by same author, _Absence of mixing in area-preserving flows on surfaces_ , Ann. of Math. (2) 173 (2011), 1743-1778. * [26] by same author, _Dynamics and ’arithmetics’ of higher genus surface flows_ , ICM Proceedings 2022. * [27] W.A. Veech, _Gauss measures for transformations on the space of interval exchange maps_ , Ann. of Math. (2) 115 (1982), 201-242. * [28] M. 
Viana, _Dynamics of Interval Exchange Transformations and Teichmüller Flows_ , lecture notes available from http://w3.impa.br/~viana/out/ietf.pdf * [29] J.-Ch. Yoccoz, _Continued fraction algorithms for interval exchange maps: an introduction_ , Frontiers in number theory, physics, and geometry. I, 401-435, Springer, Berlin, 2006. * [30] by same author, _Interval exchange maps and translation surfaces_ , Homogeneous flows, moduli spaces and arithmetic, 1-69, Clay Math. Proc., 10, Amer. Math. Soc., Providence, RI, 2010. * [31] A. Zorich, _Finite Gauss measure on the space of interval exchange transformations. Lyapunov exponents_ , Ann. Inst. Fourier (Grenoble) 46 (1996), 325-370. * [32] by same author, _Deviation for interval exchange transformations_ , Ergodic Theory Dynam. Systems 17 (1997), 1477–1499.
# Graphs with large total angular resolution†

Oswin Aichholzer¹ (0000-0002-2364-0583), Matias Korman², Yoshio Okamoto³ (0000-0002-9826-7074), Irene Parada¹ (0000-0003-3147-0083), Daniel Perz¹ (0000-0002-6557-2355), André van Renssen⁴ (0000-0002-9294-9947), Birgit Vogtenhuber¹ (0000-0002-7166-4467)

¹ Graz University of Technology, Graz, Austria, email: <EMAIL_ADDRESS>
² Tufts University, Medford, MA, USA, email: <EMAIL_ADDRESS>
³ The University of Electro-Communications and RIKEN Center for Advanced Intelligence Project, Tokyo, Japan, email: <EMAIL_ADDRESS>
⁴ The University of Sydney, Sydney, Australia, email: <EMAIL_ADDRESS>

† This work started during the Japan-Austria Joint Seminar _Computational Geometry Seminar with Applications to Sensor Networks_ , supported by the Japan Society for the Promotion of Science (JSPS) and the Austrian Science Fund (FWF) under grant AJS 399. O.A., I.P., D.P., and B.V. are partially supported by the FWF grants W1230 (Doctoral Program Discrete Mathematics) and I 3340-N35 (Collaborative DACH project _Arrangements and Drawings_).

###### Abstract The total angular resolution of a straight-line drawing is the minimum angle between two edges of the drawing. It combines two properties contributing to the readability of a drawing: the angular resolution, which is the minimum angle between incident edges, and the crossing resolution, which is the minimum angle between crossing edges. We consider the total angular resolution of a graph, which is the maximum total angular resolution of a straight-line drawing of this graph. We prove that, up to a finite number of well-specified exceptions of constant size, the number of edges of a graph with $n$ vertices and a total angular resolution greater than $60^{\circ}$ is bounded by $2n-6$. This bound is tight. In addition, we show that deciding whether a graph has total angular resolution at least $60^{\circ}$ is NP-hard. 
###### Keywords: Graph drawing · Total angular resolution · Angular resolution · Crossing resolution · NP-hardness. ## 1 Introduction The _total angular resolution_ of a drawing $D$, or $\operatorname{TAR}(D)$ for short, is the smallest angle occurring in $D$, either between two edges incident to the same vertex or between two crossing edges. In other words, $\operatorname{TAR}(D)$ is the minimum of the angular resolution $\operatorname{AR}(D)$ and the crossing resolution $\operatorname{CR}(D)$ of the same drawing. Furthermore, the total angular resolution of a graph $G$ is defined as the maximum of $\operatorname{TAR}(D)$ over all drawings $D$ of $G$. Similarly, the angular resolution and the crossing resolution of $G$ are the maximum of $\operatorname{AR}(D)$ and $\operatorname{CR}(D)$, respectively, over all drawings $D$ of $G$. The total angular resolution of a graph is in general smaller than the minimum of its crossing resolution and its angular resolution. Note that all drawings considered in this work are straight-line. Formann et al. [7] were the first to introduce the angular resolution of graphs and showed that finding a drawing of a graph with angular resolution at least $90^{\circ}$ is NP-hard. Fifteen years later, experiments by Huang et al. [8, 10] showed that the crossing resolution plays a major role in the readability of drawings. Consequently, research in that direction was intensified. In particular, right angle crossing drawings (RAC drawings for short) were studied [5, 11], and NP-hardness of the decision version for right angles was proven [2]. The upper bound for the number of edges of $\alpha$AC drawings (drawings with crossing resolution $\alpha$) is $\frac{180^{\circ}}{\alpha}(3n-6)$ [6]. For the two special classes of RAC drawings and $60^{\circ}$AC drawings better upper bounds are known. More precisely, RAC drawings have at most $4n-10$ edges [5] and $\alpha$AC drawings with $\alpha>60^{\circ}$ have at most $6.5n-20$ edges [1]. Argyriou et al. 
[3] were the first to study the total angular resolution, calling it just _total resolution_. They presented drawings of complete and complete bipartite graphs with asymptotically optimal total angular resolution. Recently, Bekos et al. [4] presented a new algorithm for finding a drawing of a given graph with high total angular resolution, which outperformed earlier algorithms such as those of [3, 9] on the considered test cases. ## 2 Upper bound on the number of edges We say a drawing $D$ is _planarized_ if we replace every crossing by a vertex so that this new vertex splits both crossing edges into two edges. We denote this planarized drawing by $P(D)$. Furthermore, every edge in $P(D)$ has two sides and every side is incident to exactly one cell of $D$. Note that both sides of an edge can be incident to the same cell. We define the size of a cell of a connected drawing $D$ as the number of sides in $P(D)$ incident to this cell. In this section we show that for almost all graphs with $\operatorname{TAR}(G)>60^{\circ}$ the number of edges is bounded by $2n-6$. We start by showing a bound for the number of edges in a connected drawing $D$ depending on the size of the unbounded cell of $D$. ###### Lemma 1 Let $D$ be a connected drawing with $n\geq 1$ vertices and $m$ edges. If the unbounded cell of $D$ has size $k$ and $\operatorname{TAR}(D)>60^{\circ}$, then $m\leq 2n-2-\left\lceil{k}/{2}\right\rceil$. ###### Proof If at least three edges cross each other in a single point, then there exists an angle of at most $60^{\circ}$ at this crossing point. Therefore every crossing is incident to exactly two edges. We planarize the drawing $D$ and get $n^{\prime}=n+\operatorname{cr}(D)$ and $m^{\prime}=m+2\operatorname{cr}(D)$ where $\operatorname{cr}(D)$ is the number of crossings in $D$, $n^{\prime}$ is the number of vertices of $P(D)$, and $m^{\prime}$ is the number of edges of $P(D)$. 
Since $P(D)$ is a plane graph, we can use Euler's formula to compute the number $f$ of faces in $P(D)$ as $\displaystyle f=-n+m+\operatorname{cr}(D)+2.$ (1) Moreover, every bounded cell of $D$ has size at least $4$, as otherwise $P(D)$ contains a triangle, which implies an angle of at most $60^{\circ}$. By definition, the unbounded cell of $D$ has size $k$ and we obtain the following inequality $2m^{\prime}\geq 4(f-1)+k.$ (2) Combining Equation (1) and Inequality (2) gives $2m+4\operatorname{cr}(D)\geq 4(m+\operatorname{cr}(D)-n+1)+k$, that is, $2m\leq 4n-4-k$, and hence ${m\leq 2n-2-\left\lceil{k}/{2}\right\rceil}$ by integrality. ∎ From Lemma 1 it follows directly that a connected drawing $D$ on $n\geq 3$ vertices and with $\operatorname{TAR}(D)>60^{\circ}$ fulfills $m\leq 2n-4$. Observation 1, which will be useful to prove Lemma 2, follows from the fact that the sum of interior angles in a simple polygon is $180^{\circ}(p-2)$. ###### Observation 1 Let $D$ be a plane drawing where the boundary of the unbounded cell is a simple polygon $P$ with $p>3$ vertices. Let the inner degree of a vertex $v_{i}$ of $P$ be the number $d^{\prime}_{i}$ of edges incident to $v_{i}$ that lie in the interior of $P$. If $\operatorname{TAR}(D)>60^{\circ}$, then $\sum_{v_{i}\in V(P)}d^{\prime}_{i}\leq 2p-7$ holds. ###### Lemma 2 Let $D$ be a connected plane drawing on $n\geq 3$ vertices, where $D$ is not a path on $3$ vertices and not a $4$-gon. If $\operatorname{TAR}(D)>60^{\circ}$, then $m\leq 2n-5$. ###### Proof The unbounded cell of $D$ cannot have size $3$, as in this case the convex hull of the drawing is a triangle and we have $\operatorname{TAR}(D)\leq 60^{\circ}$. If the drawing $D$ has an unbounded cell of size at least $5$ and $\operatorname{TAR}(D)>60^{\circ}$, then $m\leq 2n-5$ follows directly from Lemma 1. Otherwise, the unbounded cell of $D$ has size $4$, which, as $D$ is not a path on $3$ vertices, implies that the boundary of $D$ is a 4-gon $F$. By Observation 1 and the fact that $D$ is not a 4-cycle, there is precisely one edge $e$ in the interior of $F$ that is incident to $F$. 
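The bound of Observation 1 used here can be spelled out from the stated angle-sum fact. Since $\operatorname{TAR}(D)>60^{\circ}$, the interior angle of $P$ at each vertex $v_{i}$ is divided by the $d^{\prime}_{i}$ interior edges into $d^{\prime}_{i}+1$ angles, each greater than $60^{\circ}$. Summing over all $p$ vertices gives (a sketch using only facts stated above):

```latex
180^{\circ}(p-2)
  \;=\; \sum_{v_i\in V(P)} (\text{interior angle at } v_i)
  \;>\; \sum_{v_i\in V(P)} 60^{\circ}\,(d'_i+1)
  \;=\; 60^{\circ}\Big(p+\sum_{v_i\in V(P)} d'_i\Big),
```

so $\sum_{v_{i}}d^{\prime}_{i}<3(p-2)-p=2p-6$ and, by integrality, $\sum_{v_{i}}d^{\prime}_{i}\leq 2p-7$. For the $4$-gon $F$ (that is, $p=4$) this yields at most one interior edge incident to $F$, which is the count used above.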
Let $D^{\prime}$ be the drawing we get by deleting all vertices and edges of $F$ and also the edge $e$. The drawing $D^{\prime}$ is connected and has $n^{\prime}\geq 1$ vertices and $m^{\prime}$ edges, where $n=n^{\prime}+4$ and $m=m^{\prime}+5$. By Lemma 1 we know that $m^{\prime}\leq 2n^{\prime}-2$ and we derive $m=m^{\prime}+5\leq 2n^{\prime}-2+5\leq 2n-5$. ∎ Two drawings are _combinatorially equivalent_ if all cells are bounded by the same edges, all crossing edge pairs are the same, and the order of crossings along each edge is the same. We can extend Lemma 2 in the following way. ###### Lemma 3 Let $D$ be a connected plane drawing on $n\geq 3$ vertices with $\operatorname{TAR}(D)>60^{\circ}$. If $D$ is not combinatorially equivalent to one of the exceptions E1–E9 as listed below and depicted in Fig. 5 (Appendix 0.B), then $m\leq 2n-6$. 1. _E1_ A tree on at most 4 vertices. 2. _E2_ An empty $4$-gon. 3. _E3_ A $4$-gon with one additional vertex connected to one vertex of the $4$-gon. 4. _E4_ An empty $5$-gon. 5. _E5_ A $5$-gon with one inner vertex connected to two non-neighboring vertices of the $5$-gon. 6. _E6_ A $5$-gon with an edge inside, connected with 3 edges to the 5-gon such that the 5-gon is partitioned into two empty 4-gons and one empty $5$-gon. 7. _E7_ A $6$-gon with an additional diagonal between opposite vertices. 8. _E8_ A $6$-gon with an additional vertex or edge inside, connected with 3 or 4, respectively, edges to the 6-gon such that the 6-gon is partitioned into 3 or 4, respectively, empty 4-gons. 9. _E9_ A $6$-gon with either a path on 3 vertices or a 4-cycle inside, connected as depicted also in Fig. 1(a). The proof of Lemma 3 is similar to that of Lemma 2 and can be found in Appendix 0.A. Note that Lemma 3 considers plane drawings. If $D$ has a crossing, then $P(D)$ has a vertex of degree $4$. The only drawings in the exceptions with a vertex of degree $4$ are shown in Fig. 1(a). 
It can be shown that, when replacing the vertices of degree $4$ in any of them by a crossing, the resulting drawings have $\operatorname{TAR}(D)\leq 60^{\circ}$. A detailed proof of this fact can be found in Appendix 0.C and will be useful for the proof of the next theorem. Figure 1: (a) The drawings of exception E9. (b) A drawing $D$ of a graph with $m=2n-6$ and $\operatorname{TAR}(D)>60^{\circ}$. ###### Theorem 2.1 Let $G$ be a graph with $n\geq 3$ vertices, $m$ edges and $\operatorname{TAR}(G)>60^{\circ}$. Then $m\leq 2n-6$ except if $G$ is either a graph of an exception for Lemma 3 or only consists of three vertices and one edge (Exception E0 in Fig. 5). ###### Proof Assume there exists a graph $G$ with $\operatorname{TAR}(G)>60^{\circ}$ which is not in the list of exceptions for Lemma 3. Consider a drawing $D$ of $G$ with $\operatorname{TAR}(D)>60^{\circ}$ and its planarization $P(D)$. Applying Lemma 1 to every component gives $m\leq 2n-6$, with the only exception consisting of three vertices and one edge (Exception E0). Details can be found in Appendix 0.D. So for the rest of the proof we only consider connected graphs. If three edges cross in a single point, then in $P(D)$ this point has degree $6$ and therefore an angle of at most $60^{\circ}$. Hence $P(D)$ has $m_{P}=m+2\operatorname{cr}(D)$ edges and $n_{P}=n+\operatorname{cr}(D)$ vertices. Let $m=2n-c$. This is equivalent to $m_{P}=2n_{P}-c$. Since $\operatorname{TAR}(P(D))\geq\operatorname{TAR}(D)>60^{\circ}$, by applying Lemma 3 we get that $m_{P}\leq 2n_{P}-6$ or $P(D)$ is in the exceptions. If $m_{P}\leq 2n_{P}-6$, then also $m\leq 2n-6$. If $P(D)$ is in the exceptions, then, as observed before, $D$ is in the exceptions. ∎ The bound of Theorem 2.1 is the best possible in the sense that there are infinitely many graphs with $m=2n-6$ and $\operatorname{TAR}(G)>60^{\circ}$. Consider for example the layered $8$-gon with two edges in the middle depicted in Fig. 
1(b), which can be generalized to any $n=8k$ with $k\in\mathds{N}$. In the full version of this work we present examples for every $n\geq 9$ and also discuss plane drawings of planar graphs. ## 3 NP-hardness Formann et al. [7] showed that the problem of determining whether there exists a drawing of a graph with angular resolution of $90^{\circ}$ is NP-hard. Their proof, which is by reduction from 3SAT with exactly three different literals per clause, also implies NP-hardness of deciding whether a graph has a drawing with total angular resolution of $90^{\circ}$. We adapt their reduction to show NP-hardness of the decision problem for $\operatorname{TAR}(G)\geq 60^{\circ}$. A full version of the proof of Theorem 3.1 can be found in Appendix 0.E. ###### Theorem 3.1 It is NP-hard to decide whether a graph $G$ has $\operatorname{TAR}(G)\geq 60^{\circ}$. ###### Proof (sketch) Given a formula with variables $x_{1},x_{2},\dots,x_{n}$ and clauses $c_{1},c_{2},\dots,c_{m}$, where every clause contains exactly three different literals, we first construct a graph $G$ for it. The basic building blocks of $G$ consist of triangles, which must be equilateral in any drawing with total angular resolution $60^{\circ}$. We use three types of gadgets; see Fig. 2(a). The clause gadget has a designated _clause vertex_ $C_{j}$ and the variable gadget has two _literal vertices_ $X_{i,j},\overline{X}_{i,j}$ per clause $c_{j}$. For each gadget, the embedding with total angular resolution $60^{\circ}$ is unique up to rotation, scaling, and reflection. Figure 2: Gadgets and frame of the NP-hardness proof. (a) All used gadgets. (b) Frame with clause gadgets. For connecting the gadgets, we build a 3-sided frame; see Fig. 2(b). 
It consists of a straight _bottom path_ of $2n+2m-1$ triangles alternatingly facing up and down, a sequence of $m$ clause gadgets stacked on top of each other to the right (one for each clause, with the clause vertices $C_{1},\ldots,C_{m}$ facing to the right), and a _top path_ of $2n+2m-1$ triangles alternatingly facing down and up. The leftmost $n$ vertices of degree three on the upper side of the bottom path and the lower side of the top path ($X_{1},\ldots,X_{n}$ and $X^{\prime}_{1},\ldots,X^{\prime}_{n}$) are used for the variables: For each variable $x_{i}$, we add a variable gadget and a connector gadget by identifying $A_{i,1}$ with $X_{i}$, $A_{i,2}$ with $A_{i,3}$, and $A_{i,4}$ with $X^{\prime}_{i}$, respectively. Finally, a clause-literal path consisting of three consecutive edges between $X_{i,j}$ ($\overline{X}_{i,j}$) and $C_{j}$ is added whenever $x_{i}$ ($\overline{x}_{i}$) is a literal of clause $c_{j}$. The following holds for any drawing $D$ of the graph $G$ with $\operatorname{TAR}(D)\geq 60^{\circ}$. (_1_) The embedding of the frame is unique up to rotation, scaling, and reflection. Hence we can assume that it is embedded as in Fig. 2(b). (_2_) Each variable gadget together with its connector gadget must be drawn vertically between its $X_{i}$ and $X^{\prime}_{i}$, either with all $X_{i,j}$ to the right of the $\overline{X}_{i,j}$ or the other way around. (_3_) All clause-literal paths leave from their clause vertices to the right, and one path per clause leaves horizontally to the right. We claim that $\operatorname{TAR}(G)\geq 60^{\circ}$ if and only if the initial formula is satisfiable. For the one direction, consider a satisfying truth assignment of the formula. We draw the variable gadgets with all true literal sides to the right and scaled (via the connector gadgets) such that different gadgets have their vertices at different heights, and we draw the clause-literal paths as indicated in Fig. 3. 
Figure 3: Connections between clause and literal vertices in the NP-hardness proof. (a) True connection, two versions. (b) False connection. For the other direction, consider a drawing $D$ of $G$ with $\operatorname{TAR}(D)=60^{\circ}$. Using the straight lines $\ell_{1}$ and $\ell_{2}$ sketched in Fig. 2(b), one can show that every clause-literal path that leaves the clause vertex horizontally must end at a literal vertex facing to the right. Setting the corresponding literals to true gives a non-contradicting variable assignment that in turn fulfills all clauses. ∎ ## 4 Conclusion In this work we have shown that, up to a finite number of well-specified exceptions of constant size, any graph $G$ with $\operatorname{TAR}(G)>60^{\circ}$ has at most $2n-6$ edges. In addition, we have been able to obtain similar bounds for graphs with ${\operatorname{TAR}(G)\geq 90^{\circ}}$ and $\operatorname{TAR}(G)>120^{\circ}$: For graphs with $\operatorname{TAR}(G)\geq 90^{\circ}$ we have $m\leq 2n-2\sqrt{n}$ and for $\operatorname{TAR}(G)>120^{\circ}$ we have $m\leq n$ for $n\geq 7$, which is best possible. We conjecture that almost all graphs with $\operatorname{TAR}(G)>\frac{k-2}{k}90^{\circ}$ have at most $2n-2-\lfloor\frac{k}{2}\rfloor$ edges. From a computational point of view, we have proven that finding a drawing of a given graph with total angular resolution at least $60^{\circ}$ is NP-hard. The same was known before for at least $90^{\circ}$ [7]. On the other hand, for large angles, the recognition problem eventually becomes easy (for example, $G$ can be drawn with $\operatorname{TAR}(G)>120^{\circ}$ if and only if it is the union of cycles on at least 7 vertices and arbitrary paths). This yields the following open problem: At which angle(s) does the decision problem change from NP-hard to polynomially solvable? ## References * [1] Ackerman, E., Tardos, G.: On the maximum number of edges in quasi-planar graphs. Journal of Combinatorial Theory, Series A 114, 563–571 (2007). 
https://doi.org/10.1016/j.jcta.2006.08.002 * [2] Argyriou, E.N., Bekos, M.A., Symvonis, A.: The Straight-Line RAC Drawing Problem is NP-Hard. In: 37th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM 2011). pp. 74–85. Springer, Berlin, Heidelberg (2011). https://doi.org/10.1007/978-3-642-18381-2_6 * [3] Argyriou, E.N., Bekos, M.A., Symvonis, A.: Maximizing the Total Resolution of Graphs. The Computer Journal 56(7), 887–900 (2013). https://doi.org/10.1093/comjnl/bxs088 * [4] Bekos, M.A., Förster, H., Geckeler, C., Holländer, L., Kaufmann, M., Spallek, A.M., Splett, J.: A Heuristic Approach Towards Drawings of Graphs with High Crossing Resolution. In: Biedl, T., Kerren, A. (eds.) 26th International Symposium on Graph Drawing and Network Visualization (GD 2018). pp. 271–285. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-04414-5_19 * [5] Didimo, W., Eades, P., Liotta, G.: Drawing graphs with right angle crossings. Theoretical Computer Science 412(39), 5156–5166 (2011). https://doi.org/10.1016/j.tcs.2011.05.025 * [6] Dujmović, V., Gudmundsson, J., Morin, P., Wolle, T.: Notes on large angle crossing graphs. Chicago Journal of Theoretical Computer Science 4, 1–14 (2011). https://doi.org/10.4086/cjtcs.2011.004 * [7] Formann, M., Hagerup, T., Haralambides, J., Kaufmann, M., Leighton, F.T., Symvonis, A., Welzl, E., Woeginger, G.J.: Drawing Graphs in the Plane with High Resolution. SIAM Journal on Computing 22, 1035–1052 (1993). https://doi.org/10.1137/0222063 * [8] Huang, W.: Using eye tracking to investigate graph layout effects. In: 2007 6th International Asia-Pacific Symposium on Visualization. pp. 97–100. IEEE (2007). https://doi.org/10.1109/APVIS.2007.329282 * [9] Huang, W., Eades, P., Hong, S.H., Lin, C.: Improving multiple aesthetics produces better graph drawings. Journal of Visual Languages & Computing 24(4), 262–272 (2013). 
https://doi.org/10.1016/j.jvlc.2011.12.002 * [10] Huang, W., Hong, S.H., Eades, P.: Effects of Crossing Angles. In: 2008 IEEE Pacific Visualization Symposium. pp. 41–46 (2008). https://doi.org/10.1109/PACIFICVIS.2008.4475457 * [11] van Kreveld, M.: The quality ratio of RAC drawings and planar drawings of planar graphs. In: Brandes, U., Cornelsen, S. (eds.) 18th International Symposium on Graph Drawing (GD 2010). pp. 371–376. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-18469-7_34 ## Appendix 0.A Proof of Lemma 3 ###### Lemma 3 Let $D$ be a connected plane drawing on $n\geq 3$ vertices with $\operatorname{TAR}(D)>60^{\circ}$. If $D$ is not combinatorially equivalent to one of the exceptions E1–E9 as listed below and depicted in Fig. 5 (Appendix 0.B), then $m\leq 2n-6$. 1. _E1_ A tree on at most 4 vertices. 2. _E2_ An empty $4$-gon. 3. _E3_ A $4$-gon with one additional vertex connected to one vertex of the $4$-gon. 4. _E4_ An empty $5$-gon. 5. _E5_ A $5$-gon with one inner vertex connected to two non-neighboring vertices of the $5$-gon. 6. _E6_ A $5$-gon with an edge inside, connected with 3 edges to the 5-gon such that the 5-gon is partitioned into two empty 4-gons and one empty $5$-gon. 7. _E7_ A $6$-gon with an additional diagonal between opposite vertices. 8. _E8_ A $6$-gon with an additional vertex or edge inside, connected with 3 or 4, respectively, edges to the 6-gon such that the 6-gon is partitioned into 3 or 4, respectively, empty 4-gons. 9. _E9_ A $6$-gon with either a path on 3 vertices or a 4-cycle inside, connected as depicted also in Fig. 1(a). ###### Proof Let $D^{\prime}$ be a subdrawing of $D$ consisting of all vertices that are not on the unbounded cell and all edges that are not incident to a vertex on the unbounded cell. Assume $D^{\prime}$ has $n^{\prime}$ vertices and $m^{\prime}$ edges. We consider different cases. Case 1 The unbounded cell has size at least $7$. 
Then we have, by Lemma 1, $m\leq 2n-2-\left\lceil\frac{k}{2}\right\rceil=2n-2-\left\lceil\frac{7}{2}\right\rceil\leq 2n-6.$

Case 2 The unbounded cell has size $4$. Then either our drawing has only one cell, which is a case of Exception E1, or the outer boundary is a $4$-gon. In this case we have $n=n^{\prime}+4$ and $m\leq m^{\prime}+5$. If there is at most one vertex in the interior of the $4$-gon, then we have Exception E2 or E3, respectively. So we can assume that there are at least 2 vertices in the interior. By Observation 1 we have at most one edge from a vertex on the unbounded cell to the inside. Therefore, $D^{\prime}$ is connected and thus it has at least one edge. So by Lemma 1 we have ${m^{\prime}\leq 2n^{\prime}-3}$. With this we get $\displaystyle m\leq m^{\prime}+5\leq 2n^{\prime}-3+5=2(n-4)+2=2n-6.$

Case 3 The unbounded cell has size $5$. In this case the outer boundary is a $5$-gon; the only other possibility would be a drawing $D$ of a triangle with an attached edge, but then $\operatorname{TAR}(D)\leq 60^{\circ}$. If there are at most two adjacent vertices inside the $5$-gon, then we have one of Exceptions E4, E5, or E6. So we can assume that there are at least 3 vertices in the interior. Moreover, $n=n^{\prime}+5$ holds. Due to Observation 1, there are at most three edges connecting the interior to the $5$-gon and the $5$-gon itself has $5$ edges, that is, $m\leq m^{\prime}+5+3=m^{\prime}+8$. If $D^{\prime}$ is connected and has more than 2 vertices, then the size of its unbounded cell is at least $3$ and we have $m^{\prime}\leq 2n^{\prime}-4$ by Lemma 1. By applying Lemma 1 to every connected component we also get $m^{\prime}\leq 2n^{\prime}-4$ if $D^{\prime}$ is disconnected. In these cases $m^{\prime}\leq 2n^{\prime}-4$ holds and we have $\displaystyle m\leq m^{\prime}+8\leq 2n^{\prime}-4+8=2n-6.$

Case 4 The unbounded cell of our drawing $D$ has size $6$.
So our drawing $D$ has either only one (unbounded) cell (two cases of Exception E1), consists of two triangles sharing a vertex ($\operatorname{TAR}(D)\leq 60^{\circ}$), or has as boundary a $4$-gon with an attached edge or a $6$-gon. So there are two cases we have to consider.

* • If the unbounded cell is a $4$-gon with an attached edge, we use the same arguments as in Case 2. If the $4$-gon is empty, then we have again Exception E3. If there is at least one vertex inside the $4$-gon, then by Lemma 1 we have $m^{\prime}\leq 2n^{\prime}-2$. So we get $m\leq m^{\prime}+6\leq 2n^{\prime}-2+6=2(n^{\prime}+5)-6=2n-6.$

* • If the unbounded cell is a $6$-gon, then by Observation 1 we can have at most $5$ edges connecting the interior to the $6$-gon. First we assume that $D^{\prime}$ is connected. If $\operatorname{TAR}(D)>60^{\circ}$, then $\operatorname{TAR}(D^{\prime})>60^{\circ}$, and the drawing $D^{\prime}$ fulfills ${m^{\prime}\leq 2n^{\prime}-5}$ by Lemma 2 or it is one of the exceptions of Lemma 2. Furthermore, we know $n=n^{\prime}+6$ and $m\leq m^{\prime}+11$. If $m^{\prime}\leq 2n^{\prime}-5$, then $\displaystyle m\leq m^{\prime}+11\leq 2n^{\prime}-5+11=2n-6.$ So the only drawings where this does not hold are $6$-gons with a drawing inside which is among the exceptions of Lemma 2. This results in Exceptions E7 and E8, as $n^{\prime}<3$, and Exception E9, as it contains the two exceptions of Lemma 2. If $D^{\prime}$ is not connected and $\operatorname{TAR}(D)>60^{\circ}$, then $m\leq 2n-6$ or $D^{\prime}$ consists of two non-adjacent vertices which are connected to the $6$-gon with $5$ edges in total. This means that one of the two inner vertices has degree at least $3$ in the drawing $D$. If one vertex has degree 4, then there is a triangle in our drawing $D$, which means $\operatorname{TAR}(D)\leq 60^{\circ}$. Otherwise, if one vertex has degree 3 and the other one has degree 2, then we have a drawing like in Fig. 4.
The grey shaded $4$-gon has 2 edges in its inside, so due to Observation 1 we have $\operatorname{TAR}(D)\leq 60^{\circ}$. ∎

Figure 4: Two separated vertices inside a $6$-gon.

## Appendix 0.B All exceptions

This appendix contains drawings depicting all exceptions of Lemma 3 and Theorem 2.1 (Fig. 5).

Figure 5: All exceptions for Lemma 3 and Theorem 2.1

## Appendix 0.C Replacing a vertex by a crossing in Exception E9

###### Lemma 4

If we replace the vertex of degree $4$ in a drawing of Fig. 1(a) with a crossing, then the resulting drawings $D$ have $\operatorname{TAR}(D)\leq 60^{\circ}$.

###### Proof

Figure 6: Replacing the vertex of degree $4$ of the drawing in Fig. 1(a) (left) with a crossing.

If we replace the vertex of degree $4$ of the drawing in Fig. 1(a) (left) with a crossing, then we get the drawing $D_{cr}$ in Fig. 6, where the dashed edge is not part of the actual drawing. We want to show that $\operatorname{TAR}(D_{cr})\leq 60^{\circ}$. As in Fig. 6, we denote $\angle ACB$ by $\alpha$ and $\angle BCD$ by $\beta$; both of these angles lie between two edges of the drawing. Let $P_{1},P_{2}$ and $P_{3}$ be the other three vertices on the unbounded cell. Since $C$ is a crossing, $C$ is inside the pentagon $ABP_{1}P_{2}P_{3}$. The inner angles of a pentagon sum up to $540^{\circ}$, and all eight inner angles of the drawing that are incident to the convex hull measure more than $60^{\circ}$. This implies $\angle BAC+\angle ABC\leq 60^{\circ}$. Furthermore, since $C$ is a crossing, $\alpha+\beta=180^{\circ}$, and since the angles of the triangle $ABC$ sum to $180^{\circ}$, also $\alpha+\angle BAC+\angle ABC=180^{\circ}$. This means we have $\beta=\angle BAC+\angle ABC\leq 60^{\circ}$. But $\beta$ appears in $D_{cr}$, so we have $\operatorname{TAR}(D_{cr})\leq 60^{\circ}$. Let $D^{\prime}_{cr}$ be the drawing we get if we replace in the drawing in Fig. 1(a) (right) the vertex of degree $4$ with a crossing. Then $D_{cr}$ is a subdrawing of $D^{\prime}_{cr}$, so we get $\operatorname{TAR}(D^{\prime}_{cr})\leq\operatorname{TAR}(D_{cr})\leq 60^{\circ}$.
∎

## Appendix 0.D Disconnected drawings

###### Lemma 5

Let $D$ be a disconnected drawing on $n\geq 3$ vertices with $\operatorname{TAR}(D)>60^{\circ}$. Then $m\leq 2n-6$ or $D$ consists of three vertices and one edge (Exception E0 in Fig. 5).

###### Proof

Assume $D$ consists of components $C_{i}$, $1\leq i\leq l$, with $n_{i}\geq 1$ vertices and $m_{i}\geq 0$ edges. Furthermore, $\operatorname{TAR}(C_{i})\geq\operatorname{TAR}(D)>60^{\circ}$ holds. By Lemma 1 we get $m_{i}\leq 2n_{i}-2$ for every component. If $l\geq 3$, then we have $m=\sum_{i=1}^{l}m_{i}\leq\sum_{i=1}^{l}(2n_{i}-2)=2n-2l\leq 2n-6.$ Otherwise $l=2$. If $C_{1}$ contains at least 2 edges, then the size of the unbounded cell of $C_{1}$ is at least $3$. So we get $m_{1}\leq 2n_{1}-4$ by Lemma 1. This gives $m=m_{1}+m_{2}\leq 2n_{1}-4+2n_{2}-2=2n-6.$ If $C_{1}$ and $C_{2}$ both consist of two vertices and an edge, then we have $m=2=2\cdot 4-6=2n-6$. If $D$ is a drawing on $3$ vertices and one edge, then we have Exception E0. ∎

## Appendix 0.E Proof of Theorem 3.1

###### Theorem 0.E.2

It is NP-hard to decide whether a graph $G$ has $\operatorname{TAR}(G)\geq 60^{\circ}$.

###### Proof

As input we are given a formula with variables $x_{1},x_{2},\dots,x_{n}$ and clauses $c_{1},c_{2},\dots,c_{m}$, where every clause contains exactly three different literals. We first construct a graph $G$ for the formula. The basic building blocks of our construction consist of triangles, which, in order to obtain a total angular resolution of $60^{\circ}$, must all be equilateral. We use the following gadgets; see Fig. 2(a). As clause gadget we use a sequence of four triangles that share a common vertex and in which consecutive triangles share an edge. The middle vertex with three incident edges, marked with $C_{j}$ in the figure, will be used to connect the clause gadget to its literals. We denote $C_{j}$ as a _clause vertex_. As variable gadget we use a triangle, followed by a sequence of $m$ hexagons, followed by another triangle.
Each hexagon consists of six triangles sharing the center point. Each non-extreme hexagon of the sequence is incident to its neighboring hexagons via two “opposite” edges. The initial triangle is incident to the first hexagon via the edge opposite to the incidence with the second hexagon. The final triangle is incident to the last hexagon via the edge opposite to the incidence with the second to last hexagon. The vertices of the initial and the final triangle that are incident to none of the hexagons are denoted as $A_{i,1}$ and $A_{i,2}$, respectively. For each variable $x_{i}$, we assign one side of the hexagonal path to the positive literal $x_{i}$ and the other to the negative literal $\overline{x}_{i}$. The intermediate vertices of the $j$th hexagon of the path are denoted with $X_{i,j}$ and $\overline{X}_{i,j}$, respectively, and called _literal vertices_. They will be used for connecting a literal to its clause. Additionally, we use a connector gadget. It consists of two triangles with a common edge. The two vertices that are incident to only one of the triangles are denoted as $A_{i,3}$ and $A_{i,4}$, respectively. Note that for all three gadgets, an embedding with total angular resolution $60^{\circ}$ is unique up to rotation, scaling and reflection of the whole gadget. In particular, for each gadget, all triangles are congruent. For connecting the gadgets, we first build a rigid 3-sided frame as depicted in Fig. 2(b). On the bottom, it consists of a straight path of $2n+2m-1$ triangles that alternatingly face up and down (the bottom path). On top of the rightmost triangle of this path, we add a sequence of $m$ clause gadgets stacked on top of each other (one for each clause, with the clause vertices $C_{1},\ldots,C_{m}$ facing to the right). The top consists of a straight path of $2n+2m-1$ triangles that alternatingly face down and up (the top path).
We denote the leftmost $n+1$ vertices of degree three on the upper side of the bottom path with $X_{1},\ldots,X_{n}$, and $B_{1}$. The leftmost $n+1$ vertices of degree three on the lower side of the top path are denoted $X^{\prime}_{1},\ldots,X^{\prime}_{n}$, and $B_{2}$. As an embedding with total angular resolution $60^{\circ}$ of this frame is again unique up to rotation, scaling, and reflection, we assume without loss of generality that it is embedded as depicted in Fig. 2(b). Then, for every $1\leq i\leq n$, $X^{\prime}_{i}$ and $X_{i}$ lie on a vertical line. Further, the line $\ell_{1}$ spanned by $B_{1}$ and $C_{m}$ has slope $60^{\circ}$ and the line $\ell_{2}$ through $B_{2}$ and $C_{1}$ has slope $-60^{\circ}$. We next add the variable gadgets in the following way. For each variable $x_{i}$, we identify the vertex $A_{i,1}$ of its gadget with $X_{i}$. Further, we connect the gadget to $X^{\prime}_{i}$ via a connector gadget by identifying $A_{i,2}$ with $A_{i,3}$ and $A_{i,4}$ with $X^{\prime}_{i}$, respectively. In any drawing with total angular resolution $60^{\circ}$ of the construction so far, each variable gadget together with its connector gadget must be drawn vertically between $X_{i}$ and $X^{\prime}_{i}$. Further, the gadgets can be scaled by adapting the height of the connector gadget. Independent of the scaling factor, the left side of each variable gadget is always to the left of the lines $\ell_{1}$ and $\ell_{2}$. Direction-wise, variable gadgets can be drawn in two ways: either all $X_{i,j}$ are to the right of the $\overline{X}_{i,j}$ or the other way around. To complete the construction, we add a path consisting of three consecutive edges between $X_{i,j}$ ($\overline{X}_{i,j}$) and $C_{j}$ whenever $x_{i}$ ($\overline{x}_{i}$) is a literal of clause $c_{j}$.
To obtain a total angular resolution of $60^{\circ}$ at every clause vertex $C_{j}$, all of these paths must start from $C_{j}$ towards the right and one must start horizontally. We claim that the constructed graph $G$ has a drawing $D$ with $\operatorname{TAR}(D)\geq 60^{\circ}$ if and only if the initial formula is satisfiable. Assume first that the formula is satisfiable. Consider a truth assignment of the variables that satisfies the formula. We draw each variable gadget such that the side corresponding to its true literal is on the right. Further, we scale all the variable gadgets such that no two vertices of different variable gadgets or of a variable gadget and a clause gadget lie on a horizontal line (except for the vertices $X_{i}$). For every clause $c_{j}$, we choose a literal $v_{i}\in\\{x_{i},\overline{x}_{i}\\}$ of $c_{j}$ which is true. We draw the path between the corresponding clause vertex $C_{j}$ and the literal vertex $V_{i,j}$ by starting with a horizontal edge from $C_{j}$ to the right, continuing with a $\pm 60^{\circ}$ edge to the right and up to the height of $V_{i,j}$, and ending with a horizontal edge to $V_{i,j}$. For the other literals of $c_{j}$ we draw a $\pm 60^{\circ}$ edge from $C_{j}$ to the right, followed by a horizontal edge to the left and a $\pm 60^{\circ}$ edge to the left or right, depending on whether the corresponding literal is true or false; see Fig. 3. As all edges of the resulting drawing $D$ are either horizontal or under an angle of $\pm 60^{\circ}$, we have $\operatorname{TAR}(D)=60^{\circ}$ as desired. For the other direction, assume that $G$ admits a drawing $D$ with $\operatorname{TAR}(D)=60^{\circ}$. In $D$, consider a clause vertex $C_{j}$ and the path $P=C_{j}M_{1}M_{2}V_{i,j}$ which starts horizontally at $C_{j}$.
Then the literal vertex $V_{i,j}$ must be on the right side of its variable gadget: If $V_{i,j}$ is a left vertex of a variable gadget, then $P$ must enter $V_{i,j}$ from the left under an angle of at most $60^{\circ}$ in absolute value with respect to the horizontal line. Hence $M_{2}$ lies to the left of the lines $\ell_{1}$ and $\ell_{2}$. On the other hand, the second vertex $M_{1}$ of $P$ lies horizontally to the right of $C_{j}$. However, to respect the $60^{\circ}$ restriction at $M_{1}$, $M_{2}$ must lie to the right of the lines $\ell_{1}$ and $\ell_{2}$, a contradiction. Now consider the set of literal vertices that are an endpoint of a path starting horizontally at some clause vertex. As these literal vertices are on the right side of their corresponding variable gadgets, the set does not contain any pair $X_{i,j},\overline{X}_{i,k}$. By setting all the corresponding literals to true, we obtain a non-contradicting partial truth assignment of the variables that satisfies the formula, since for every clause $c_{j}$ the literal $v_{i}$ corresponding to $V_{i,j}$ is true. ∎
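The satisfiability direction above rests on the elementary fact that lines drawn horizontally or at $\pm 60^{\circ}$ can only cross at an angle of exactly $60^{\circ}$. That fact is easy to check mechanically (a throwaway sketch of our own, not part of the proof):

```python
# Edge directions used in the construction, taken modulo 180 degrees:
# horizontal (0), +60, and -60 (which is the same line direction as 120).
dirs = [0, 60, 120]

for i, d1 in enumerate(dirs):
    for d2 in dirs[i + 1:]:
        diff = abs(d1 - d2) % 180
        angle = min(diff, 180 - diff)  # angle between the two line directions
        assert angle == 60             # never below the 60-degree threshold
```

So any vertex or crossing formed by two edges of distinct directions from this set contributes an angle of exactly $60^{\circ}$, which is why the constructed drawing attains $\operatorname{TAR}(D)=60^{\circ}$.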
# On applications of Herglotz-Nevanlinna functions in material sciences, I: classical theory and applications of sum rules

Annemarie Luger (Email: <EMAIL_ADDRESS>), Department of Mathematics, Stockholm University, SE-106 91 Stockholm, Sweden

Miao-Jung Yvonne Ou (Email: <EMAIL_ADDRESS>), Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA

###### Abstract

This is the first part of a review article which focuses on theory and applications of Herglotz-Nevanlinna functions in material sciences. It starts with the definition of scalar-valued Herglotz-Nevanlinna functions and explains in detail the theorems that are pertinent to applications, followed by a short overview of the matrix-valued and operator-valued versions of these functions and the properties that carry over from the scalar case. The theory is complemented by some applications from electromagnetics that are related to the sum rules. More applications of Herglotz-Nevanlinna functions in material sciences can be found in Part II.

## 1 Introduction

This review article deals with theory and applications of Herglotz-Nevanlinna functions, which are functions analytic in the complex upper half-plane and with non-negative imaginary part. They appear in surprisingly many circumstances and have been studied and utilized for a long time, which also explains why they appear under several names. Here we are going to call them Herglotz-Nevanlinna functions (or Herglotz for short). Even if the definition at first sight does not seem to be very restrictive, it does have strong implications. For more than a century it has been known that the set of all Herglotz-Nevanlinna functions is described via an integral representation using three parameters only: two numbers and a positive Borel measure (satisfying a reasonable growth condition). This explicit parametrization has made them a very powerful tool, which has been used effectively both in pure mathematics and in applications.
It turns out that with such relatively simple functions, amazingly much information can be encoded. For example, Herglotz-Nevanlinna functions are in one-to-one correspondence with passive (one-port) systems. This means that the corresponding function “knows everything about the system”. Another example is given by Sturm-Liouville differential operators, appearing in mathematical physics. Here, for a given operator, its spectrum can be completely described in terms of the singularities of the corresponding Titchmarsh-Weyl coefficient, which is a Herglotz-Nevanlinna function. Even more, this function can still be used in order to describe the spectrum when the boundary conditions are changed. But these functions are not only used when working with a single system or operator; they can also be employed to deal with a whole class of problems simultaneously, as for instance when finding common bounds for the performance of all antennas that fit into a given volume (e.g., a ball of given radius), independently of their particular shape. In the study of composite materials, a similar situation arises in deriving bounds on effective properties when only the volume fractions are given; these bounds depend on the volume fractions only. In recent years there has been a series of workshops where mathematicians working in pure mathematics and in applied mathematics and experts in various applications have met. All participants have one common interest, Herglotz-Nevanlinna functions, but with very different perspectives and approaches. This two-part review article is an attempt to reflect and to present in a systematic and unified way the various pieces of mathematical theorems underpinning a diverse set of applications. The structure of the current paper is as follows.
After this introduction, in Section 2 we review the mathematical background for Herglotz-Nevanlinna functions and provide a common basis for the applications presented in Section 3 and in Part II, which is concluded with possible generalizations of the theory. Section 2 starts with the well-known integral representation (Section 2.2), followed by various aspects that we consider to be relevant in the chosen applications. In particular, the behavior of a Herglotz-Nevanlinna function on/towards the real line (i.e., at the boundary of the domain) is detailed in Sections 2.3 and 2.7. In material sciences, the functions often have more specific properties, which are discussed in Section 2.4; in particular, Stieltjes functions are characterized. Besides the integral representation, other (equivalent) representations are also presented in Section 2.5. In Section 2.6, it is explained how Herglotz-Nevanlinna functions appear in the mathematical description of passive systems, and in Section 2.8 we review briefly the matrix- (and operator-)valued Herglotz-Nevanlinna functions. Section 3 (as well as Section 2 in Part II) is devoted to applications, where we present a diverse set of applications in material sciences with the underlying common theme of Herglotz-Nevanlinna functions. The common feature here is that the use of Herglotz-Nevanlinna functions makes it possible to handle a large class of problems at once, instead of changing the models according to details such as the shape of inclusions. In particular, in several situations physical bounds can be derived, which provide estimates of, e.g., performance under certain conditions. In the applications presented here, the independent variable is either the frequency (in electromagnetics, poroelastics, quasi-static cloaking as well as time dispersive, dissipative systems) or the material contrasts (for composite materials).
In Section 3.1 we describe how sum rules can be employed for deriving bounds for electromagnetic structures, and in Section 3.2 passive realizations/approximations of non-passive systems are found via optimization in terms of the corresponding Herglotz-Nevanlinna functions. More applications can be found in Part II. They involve bounds on effective properties of composite materials, numerical treatment of a costly memory term in the modeling of poroelastic materials, as well as bounds for quasi-static cloaking and identifying certain time dispersive and dissipative systems as restrictions of Hamiltonian systems. Even if all these examples demonstrate the effectiveness of Herglotz-Nevanlinna functions, there are situations in applications that cannot be treated by these methods, but would require more general classes of functions. This applies, for instance, to non-passive systems, e.g., appearing in electromagnetics, for which the analytic function in question might have non-positive imaginary part as well. Another example is given by composite materials with more than two phases. Then, even if the corresponding analytic functions still have positive imaginary part, they are not covered by the treatment above, since they depend on more than only one complex variable. Therefore, in Section 3 of Part II we provide an overview of the mathematics that is available for different classes of functions that extend the classical Herglotz-Nevanlinna class and which we expect to be relevant for applications in material sciences. We hope that this two-part review paper can be both helpful for people working in applications (by providing mathematical references for different aspects of Herglotz-Nevanlinna functions as well as their generalizations for future work) and interesting for pure mathematicians (by pointing out some relevant applications of Herglotz-Nevanlinna functions).
## 2 Mathematical background

### 2.1 Definition and first examples

In this article, the complex upper half plane is denoted by $\mathbb{C}^{+}:=\\{z\in\mathbb{C}:{\rm Im}\,z>0\\}$ and the right half plane by $\mathbb{C}_{+}:=\\{z\in\mathbb{C}:{{\rm Re}\,}\,z>0\\}$.

###### Definition 2.1.1

A function $h:\mathbb{C}^{+}\to\mathbb{C}$ is called a Herglotz-Nevanlinna function if it is analytic in $\mathbb{C}^{+}$ and satisfies ${\rm Im}\,h(z)\geq 0$ for all $z\in\mathbb{C}^{+}$.

These functions appear at various places with different names: Herglotz, Nevanlinna, Pick, R-function (or some combination of these). In pure mathematics Nevanlinna seems to be the most used, whereas in applications often Herglotz is preferred.

###### Example 2.1.2

It is easy to check that the following functions belong to this class: $f_{1}(z)=-\frac{1}{z-3}\quad f_{2}(z)=i\quad f_{3}(z)=-\frac{1}{z+i}\quad f_{4}(z)={\rm Log}\,z\quad f_{5}(z)=\sqrt{z},$ where for the last two functions the branch is chosen such that the functions map $\mathbb{C}^{+}$ into the upper half plane. Other, maybe less obvious, examples are $f_{6}(z)=\tan z\qquad f_{7}(z)=\frac{\log\big{(}\Gamma(z+1)\big{)}}{z\log z},$ where $\Gamma(z)$ denotes the Gamma-function; see [6, 7].

###### Remark 2.1.3

By definition, for a Herglotz-Nevanlinna function ${\rm Im}\,f(z)\geq 0$ for all $z\in\mathbb{C}^{+}$. However, it follows from a version of the maximum principle that if there is a point $z^{*}\in\mathbb{C}^{+}$ such that ${\rm Im}\,f(z^{*})=0$, then $f$ is a (real) constant function. Hence, if $f$ and $g$ are non-constant Herglotz-Nevanlinna functions, then the composition $F(z):=f\big{(}g(z)\big{)}$ is a Herglotz-Nevanlinna function as well. In particular, if $f\not\equiv 0$ is Herglotz-Nevanlinna, then both $g_{1}(z):=f\big{(}-\frac{1}{z}\big{)}$ and $g_{2}(z):=-\frac{1}{f(z)}$ are Herglotz-Nevanlinna functions.
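These membership claims lend themselves to a quick numerical spot check (a sketch of our own, not a proof; the principal branches are assumed for the logarithm and the square root):

```python
# Spot check: Im f(z) >= 0 on random samples of the upper half plane C+
# for f1, ..., f5 of Example 2.1.2, and for one composition as in
# Remark 2.1.3.
import cmath
import random

fs = {
    "f1": lambda z: -1 / (z - 3),
    "f2": lambda z: 1j,
    "f3": lambda z: -1 / (z + 1j),
    "f4": cmath.log,    # principal branch maps C+ into C+
    "f5": cmath.sqrt,   # principal branch maps C+ into C+
}

random.seed(0)
samples = [complex(random.uniform(-5, 5), random.uniform(1e-3, 5))
           for _ in range(1000)]

for name, f in fs.items():
    assert all(f(z).imag >= 0 for z in samples), name

# a composition of non-constant Herglotz-Nevanlinna functions, f1(f5(z)):
g = lambda z: -1 / (cmath.sqrt(z) - 3)
assert all(g(z).imag >= 0 for z in samples)
```

Such sampling obviously cannot replace the analytic arguments, but it is a convenient sanity check when experimenting with candidate functions.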
When considering limits towards real points, usually only non-tangential limits ${z\hat{\to}x_{0}}$ are considered; this means that $z$ tends to $x_{0}\in\mathbb{R}$ in some Stolz domain $D_{\theta}:=\\{z\in\mathbb{C}^{+}:\theta<{\rm Arg}(z-x_{0})<\pi-\theta\\}$, where $0<\theta<\frac{\pi}{2}$.

###### Remark 2.1.4

Herglotz-Nevanlinna functions can also be characterized via the boundary behavior only: an analytic function $f:\mathbb{C}^{+}\to\mathbb{C}$ is Herglotz-Nevanlinna if and only if ${\lim\limits_{z\hat{\to}x_{0}}}{\rm Im}\,f(z)\geq 0$ (as a finite number or $+\infty$) for all $x_{0}\in\mathbb{R}\cup\\{\infty\\}$.

### 2.2 Integral representation

The main tool in the work with Herglotz-Nevanlinna functions is the following explicit representation, which in principle has been known for more than a century; see e.g., [25] and also [11].

###### Theorem 2.2.1

A function $f:\mathbb{C}^{+}\to\mathbb{C}$ is a Herglotz-Nevanlinna function if and only if there are numbers $a\in\mathbb{R}$, $b\geq 0$ and a (positive) Borel measure $\mu$ with $\int_{\mathbb{R}}\frac{1}{1+\xi^{2}}d\mu(\xi)<\infty$ such that

$f(z)=a+bz+\int_{\mathbb{R}}\left(\frac{1}{\xi-z}-\frac{\xi}{1+\xi^{2}}\right)d\mu(\xi).$ (2.2.1)

Moreover, $a$, $b$, and $\mu$ are unique with this property.

Note that the term $\frac{\xi}{1+\xi^{2}}$ is needed for assuring the convergence of the integral.

###### Remark 2.2.2

Alternatively, representation (2.2.1) can also be written as

$f(z)=a+bz+\int_{\mathbb{R}}\frac{1+\xi z}{\xi-z}d\sigma(\xi)$ (2.2.2)

with the finite measure $\sigma$ given by ${d\sigma(\xi)}:=\frac{d\mu(\xi)}{1+\xi^{2}}$.
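As a concrete instance of the representation (2.2.1), the data of $f_{1}(z)=-\frac{1}{z-3}$ can be written down and verified numerically (a sketch; the values $a=3/10$, $b=0$, $\mu=\delta_{3}$ are our own computation, with $a$ chosen to cancel the regularizing term $\frac{\xi}{1+\xi^{2}}$ at $\xi=3$):

```python
# f1(z) = -1/(z-3) in the form (2.2.1) with a = 3/10, b = 0, mu = delta_3.
import random

def f_rep(z, a=0.3, b=0.0, xi=3.0, mass=1.0):
    # a + b*z + mass * ( 1/(xi - z) - xi/(1 + xi^2) ), a unit point mass at xi
    return a + b * z + mass * (1 / (xi - z) - xi / (1 + xi**2))

random.seed(1)
for _ in range(100):
    z = complex(random.uniform(-5, 5), random.uniform(0.1, 5))
    assert abs(f_rep(z) - (-1 / (z - 3))) < 1e-12  # matches -1/(z-3)

# the constant a can also be read back from the function itself as Re f(i)
assert abs(f_rep(1j).real - 0.3) < 1e-12
```

The last line anticipates the explicit recovery formulas for $a$ and $b$ stated in (2.2.3).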
Given a Herglotz-Nevanlinna function, the constants $a$ and $b$ can be read off directly; namely, it holds that

$a={\rm Re}\,f(i)\quad\text{ and }\quad b=\lim\limits_{y\to\infty}\frac{f(iy)}{{i}y}.$ (2.2.3)

###### Example 2.2.3

For the functions in Example 2.1.2 we have, for instance: $\mu_{1}=\delta_{3}$, the point measure with mass $1$ at the point $\xi_{0}=3$, is the representing measure for $f_{1}$; for $f_{2}$ the measure is a multiple of the Lebesgue measure, $\mu_{2}=\frac{1}{\pi}\lambda_{\mathbb{R}}$; whereas the representing measure $\mu_{3}$ of $f_{3}$ is absolutely continuous with respect to the Lebesgue measure and has density $\frac{1}{\pi(1+\xi^{2})}$, i.e. $d\mu_{3}(\xi)=\frac{1}{\pi(1+\xi^{2})}d\lambda_{\mathbb{R}}(\xi)$.

Given the function, its representing measure can be reconstructed via the following formula, known as the Stieltjes inversion formula; see e.g., [25].

###### Proposition 2.1

Let $f$ be a Herglotz-Nevanlinna function with integral representation (2.2.1). Then for $x_{1}<x_{2}$ it holds

$\mu\big{(}(x_{1},x_{2})\big{)}+\frac{1}{2}\mu\left(\\{x_{1}\\}\right)+\frac{1}{2}\mu\left(\\{x_{2}\\}\right)=\displaystyle\lim_{y\rightarrow 0+}\frac{1}{\pi}\int_{x_{1}}^{x_{2}}{\rm Im}\,f(x+iy)\,dx,$ (2.2.4)

or, in a weak formulation, if $h$ is a compactly supported smooth function in $C_{0}^{1}(\mathbb{R})$, then

$\int_{\mathbb{R}}h(\xi)d\mu(\xi)=\lim_{y\rightarrow 0+}\frac{1}{\pi}\int_{\mathbb{R}}h(x)\,{\rm Im}\,f(x+iy)\,dx.$

Moreover, point masses are given by

$\lim\limits_{z\hat{\to}\alpha}(\alpha-z)f(z)=\mu\big{(}\\{\alpha\\}\big{)}.$ (2.2.5)

By definition, a Herglotz-Nevanlinna function is defined in the upper half-plane $\mathbb{C}^{+}$ only. However, it can be extended naturally also to the lower half plane $\mathbb{C}^{-}$, since the integral on the right-hand side of (2.2.1) is well defined for all $z\in\mathbb{C}\setminus\mathbb{R}$. This extension is symmetric with respect to the real line, i.e.
$f(\overline{z})=\overline{f(z)}\qquad z\in\mathbb{C}\setminus\mathbb{R},$ (2.2.6)

and is hence called the symmetric extension.

###### Example 2.2.4

For some of the functions from Example 2.1.2 the symmetric extensions are $f_{1}(z)=-\frac{1}{z-3}\qquad f_{2}(z)=\left\\{\begin{array}[]{rc}i&{\rm Im}\,z>0\\\ -i&{\rm Im}\,z<0\end{array}\right.\qquad f_{3}(z)=\left\\{\begin{array}[]{rc}-\frac{1}{z+i}&{\rm Im}\,z>0\\\\[5.69054pt] -\frac{1}{z-i}&{\rm Im}\,z<0\end{array}\right..$

### 2.3 Boundary behavior

We first note that for a Herglotz-Nevanlinna function $f$,

$\lim\limits_{y\to 0+}f(x+iy)\text{ exists for almost all }x\in\mathbb{R}.$

To see this, let $\varphi$ be a Möbius transform that maps the unit disk $\mathbb{D}$ onto the open upper half-plane $\mathbb{C}^{+}$, e.g. $\varphi(w)=i\frac{1+w}{1-w}$. If $f$ is a Herglotz-Nevanlinna function, then the function $h(w):=\varphi^{-1}\big{(}f(\varphi(w))\big{)}$ is a bounded analytic function in $\mathbb{D}$ and hence has boundary values almost everywhere. Therefore, the same is true for the Herglotz-Nevanlinna function $f$. The weak form of the Stieltjes inversion formula also shows that the limit of the imaginary part always exists in the distributional sense. However, for pointwise limits, and good properties of the function on the boundary, more assumptions on the measure have to be imposed. Let $f$ be given with integral representation (2.2.1). If there is an interval $(x_{1},x_{2})$ such that $(x_{1},x_{2})\cap{\rm supp}\,\mu=\emptyset$, then for every $x\in(x_{1},x_{2})$ the integral in (2.2.1) exists and defines a real-analytic function. Hence the function can be extended analytically across $(x_{1},x_{2})$ to the lower half plane, and this analytic extension coincides with the symmetric extension. Also in other cases it can be possible to extend the Herglotz-Nevanlinna function analytically over (some part of) the real line; but then, in general, the continuation will not coincide with the symmetric extension.
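This dichotomy can be observed numerically on the examples $f_{1}$ and $f_{3}$ (a sketch of our own): for $f_{1}$ the measure is supported only at $\xi=3$, so the symmetric extension agrees with the rational formula below the real axis, whereas for $f_{3}$ the density $\varrho(\xi)=\frac{1}{\pi(1+\xi^{2})}$ is real analytic on all of $\mathbb{R}$, and the analytic continuation $-\frac{1}{z+i}$ differs from the symmetric extension by exactly $2\pi i$ times the analytically continued density.

```python
# Symmetric extension conj(f(conj(z))) vs. analytic continuation,
# for f1(z) = -1/(z-3) and f3(z) = -1/(z+i) in the lower half plane.
import math

f1 = lambda z: -1 / (z - 3)
f3 = lambda z: -1 / (z + 1j)
rho3 = lambda z: 1 / (math.pi * (1 + z * z))   # continued density of f3

z = 0.5 - 2j                                   # a point with Im z < 0

# f1: measure supported in {3} only, so both extensions coincide
assert abs(f1(z.conjugate()).conjugate() - f1(z)) < 1e-12

# f3: the symmetric extension equals -1/(z - i), cf. Example 2.2.4 ...
sym = f3(z.conjugate()).conjugate()
assert abs(sym - (-1 / (z - 1j))) < 1e-12
# ... it differs from the analytic continuation -1/(z + i) ...
assert abs(sym - f3(z)) > 0.1
# ... by exactly 2*pi*i times the analytically continued density
assert abs(f3(z) - (sym + 2j * math.pi * rho3(z))) < 1e-12
```

The exact identity in the last line holds because $-\frac{1}{z+i}+\frac{1}{z-i}=\frac{2i}{1+z^{2}}=2\pi i\varrho(z)$.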
A characterization of this situation in terms of the measure is given in the following proposition; see [18].

###### Proposition 2.2

Let $f$ be a Herglotz-Nevanlinna function with representation (2.2.1). Then $f$ can be continued analytically onto the interval $(x_{1},x_{2})$ if and only if the measure $\mu$ is absolutely continuous with respect to the Lebesgue measure $\lambda$ on this interval and the density $\varrho(t)$ is real analytic on $(x_{1},x_{2})$. In this case,

$f(z)=\overline{f(\overline{z})}+2\pi i\varrho(z),$

where $\varrho(z)$ denotes the analytic continuation of the density $\varrho$.

###### Example 2.3.1

The function $f_{2}$ in Example 2.1.2 can be extended as an entire function, $f_{2}(z)\equiv i$, whereas $f_{3}$ can be extended analytically only to the punctured plane $\mathbb{C}\setminus\\{-i\\}$.

Loosely speaking, an analytic density guarantees an analytic boundary function. However, for the boundary function to be continuous it is not sufficient to assume that $\mu$ has a continuous density. As a counterexample, consider the density

$\varrho(\xi)=\left\\{\begin{array}[]{cl}\displaystyle-\frac{1}{\ln\xi},&\xi\in(0,\gamma],\\\\[8.53581pt] 0,&\xi\in[-\gamma,0],\end{array}\right.$ (2.3.1)

which is continuous on $[-\gamma,\gamma]$ for any $\gamma\in(0,1)$, but for which the corresponding Herglotz-Nevanlinna function does not admit a continuous extension to $x=0$. The appropriate assumption here turns out to be Hölder continuity. A function $\varrho:(x_{1},x_{2})\to\mathbb{R}$ is called Hölder continuous with exponent $\alpha$, that is, $\varrho\in C^{0,\alpha}(x_{1},x_{2})$, if there exists a constant $C>0$ such that

$|\varrho(\xi_{1})-\varrho(\xi_{2})|\leq C\cdot|\xi_{1}-\xi_{2}|^{\alpha}\quad\text{ for all }\xi_{1},\xi_{2}\in(x_{1},x_{2}).$

The following proposition relies on some well-known results; a detailed proof for the current situation is given in [23, Theorem 2.2].

###### Proposition 2.3
Let $f$ be a Herglotz-Nevanlinna function with representation (2.2.1) and assume that there is an interval $(x_{1},x_{2})$ where the measure $\mu$ is absolutely continuous with respect to the Lebesgue measure $\lambda$ with Hölder continuous density $\varrho$. Then for every compact interval $I\subset(x_{1},x_{2})$ the function $f$ admits a continuous extension to $\mathbb{C}^{+}\cup I$. This continuation is given via the Hilbert transform

$f(x)=a+bx+p.v.\int_{\mathbb{R}}\left(\frac{1}{\xi-x}-\frac{\xi}{1+\xi^{2}}\right)d\mu(\xi)+i\pi\varrho(x),\quad x\in I,$

where the integral is taken as a principal value at $\xi=x$.

### 2.4 Subclasses

In this section we focus on how properties of the measure in the integral representation (2.2.1) are related to properties of the function. We start with the so-called symmetric functions, which are important for instance in connection with passive systems, cf. Section 2.6.

###### Definition 2.4.1

A Herglotz-Nevanlinna function is called symmetric if

$f(-\overline{z})=-\overline{f(z)}.$ (2.4.1)

Such functions are purely imaginary on the imaginary axis and can be characterized in the following way.

###### Proposition 2.4

A Herglotz-Nevanlinna function $f$ with representation (2.2.1) is symmetric if and only if $a=0$ and $\mu$ is symmetric with respect to $0$, i.e., $\mu(B)=\mu(-B)$ for every Borel set $B$ in $\mathbb{R}$. In this case, the representation can be written as

$f(z)=bz+p.v.\int_{\mathbb{R}}\frac{1}{t-z}d\mu(t)\quad\text{ for } z\in\mathbb{C}^{+},$

where $p.v.$ denotes the principal value at $\infty$.

The function's behavior at $\infty$ is closely related to properties of the representing measure $\mu$ and to related simplifications of the representation. The following statements can be found in [25]. The first theorem characterizes when the term $\frac{\xi}{1+\xi^{2}}$ is needed in the integral.

###### Theorem 2.4.2

Let $f$ be a Herglotz-Nevanlinna function with representation (2.2.1).
Then the following are equivalent: * (i) $\displaystyle\int_{1}^{\infty}\frac{{\rm Im}\,f(iy)}{y}dy<\infty$ * (ii) $\displaystyle\int_{\mathbb{R}}\frac{1}{1+|\xi|}d\mu(\xi)<\infty$ * (iii) $f(z)=s+\displaystyle\int_{\mathbb{R}}\frac{1}{\xi-z}d\mu(\xi)\text{ with some }s\in\mathbb{R}.$ In this case $s=\lim\limits_{y\to\infty}f(iy)=\lim\limits_{y\to\infty}{\rm Re}\,f(iy)=a-\int_{\mathbb{R}}\frac{\xi}{1+\xi^{2}}d\mu(\xi)$. The next theorem characterizes functions with bounded measure. ###### Theorem 2.4.3 Let $f$ be a Herglotz-Nevanlinna function with representation (2.2.1). Then the following are equivalent: * (i) $\displaystyle\lim\limits_{z\hat{\to}\infty}\displaystyle\frac{f(z)}{{\rm Im}\,z}=0\quad\text{and }\quad\displaystyle\limsup\limits_{z\hat{\to}\infty}|z|{\rm Im}\,f(z)<\infty$ * (ii) $\displaystyle\int_{\mathbb{R}}d\mu(\xi)<\infty.$ Hence also in this case $f(z)=s+\displaystyle\int_{\mathbb{R}}\frac{1}{\xi-z}d\mu(\xi)$, with $s\in\mathbb{R}$. An important subclass of Herglotz-Nevanlinna functions is the class of Stieltjes functions; see also [25]. ###### Definition 2.4.4 A holomorphic function $f:\mathbb{C}\setminus[0,+\infty)\to\mathbb{C}$ is called a Stieltjes function if * • ${\rm Im}\,f(z)\geq 0$ for ${\rm Im}\,z>0$ * • $f(x)\geq 0$ for $x\in(-\infty,0)$. These functions can be characterized in several different ways: ###### Theorem 2.4.5 Let $f$ be holomorphic in the domain $\mathbb{C}\setminus[0,+\infty)$. Then the following are equivalent: * (a) $f$ is a Stieltjes function. * (b) $f$ can be represented as $f(z)=s+\displaystyle\int_{[0,\infty)}\frac{1}{\xi-z}d\mu(\xi)$ with $s\geq 0$ and $\int_{[0,\infty)}\frac{1}{1+\xi}d\mu(\xi)<\infty$. * (c) $f$ is a Herglotz-Nevanlinna function (analytically continued onto $\mathbb{R}^{-}$), which satisfies $\int_{1}^{\infty}\frac{{\rm Im}\,f(iy)}{y}dy<\infty$ and $\lim\limits_{y\to\infty}f(iy)\geq 0$. * (d) The functions $f(z)$ and $h_{1}(z):=zf(z)$ are Herglotz-Nevanlinna functions.
* (e) The functions $f(z)$ and $h_{2}(z):=zf(z^{2})$ are Herglotz-Nevanlinna functions. In this case $s=\lim\limits_{x\to-\infty}f(x)$. Moreover, symmetric Herglotz-Nevanlinna functions can be represented via Stieltjes functions. ###### Theorem 2.4.6 A function $f$ is a symmetric Herglotz-Nevanlinna function, i.e., $f(-\overline{z})=-\overline{f(z)}$, if and only if there exists a Stieltjes function $h$ such that $f(z)=zh(z^{2})$. Note that in some places the notion of a Stieltjes function additionally requires that all moments of the representing measure exist. Other versions of Stieltjes functions, where the functions are analytic on the other half-line, are used in Section 2.1.1 of Part II. Another important subclass is the class of rational Herglotz-Nevanlinna functions. Here the term rational can be understood in two different ways. One way is to consider functions for which there exists a rational function in $\mathbb{C}$ whose restriction to the upper half plane coincides with the given function, e.g., $f_{1},f_{2},$ and $f_{3}$ in Example 2.1.2, as well as the functions arising in connection with electrical circuit networks, cf., Example 3.1.1. Note that these functions might have absolutely continuous measure, like $f_{2}$ and $f_{3}$. But rational can also be interpreted in a stricter way, namely that the integral representation gives a rational function in $\mathbb{C}$, or in other words, that the symmetric extension is rational in $\mathbb{C}$. Among the above named examples only $f_{1}$ is rational also in this sense. Rational functions in this stricter meaning are exactly those functions for which the measure is a finite sum of Dirac measures; such functions appear, e.g., when deriving bounds in Section 2.1.1 of Part II. Also, more generally, meromorphic Herglotz-Nevanlinna functions have been investigated, e.g., in connection with inverse problems. An important property is the interlacing of their zeros and poles on the real line.
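The characterizations in Theorem 2.4.5 are easy to probe numerically. The sketch below is only an illustration: it assumes the particular Stieltjes transform $f(z)=\int_{0}^{\infty}e^{-\xi}/(\xi-z)\,d\xi$ (i.e., $s=0$ and $d\mu(\xi)=e^{-\xi}d\xi$, chosen purely for concreteness) and checks properties (a) and (d) at a few sample points.

```python
import numpy as np
from scipy.integrate import quad

def stieltjes_f(z):
    """f(z) = integral_0^inf exp(-xi)/(xi - z) dxi, split into real and imaginary parts."""
    re = quad(lambda t: np.exp(-t) * (t - z.real) / ((t - z.real)**2 + z.imag**2), 0, np.inf)[0]
    im = quad(lambda t: np.exp(-t) * z.imag / ((t - z.real)**2 + z.imag**2), 0, np.inf)[0]
    return re + 1j * im

samples = [0.3 + 0.5j, -2.0 + 1.0j, 5.0 + 0.1j]

# (a) Im f(z) >= 0 in the upper half plane, and f(x) >= 0 on (-inf, 0)
for z in samples:
    assert stieltjes_f(z).imag > 0
for x in [-0.5, -3.0, -10.0]:
    assert quad(lambda t: np.exp(-t) / (t - x), 0, np.inf)[0] > 0

# (d) h_1(z) = z f(z) is again a Herglotz-Nevanlinna function
for z in samples:
    assert (z * stieltjes_f(z)).imag > 0
```

Property (e) could be checked in the same way with $h_{2}(z)=zf(z^{2})$.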
### 2.5 Other representations Besides the integral representation there also exist other ways to represent Herglotz-Nevanlinna functions. #### 2.5.1 Operator representations Representations using resolvents have been used in different contexts. The theorem below follows straightforwardly from Example 2.5.2 or can be seen as a special case of the results in, e.g., [27]. Here self-adjoint linear relations are used; they can be viewed as multi-valued operators. For a detailed overview of relations in inner product spaces see [14] or [5, Chapter 1]. ###### Theorem 2.5.1 A function $f$ is a Herglotz-Nevanlinna function if and only if there exist a Hilbert space $\mathcal{H}$, a self-adjoint linear relation $A$ in $\mathcal{H}$, a point $z_{0}\in\mathbb{C}^{+}$ and an element $v\in\mathcal{H}$ such that $f(z)=\overline{f(z_{0})}+(z-\overline{z_{0}})\left((I+(z-z_{0})(A-z)^{-1})v,v\right)_{\mathcal{H}}.$ (2.5.1) Moreover, if $\mathcal{H}=\overline{span}\\{(I+(z-z_{0})(A-z)^{-1})v:z\in\varrho(A)\\}$, where $\overline{span}$ denotes closed linear span and $\varrho(A)$ the resolvent set of $A$, then the representation is called minimal. In this case the representation is unique up to unitary equivalence. If the representation is minimal then it can be shown that ${\rm hol}(f)=\varrho(A)$, meaning that the function $f$ (more precisely, its symmetric continuation to the lower half plane and to those real points where possible) is analytic exactly in the resolvent set of the representing relation $A$. In particular, isolated eigenvalues of $A$ are poles of $f$. Non-isolated eigenvalues are then called generalized poles and can be characterized analytically as well. Since unitarily equivalent relations have the same spectral properties, these are intrinsic to the function as well. There are different (equivalent) ways to construct such an operator representation.
###### Example 2.5.2 If, for instance, the integral representation (2.2.1) is given, then the above representation can be realized as follows: If in the integral representation $b=0$ then $\mathcal{H}=L^{2}_{\mu}$ and $A$ is actually an operator; namely, $A$ is multiplication by the independent variable, i.e., $g(\xi)\mapsto\xi\cdot g(\xi)$. If $z_{0}$ is fixed then $v\in L^{2}_{\mu}$ may be chosen as $v(\xi)=\frac{1}{\xi-\overline{z_{0}}}$. If $b>0$ then the space has an additional one-dimensional component, namely, $\mathcal{H}=L^{2}_{\mu}\oplus\mathbb{C}$ and $A$ is not an operator but a relation with non-trivial multi-valued part $A(0)$. The relation $A$ acts in $L^{2}_{\mu}$ as multiplication by the independent variable and has the second component as multi-valued part, i.e., $A(0)=\\{0\\}\times\mathbb{C}$. In Theorems 2.4.2 and 2.4.3, some properties of the function have been related to certain properties of the measure that lead to simplifications of the integral representation. In the following theorem these results are extended to the operator representation. ###### Theorem 2.5.3 Let $f$ be a Herglotz-Nevanlinna function given by representation (2.5.1). Then 1. 1. $\displaystyle\lim\limits_{y\to\infty}\frac{f(iy)}{y}=0$ if and only if the relation $A$ is an operator, i.e., its multi-valued part is trivial. 2. 2. $\displaystyle\int_{1}^{\infty}\frac{{\rm Im}\,f(iy)}{y}dy<\infty$ if and only if $v\in{\rm dom}((|A|+I)^{1/2})$. 3. 3. $\displaystyle\lim\limits_{z\hat{\to}\infty}\displaystyle\frac{f(z)}{{\rm Im}\,z}=0\text{ \,and }\displaystyle\limsup\limits_{z\hat{\to}\infty}|z|{\rm Im}\,f(z)<\infty$ if and only if $A$ is an operator and $v\in{\rm dom}(A)$. In this case $f(z)=s+\left((A-z)^{-1}u,u\right)_{\mathcal{H}}$ with $s\in\mathbb{R}$ and $u:=(A-\overline{z_{0}})v$. Operator representations appear naturally in connection with spectral problems for self-adjoint operators.
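As a quick numerical sanity check of the construction in Example 2.5.2, the following sketch (purely illustrative, assuming a measure consisting of three Dirac masses, so that $L^{2}_{\mu}\cong\mathbb{C}^{3}$, $A$ is a diagonal matrix and $b=0$) verifies that formula (2.5.1) reproduces the integral representation.

```python
import numpy as np

# Discrete measure mu = sum_k w_k * delta_{xi_k}; then f(z) = sum_k w_k / (xi_k - z).
xi = np.array([-1.0, 0.5, 2.0])      # support points = eigenvalues of A
w = np.array([0.3, 1.0, 0.7])        # positive weights
z0 = 1.0 + 1.0j                      # fixed point in the upper half plane

def f_integral(z):
    return np.sum(w / (xi - z))

# Model in L^2_mu ~ C^3: A = multiplication by xi, v(xi) = 1/(xi - conj(z0)),
# and the L^2_mu inner product carries the weights w.
A = np.diag(xi)
v = 1.0 / (xi - np.conj(z0))
ip = lambda g, h: np.sum(w * g * np.conj(h))

def f_operator(z):
    u = v + (z - z0) * np.linalg.solve(A - z * np.eye(3), v)  # (I + (z - z0)(A - z)^{-1}) v
    return np.conj(f_integral(z0)) + (z - np.conj(z0)) * ip(u, v)

for z in [0.1 + 0.7j, -3.0 + 2.0j, 4.0 + 0.5j]:
    assert abs(f_integral(z) - f_operator(z)) < 1e-10
```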
For instance, the spectrum of a Sturm-Liouville operator can be characterized in terms of the singularities of the corresponding Titchmarsh-Weyl function, which in many cases is a Herglotz-Nevanlinna function. Then $A$ is the differential operator and $\mu$ can be interpreted as the spectral measure, see e.g., [16] and references therein or Chapter 6 in [5]. Abstractly speaking, scalar Herglotz-Nevanlinna functions appear in connection with rank one perturbations of self-adjoint operators, see e.g., [2], or in connection with self-adjoint extensions of a symmetric operator with deficiency indices $(1,1)$, [1]. Given such a symmetric operator and one fixed self-adjoint extension, there exists a Herglotz-Nevanlinna function, the so-called Q-function (in the sense of Krein) or abstract Weyl function, such that all self-adjoint extensions can be parameterized via Krein's resolvent formula. Moreover, the spectrum of any (minimal) extension is also given in terms of (the singularities of fractional linear transformations of) this Herglotz-Nevanlinna function. #### 2.5.2 Exponential representation If $f$ is a Herglotz-Nevanlinna function then the function $F(z):={\rm Log}(f(z))$ is also Herglotz-Nevanlinna. Since ${\rm Im}\,F$ is bounded (it takes values in $[0,\pi]$), it follows that $F$ has an integral representation with an absolutely continuous measure and no linear term, i.e., $b=0$. This observation leads to the following representation. ###### Proposition 2.5 A function $f$ is a Herglotz-Nevanlinna function if and only if there exist a real constant $\gamma$ and a density $\vartheta$ with $0\leq\vartheta(t)\leq 1$ almost everywhere such that $f(z)=\exp\left(\gamma+\int_{\mathbb{R}}\left(\frac{1}{t-z}-\frac{t}{1+t^{2}}\right)\vartheta(t)d\lambda_{\mathbb{R}}(t)\right).$ For details, in particular, concerning the relation between $\mu$ from (2.2.1) and $\vartheta$, see [3] and [4]. ### 2.6 Passive systems Symmetric Herglotz-Nevanlinna functions are also characterized in terms of Laplace transforms of certain distributions, see, e.g.,
the classical text [32]. Consider an operator $R$ that acts on distributions $\mathcal{D}^{\prime}(\mathbb{R},\mathbb{C})$ as a convolution operator, i.e., there exists $Y\in\mathcal{D}^{\prime}$ such that $R(\varphi)=Y\star\varphi$ for all $\varphi\in\mathcal{D}^{\prime}$ such that this action is well-defined. ###### Definition 2.6.1 A convolution operator $R=Y\star\,$ is called (admittance-) passive if for every test function $\varphi\in\mathcal{D}$ the output $R(\varphi)=:\psi$ is locally integrable and ${\rm Re}\,\left[\int_{-\infty}^{t}\overline{\varphi(\tau)}\psi(\tau)d\tau\right]\geq 0,\quad{\forall t\in\mathbb{R}.}$ It can be shown that every passive operator $R$ is _causal_ (i.e., ${\rm supp}\,Y\subseteq[0,\infty)$) and of slow growth (i.e., $Y\in\mathcal{S}^{\prime}$, where $\mathcal{S}^{\prime}$ denotes the set of tempered distributions). For a convolution operator that is causal and of slow growth, the Laplace transform $W:=\mathcal{L}(Y)$ of its defining distribution is well-defined and holomorphic in the right half plane, see e.g. [32] for details. Furthermore, a _real distribution_ is a distribution that maps real test functions to real numbers, and a convolution operator is called _real_ if it maps real distributions into real distributions. A holomorphic function is called _positive real_ (or PR for short) if it maps the right half plane into itself and takes real values on the real line. Passive operators are in a one-to-one correspondence with the positive real functions in the sense of the following theorem, which, however, is formulated in terms of Herglotz-Nevanlinna functions. ###### Theorem 2.6.2 Given a real passive operator $R=Y\star\,$, the function $f(z):=i{W}(\frac{z}{i})$ is a symmetric Herglotz-Nevanlinna function (where $W=\mathcal{L}(Y)$). Conversely, given a symmetric Herglotz-Nevanlinna function $f$, the convolution operator $R:=\mathcal{L}^{-1}(W)\star\,$ for $W(s):=\frac{1}{i}f(is)$ is passive and real.
###### Remark 2.6.3 Here the Laplace transform $W$ is itself a positive real function. In applications this transfer function is sometimes considered directly, see e.g., Example 3.1.2; alternatively, the Laplace transform is combined with multiplication by $-i$ in the independent variable, and is then called the Fourier-Laplace transform, as in Equation (2.2.9) of Part II. ### 2.7 Asymptotic behavior Generally speaking, the growth of the function at a boundary point in $\mathbb{R}\cup\\{\infty\\}$ is closely related to the behavior of the measure at this point, e.g., (2.2.5). In this section we demonstrate how the function's asymptotic behavior and the moments of the measure are related; see [28] for an overview and [8] for the proofs. We start by noting that for every Herglotz-Nevanlinna function $f$, one has $f(z)=b_{1}z+o(z)\qquad\text{as }z\hat{\to}\infty,$ and $f(z)=\frac{a_{-1}}{z}+o\Big{(}\frac{1}{z}\Big{)}\qquad\text{as }z\hat{\to}0,$ where $b_{1}=b$ in the integral representation (2.2.1) and $a_{-1}=-\mu(\\{0\\})$. Some functions even admit expansions of higher order. We first consider expansions at $\infty$. ###### Definition 2.7.1 A Herglotz-Nevanlinna function $f$ has an asymptotic expansion of order $K\geq-1$ at $z=\infty$ if there exist real numbers $b_{1},b_{0},b_{-1},\ldots,b_{-K}$ such that $f$ can be written as $f(z)=b_{1}z+b_{0}+\frac{b_{-1}}{z}+\ldots+\frac{b_{-K}}{z^{K}}+o\Big{(}\frac{1}{z^{K}}\Big{)}\quad\quad\text{ as }z\hat{\to}\infty.$ (2.7.1) ###### Remark 2.7.2 This means that $\lim\limits_{z\hat{\to}\infty}z^{K}\Big{(}f(z)-b_{1}z-b_{0}-\frac{b_{-1}}{z}-\ldots-\frac{b_{-K}}{z^{K}}\Big{)}=0.$ (2.7.2) Moreover, the coefficients $b_{-j}$ are given by $b_{-j}=\lim\limits_{z\hat{\to}\infty}z^{j}\Big{(}f(z)-b_{1}z-b_{0}-\frac{b_{-1}}{z}-\ldots-\frac{b_{-(j-1)}}{z^{j-1}}\Big{)}.$ (2.7.3) The following theorem relates the asymptotic expansion to the moments of the measure.
###### Theorem 2.7.3 Let $f$ be a Herglotz-Nevanlinna function with representing measure $\mu$ in (2.2.1) and let $N_{\infty}\geq 0$ be an integer. Then $f$ has an asymptotic expansion of order $2N_{\infty}+1$ at $z=\infty$ if and only if the measure $\mu$ has finite moments up to order $2N_{\infty}$, i.e., $\int_{\mathbb{R}}\xi^{2N_{\infty}}d\mu(\xi)<\infty$. Moreover, in this case $\int_{\mathbb{R}}\xi^{k}d\mu(\xi)=-b_{-k-1}\quad\text{ for }0\leq k\leq 2N_{\infty}.$ (2.7.4) Since these moments can be calculated by a modified version of the Stieltjes inversion formula, this result can be reformulated in the following way, known as _sum rules_. See [8] for a rigorous derivation. ###### Theorem 2.7.4 Let $f$ be a Herglotz-Nevanlinna function. Then, for a given integer $N_{\infty}\geq 0$, the limit $\lim_{\varepsilon\to 0^{+}}\lim_{y\to 0^{+}}\int_{\varepsilon<|x|<\frac{1}{\varepsilon}}x^{2N_{\infty}}{\rm Im}\,f(x+iy)dx$ (2.7.5) exists as a finite number if and only if the function $f$ admits at $z=\infty$ an asymptotic expansion of order $2N_{\infty}+1$. In this case, the following sum rules hold $\lim_{\varepsilon\to 0^{+}}\lim_{y\to 0^{+}}\frac{1}{\pi}\int_{\varepsilon<|x|<\frac{1}{\varepsilon}}x^{n}{\rm Im}\,f(x+iy)dx=\begin{cases}a_{-1}-b_{-1},&n=0\\\ -b_{-n-1},&0<n\leq 2N_{\infty}\end{cases}.$ (2.7.6) ###### Example 2.7.5 Note that the assumption that the coefficients in expansions (2.7.1) are real is essential. Consider, e.g., the function $f(z)=i$ for $z\in\mathbb{C}^{+}$, which admits expansions of arbitrary order if non-real coefficients are allowed. However, the limits (2.7.5) do not exist. This example also shows that not every Herglotz-Nevanlinna function admits a sum rule. Expansions at $z=0$ are defined analogously. This can either be done explicitly, as below, or via the expansion at $\infty$ for the Herglotz-Nevanlinna function $\widetilde{f}(z):=f(-1/z)$. The above remark then applies accordingly.
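As a small numerical illustration of the sum rule (2.7.6), consider $f(z)=1/(1-z)$, whose representing measure is $\mu=\delta_{1}$, so that $a_{-1}=-\mu(\\{0\\})=0$ and, from $f(z)=-1/z-1/z^{2}-\ldots$ at infinity, $b_{-1}=-1$. The sketch below approximates the double limit for $n=0$ with small but finite $\varepsilon$ and $y$ (values chosen purely for illustration).

```python
import numpy as np
from scipy.integrate import quad

# Im f(x + iy) for f(z) = 1/(1 - z): a Lorentzian of width y centered at x = 1
im_f = lambda x, y: y / ((1.0 - x)**2 + y**2)

eps, y = 1e-3, 1e-2
pos = quad(lambda x: im_f(x, y), eps, 1.0 / eps, points=[1.0], limit=200)[0]
neg = quad(lambda x: im_f(x, y), -1.0 / eps, -eps, limit=200)[0]
integral = (pos + neg) / np.pi

# Sum rule (2.7.6) with n = 0: the double limit equals a_{-1} - b_{-1} = 1
assert abs(integral - 1.0) < 1e-3
```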
###### Definition 2.7.6 A Herglotz-Nevanlinna function $f$ has an asymptotic expansion of order $K\geq-1$ at $z=0$ if there exist real numbers $a_{-1},a_{0},a_{1},\ldots,a_{K}$ such that $f$ can be written as $f(z)=\frac{a_{-1}}{z}+a_{0}+a_{1}z+\ldots+a_{K}z^{K}+o\big{(}{z^{K}}\big{)}\quad\quad\text{ as }z\hat{\to}0.$ (2.7.7) ###### Theorem 2.7.7 Let $f$ be a Herglotz-Nevanlinna function. Then, for a given integer $N_{0}\geq 1$, the limit $\lim_{\varepsilon\to 0^{+}}\lim_{y\to 0^{+}}\int_{\varepsilon<|x|<\frac{1}{\varepsilon}}\frac{{\rm Im}\,f(x+iy)}{x^{2N_{0}}}dx$ (2.7.8) exists as a finite number if and only if $f$ admits at $z=0$ an asymptotic expansion of order $2N_{0}-1$. In this case the following sum rules hold $\lim_{\varepsilon\to 0^{+}}\lim_{y\to 0^{+}}\frac{1}{\pi}\int_{\varepsilon<|x|<\frac{1}{\varepsilon}}\frac{{\rm Im}\,f(x+iy)}{x^{p}}dx=\begin{cases}a_{1}-b_{1},&p=2\\\ a_{p-1},&2<p\leq 2N_{0}\end{cases}.$ (2.7.9) ###### Example 2.7.8 The Herglotz-Nevanlinna function $f(z)=\tan(z)$ has the asymptotic expansion $\tan(z)=z+\frac{z^{3}}{3}+\frac{2z^{5}}{15}+\ldots\quad\text{ as }z\hat{\to}0$ (2.7.10) and $\tan(z)=i+o(1)$ as $z\hat{\to}\infty$ (which, however, is not an asymptotic expansion in the sense of (2.7.1)). We thus find that $a_{1}=1$, $a_{3}=1/3$, $a_{5}=2/15$, and $b_{1}=0$ (whereas $b_{0}$ does not exist), and hence the following sum rules apply. $\lim_{\epsilon\to 0^{+}}\lim_{y\to 0^{+}}\frac{1}{\pi}\int_{\epsilon\leq|x|\leq 1/\epsilon}\frac{{\rm Im}\,\tan(x+iy)}{x^{p}}dx=\begin{cases}1&p=2\\\ 1/3&p=4\\\ 2/15&p=6\end{cases}$ (2.7.11) ###### Remark 2.7.9 Note that the case $p=1$ is not included in Theorem 2.7.7. In order for this limit to be finite, it is required that $f$ admits asymptotic expansions of order $1$ at both $z=\infty$ and $z=0$. In this case, the limit equals $a_{0}-b_{0}$. ###### Remark 2.7.10 Note that the exponents in (2.7.5) and (2.7.8) are even.
A corresponding statement for odd exponents, meaning that the existence of the limit is equivalent to the existence of the expansion, does not hold. A counterexample is given in [8, p. 9]. ###### Remark 2.7.11 The counterpart of Theorem 2.7.3 for the operator representation (2.5.1) is $v\in{\rm dom}(A^{N_{\infty}})$ if and only if an asymptotic expansion of order $2N_{\infty}+1$ at $z=\infty$ exists. For symmetric Herglotz-Nevanlinna functions (2.4.1), the non-zero coefficients of odd and even order in an asymptotic expansion are necessarily real-valued and purely imaginary, respectively, and hence the expansions (2.7.7) and (2.7.1) stop at the appearance of the first imaginary term, or the first non-existing term. If the assumptions in both Theorems 2.7.4 and 2.7.7 are satisfied, i.e., both asymptotic expansions exist up to orders $2N_{0}-1$ and $2N_{\infty}+1$, respectively, then these together with Remark 2.7.9 can be summarized as $\frac{2}{\pi}\int_{0^{+}}^{\infty}\frac{{\rm Im}\,f(x)}{x^{2n}}dx:=\lim_{\varepsilon\rightarrow 0^{+}}\lim_{y\rightarrow 0^{+}}\frac{2}{\pi}\int_{\varepsilon}^{1/\varepsilon}\frac{{\rm Im}\,f(x+iy)}{x^{2n}}dx=a_{2n-1}-b_{2n-1}$ (2.7.12) for $n=-N_{\infty},\ldots,N_{0}$. ### 2.8 Matrix- and operator-valued Herglotz-Nevanlinna functions So far in this text the values of the functions considered have been complex numbers, but much of the theory can be extended to matrix- or even operator-valued functions; see [15] for a detailed overview. Let $\mathcal{H}_{0}$ be a complex Hilbert space and denote by $\mathcal{L}(\mathcal{H}_{0})$ and $\mathcal{B}(\mathcal{H}_{0})$ the spaces of linear and bounded linear operators in $\mathcal{H}_{0}$, respectively. In the case of a finite-dimensional $\mathcal{H}_{0}$, say ${\rm dim}\,\mathcal{H}_{0}=n$, these two spaces coincide and are identified with the space of matrices $\mathbb{C}^{n\times n}$.
For $T\in\mathcal{L}(\mathcal{H}_{0})$ we denote by $T^{*}$ the adjoint operator; for $T\in\mathbb{C}^{n\times n}$ this is the conjugate transpose of the matrix $T$. ###### Definition 2.8.1 A function $F:\mathbb{C}^{+}\to\mathcal{B}(\mathcal{H}_{0})$ is called Herglotz-Nevanlinna if it is analytic and ${\rm Im}\,F(z)\geq 0$ for $z\in\mathbb{C}^{+}$, where ${\rm Im}\,F(z):=\frac{1}{2i}(F(z)-F(z)^{*})$. Also these functions can be represented via an integral representation as in Theorem 2.2.1. ###### Theorem 2.8.2 A function $F:\mathbb{C}^{+}\to\mathcal{L}(\mathcal{H}_{0})$ is a Herglotz-Nevanlinna function if and only if there are operators $C,D\in\mathcal{L}(\mathcal{H}_{0})$ with $C=C^{*}$ and $D\geq 0$, and a (positive) $\mathcal{L}(\mathcal{H}_{0})$-valued Borel measure $\Omega$ with $\int_{\mathbb{R}}\frac{1}{1+\xi^{2}}d\left(\Omega(\xi)\mathbf{x},\mathbf{x}\right)_{\mathcal{H}_{0}}<\infty$ for all $\mathbf{x}\in\mathcal{H}_{0}$ such that $F(z)=C+Dz+\int_{\mathbb{R}}\left(\frac{1}{\xi-z}-\frac{\xi}{1+\xi^{2}}\right)d\Omega(\xi).$ (2.8.1) Moreover, $C$, $D$, and $\Omega$ are unique with this property. Here an operator-valued measure is defined via a non-decreasing operator-valued (distribution) function; see [15]. ###### Remark 2.8.3 As in Theorems 2.4.2 and 2.4.3 the representation simplifies under certain growth conditions. More precisely, these theorems hold true even in the operator-valued case if the growth conditions are considered weakly, e.g., (i) in Theorem 2.4.2 becomes $\int\limits_{1}^{\infty}\dfrac{({\rm Im}\,F(iy)\mathbf{x},\mathbf{x})_{\mathcal{H}_{0}}}{y}dy<\infty$ for all $\mathbf{x}\in\mathcal{H}_{0}$. The results in Section 2.3 also hold in this weak sense, and the operator representations can likewise be extended to this case.
###### Theorem 2.8.4 A function $F:\mathbb{C}^{+}\to\mathcal{B}(\mathcal{H}_{0})$ is a Herglotz-Nevanlinna function if and only if there exist a Hilbert space $\mathcal{H}$, a self-adjoint linear relation $A$ in $\mathcal{H}$, a point $z_{0}\in\mathbb{C}^{+}$ and a map $\Gamma\in\mathcal{L}(\mathcal{H}_{0},\mathcal{H})$ such that $F(z)={F(z_{0})^{*}}+(z-\overline{z_{0}})\Gamma^{*}(I+(z-z_{0})(A-z)^{-1})\Gamma.$ (2.8.2) Moreover, if $\mathcal{H}=\overline{span}\\{(I+(z-z_{0})(A-z)^{-1})\Gamma{\mathbf{x}}:z\in\varrho(A)\text{ and }{\mathbf{x}}\in\mathcal{H}_{0}\\}$, then the representation is called minimal. In this case the representation is unique up to unitary equivalence. For scalar functions, i.e., $\mathcal{H}_{0}=\mathbb{C}$, the linear mapping $\Gamma:\mathbb{C}\to\mathcal{H}$ acts as $1\mapsto v$, where $v$ is the element in the scalar representation in Theorem 2.5.1. Similarly to Theorems 2.4.2 and 2.4.3, certain assumptions on the growth of the function $F$ guarantee simplified representations. As an example we give one result, which will be used in Section 2.4 of Part II. ###### Theorem 2.8.5 Let $F:\mathbb{C}^{+}\to\mathcal{B}(\mathcal{H}_{0})$ be a Herglotz-Nevanlinna function with representation (2.8.2). Then $\displaystyle\lim\limits_{z\hat{\to}\infty}\displaystyle\frac{\|F(z)\|}{{\rm Im}\,z}=0\text{ \,and }\displaystyle\limsup\limits_{z\hat{\to}\infty}|z|\cdot\|{\rm Im}\,F(z)\|<\infty$ if and only if $A$ is an operator and ${\rm ran}\,\Gamma\subset{\rm dom}(A).$ In this case $F(z)=S+\Gamma_{0}^{*}(A-z)^{-1}\Gamma_{0}$ (2.8.3) with $\Gamma_{0}:=(A-\overline{z_{0}})\Gamma$ and $S=S^{*}\in\mathcal{L}(\mathcal{H}_{0})$. In particular, this theorem implies the following corollary.
###### Corollary 2.8.6 For a Herglotz-Nevanlinna function $F:\mathbb{C}^{+}\to\mathcal{B}(\mathcal{H}_{0})$ the growth condition $\displaystyle\limsup\limits_{y\to\infty}y\|F(iy)\|<\infty$ implies that $F(z)=\Gamma_{0}^{*}(A-z)^{-1}\Gamma_{0},$ (2.8.4) where $A$ is a self-adjoint operator in a Hilbert space $\mathcal{H}$ and $\Gamma_{0}\in\mathcal{L}(\mathcal{H}_{0},\mathcal{H})$. Moreover, there exists a minimal representation, that is, a representation for which $\mathcal{H}=\overline{span}\\{(A-z)^{-1}\Gamma_{0}{\mathbf{x}}:z\in\varrho(A)\text{ and }{\mathbf{x}}\in\mathcal{H}_{0}\\}$ holds, and it is unique up to unitary equivalence. ###### Example 2.8.7 Both the functions $F(z)=\begin{pmatrix}z&1\\\ 1&-\frac{1}{z}\end{pmatrix}\quad\text{ and }\quad\tilde{F}(z):=-F(z)^{-1}=\frac{1}{2}\cdot\begin{pmatrix}-\frac{1}{z}&-1\\\ -1&z\end{pmatrix}$ are Herglotz-Nevanlinna functions. The above example illustrates a general phenomenon for matrix (and operator) functions, namely, the point $z=0$ is both a pole and a zero of $F$; it is also a pole of the inverse $F^{-1}$. In particular, $\det F(z)\equiv-2$, and hence the poles of $F$ cannot be read off from the scalar function $\det F(z)$, but the matrix structure has to be taken into account. Whereas scalar Herglotz-Nevanlinna functions appear in connection with extensions of symmetric operators with deficiency index $1$, higher defect leads to matrix-valued functions (for finite deficiency index) or operator-valued functions (for infinite deficiency index). As an example, consider differential operators. If such an operator acts on functions defined on the half line $\mathbb{R}^{+}$ (which has only one boundary point, $x=0$) then the minimal operator will in general have deficiency index $1$ and hence the corresponding Titchmarsh-Weyl function is a scalar Herglotz-Nevanlinna function.
If, however, one considers either a compact interval (with $2$ boundary points) or differential operators on finite graphs (with finitely many boundary points), the corresponding Weyl function is a matrix-valued Herglotz-Nevanlinna function, where the number of boundary points determines its size. Partial differential operators defined on some domain in $\mathbb{R}^{n}$ (with a boundary that consists of infinitely many points) give rise to operator-valued Herglotz-Nevanlinna functions. See, e.g., the recent books [29, 5] and references therein. Other examples of matrix-valued Herglotz-Nevanlinna functions appear, e.g., in connection with array antennas [24]. ## 3 Applications In this section, as well as in Part II, we give examples of applications where Herglotz-Nevanlinna functions are utilized. They stem from quite different areas, but in terms of the underlying mathematics they have a lot in common. Here we focus on applications in electromagnetics and techniques that are related to the sum rules. As is mentioned in the introduction, there are also applications where the functions depend on the contrast of materials rather than frequency; see Section 2.1 of Part II. Here we want to point out these similarities in an informal way; more precise definitions are then given in the respective application below or in Part II. First of all, the description of most of the problems in some way involves a convolution operator. This might be related to time-invariance (also called time-homogeneity), or it can appear as a memory term or a time-dispersive integral term. Another common feature is causality, which means that the current state depends only on the time evolution in the past but not on the future. Mathematically, causality amounts to the fact that the convolution kernel is supported on one half line only, which implies that its Fourier (or Laplace) transform is an analytic function in the upper (or lower) half plane.
In the applications with contrast, the analyticity arises from the coercivity of a certain sesquilinear form. In general the analytic functions given in this way will not be Herglotz-Nevanlinna; an additional assumption is needed. This might be, e.g., passivity or power dissipation, which imposes a sign restriction on the imaginary (or real) part, and this is how Herglotz-Nevanlinna functions appear. In many situations there is a one-to-one correspondence between the systems and the Herglotz-Nevanlinna functions describing them. In the following sections, as well as in Part II, we summarize results from different areas and try to make their connections to the mathematical background in Section 2 more explicit. We try to keep the notation as close as possible to that of the original papers in order to make them more accessible to the reader. Unfortunately, this leads to unavoidable clashes of notation, which we point out explicitly whenever the context is not enough to resolve the ambiguity. ### 3.1 Sum rules and physical bounds in electromagnetics In Section 2.6 the mathematical definition of passive systems was given and it was explained that such systems are in one-to-one correspondence with symmetric Herglotz-Nevanlinna functions. Here we give a physical motivation, including an example from electromagnetics, and demonstrate how the sum rules are used to derive physical bounds. We closely follow the exposition in [28], where additional references can also be found. Physical objects that cannot produce energy are usually considered passive. However, whether a system is passive or not (in the mathematical sense) depends very much on the definition of the input and the output. More precisely, consider one-port systems. These are systems consisting of one input and one output parameter, which can be measured at the so-called ports of these systems.
As an example one might think of an electric circuit with two nodes to which one can input a signal, e.g., a current, and measure a voltage. The one-port systems we consider here are assumed to be linear, continuous and time-translationally invariant. Hence the system is in convolution form [32], i.e., if $u(t)$ denotes the input, then the output $v(t)$ is given by $v(t)=(w\star u)(t):=\int_{\mathbb{R}}w(\tau)u(t-\tau)d\tau,$ (3.1.1) with impulse response $w(t)$. As before, we restrict ourselves to real-valued systems, i.e., the systems where the impulse response $w$ is real-valued. One way to define passivity for such systems is so-called admittance passivity defined in Definition 2.6.1 [31, 32], where $\mathcal{W}_{\rm{adm}}(T):={\rm Re}\,\int_{-\infty}^{T}v(t)\overline{u(t)}dt\geq 0$ (3.1.2) for all $T\in\mathbb{R}$ and all $u\in{C}^{\infty}_{0}$ (i.e., smooth functions with compact support). Here, $\mathcal{W}_{\rm{adm}}(T)$ represents all energy the system has absorbed until time $T$, and hence this definition means that the system absorbs more energy than it emits, or in other words, the system does not produce energy. It can be shown [32] that the impulse response $w$ of a passive system has the representation $w(t)=b\delta^{\prime}(t)+H(t)\int_{\mathbb{R}}\cos(\xi t)d\mu(\xi),$ (3.1.3) where $b\geq 0$, $\delta^{\prime}$ denotes the derivative of the Dirac distribution, $H$ the Heaviside step function and $\mu$ a Borel measure satisfying the growth condition from Theorem 2.2.1. This implies that the Laplace transform $W(s)$ of the impulse response (3.1.3) gives rise to a symmetric Herglotz-Nevanlinna function, cf., Theorem 2.6.2, which has exactly the parameters $b$ and $\mu$. Let us have a closer look at a few examples of passive systems in electromagnetics from [28].
###### Example 3.1.1 Input impedance of electrical circuit networks Consider a simple electric one-port circuit containing passive components, i.e., all resistances $R$, inductances $L$ and capacitances $C$ are positive. The input signal to this system is the real-valued electric current $i(t)$ and its output signal is the voltage $v(t)$, see Fig. 1a. As an explicit example, consider the simple circuit in Fig. 1b. In order to check that this system is passive, we calculate $\mathcal{W}_{\rm{adm}}(T)$ from (3.1.2). Figure 1: a) A general electric circuit; b) A simple circuit example. For a given input current $i(t)$, the output voltage is given by $v(t)=L\frac{d\,i(t)}{dt}+Ri(t)$ and can be written as $v=w\star i$, where $w=L\delta^{\prime}+R\delta$ is the impulse response. Hence, the integral (3.1.2) becomes $\mathcal{W}_{\rm{adm}}(T)=\int_{-\infty}^{T}\left(L\frac{d\,i(t)}{dt}i(t)+Ri(t)^{2}\right)dt=\frac{L}{2}i(T)^{2}+R\int_{-\infty}^{T}i(t)^{2}dt\geq 0,$ (3.1.4) and the system is admittance-passive. The transfer function (i.e., here the input impedance), which by definition is the Laplace transform of the impulse response, becomes, in this case, the positive real (PR) function $Z_{\rm{in}}(s)=sL+R$ (3.1.5) and hence $f(z):=iZ_{\rm{in}}(-iz)$ is a Herglotz-Nevanlinna function. This simple example generalizes to circuit networks composed of an arbitrary number and combination of passive resistors, capacitances and inductances, resulting in rational PR functions [19]. Moreover, it is straightforward to include transformers and transmission lines as well as multiple input and output systems, resulting in matrix-valued PR functions [12]. Given a Herglotz-Nevanlinna function, the integral identities in Theorems 2.7.4 and 2.7.7 have been applied in order to derive physical bounds on passive systems, see e.g., [8].
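The passivity computation in Example 3.1.1 is easy to reproduce numerically. The sketch below (with illustrative values of $L$ and $R$ and a smooth test current, all chosen arbitrarily) evaluates the integral in (3.1.4) on a grid, checks that $\mathcal{W}_{\rm adm}(T)\geq 0$ for all $T$, and compares it with the closed-form right-hand side.

```python
import numpy as np

L, R = 0.5, 2.0
t = np.linspace(-8.0, 8.0, 400001)
dt = t[1] - t[0]
i = np.exp(-t**2) * np.sin(3.0 * t)      # smooth test current with i(-inf) = 0
di = np.gradient(i, t)

# Running integral W_adm(T) = int_{-inf}^{T} (L i' i + R i^2) dt, cf. (3.1.4)
W = np.cumsum(L * di * i + R * i**2) * dt
closed_form = 0.5 * L * i**2 + R * np.cumsum(i**2) * dt

assert np.all(W >= -1e-6)                          # admittance passivity
assert np.max(np.abs(W - closed_form)) < 2e-3      # matches the closed form
```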
In the engineering and physics literature, these integral identities appear in various forms and special cases, and are also often referred to as sum rules [26, 8]. For Herglotz-Nevanlinna functions, the integral identities are given on the real axis, where $z=x$ is often interpreted as angular frequency $\omega$ (in rad$/s$), wave number $k=\omega/c_{0}$ (in m$^{-1}$), or wavelength $\lambda=2\pi/k$ (in m). In many practical electromagnetic applications, it is reasonable to assume some partial knowledge regarding the low- and/or high-frequency asymptotic expansions of the corresponding Herglotz-Nevanlinna function, such as the static and the optical responses of a material or a structure. In these cases, the sum rules can be used to obtain inequalities by constraining the integration interval to a finite bandwidth in the frequency (or wavelength) domain, thereby yielding useful physical limitations in a variety of applications. As an illustration, we treat the following classical example by applying the theory presented in Section 2.7, even though residue calculus could also be used to solve this problem. ###### Example 3.1.2 The resistance-integral theorem Consider a passive circuit consisting of a parallel connection of a capacitance $C$ and an impedance $Z_{1}(s)$ that does not contain a shunt capacitance (i.e., $Z_{1}(0)$ is finite and $Z_{1}(\infty)\neq 0$); see the adjacent figure, which shows $Z(s)$ as the input impedance of the parallel connection of $1/(sC)$ and $Z_{1}(s)$. Then the input impedance of this circuit is given by $Z(s)=1/(sC+1/Z_{1}(s))$, which is a PR-function in the Laplace variable $s\in\mathbb{C}_{+}$, and hence the system is admittance passive. The asymptotic expansions are $Z(s)=Z_{1}(0)+o(s)$ as $s\hat{\to}0$ and $Z(s)=1/(sC)+o(s^{-1})$ as $s\hat{\to}\infty$. Here, the corresponding Herglotz-Nevanlinna function is $h(\omega):=iZ(-i\omega)$ for $\omega\in\mathbb{C}^{+}$. 
Its low- and high-frequency asymptotics are $h(\omega)=o(\omega^{-1})\text{ as }\ \omega\hat{\to}0\text{ and }\ h(\omega)=-\frac{1}{\omega C}+o(\omega^{-1})\text{ as }\ \omega\hat{\to}\infty.$ (3.1.6) In terms of (2.7.7) and (2.7.1), we have $a_{-1}=0$ and $b_{-1}=-1/C$, and thus the sum rule (2.7.12) with $n=0$ gives $\frac{2}{\pi}\int_{0^{+}}^{\infty}{\rm Re}\,[Z(-i\omega)]d\omega=\frac{2}{\pi}\int_{0^{+}}^{\infty}{\rm Im}\,[h(\omega)]d\omega=a_{-1}-b_{-1}=\frac{1}{C}.$ (3.1.7) By integrating only over a finite frequency interval $\Omega:=[\omega_{1},\omega_{2}]$, and estimating this integral from below, we obtain the bound $\Delta\omega\inf\limits_{\omega\in\Omega}{\rm Re}\,[Z(-i\omega)]\leq\int_{0^{+}}^{\infty}{\rm Re}\,[Z(-i\omega)]d\omega=\frac{\pi}{2C},$ (3.1.8) where $\Delta\omega:=\omega_{2}-\omega_{1}$. Consequently, inequality (3.1.8) limits the product of the bandwidth and the minimum resistance over the given frequency interval; see also [9]. Compositions of Herglotz-Nevanlinna functions can be used to construct new Herglotz-Nevanlinna functions and, hence, also new sum rules, cf., also Section 2.3 in Part II. Here, we illustrate this for a case where the minimal temporal dispersion of metamaterials is determined, by first transforming the problem into that of determining the minimum amplitude of a Herglotz-Nevanlinna function over a bandwidth, [21, 8]. When a dielectric medium is specified to have inductive properties (i.e., negative permittivity) over a given bandwidth, it is regarded as a metamaterial. A given negative permittivity value at a single frequency is always possible to achieve; for instance, the plasmonic resonances in small metal particles can be described using, e.g., Drude or Lorentz models. However, when a constant negative permittivity value is prescribed over a given bandwidth, the passivity of the material will imply severe bandwidth limitations, see e.g., [21]. 
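Returning to Example 3.1.2, the sum rule (3.1.7) admits a quick numerical sanity check. Take $Z_{1}(s)=R$, a pure resistor (the values $R=2$, $C=0.5$ below are arbitrary); then $Z(s)=1/(sC+1/R)$ and ${\rm Re}\,Z(-i\omega)=R/(1+(\omega RC)^{2})$, whose integral reproduces $1/C$:

```python
import numpy as np
from scipy.integrate import quad

R, C = 2.0, 0.5                                   # arbitrary illustrative values
re_Z = lambda w: R / (1.0 + (w * R * C) ** 2)     # Re Z(-i w) for Z1(s) = R
val, err = quad(re_Z, 0.0, np.inf)

assert np.isclose(val, np.pi / (2.0 * C))         # resistance-integral theorem
assert np.isclose((2.0 / np.pi) * val, 1.0 / C)   # sum rule (3.1.7)
```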
To derive these limitations based on Herglotz-Nevanlinna functions, we start by considering the following general situation: Let $h_{0}$ be a fixed Herglotz-Nevanlinna function that can be extended continuously to a neighbourhood of the compact interval $\Omega\subset\mathbb{R}$ and has the large-argument asymptotics $h_{0}(z)=b_{1}^{0}z+o(z)$ as $z\hat{\to}\infty$. Denote by $F(x):=-h_{0}(x)$ the negative of $h_{0}$. We now look for a Herglotz-Nevanlinna function $h$ which has the same continuity property on the real line as $h_{0}$, has an asymptotic expansion $h(z)=b_{1}z+o(z)$ as $z\hat{\to}\infty$, and lies as close as possible to the given anti-Herglotz function $F$. In particular, we aim to derive a lower bound for the error norm $\|h-F\|_{L^{\infty}(\Omega)}:=\sup_{x\in\Omega}|h(x)-F(x)|.$ (3.1.9) To this end, the following auxiliary Herglotz-Nevanlinna function $h_{\varDelta}(z)$, for $\varDelta>0$, is used: $h_{\varDelta}(z):=\frac{1}{\pi}\int_{-\varDelta}^{\varDelta}\frac{1}{\xi-z}d\xi=\frac{1}{\pi}{\rm Log}\frac{z-\varDelta}{z+\varDelta}=\begin{cases}i+o(1)&\text{as}\ z\hat{\to}0\vspace{1mm}\\\ \displaystyle\frac{-2\varDelta}{\pi z}+o(z^{-1})&\text{ as }\ z\hat{\to}\infty.\end{cases}$ (3.1.10) Note that ${\rm Im}\,h_{\varDelta}(z)\geq\frac{1}{2}$ for $|z|\leq\varDelta$ and ${\rm Im}\,z\geq 0$. Next, consider the composite Herglotz-Nevanlinna function ${h}_{1}(z):=h_{\varDelta}\big{(}h(z)+h_{0}(z)\big{)}$. 
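The two properties of $h_{\varDelta}$ used in the argument (the asymptotics in (3.1.10) and ${\rm Im}\,h_{\varDelta}\geq 1/2$ on the closed upper half-disk $|z|\leq\varDelta$) are easy to confirm numerically. A minimal check, with an arbitrary $\varDelta=1.5$:

```python
import numpy as np

Delta = 1.5                                   # arbitrary
h_D = lambda z: np.log((z - Delta) / (z + Delta)) / np.pi
# For Im z > 0 the ratio (z - Delta)/(z + Delta) lies in the upper half-plane,
# so the principal branch of the logarithm is the correct one.

# grid over the upper half-disk |z| <= Delta, Im z > 0
r = np.linspace(1e-3, Delta, 80)
th = np.linspace(1e-3, np.pi - 1e-3, 80)
z = r[:, None] * np.exp(1j * th[None, :])
assert np.all(h_D(z).imag >= 0.5 - 1e-12)     # Im h_Delta >= 1/2 on the half-disk

# asymptotics of (3.1.10)
assert np.isclose(h_D(1e-9j), 1j, atol=1e-6)                        # h -> i as z -> 0
big = 1e6j
assert np.isclose(h_D(big), -2 * Delta / (np.pi * big), rtol=1e-5)  # -2 Delta / (pi z)
```

On the boundary circle $|z|=\varDelta$ the imaginary part equals exactly $1/2$, which is why the inequality in the half-disk is sharp.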
Since $h(z)+h_{0}(z)=(b_{1}+b_{1}^{0})z+o(z)$ as $z\hat{\to}\infty$, the new function $h_{1}$ has the asymptotic expansions $h_{1}(z)=o(z^{-1})\text{ as}\ z\hat{\to}0\text{ and }\ h_{1}(z)=\frac{-2\varDelta}{\pi(b_{1}+b_{1}^{0})}z^{-1}+o(z^{-1})\text{ as}\ z\hat{\to}\infty.$ (3.1.11) Then the sum rule (2.7.12) with $n=0$ becomes $\frac{2}{\pi}\int_{0+}^{\infty}{\rm Im}\,h_{1}(x)dx=a_{-1}-b_{-1}=\frac{2\varDelta}{\pi(b_{1}+b_{1}^{0})}.$ (3.1.12) Choosing $\varDelta:=\sup_{x\in\Omega}|h(x)+h_{0}(x)|$, the following integral inequalities follow: $\frac{1}{\pi}|\Omega|\leq\frac{2}{\pi}\int_{\Omega}\underbrace{{\rm Im}\,h_{1}(x)}_{\geq\frac{1}{2}}dx\leq\frac{2}{\pi}\int_{0+}^{\infty}{\rm Im}\,h_{1}(x)dx=\frac{2\sup_{x\in\Omega}|h(x)+h_{0}(x)|}{\pi(b_{1}+b_{1}^{0})}$ (3.1.13) or $\|h+h_{0}\|_{L^{\infty}(\Omega)}\geq(b_{1}+b_{1}^{0})\frac{1}{2}|\Omega|,\text{ where }|\Omega|=\int_{\Omega}dx.$ (3.1.14) ###### Example 3.1.3 Metamaterials and temporal dispersion Consider now a dielectric metamaterial with a constant, real-valued and negative target permittivity $\epsilon_{\rm{t}}<0$ to be approximated over an interval $\Omega$. In this case, the function of interest is $F(z)=z\epsilon_{\rm{t}}$, and hence we have $h_{0}(z)=-F(z)$ with $b_{1}^{0}=-\epsilon_{\rm{t}}$. Let $\epsilon(z)$ be the permittivity function of the approximating passive dielectric material, and $h(z)=z\epsilon(z)$ the corresponding Herglotz-Nevanlinna function with $b_{1}=\epsilon_{\infty}$, the assumed high-frequency permittivity of the material, and let the approximation interval be $\Omega=\omega_{0}[1-B/2,1+B/2]$, where $\omega_{0}$ is the center frequency and $B$ the relative bandwidth with $0<B<2$. The resulting physical bound obtained from (3.1.14) is given by $\|\epsilon(\cdot)-\epsilon_{\rm{t}}\|_{L^{\infty}(\Omega)}\geq\frac{(\epsilon_{\infty}-\epsilon_{\rm{t}})B}{2+B}.$ (3.1.15) Note that the variable $x$ here corresponds to angular frequency, also commonly denoted $\omega$ (in rad/s). 
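For completeness, the step from (3.1.14) to (3.1.15) is a two-line estimate, using only quantities defined in Example 3.1.3 together with the facts that $h-F=h+h_{0}$, $b_{1}+b_{1}^{0}=\epsilon_{\infty}-\epsilon_{\rm t}$, $|\Omega|=\omega_{0}B$, and $|x|\leq\omega_{0}(1+B/2)$ on $\Omega$:

```latex
\begin{align*}
  \sup_{x\in\Omega} |x|\,\bigl|\epsilon(x)-\epsilon_{\rm t}\bigr|
    &= \|h+h_{0}\|_{L^{\infty}(\Omega)}
     \geq \tfrac{1}{2}\,(\epsilon_{\infty}-\epsilon_{\rm t})\,\omega_{0}B , \\
  \|\epsilon(\cdot)-\epsilon_{\rm t}\|_{L^{\infty}(\Omega)}
    &\geq \frac{(\epsilon_{\infty}-\epsilon_{\rm t})\,\omega_{0}B/2}{\max_{x\in\Omega}|x|}
     = \frac{(\epsilon_{\infty}-\epsilon_{\rm t})\,\omega_{0}B/2}{\omega_{0}(1+B/2)}
     = \frac{(\epsilon_{\infty}-\epsilon_{\rm t})\,B}{2+B}.
\end{align*}
```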
Other applications are related to scattering passive systems, see e.g., [32, 8] for a precise definition. Scattering passive systems have transfer functions that map $\mathbb{C}^{+}$ to the unit disk. To use (2.7.12), one then first constructs a Herglotz-Nevanlinna function by mapping the unit disk to $\mathbb{C}^{+}$. This map can be chosen in many different ways, and the particular choice depends on the asymptotic expansion and the physical interpretation of the system. The Cayley transform, the logarithm, and addition are most common in applications. For examples see e.g., [8]. ### 3.2 Physical bounds via convex optimization In this section we exemplify how Herglotz-Nevanlinna functions can be used to identify or approximate passive systems with given properties. This approach is based on convex optimization related to the function's integral representation. To facilitate the computation of a numerical solution using software such as CVX [17], it is necessary to first impose some a priori constraints on the class of approximating Herglotz-Nevanlinna functions. In view of Section 2.3 we restrict ourselves here to approximating Herglotz-Nevanlinna functions that are locally Hölder continuous on some given intervals on the real line. A passive approximation problem is considered where the target function $F$ is an arbitrary complex-valued continuous function defined on an approximation domain $\Omega\subset\mathbb{R}$ consisting of a finite union of closed and bounded intervals of the real axis. The norms used, denoted by $\|\cdot\|_{L^{p}(w,\Omega)}$, are weighted $L^{p}(\Omega)$-norms with a positive continuous weight function $w$ on $\Omega$, and where $1\leq p\leq\infty$. 
Here, for any approximating function $h$, we assume that it is the Hölder continuous extension (to $\Omega$) of some Herglotz-Nevanlinna function generated by an absolutely continuous measure $\mu$ having a density $\mu^{\prime}$ which is Hölder continuous on the closure $\overline{U}$ of an arbitrary neighborhood $U\supset\Omega$ of the approximation domain. Then, cf., Proposition 2.3, both the real and the imaginary parts of $h$ are continuous functions on $\Omega$. Moreover, it holds that ${\rm Im}\,h(x)=\pi\mu^{\prime}(x)$ on $\overline{U}$, and the real part is given by the associated Hilbert transform. As we consider real systems only, the approximating Herglotz-Nevanlinna function $h$ can be assumed to be symmetric, and its real part hence admits the representation ${\rm Re}\,h(x)=bx+p.v.\int_{\mathbb{R}}\frac{\mu^{\prime}(\tau)}{\tau-x}d\tau\quad\text{ for }x\in\Omega,$ (3.2.1) where $p.v.$ denotes the principal value both at $\infty$ and at $x$. The continuity of $h$ on $\Omega$ implies that the norm $\|h\|_{L^{p}(w,\Omega)}$ is well-defined for $1\leq p\leq\infty$. When approximating the function $F$ by Herglotz-Nevanlinna functions $h$ on $\Omega$, one is interested in the greatest lower bound on the approximation error, given by $d:=\displaystyle\inf_{h}\|h-F\|_{L^{p}(w,\Omega)},$ (3.2.2) where the infimum is taken over all Herglotz-Nevanlinna functions $h$ generated by a measure having a Hölder continuous density on $\overline{U}$. In general, a best approximation achieving the bound $d$ in (3.2.2) does not exist. In practice, however, the problem is approached by using numerical algorithms such as CVX, solving finite-dimensional approximation problems using e.g., B-splines, with the number of basis functions $N$ fixed during the optimization, cf., [30, 23]. 
Here, a B-spline of order $m\geq 2$ is an $m-2$ times continuously differentiable and compactly supported positive basis spline function consisting of piecewise polynomial functions of order $m-1$, i.e., linear, quadratic, cubic, etc., and which is defined by $m+1$ break-points [13]. For the density ${\rm Im}\,h(x)$ of the approximating symmetric function $h$, we make the ansatz of a finite B-spline expansion $\pi\mu^{\prime}(x)=\sum_{n=1}^{N}\zeta_{n}\left(p_{n}(x)+p_{n}(-x)\right)$ (3.2.3) for $x\in\mathbb{R}$, where $\zeta_{n}$ are optimization variables for $n=1,\ldots,N$, and $p_{n}(x)$ are B-spline basis functions of fixed order $m$ which are defined on the given partition. The real part ${\rm Re}\,h(x)$ for $x\in\Omega$ is then given by (3.2.1), and can be expressed as ${\rm Re}\,h(x)=bx-\frac{\zeta_{0}}{x}+\sum_{n=1}^{N}\zeta_{n}\left(\hat{p}_{n}(x)-\hat{p}_{n}(-x)\right),\quad x\in\Omega,$ (3.2.4) where $\hat{p}_{n}(x)$ is the (negative) Hilbert transform of the B-spline function $p_{n}(x)$, and where a point mass at $x=0$ with amplitude $\zeta_{0}$ has been included. Any other a priori assumed point masses can be included in a similar way. Consider now the following convex optimization problem: $\begin{array}{ll}\text{minimize}&\|h-F\|_{L^{p}(w,\Omega)}\\\ \text{subject to}&\zeta_{n}\geq 0,\ \text{for}\ n=0,\ldots,N,\\\ &b\geq 0,\end{array}$ (3.2.8) where the optimization is over the variables $(\zeta_{0},\zeta_{1},\ldots,\zeta_{N},b)$. Note that the objective function in (3.2.8) is the norm of an affine form in the optimization variables; hence, the objective function is convex in $(\zeta_{0},\zeta_{1},\ldots,\zeta_{N},b)$. The uniform continuity of all functions involved implies that the solution to (3.2.8) can be approximated to arbitrary accuracy by discretizing the approximation domain $\Omega$ (and the computation of the norm) using only a finite number of sample points. 
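The discretized problem can be illustrated with a small, self-contained sketch. This is not the B-spline/CVX machinery of [30, 23]: it fixes $p=\infty$, drops the Hilbert-transform coupling (3.2.1), and fits a real-valued target on a grid by a nonnegative combination of hat functions (order-2 B-splines) plus a linear term, purely to show that the norm of an affine form with sign constraints, as in (3.2.8), is a linear program (all names and values below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

x = np.linspace(1.0, 2.0, 80)                 # grid on the approximation domain
knots = np.linspace(1.0, 2.0, 12)

def hat(i):
    # order-2 B-spline (hat function) supported on knots[i-1..i+1]
    l, m, r = knots[i - 1], knots[i], knots[i + 1]
    return np.clip(np.minimum((x - l) / (m - l), (r - x) / (r - m)), 0.0, None)

A = np.column_stack([hat(i) for i in range(1, len(knots) - 1)])
n = A.shape[1]

# A target that is itself realizable, so the optimal misfit should be ~0.
zeta_true = np.abs(np.sin(1.0 + np.arange(n)))
F = A @ zeta_true + 0.3 * x

# Variables (zeta_1..zeta_n, b, t): minimize t subject to
# |A zeta + b x - F| <= t pointwise, zeta >= 0, b >= 0.
c = np.zeros(n + 2)
c[-1] = 1.0
A_ub = np.vstack([np.column_stack([A, x, -np.ones_like(x)]),
                  np.column_stack([-A, -x, -np.ones_like(x)])])
b_ub = np.concatenate([F, -F])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 2))

assert res.status == 0
assert res.fun < 1e-6                          # realizable target -> zero misfit
```

The same pattern (affine model, sign-constrained coefficients, epigraph variable for the norm) is what a disciplined convex programming tool such as CVX automates.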
The corresponding numerical problem (3.2.8) can now be solved efficiently by using the CVX Matlab software for disciplined convex programming. The convex optimization formulation (3.2.8) offers great flexibility, in that additional or alternative convex constraints and formulations can readily be implemented; see also [30, 23]. ###### Example 3.2.1 A canonical example for convex optimization is the passive approximation of metamaterials; see also [21, 30, 23]. As in Example 3.1.3, the variable $x$ here corresponds to angular frequency, also commonly denoted $\omega$ (in rad/s). A typical application is the study of optimal plasmonic resonances in small structures (or particles), for which the absorption cross section can be approximated by $\sigma_{\rm{abs}}\approx k{\rm Im}\,\gamma,$ (3.2.9) where $k=2\pi/\lambda$ is the wave number, $\lambda$ the wavelength and $\gamma$ the electric polarizability of the particle; see [10]. For example, the polarizability of a dielectric sphere with radius $a$ is given by $\gamma(x)=4\pi a^{3}(\epsilon(x)-1)/(\epsilon(x)+2)$, where $\epsilon(x)$ is the permittivity function of the dielectric material inside the sphere. A surface plasmon resonance is obtained when $\epsilon(x)\approx-2$, and, hence, we specify that the target permittivity of our metamaterial is $\epsilon_{\rm t}=-2$. However, a metamaterial with a negative real part cannot, in general, be implemented as a passive material over a given bandwidth, cf., [20]. Based on the theory of Herglotz-Nevanlinna functions and associated sum rules, the physical bound (3.1.15) can be derived, where $\epsilon_{\infty}$ is the high-frequency permittivity of the material, $\epsilon_{\rm t}<\epsilon_{\infty}$, $\Omega=\omega_{0}[1-B/2,1+B/2]$, $\omega_{0}$ the center frequency and $B$ the relative bandwidth with $0<B<2$, cf., [20]. 
The convex optimization formulation (3.2.8) can be used to study passive realizations (3.2.3) and (3.2.4) that satisfy the bound (3.1.15) as closely as possible. Here, the approximating Herglotz-Nevanlinna function is $h(x)=x\epsilon(x)$, the target function is $F(x)=x\epsilon_{\rm t}$, $\zeta_{0}$ is the amplitude of a point mass at $x=0$, $b=\epsilon_{\infty}$, and a weighted norm is used, defined by $\|f\|_{L^{\infty}(w,\Omega)}=\max_{x\in\Omega}|f(x)/x|$, assuming that $0\notin\Omega$. For numerical examples of this kind of approximation, as well as of non-passive systems employing quasi-Herglotz functions (Section 3.1 in Part II), see [23, 22, 28]. ## References * [1] Naum Il’ich Akhiezer and Izrail Markovich Glazman. Theory of linear operators in Hilbert space, volume 1. Dover Publications, 1993. * [2] S. Albeverio and P. Kurasov. Singular perturbations of differential operators, volume 271 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2000. Solvable Schrödinger type operators. * [3] N. Aronszajn and W. F. Donoghue. On exponential representations of analytic functions. J. Analyse Math., 5:321–388, 1956. * [4] N. Aronszajn and W. F. Donoghue. A supplement to the paper on exponential representations of analytic functions in the upper half-plane with positive imaginary part. J. Analyse Math., 12:113–127, 1964. * [5] Jussi Behrndt, Seppo Hassi, and Henk de Snoo. Boundary value problems, Weyl functions, and differential operators, volume 108 of Monographs in Mathematics. Birkhäuser/Springer, Cham, 2020. * [6] Christian Berg and Henrik L. Pedersen. Pick functions related to the gamma function. Volume 32, pages 507–525, 2002. Conference on Special Functions (Tempe, AZ, 2000). * [7] Christian Berg and Henrik L. Pedersen. A one-parameter family of Pick functions defined by the gamma function and related to the volume of the unit ball in $n$-space. Proc. Amer. Math. Soc., 139(6):2121–2132, 2011. 
* [8] A. Bernland, A. Luger, and M. Gustafsson. Sum rules and constraints on passive systems. Journal of Physics A: Mathematical and Theoretical, 44(14):145205, 2011. * [9] H. W. Bode. Network analysis and feedback amplifier design. Van Nostrand, 1945. * [10] C. F. Bohren and D. R. Huffman. Absorption and Scattering of Light by Small Particles. John Wiley & Sons, 1983. * [11] W. Cauer. The Poisson integral for functions with positive real part. Bull. Amer. Math. Soc., 38(10):713–717, 1932. * [12] D. Youla, L. Castriota, and H. Carlin. Bounded real scattering matrices and the foundations of linear passive network theory. IRE Transactions on Circuit Theory, 6(1):102–124, 1959. * [13] Carl de Boor. On calculating with $B$-splines. J. Approximation Theory, 6:50–62, 1972. * [14] A. Dijksma and H. S. V. de Snoo. Symmetric and selfadjoint relations in Kreĭn spaces. I. In Operators in indefinite metric spaces, scattering theory and other topics (Bucharest, 1985), volume 24 of Oper. Theory Adv. Appl., pages 145–166. Birkhäuser, Basel, 1987. * [15] Fritz Gesztesy and Eduard Tsekanovskii. On matrix-valued Herglotz functions. Mathematische Nachrichten, 218(1):61–138, 2000. * [16] Fritz Gesztesy and Maxim Zinchenko. On spectral theory for Schrödinger operators with strongly singular potentials. Math. Nachr., 279(9-10):1041–1082, 2006. * [17] M. Grant and S. Boyd. CVX: A system for disciplined convex programming, release 2.0. CVX Research, Inc., Austin, TX, 2012. * [18] David S. Greenstein. On the analytic continuation of functions which map the upper half plane into itself. J. Math. Anal. Appl., 1:355–362, 1960. * [19] E. A. Guillemin. Synthesis of passive networks. John Wiley & Sons, 1957. * [20] M. Gustafsson and D. Sjöberg. Physical bounds and sum rules for high-impedance surfaces. IEEE Trans. Antennas Propag., 59(6):2196–2204, 2011. * [21] Mats Gustafsson and Daniel Sjöberg. Sum rules and physical bounds on passive metamaterials. New Journal of Physics, 12(4):043046, 2010. 
* [22] Y. Ivanenko, M. Nedic, M. Gustafsson, B. L. G. Jonsson, A. Luger, and S. Nordebo. Quasi-Herglotz functions and convex optimization. R. Soc. Open Sci., 7:191541, 2020. * [23] Yevhen Ivanenko, Mats Gustafsson, B. L. G. Jonsson, Annemarie Luger, Börje Nilsson, Sven Nordebo, and Joachim Toft. Passive approximation and optimization using B-splines. SIAM J. Appl. Math., 79(1):436–458, 2019. * [24] B. L. G. Jonsson, C. I. Kolitsidas, and N. Hussain. Array antenna limitations. IEEE Antennas and Wireless Propagation Letters, 12:1539–1542, 2013. * [25] I. S. Kac and M. G. Krein. R-functions: analytic functions mapping the upper half-plane into itself. AMS Translations, 103:1–18, 1974. * [26] Frederick W. King. Hilbert transforms. Vol. 2, volume 125 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2009. * [27] M. G. Kreĭn and H. Langer. Über einige Fortsetzungsprobleme, die eng mit der Theorie hermitescher Operatoren im Raume $\Pi_{\kappa}$ zusammenhängen. I. Einige Funktionenklassen und ihre Darstellungen. Math. Nachr., 77:187–236, 1977. * [28] M. Nedic, C. Ehrenborg, Y. Ivanenko, A. Ludvig-Osipov, S. Nordebo, A. Luger, B. L. G. Jonsson, D. Sjöberg, and M. Gustafsson. Advances in Mathematical Methods for Electromagnetics, chapter Herglotz functions and applications in electromagnetics. IET, 2019. * [29] Pavel Kurasov. Spectral geometry of graphs. To appear. * [30] S. Nordebo, M. Gustafsson, B. Nilsson, and D. Sjöberg. Optimal realizations of passive structures. IEEE Trans. Antennas Propag., 62(9):4686–4694, 2014. * [31] M. Wohlers and E. Beltrami. Distribution theory as the basis of generalized passive-network analysis. IEEE Transactions on Circuit Theory, 12(2):164–170, 1965. * [32] A. H. Zemanian. Distribution theory and transform analysis. An introduction to generalized functions, with applications. McGraw-Hill Book Co., New York-Toronto-London-Sydney, 1965.
# Nonlinear Mixing driven by Internal Gravity Waves Adam S. Jermyn Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA ###### Abstract Hydrodynamic waves propagate through stellar interiors, transporting energy and angular momentum. They can also advect fluid elements to produce mixing, but this effect has not been quantified from first principles. We derive the leading-order non-linear wave mixing due to internal gravity waves in a thermally and compositionally stratified fluid. We find that this scales as the fourth power of the wave velocity, that it is suppressed by compositional stratification, and that it depends on the thermal and compositional diffusivities. Stellar physics (1621); Astrophysical fluid dynamics (101); Internal Waves (819) journal: ApJ ## 1 Introduction Waves are solutions to the linearized equations of motion of a fluid. These are exact solutions to the full equations of motion in the limit of vanishing amplitude, but at any finite amplitude there are non-linear corrections. One such correction is the Stokes Drift (Andrews & Mcintyre, 1978), which is the difference between the Eulerian displacement $\displaystyle\boldsymbol{\xi}_{\rm Euler}(\boldsymbol{r},t)\equiv\int_{0}^{t}\boldsymbol{u}(\boldsymbol{r},t^{\prime})dt^{\prime}$ (1) and the Lagrangian one $\displaystyle\boldsymbol{\xi}_{\rm Lagrange}(\boldsymbol{r},t)\equiv\int_{0}^{t}\boldsymbol{u}(\boldsymbol{r}+\boldsymbol{\xi}_{\rm Lagrange}(\boldsymbol{r},t^{\prime}),t^{\prime})dt^{\prime}$ (2) after some amount of time $t$. That is, $\displaystyle\boldsymbol{\xi}_{\rm Stokes}=\boldsymbol{\xi}_{\rm Lagrange}-\boldsymbol{\xi}_{\rm Euler}.$ (3) Here $\boldsymbol{u}$ is the velocity, $\boldsymbol{r}$ is the spatial coordinate, $t$ is time, and $\boldsymbol{\xi}$ is the displacement. Here our aim is to derive the diffusivity associated with the Stokes drift for a random field of internal gravity waves (IGW). Our approach is intentionally didactic, and we reproduce a number of known intermediate results for clarity. 
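A minimal sketch makes the Stokes drift concrete. For a one-dimensional velocity field $u=U\cos(kx-\omega t)$ (the values of $U$, $k$ and $\omega$ below are arbitrary, with $Uk/\omega\ll 1$), the Eulerian displacement at a fixed point stays bounded, while a tracer integrated along its own trajectory drifts at the classical second-order rate $U^{2}k/2\omega$:

```python
import numpy as np
from scipy.integrate import solve_ivp

U, k, w = 0.05, 2.0, 1.0                     # arbitrary small-amplitude wave
u = lambda t, x: U * np.cos(k * x - w * t)   # 1-D wave velocity field

T = 2000.0
# Lagrangian trajectory: dx/dt = u(x(t), t), starting from x = 0
sol = solve_ivp(u, (0.0, T), [0.0], max_step=0.1, rtol=1e-10, atol=1e-12)
drift_rate = sol.y[0, -1] / T

# Eulerian displacement at fixed r = 0: the time average of u(0, t) is ~0
t = np.linspace(0.0, T, 200001)
euler_rate = np.mean(u(t, 0.0))

assert abs(euler_rate) < 1e-4
assert np.isclose(drift_rate, U**2 * k / (2 * w), rtol=0.1)
```

The drift appears at second order in the wave amplitude even though the velocity field itself averages to zero at every fixed point, which is the effect quantified in the rest of the paper.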
We begin in Section 2 with a review of properties of diffusion, concluding with a well-known expression for the diffusion coefficient in terms of the zero-frequency autocorrelation of the velocity field. In Section 3 we derive the equations of motion for internal gravity waves in a thermally- and compositionally-stratified medium, retaining both thermal and compositional diffusion. We then use this to derive the non-linear forcing due to IGW, and use that to compute the nonlinear wave diffusivity. We conclude with a comparison to other prescriptions for wave mixing in Section 4. ## 2 Diffusion We now review some basic facts about diffusion. ### 2.1 Diffusivity The diffusivity $D$ is defined in one dimension as $\displaystyle D\equiv\lim_{T\rightarrow\infty}\frac{\langle(r(T)-r(0))^{2}\rangle}{2T},$ (4) where $r(T)$ is the coordinate of a particle at time $T$ undergoing stochastic motion and $\langle…\rangle$ represents an expectation value over that motion. We can relate this form to the velocity $u(t)$ of the particle via $\displaystyle r(T)-r(0)=\int_{0}^{T}u(t)dt,$ (5) which gives $\displaystyle D=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{0}^{T}\int_{0}^{T}\langle u(t)u(t^{\prime})\rangle dtdt^{\prime}.$ (6) ### 2.2 Stationary Process In a stationary process correlations are time-translation invariant, so $\displaystyle\langle u(t)u(t^{\prime})\rangle=\langle u(t-t^{\prime})u(0)\rangle.$ (7) This is a good approximation in most astrophysical contexts, where the forcing mechanism (e.g. convection) and the propagating medium change on time-scales which are very long compared with the wave frequency. We proceed assuming that $u$ is described by a stationary process. Equation (6) then simplifies to $\displaystyle D=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{0}^{T}\int_{t-T}^{t}\langle u(\tau)u(0)\rangle d\tau dt,$ (8) where $\tau\equiv t-t^{\prime}$. 
Exchanging the order of integration we find $\displaystyle D=\lim_{T\rightarrow\infty}\frac{1}{2T}\left(\int_{-T}^{0}\int_{0}^{T+\tau}+\int_{0}^{T}\int_{\tau}^{T}\right)\langle u(\tau)u(0)\rangle dtd\tau.$ (9) Once more, because $u$ is a stationary process we have $\displaystyle\langle u(\tau)u(0)\rangle=\langle u(0)u(-\tau)\rangle=\langle u(-\tau)u(0)\rangle,$ (10) so we can flip the sign of $\tau$ in the first pair of integrals and obtain $\displaystyle D=\lim_{T\rightarrow\infty}\frac{1}{2T}\left(\int_{0}^{T}\int_{0}^{T-\tau}+\int_{0}^{T}\int_{\tau}^{T}\right)\langle u(\tau)u(0)\rangle dtd\tau$ (11) Performing the inner integrals over $t$ we find $\displaystyle D=\lim_{T\rightarrow\infty}\int_{0}^{T}\frac{T-\tau}{T}\langle u(\tau)u(0)\rangle d\tau.$ (12) Taking the limit we recover the relation of Kubo (1957): $\displaystyle D=\int_{0}^{\infty}\langle u(\tau)u(0)\rangle d\tau,$ (13) which may also be written for a stationary process as $\displaystyle D=\frac{1}{2}\int_{-\infty}^{\infty}\langle u(\tau)u(0)\rangle d\tau.$ (14) ### 2.3 Relation to the Power Spectrum We use the Fourier transform convention $\displaystyle u(\omega)$ $\displaystyle=\int_{-\infty}^{\infty}e^{-i\omega t}u(t)\frac{dt}{\sqrt{2\pi}}$ (15) $\displaystyle u(t)$ $\displaystyle=\int_{-\infty}^{\infty}e^{i\omega t}u(\omega)\frac{d\omega}{\sqrt{2\pi}}.$ (16) With this, we write the frequency autocorrelation as $\displaystyle\langle u(\omega)u(\omega^{\prime})\rangle=\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{dt^{\prime}}{\sqrt{2\pi}}e^{-i(\omega t+\omega^{\prime}t^{\prime})}\langle u(t)u(t^{\prime})\rangle.$ (17) Because this is a stationary process we can subtract an offset from the times in the correlation function so long as the difference between them is preserved. 
We do this with a change of variables to $\tau=t-t^{\prime}$ and $q=(t+t^{\prime})/2$, giving $\displaystyle\langle u(\omega)u(\omega^{\prime})\rangle$ $\displaystyle=\int_{-\infty}^{\infty}\frac{d\tau}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{dq}{\sqrt{2\pi}}e^{-i(\omega(\tau/2+q)-\omega^{\prime}(\tau/2-q))}\langle u(\tau)u(0)\rangle$ (18) $\displaystyle=\int_{-\infty}^{\infty}\frac{d\tau}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{dq}{\sqrt{2\pi}}e^{-i\tau(\omega-\omega^{\prime})/2-iq(\omega+\omega^{\prime})}\langle u(\tau)u(0)\rangle$ (19) $\displaystyle=\delta(\omega+\omega^{\prime})\int_{-\infty}^{\infty}d\tau e^{-i\tau(\omega-\omega^{\prime})/2}\langle u(\tau)u(0)\rangle$ (20) $\displaystyle=\delta(\omega+\omega^{\prime})\int_{-\infty}^{\infty}d\tau e^{-i\tau\omega}\langle u(\tau)u(0)\rangle,$ (21) where we obtained the third line by performing the integral over $q$. Because the frequency autocorrelation vanishes except when $\omega=-\omega^{\prime}$, we can define the power spectrum $\displaystyle S(\omega)\equiv\int_{-\infty}^{\infty}d\omega^{\prime}\langle u(\omega)u(\omega^{\prime})\rangle,$ (22) which is the energy per unit frequency in the velocity field. Using equation (21) we see that $\displaystyle S(\omega)=\int_{-\infty}^{\infty}d\tau e^{-i\tau\omega}\langle u(\tau)u(0)\rangle$ (23) and so $\displaystyle D=\frac{1}{2}S(0).$ (24) Hence, the diffusivity is related to the power spectrum at zero frequency. Physically, this is because diffusion is a statement about long-time behaviour. ### 2.4 Spatial Variation The diffusion coefficient is defined in terms of the motion of a single particle in the infinite-time limit, and so it is not trivial to define diffusion coefficients which vary in space. It can be done, however, by defining the local diffusivity to be given by the diffusion coefficient one would obtain if the local conditions held globally. 
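Before constructing the spatially resolved version, the one-dimensional relations above are easy to verify on a synthetic stationary process. The sketch below uses an Ornstein-Uhlenbeck velocity process with assumed parameters $\sigma$ and $\tau_{c}$, for which $\langle u(\tau)u(0)\rangle=\sigma^{2}e^{-|\tau|/\tau_{c}}$, so equation (13) gives $D=\sigma^{2}\tau_{c}$ and equation (23) gives the Lorentzian $S(\omega)=2\sigma^{2}\tau_{c}/(1+\omega^{2}\tau_{c}^{2})$; both the mean-square displacement of equation (4) and half the zero-frequency power of equation (24) recover the same diffusivity:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, tau_c, dt = 1.0, 0.5, 0.01   # assumed process parameters
M, N = 1000, 4000                   # realizations, steps (T = N*dt = 40)
T = N * dt

# Exact stationary Ornstein-Uhlenbeck update: <u(tau)u(0)> = sigma^2 e^(-tau/tau_c)
a = np.exp(-dt / tau_c)
u = np.empty((M, N))
u[:, 0] = rng.normal(0.0, sigma, size=M)
for n in range(1, N):
    u[:, n] = a * u[:, n - 1] + sigma * np.sqrt(1.0 - a * a) * rng.normal(size=M)

D_kubo = sigma**2 * tau_c                 # equation (13) for this process

X = u.sum(axis=1) * dt                    # displacement r(T) - r(0), equation (5)
D_msd = np.mean(X**2) / (2.0 * T)         # equation (4)
assert np.isclose(D_msd, D_kubo, rtol=0.15)

# ensemble-averaged periodogram estimates S(omega)
P = np.mean(dt / N * np.abs(np.fft.fft(u, axis=1)) ** 2, axis=0)
omega = 2.0 * np.pi * np.fft.fftfreq(N, dt)
S_exact = 2.0 * sigma**2 * tau_c / (1.0 + (omega * tau_c) ** 2)

assert np.isclose(P[0] / 2.0, D_kubo, rtol=0.15)          # equation (24)
k = np.argmin(np.abs(omega - 1.0 / tau_c))                # a bin near omega = 1/tau_c
assert np.isclose(P[k], S_exact[k], rtol=0.2)             # Lorentzian shape
```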
We compute this by averaging the diffusion coefficient over a volume $V$ which is small compared with the large-scale structure of the star but large compared with the characteristic length-scale of the velocity field (e.g. its scale of variation). Thus we generalize equation (14) to find $\displaystyle D=\frac{1}{2}\int_{-\infty}^{\infty}d\tau\int\frac{d^{3}\boldsymbol{r}}{V}\langle u(\tau,\boldsymbol{r})u(0,\boldsymbol{r})\rangle.$ (25) We now generalize our earlier Fourier transform convention $\displaystyle u(\omega,\boldsymbol{k})$ $\displaystyle=\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\int\frac{d^{3}\boldsymbol{r}}{V}e^{-i\omega t-i\boldsymbol{k}\cdot\boldsymbol{r}}u(t,\boldsymbol{r})$ (26) $\displaystyle u(t,\boldsymbol{r})$ $\displaystyle=\int_{-\infty}^{\infty}\frac{d\omega}{\sqrt{2\pi}}\sum_{\boldsymbol{k}}e^{i\omega t+i\boldsymbol{k}\cdot\boldsymbol{r}}u(\omega,\boldsymbol{k}),$ (27) as well as the corresponding mixed conventions for e.g. $u(\omega,\boldsymbol{r})$. Defining $\displaystyle S(\omega,\boldsymbol{k})\equiv\int_{-\infty}^{\infty}d\omega^{\prime}\langle u(\omega,\boldsymbol{k})u(\omega^{\prime},-\boldsymbol{k})\rangle,$ (28) we find $\displaystyle S(\omega,\boldsymbol{k})$ $\displaystyle=\int_{-\infty}^{\infty}d\omega^{\prime}\int\frac{d^{3}\boldsymbol{r}}{V}\int\frac{d^{3}\boldsymbol{r}^{\prime}}{V}e^{i\boldsymbol{k}\cdot\boldsymbol{r}-i\boldsymbol{k}\cdot\boldsymbol{r}^{\prime}}\langle u(\omega,\boldsymbol{r})u(\omega^{\prime},\boldsymbol{r^{\prime}})\rangle$ (29) Using the results of the previous section we write this as $\displaystyle S(\omega,\boldsymbol{k})$ $\displaystyle=\int_{-\infty}^{\infty}d\omega^{\prime}\int_{-\infty}^{\infty}d\tau\int\frac{d^{3}\boldsymbol{r}}{V}\int\frac{d^{3}\boldsymbol{r}^{\prime}}{V}e^{i\boldsymbol{k}\cdot\boldsymbol{r}-i\boldsymbol{k}\cdot\boldsymbol{r}^{\prime}-i\omega\tau}\delta(\omega+\omega^{\prime})\langle u(\tau,\boldsymbol{r})u(0,\boldsymbol{r^{\prime}})\rangle$ (30) 
$\displaystyle=\int_{-\infty}^{\infty}d\tau\int\frac{d^{3}\boldsymbol{r}}{V}\int\frac{d^{3}\boldsymbol{r}^{\prime}}{V}e^{i\boldsymbol{k}\cdot\boldsymbol{r}-i\boldsymbol{k}\cdot\boldsymbol{r}^{\prime}+i\omega\tau}\langle u(\tau,\boldsymbol{r})u(0,\boldsymbol{r^{\prime}})\rangle$ (31) Summing over $\boldsymbol{k}$ produces $\delta(\boldsymbol{r}-\boldsymbol{r}^{\prime})$, so $\displaystyle D=\frac{1}{2}\sum_{\boldsymbol{k}}S(0,\boldsymbol{k}).$ (32) That is, the diffusivity receives a contribution from the power at zero frequency for all wave-vectors. ## 3 Internal Gravity Waves ### 3.1 Leading Order Here we determine the leading order of the diffusivity in the wave velocity field $\boldsymbol{u}_{w}$. This must be at least second order (i.e., $D\propto u_{w}^{2}$), as the diffusivity is sensitive to the power in the velocity field and hence goes like $u^{2}$. However, the damping length of internal gravity waves approaches zero as $\omega\rightarrow 0$. So we should expect the power to vanish at $\omega=0$ anywhere away from the wave excitation region, and hence the contribution to the diffusivity vanishes as well. This means that the dominant contribution to the diffusivity must arise at higher orders in $u_{w}$. If we assume that the wave velocities are Gaussian random variables, then expectation values of the form $\langle u_{w}u_{w}u_{w}\rangle$ vanish, so the diffusivity must be at least fourth order in the wave velocity (i.e., $D\propto u_{w}^{4}$), and indeed some fourth-order terms do not straightforwardly vanish (specifically, terms in which each frequency occurs at least twice). Fourth-order terms must arise via non-linear interactions between waves. 
There are several terms in the Navier-Stokes equation that can provide such interactions, but the simplest is the Stokes acceleration $\displaystyle\boldsymbol{a}_{s}=\boldsymbol{u}_{w}\cdot\nabla\boldsymbol{u}_{w}.$ (33) Because $\boldsymbol{a}_{s}$ is quadratic in $\boldsymbol{u}_{w}$ it has power at zero frequency, even though $\boldsymbol{u}_{w}$ does not. A quick way to see this is to note that $\sin(\omega t)$ has no power at zero frequency (i.e., the Fourier transform has support only at $\omega^{\prime}=\\{-\omega,\omega\\}$), but $\sin^{2}(\omega t)$, which has a non-zero time average, has support at $\omega^{\prime}=\\{-2\omega,0,2\omega\\}$. The net result is that a substantial (order unity) fraction of the power in $a_{s,r}$ is at zero frequency. While there are other non-linearities arising from wave motion (e.g. coupling between the density and velocity fields), we believe that this term is representative of the largest of those and proceed neglecting all others. We thus conclude that the diffusivity is most likely to arise in terms of the form $\langle\boldsymbol{u}_{s}\boldsymbol{u}_{s}\rangle$, where $\boldsymbol{u}_{s}$ is the velocity field that arises from a zero-frequency non-linear forcing term as derived below (see equation (37)). That is, the waves interact with each other to produce a non-linear acceleration term that appears in the Navier-Stokes equations. This new term has a zero-frequency component, which drives further motion via the _linear_ equations of motion. This new motion has a zero-frequency component, and that is what enters into equation (24) to produce diffusion. ### 3.2 A Subtlety with Wavevectors Because IGW are incompressible, the wavevector $\boldsymbol{k}$ obeys $\boldsymbol{k}\cdot\boldsymbol{u}_{w}=0$, so equation (37) is non-zero only when there are multiple waves of different wave-vectors present. This, however, is straightforward to arrange. 
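The zero-frequency argument in Section 3.1 above is a one-line numerical check: over an integer number of periods, the discrete Fourier transform of $\sin(\omega t)$ has no power in the zero-frequency bin, while that of $\sin^{2}(\omega t)$ has power only in the bins corresponding to $\{-2\omega,0,2\omega\}$:

```python
import numpy as np

N, m = 1024, 8                       # samples, integer number of periods
t = np.arange(N)
s = np.sin(2.0 * np.pi * m * t / N)  # sin(omega t) sampled over m full periods

F1 = np.fft.fft(s)
F2 = np.fft.fft(s**2)

assert abs(F1[0]) < 1e-9                       # sin: no power at omega = 0
assert np.isclose(F2[0].real, N / 2.0)         # sin^2: time average 1/2
# sin^2 = (1 - cos(2 omega t))/2 -> support only at bins {0, 2m, N - 2m}
big = set(np.flatnonzero(np.abs(F2) > 1e-6).tolist())
assert big == {0, 2 * m, N - 2 * m}
```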
Consider for instance waves with wave-vectors $\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp}$ and $\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp}$. These produce a Stokes acceleration with terms of the form $\displaystyle\boldsymbol{a}_{s}$ $\displaystyle=(\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp})\cdot\boldsymbol{u}_{w}(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})\boldsymbol{u}_{w}(\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp}).$ (34) Noting that $(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})\cdot\boldsymbol{u}_{w}(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})=0$, we can rewrite this as $\displaystyle\boldsymbol{a}_{s}$ $\displaystyle=\left[(\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp})-(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})\right]\cdot\boldsymbol{u}_{w}(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})\boldsymbol{u}_{w}(\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp})$ (35) $\displaystyle=-2\boldsymbol{k}_{\perp}\cdot\boldsymbol{u}_{w}(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})\boldsymbol{u}_{w}(\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp})$ (36) $\displaystyle=-2k_{\perp}u_{w,\perp}(\boldsymbol{k}_{r}+\boldsymbol{k}_{\perp})\boldsymbol{u}_{w}(\boldsymbol{k}_{r}-\boldsymbol{k}_{\perp})$ (37) which is non-zero. ### 3.3 Outline of Calculation We now sketch the calculation before performing it in more detail. We first derive the linearized equations of motion for IGW in the Boussinesq plane-parallel limit. We then apply the non-linear acceleration in equation (37) to those equations, and derive the linear response of the velocity field to the Stokes forcing. The result is a radial velocity $u_{r}(\omega,\boldsymbol{k})$, where $\omega$ and $\boldsymbol{k}$ are the frequency and wavevector of $\boldsymbol{a}_{s}$.
This linear response is of the form $\displaystyle u_{r}(\omega,\boldsymbol{k})=\mathcal{L}_{i}(\omega,\boldsymbol{k})a_{s,i}(\omega,\boldsymbol{k}),$ (38) where $\mathcal{L}$ is a linear operator that depends on frequency and wavevector $\boldsymbol{k}$ and summation is implied over repeated indices. Next, we relate the Stokes acceleration to the wave velocity field via equation (37), which we write as $\displaystyle\boldsymbol{a}_{s}(\omega,\boldsymbol{k})=\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\int\frac{d^{3}\boldsymbol{r}}{V}e^{-i\omega t-i\boldsymbol{k}\cdot\boldsymbol{r}}\boldsymbol{u}_{w}(t,\boldsymbol{r})\cdot\nabla\boldsymbol{u}_{w}(t,\boldsymbol{r})$ (39) Inserting equation (27) twice we find $\displaystyle\boldsymbol{a}_{s}(\omega,\boldsymbol{k})$ $\displaystyle=\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\int\frac{d^{3}\boldsymbol{r}}{V}e^{-i\omega t-i\boldsymbol{k}\cdot\boldsymbol{r}}\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\sqrt{2\pi}}\sum_{\boldsymbol{k}^{\prime}}e^{i\omega^{\prime}t+i\boldsymbol{k}^{\prime}\cdot\boldsymbol{r}}\boldsymbol{u}_{w}(\omega^{\prime},\boldsymbol{k}^{\prime})\cdot\nabla\int_{-\infty}^{\infty}\frac{d\omega^{\prime\prime}}{\sqrt{2\pi}}\sum_{\boldsymbol{k}^{\prime\prime}}e^{i\omega^{\prime\prime}t+i\boldsymbol{k}^{\prime\prime}\cdot\boldsymbol{r}}\boldsymbol{u}_{w}(\omega^{\prime\prime},\boldsymbol{k}^{\prime\prime})$ (40) $\displaystyle=\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\sqrt{2\pi}}\sum_{\boldsymbol{k}^{\prime}}\int_{-\infty}^{\infty}\frac{d\omega^{\prime\prime}}{\sqrt{2\pi}}\sum_{\boldsymbol{k}^{\prime\prime}}\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\int\frac{d^{3}\boldsymbol{r}}{V}e^{it(-\omega+\omega^{\prime}+\omega^{\prime\prime})+i\boldsymbol{r}\cdot(-\boldsymbol{k}+\boldsymbol{k}^{\prime}+\boldsymbol{k}^{\prime\prime})}\boldsymbol{u}_{w}(\omega^{\prime},\boldsymbol{k}^{\prime})\cdot\boldsymbol{k}^{\prime\prime}\boldsymbol{u}_{w}(\omega^{\prime\prime},\boldsymbol{k}^{\prime\prime})$ (41) 
$\displaystyle=\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\sqrt{2\pi}}\sum_{\boldsymbol{k}^{\prime}}\boldsymbol{u}_{w}(\omega^{\prime},\boldsymbol{k}^{\prime})\cdot(\boldsymbol{k}-\boldsymbol{k}^{\prime})\boldsymbol{u}_{w}(\omega-\omega^{\prime},\boldsymbol{k}-\boldsymbol{k}^{\prime}).$ (42) The diffusivity is then given by $\displaystyle D$ $\displaystyle=\frac{1}{2}\sum_{\boldsymbol{k}}S(0,\boldsymbol{k})$ (43) $\displaystyle=\frac{1}{2}\sum_{\boldsymbol{k}}\int_{-\infty}^{\infty}d\omega\langle u_{r}(0,\boldsymbol{k})u_{r}(\omega,-\boldsymbol{k})\rangle$ (44) $\displaystyle=\frac{1}{2}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega_{1}\int_{-\infty}^{\infty}d\omega_{2}\sum_{\boldsymbol{k},\boldsymbol{k}_{1},\boldsymbol{k}_{2}}\mathcal{L}_{a}(0,\boldsymbol{k})\mathcal{L}_{c}(\omega,-\boldsymbol{k})(\boldsymbol{k}-\boldsymbol{k}_{1})_{a}(-\boldsymbol{k}-\boldsymbol{k}_{2})_{c}$ (45) $\displaystyle\times\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})u_{w,c}(\omega_{2},\boldsymbol{k}_{2})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{2})\rangle.$ (46) The result is a four-point autocorrelation function of the wave field. 
For Gaussian random variables $x_{1}...x_{4}$, Wick’s theorem allows us to write $\displaystyle\langle x_{1}x_{2}x_{3}x_{4}\rangle=\langle x_{1}x_{2}\rangle\langle x_{3}x_{4}\rangle+\langle x_{1}x_{3}\rangle\langle x_{2}x_{4}\rangle+\langle x_{1}x_{4}\rangle\langle x_{2}x_{3}\rangle.$ (47) We now approximate the correlations in $\boldsymbol{u}_{w}$ as Gaussian and use the above result to write $\displaystyle\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})u_{w,c}(\omega_{2},\boldsymbol{k}_{2})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{2})\rangle=$ (48) $\displaystyle\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})\rangle\langle u_{w,c}(\omega_{2},\boldsymbol{k}_{2})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{2})\rangle\ $ (49) $\displaystyle+\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,c}(\omega_{2},\boldsymbol{k}_{2})\rangle\langle u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{2})\rangle$ (50) $\displaystyle+\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{2})\rangle\langle u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})u_{w,c}(\omega_{2},\boldsymbol{k}_{2})\rangle.$ (51) Because the process is stationary, each two-point correlation function vanishes unless its frequencies are opposing (e.g. $\omega=-\omega^{\prime}$). Likewise, spatial translation invariance means that a two-point function vanishes unless its wave-vectors are opposing. Examining the three terms, we see that all but the last require $\boldsymbol{k}=0$. These do not contribute because, as we shall see, $\mathcal{L}(\omega,0)=0$. (Physically this arises because, in a stratified medium, there must be diffusion to permit motion, and that does not happen for the $\boldsymbol{k}=0$ mode.)
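The Gaussian factorization used above can be spot-checked by Monte Carlo; the covariance matrix below is an arbitrary illustrative choice:

```python
# Monte-Carlo check of Wick's theorem for four jointly Gaussian,
# zero-mean variables: <x1 x2 x3 x4> = C12*C34 + C13*C24 + C14*C23.
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.3, 0.2, 0.1],      # an arbitrary SPD covariance
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(4), C, size=2_000_000)

lhs = (x[:, 0]*x[:, 1]*x[:, 2]*x[:, 3]).mean()
rhs = C[0, 1]*C[2, 3] + C[0, 2]*C[1, 3] + C[0, 3]*C[1, 2]
print(lhs, rhs)   # agree to Monte-Carlo accuracy
```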
As a result we find $\displaystyle D$ $\displaystyle=\frac{1}{2}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega_{1}\int_{-\infty}^{\infty}d\omega_{2}\sum_{\boldsymbol{k},\boldsymbol{k}_{1},\boldsymbol{k}_{2}}\mathcal{L}_{a}(0,\boldsymbol{k})\mathcal{L}_{c}(\omega,-\boldsymbol{k})(\boldsymbol{k}-\boldsymbol{k}_{1})_{a}(-\boldsymbol{k}-\boldsymbol{k}_{2})_{c}$ (52) $\displaystyle\times\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{2})\rangle\langle u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})u_{w,c}(\omega_{2},\boldsymbol{k}_{2})\rangle.$ (53) The correlation functions all vanish unless their wave-vectors sum to zero, so $\boldsymbol{k}-\boldsymbol{k}_{1}+\boldsymbol{k}_{2}=0$, which we can use to eliminate $\boldsymbol{k}_{2}$ and find $\displaystyle D$ $\displaystyle=\frac{1}{2}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega_{1}\int_{-\infty}^{\infty}d\omega_{2}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\mathcal{L}_{a}(0,\boldsymbol{k})\mathcal{L}_{c}(\omega,-\boldsymbol{k})(\boldsymbol{k}-\boldsymbol{k}_{1})_{a}(-\boldsymbol{k}_{1})_{c}$ (54) $\displaystyle\times\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,d}(\omega-\omega_{2},-\boldsymbol{k}_{1})\rangle\langle u_{w,b}(-\omega_{1},\boldsymbol{k}-\boldsymbol{k}_{1})u_{w,c}(\omega_{2},\boldsymbol{k}_{1}-\boldsymbol{k})\rangle.$ (55) We can shift $\omega$ up by $\omega_{2}$ and $\boldsymbol{k}$ up by $\boldsymbol{k}_{1}$ to obtain $\displaystyle D$ $\displaystyle=-\frac{1}{2}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega_{1}\int_{-\infty}^{\infty}d\omega_{2}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\mathcal{L}_{a}(0,\boldsymbol{k}+\boldsymbol{k}_{1})\mathcal{L}_{c}(\omega+\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{1})k_{a}k_{1,c}$ (56) $\displaystyle\times\langle(u_{w,a}(\omega_{1},\boldsymbol{k}_{1})u_{w,d}(\omega,-\boldsymbol{k}_{1})\rangle\langle 
u_{w,b}(-\omega_{1},\boldsymbol{k})u_{w,c}(\omega_{2},-\boldsymbol{k})\rangle$ (57) Inserting equation (28) twice we find $\displaystyle D$ $\displaystyle=-\frac{1}{2}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega_{1}\int_{-\infty}^{\infty}d\omega_{2}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\mathcal{L}_{a}(0,\boldsymbol{k}+\boldsymbol{k}_{1})\mathcal{L}_{c}(\omega+\omega_{2},-\boldsymbol{k}-\boldsymbol{k}_{1})k_{a}k_{1,c}$ (58) $\displaystyle\times\delta(\omega_{1}+\omega)S_{w,ad}(\omega_{1},\boldsymbol{k}_{1})\delta(\omega_{2}-\omega_{1})S_{w,cb}(-\omega_{1},\boldsymbol{k}),$ (59) where $S_{w}$ is the power spectrum of the wave velocity. Evaluating the integrals yields $\displaystyle D$ $\displaystyle=-\frac{1}{2}\int_{-\infty}^{\infty}d\omega_{1}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\mathcal{L}_{a}(0,\boldsymbol{k}+\boldsymbol{k}_{1})\mathcal{L}_{c}(0,-\boldsymbol{k}-\boldsymbol{k}_{1})k_{a}k_{1,c}S_{w,ad}(\omega_{1},\boldsymbol{k}_{1})S_{w,cb}(-\omega_{1},\boldsymbol{k})$ (60) $\displaystyle=-\frac{1}{2}\int_{-\infty}^{\infty}d\omega_{1}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\left(\boldsymbol{k}\cdot\overleftrightarrow{S}_{w}(\omega_{1},\boldsymbol{k}_{1})\cdot\mathcal{L}(0,\boldsymbol{k}+\boldsymbol{k}_{1})\right)\left(\boldsymbol{k}_{1}\cdot\overleftrightarrow{S}_{w}(-\omega_{1},\boldsymbol{k})\cdot\mathcal{L}(0,-\boldsymbol{k}-\boldsymbol{k}_{1})\right).$ (61) That is, the diffusivity is given by a bilinear function of the wave power spectrum. 
Negating $\boldsymbol{k}_{1}$ we find $\displaystyle D$ $\displaystyle=\frac{1}{2}\int_{-\infty}^{\infty}d\omega_{1}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\left(\boldsymbol{k}\cdot\overleftrightarrow{S}_{w}(\omega_{1},-\boldsymbol{k}_{1})\cdot\mathcal{L}(0,\boldsymbol{k}-\boldsymbol{k}_{1})\right)\left(\boldsymbol{k}_{1}\cdot\overleftrightarrow{S}_{w}(-\omega_{1},\boldsymbol{k})\cdot\mathcal{L}(0,\boldsymbol{k}_{1}-\boldsymbol{k})\right).$ (62) We can clean this up a little by noting that for real-valued velocity fields $S(\omega,\boldsymbol{k})=S(\pm\omega,\pm\boldsymbol{k})$. So $\displaystyle D$ $\displaystyle=\frac{1}{2}\int_{-\infty}^{\infty}d\omega_{1}\sum_{\boldsymbol{k},\boldsymbol{k}_{1}}\left(\boldsymbol{k}\cdot\overleftrightarrow{S}_{w}(\omega_{1},\boldsymbol{k}_{1})\cdot\mathcal{L}(0,\boldsymbol{k}-\boldsymbol{k}_{1})\right)\left(\boldsymbol{k}_{1}\cdot\overleftrightarrow{S}_{w}(\omega_{1},\boldsymbol{k})\cdot\mathcal{L}(0,\boldsymbol{k}_{1}-\boldsymbol{k})\right)$ (63) $\displaystyle=\frac{1}{2}\int_{-\infty}^{\infty}d\omega\sum_{\boldsymbol{k}_{1},\boldsymbol{k}_{2}}\left(\boldsymbol{k}_{1}\cdot\overleftrightarrow{S}_{w}(\omega,\boldsymbol{k}_{2})\cdot\mathcal{L}(0,\boldsymbol{k}_{1}-\boldsymbol{k}_{2})\right)\times\left(\boldsymbol{k}_{1}\leftrightarrow\boldsymbol{k}_{2}\right).$ (64) where in the last line we have also relabeled $\omega_{1}\rightarrow\omega$, $\boldsymbol{k}\rightarrow\boldsymbol{k}_{1}$, and $\boldsymbol{k}_{1}\rightarrow\boldsymbol{k}_{2}$. ### 3.4 Filling in Details We now fill in the details we omitted above. Given the non-linear forcing $\boldsymbol{a}_{s}$, how does the velocity field respond? In Appendix A we derive the linearized equations of motion for IGW in the Boussinesq plane-parallel limit. We denote Eulerian perturbations by a prime, so that the perturbation of quantity $A$ is written as $A^{\prime}$, and we write the unperturbed background quantities with a subscript $0$, as in $A_{0}$.
The subscript $r$ denotes the vertical direction, and $h$ denotes the horizontal one. Gravity is in the vertical direction. With this, we obtain equations (A20)-(A24): $\displaystyle i\omega\rho_{0}u_{r}-ik_{r}p^{\prime}-T^{\prime}\frac{g_{0}\rho_{0}}{T_{0}}+\mu^{\prime}\frac{g_{0}\rho_{0}}{\mu_{0}}$ $\displaystyle=0$ (65) $\displaystyle i\omega\rho_{0}\boldsymbol{u}_{h}-i\boldsymbol{k}_{h}p^{\prime}$ $\displaystyle=0$ (66) $\displaystyle k_{r}u_{r}+\boldsymbol{k}_{h}\cdot\boldsymbol{u}_{h}$ $\displaystyle=0$ (67) $\displaystyle(i\omega+\alpha k^{2})T^{\prime}+\frac{N_{T}^{2}T_{0}}{g_{0}}u_{r}$ $\displaystyle=0$ (68) $\displaystyle(i\omega+D_{\mu}k^{2})\mu^{\prime}-\frac{N_{\mu}^{2}\mu_{0}}{g_{0}}u_{r}$ $\displaystyle=0$ (69) Here $\mu$ is the mean molecular weight, $T$ is the temperature, $\rho$ is the density, $g>0$ is the downward acceleration due to gravity, $N_{T}$ is the thermal buoyancy frequency, $\alpha$ is the thermal diffusivity, $N_{\mu}$ is the compositional buoyancy frequency, and $D_{\mu}$ is the compositional diffusivity. Note that we work in Fourier space, with wave-vector $\boldsymbol{k}$ and frequency $\omega$. 
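As a sanity check on equations (65)-(69): switching off both diffusivities ($\alpha=D_{\mu}=0$) should recover the familiar internal-gravity-wave dispersion relation $\omega^{2}=(N_{T}^{2}+N_{\mu}^{2})k_{h}^{2}/k^{2}$, with $k^{2}=k_{r}^{2}+k_{h}^{2}$. A symbolic sketch (a single horizontal component $k_{h}$, $u_{h}$ is assumed, which suffices since $\boldsymbol{k}_{h}$ only enters through $\boldsymbol{k}_{h}\cdot\boldsymbol{u}_{h}$):

```python
# Symbolic check: with alpha = D_mu = 0, the linear system (65)-(69)
# is singular exactly at the IGW frequency omega^2 = N^2 k_h^2 / k^2.
import sympy as sp

omega, rho0, kr, kh, g0, T0, mu0, NT2, Nmu2 = sp.symbols(
    'omega rho_0 k_r k_h g_0 T_0 mu_0 N_T2 N_mu2', positive=True)
ur, uh, p, Tp, mup = sp.symbols('u_r u_h p T_prime mu_prime')

eqs = [
    sp.Eq(sp.I*omega*rho0*ur - sp.I*kr*p
          - Tp*g0*rho0/T0 + mup*g0*rho0/mu0, 0),   # (65) radial momentum
    sp.Eq(sp.I*omega*rho0*uh - sp.I*kh*p, 0),      # (66) horizontal momentum
    sp.Eq(kr*ur + kh*uh, 0),                       # (67) incompressibility
    sp.Eq(sp.I*omega*Tp + NT2*T0/g0*ur, 0),        # (68) with alpha = 0
    sp.Eq(sp.I*omega*mup - Nmu2*mu0/g0*ur, 0),     # (69) with D_mu = 0
]
M, _ = sp.linear_eq_to_matrix(eqs, [ur, uh, p, Tp, mup])

omega_igw = sp.sqrt((NT2 + Nmu2)*kh**2/(kr**2 + kh**2))
print(sp.simplify(M.det().subs(omega, omega_igw)))  # → 0
```

The determinant of the homogeneous system vanishes at $\omega=\omega_{\rm IGW}$, confirming that non-trivial wave solutions exist there.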
We can insert our forcing term $\boldsymbol{a}_{s}$ on the right-hand side of the momentum equation, giving $\displaystyle i\omega\rho_{0}u_{r}-ik_{r}p^{\prime}-T^{\prime}\frac{g_{0}}{T_{0}}+\mu^{\prime}\frac{g_{0}}{\mu_{0}}$ $\displaystyle=-a_{s,r}$ (70) $\displaystyle i\omega\rho_{0}\boldsymbol{u}_{h}-i\boldsymbol{k}_{h}p^{\prime}$ $\displaystyle=-\boldsymbol{a}_{s,h}.$ (71) Solving for the vertical velocity $u_{r}$ at $\omega=0$ we obtain $\displaystyle u_{r}=\frac{\alpha D_{\mu}k^{2}}{D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2}}\left(a_{s,r}-\boldsymbol{a}_{s,\perp}\cdot\boldsymbol{k}_{\perp}\frac{k_{r}}{k_{\perp}^{2}}\right).$ (72) With this, we find $\displaystyle\mathcal{L}(0,\boldsymbol{k})=\frac{\alpha D_{\mu}k^{2}}{D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2}}\left(\hat{r}-\boldsymbol{k}_{\perp}\frac{k_{r}}{k_{\perp}^{2}}\right).$ (73) Inserting this into equation (64) we find $\displaystyle D$ $\displaystyle=\frac{\alpha^{2}D_{\mu}^{2}}{2(D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2})^{2}}\int_{-\infty}^{\infty}d\omega\sum_{\boldsymbol{k}_{1},\boldsymbol{k}_{2}}|\boldsymbol{k}_{1}-\boldsymbol{k}_{2}|^{4}\left(\boldsymbol{k}_{1}\cdot\overleftrightarrow{S}_{w}(\omega,\boldsymbol{k}_{2})\cdot\left[\hat{r}-(\boldsymbol{k}_{1,\perp}-\boldsymbol{k}_{2,\perp})\frac{k_{1,r}-k_{2,r}}{|\boldsymbol{k}_{1,\perp}-\boldsymbol{k}_{2,\perp}|^{2}}\right]\right)\times\left(\boldsymbol{k}_{1}\leftrightarrow\boldsymbol{k}_{2}\right).$ (74) ### 3.5 Approximate Expression Equation (74) is rather unwieldy. It may be simplified by noting that $k_{r}\gg k_{\perp}$ and $k_{r}u_{w,r}\approx k_{\perp}u_{w,\perp}$ for IGW.
This means that the $\hat{r}$ term in $\mathcal{L}$ contributes very little and that $k\approx k_{r}$, so $\displaystyle D$ $\displaystyle\approx\frac{\alpha^{2}D_{\mu}^{2}}{2(D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2})^{2}}\int_{-\infty}^{\infty}d\omega\sum_{\boldsymbol{k}_{1},\boldsymbol{k}_{2}}\frac{|\boldsymbol{k}_{1}-\boldsymbol{k}_{2}|^{6}}{|\boldsymbol{k}_{1,\perp}-\boldsymbol{k}_{2,\perp}|^{4}}\left(\boldsymbol{k}_{1}\cdot\overleftrightarrow{S}_{w}(\omega,\boldsymbol{k}_{2})\cdot\left[\boldsymbol{k}_{1,\perp}-\boldsymbol{k}_{2,\perp}\right]\right)\times\left(\boldsymbol{k}_{1}\leftrightarrow\boldsymbol{k}_{2}\right).$ (75) If the spectrum peaks strongly at frequency $\omega\approx\omega_{0}$ with width $\Delta\omega\approx\omega_{0}$, and peaks at wave-vector $k_{\perp}\approx k_{\perp,0}$ with width $\Delta k_{\perp}\approx k_{\perp,0}$, then $\displaystyle D$ $\displaystyle\approx\frac{\alpha^{2}D_{\mu}^{2}k_{r,0}^{2}}{2\omega_{0}(D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2})^{2}}\left(\frac{k_{r,0}}{k_{\perp,0}}\right)^{4}\left(k_{r,0}k_{\perp,0}u_{w,r}u_{w,\perp}\right)^{2}.$ (76) Using the incompressibility condition we find $u_{r}\approx u_{\perp}k_{\perp}/k_{r}$ so $\displaystyle D$ $\displaystyle\approx\frac{\alpha^{2}D_{\mu}^{2}k_{r,0}^{2}}{2\omega_{0}(D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2})^{2}}\left(\frac{k_{r,0}}{k_{\perp,0}}\right)^{4}\left(k_{\perp,0}^{2}u_{w,\perp}^{2}\right)^{2}.$ (77) It is often convenient to write this in terms of the wave luminosity $L_{w}\approx 4\pi r^{2}\rho(\omega/k_{r})u_{\perp}^{2}$, so $\displaystyle D$ $\displaystyle\approx\frac{\alpha^{2}D_{\mu}^{2}k_{r,0}^{8}}{2\omega_{0}^{3}(D_{\mu}N_{T}^{2}+\alpha N_{\mu}^{2})^{2}}\left(\frac{L_{w}}{4\pi r^{2}\rho}\right)^{2}.$ (78) ## 4 Discussion We have derived the leading order non-linear wave mixing due to internal gravity waves in a thermally and compositionally stratified fluid.
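As a quick algebraic check of that final scaling, substituting the wave-luminosity relation $u_{\perp}^{2}\approx L_{w}k_{r}/(4\pi r^{2}\rho\omega)$ into equation (77) reproduces equation (78) symbolically (`kp` stands in for $k_{\perp,0}$ and `kr`, `omega` for the peak values):

```python
# Symbolic check that inserting L_w ~ 4 pi r^2 rho (omega/k_r) u_perp^2
# into equation (77) reproduces equation (78).
import sympy as sp

alpha, Dmu, NT2, Nmu2, kr, kp, omega, r, rho, Lw = sp.symbols(
    'alpha D_mu N_T2 N_mu2 k_r k_perp omega r rho L_w', positive=True)

u_perp2 = Lw*kr/(4*sp.pi*r**2*rho*omega)        # from the L_w relation

D77 = (alpha**2*Dmu**2*kr**2/(2*omega*(Dmu*NT2 + alpha*Nmu2)**2)
       * (kr/kp)**4 * (kp**2*u_perp2)**2)       # equation (77)
D78 = (alpha**2*Dmu**2*kr**8/(2*omega**3*(Dmu*NT2 + alpha*Nmu2)**2)
       * (Lw/(4*sp.pi*r**2*rho))**2)            # equation (78)

print(sp.simplify(D77 - D78))  # → 0
```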
We find that this occurs at fourth order in the wave velocity, scales strongly with both the thermal and compositional diffusivities, and is suppressed by both forms of stratification. A different expression was obtained by Garcia Lopez & Spruit (1991) by assuming that waves drive shear turbulence which then produces mixing. That expression is linear in the wave luminosity (quadratic in the velocity), linear in the thermal diffusivity, and generally predicts much more mixing than our expression. We have not studied the wave-driven turbulence scenario, but note that in order for this to produce substantial mixing it must mean that a large fraction of the wave power is processed into zero-frequency motion. We encourage further study of whether and how this happens to pin down the scaling of wave mixing. I am grateful to Jim Fuller and Yuri Levin for extensive discussions, feedback, and mentorship on this work. The Flatiron Institute is supported by the Simons Foundation. This work was also supported by the Gordon and Betty Moore Foundation (Grant GBMF7392) and the National Science Foundation (Grant No. NSF PHY-1748958). ## Appendix A Equations of Motion Here we derive the linearized equations of motion for IGW in the Boussinesq plane-parallel limit, taking inspiration from Christensen-Dalsgaard (2003). 
### A.1 Mass Equation In the Boussinesq approximation we neglect density perturbations except in the momentum equation, so the continuity equation for mass is $\displaystyle\boldsymbol{u}\cdot\nabla\rho_{0}+\rho_{0}\nabla\cdot\boldsymbol{u}=0.$ (A1) In this approximation we further neglect the background density gradient, assuming the waves to have a much smaller characteristic vertical scale, so this reduces to $\displaystyle\nabla\cdot\boldsymbol{u}=0.$ (A2) ### A.2 Composition Equation We treat composition via the mean molecular weight $\mu$, which follows an advection-diffusion equation $\displaystyle\partial_{t}\mu+u_{r}\partial_{r}\mu-D_{\mu}\nabla^{2}\mu=0,$ (A3) where $D_{\mu}$ is the compositional diffusivity and $\partial_{r}$ is the vertical spatial derivative. Note that we have already made use of the Boussinesq approximation by neglecting density variation, and we have assumed that $D_{\mu}$ is a constant so that it commutes with $\nabla$. Expanding this equation to linear order in the perturbations we find $\displaystyle\partial_{t}\mu^{\prime}+u_{r}^{\prime}\partial_{r}\mu_{0}-D_{\mu}\nabla^{2}\mu^{\prime}=0$ (A4) where we have assumed $u_{r,0}=0$, corresponding to a stationary background state. Defining $\displaystyle N_{\mu}^{2}\equiv-\frac{g_{0}}{\mu_{0}}\partial_{r}\mu_{0},$ (A5) where $g_{0}$ is the background acceleration due to gravity, we find $\displaystyle\partial_{t}\mu^{\prime}-u_{r}\frac{N_{\mu}^{2}\mu_{0}}{g_{0}}-D_{\mu}\nabla^{2}\mu^{\prime}=0$ (A6) ### A.3 Energy Equation The energy equation is $\displaystyle c_{p}\partial_{t}T+u_{r}c_{p}T\partial_{r}s=-\nabla\cdot\boldsymbol{F},$ (A7) where $s$ is the dimensionless entropy, $\boldsymbol{F}$ is the radiative heat flux, and $c_{p}$ is the specific heat at constant pressure. Here we have assumed that the entropy is constant in the horizontal direction. 
We can expand the entropy gradient in terms of the temperature gradient as $\displaystyle T\partial_{r}s=\partial_{r}T-\partial_{r}T_{\rm ad},$ (A8) where the second term on the right-hand side is the adiabatic temperature gradient. This gives $\displaystyle c_{p}\partial_{t}T+u_{r}c_{p}\partial_{r}(T-T_{\rm ad})=-\nabla\cdot\boldsymbol{F}.$ (A9) Next, we write the heat flux as $\displaystyle\boldsymbol{F}=-\alpha c_{p}\nabla T,$ (A10) where $\alpha$ is the thermal diffusivity. Treating $\alpha$ and $c_{p}$ as constants we find $\displaystyle c_{p}\partial_{t}T+u_{r}c_{p}\partial_{r}(T-T_{\rm ad})=\alpha c_{p}\nabla^{2}T.$ (A11) Expanding to linear order, we see that $\displaystyle c_{p}\partial_{t}T^{\prime}+u_{r}c_{p}\partial_{r}(T_{0}-T_{\rm ad})=\alpha c_{p}\nabla^{2}T^{\prime}.$ (A12) Defining $\displaystyle N_{T}^{2}\equiv\frac{g_{0}}{T_{0}}\partial_{r}(T_{0}-T_{\rm ad}),$ (A13) we finally write this as $\displaystyle\partial_{t}T^{\prime}+\frac{u_{r}}{g_{0}}T_{0}N_{T}^{2}=\alpha\nabla^{2}T^{\prime}.$ (A14) ### A.4 Momentum Equation The linearized, inviscid Boussinesq Navier-Stokes equation is $\displaystyle\rho_{0}\partial_{t}\boldsymbol{u}=-\nabla p^{\prime}+\rho_{0}\boldsymbol{g}^{\prime}+\rho^{\prime}\boldsymbol{g}_{0}.$ (A15) Neglecting the perturbation to the gravitational field, and adding the viscous force $\rho_{0}\nu\nabla^{2}\boldsymbol{u}$ with kinematic viscosity $\nu$, we find $\displaystyle\rho_{0}\partial_{t}\boldsymbol{u}=-\nabla p^{\prime}+\rho^{\prime}\boldsymbol{g}_{0}+\rho_{0}\nu\nabla^{2}\boldsymbol{u}.$ (A16) Expanding the density perturbation in terms of the composition and temperature we obtain $\displaystyle\rho_{0}\partial_{t}\boldsymbol{u}=-\nabla p^{\prime}+\rho_{0}\left(\frac{\mu^{\prime}}{\mu_{0}}-\frac{T^{\prime}}{T_{0}}\right)\boldsymbol{g}_{0}+\rho_{0}\nu\nabla^{2}\boldsymbol{u}.$ (A17) Splitting this into a horizontal component and a radial component we find $\displaystyle\rho_{0}\partial_{t}\boldsymbol{u}_{h}-\rho_{0}\nu\nabla^{2}\boldsymbol{u}_{h}$ $\displaystyle=-\nabla_{h}p^{\prime}$ (A18)
$\displaystyle\rho_{0}\partial_{t}u_{r}-\rho_{0}\nu\nabla^{2}u_{r}$ $\displaystyle=-\partial_{r}p^{\prime}-g_{0}\rho_{0}\left(\frac{\mu^{\prime}}{\mu_{0}}-\frac{T^{\prime}}{T_{0}}\right)$ (A19) where we have picked a sign convention such that $\boldsymbol{g}_{0}$ points radially downward and the scalar $g_{0}>0$. ### A.5 Fourier Transform Suppose that our solution is proportional to $e^{i\omega t-ik_{h}x_{h}-ik_{r}r}$. Then, neglecting the viscous terms, our equations become $\displaystyle i\omega\rho_{0}u_{r}-ik_{r}p^{\prime}-T^{\prime}\frac{g_{0}\rho_{0}}{T_{0}}+\mu^{\prime}\frac{g_{0}\rho_{0}}{\mu_{0}}$ $\displaystyle=0$ (A20) $\displaystyle i\omega\rho_{0}\boldsymbol{u}_{h}-i\boldsymbol{k}_{h}p^{\prime}$ $\displaystyle=0$ (A21) $\displaystyle k_{r}u_{r}+\boldsymbol{k}_{h}\cdot\boldsymbol{u}_{h}$ $\displaystyle=0$ (A22) $\displaystyle(i\omega+\alpha k^{2})T^{\prime}+\frac{N_{T}^{2}T_{0}}{g_{0}}u_{r}$ $\displaystyle=0$ (A23) $\displaystyle(i\omega+D_{\mu}k^{2})\mu^{\prime}-\frac{N_{\mu}^{2}\mu_{0}}{g_{0}}u_{r}$ $\displaystyle=0$ (A24) ## References * Andrews & Mcintyre (1978) Andrews, D. G., & Mcintyre, M. E. 1978, Journal of Fluid Mechanics, 89, 609–646, doi: 10.1017/S0022112078002773 * Christensen-Dalsgaard (2003) Christensen-Dalsgaard, J. 2003, Lecture Notes on Stellar Oscillations. http://w.astro.berkeley.edu/~eliot/Astro202/2009_Dalsgaard.pdf * Garcia Lopez & Spruit (1991) Garcia Lopez, R. J., & Spruit, H. C. 1991, ApJ, 377, 268, doi: 10.1086/170356 * Kubo (1957) Kubo, R. 1957, Journal of the Physical Society of Japan, 12, 570, doi: 10.1143/JPSJ.12.570
# Predicting the redshift of $\gamma$-ray loud AGN using supervised machine learning Maria Giovanna Dainotti National Astronomical Observatory of Japan, Mitaka Space Science Institute, 4750 Walnut St, Suite 205, Boulder, CO 80301, USA Malgorzata Bogdan Department of Mathematics, University of Wroclaw, Poland Department of Statistics, Lund University, Sweden Aditya Narendra Jagiellonian University, Poland Spencer James Gibson Carnegie Mellon University, USA Blazej Miasojedow Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland Ioannis Liodakis Finnish Center for Astronomy with ESO (FINCA), University of Turku, Finland Agnieszka Pollo Astronomical Observatory of Jagiellonian University, Krakow National Centre for Nuclear Research, Warsaw Trevor Nelson University of Massachusetts at Amherst, Massachusetts, USA Kamil Wozniak AGH University of Science and Technology, Krakow Zooey Nguyen Faculty of Astronomy, University of California, Los Angeles, California, USA Johan Larrson Department of Statistics, Lund University, Sweden (Received June 1, 2019; Revised January 10, 2019) ###### Abstract AGNs are very powerful galaxies characterized by extremely bright emission from their central massive black holes. Knowing the redshifts of AGNs allows us to determine their distances and to investigate important astrophysical problems, such as the formation and evolution of early stars and the structure of early galaxies. Redshift determination is challenging because it requires detailed follow-up of multi-wavelength observations, often involving various astronomical facilities. Here, we employ machine learning algorithms to estimate redshifts from the observed $\gamma$-ray properties and photometric data of $\gamma$-ray loud AGN from the Fourth Fermi-LAT Catalog. The prediction is obtained with the Superlearner algorithm, using a LASSO-selected set of predictors.
We obtain a tight correlation, with a Pearson correlation coefficient of 71.3% between the inferred and the observed redshifts, and an average $\Delta z_{norm}$ = 11.6$\times 10^{-4}$. We stress that, notwithstanding the small sample of $\gamma$-ray loud AGNs, we obtain a reliable predictive model using Superlearner, an ensemble of several machine learning models. AGNs, Machine learning, redshift ††journal: APJ ## 1 Introduction Active Galactic Nuclei (AGN) with jets are the dominant class of objects when it comes to high-latitude ($|b|>10$) extragalactic $\gamma$-ray sources (Abdollahi et al., 2020). The Fermi $\gamma$-ray space telescope has detected more than 2863 such $\gamma$-ray AGNs, the majority of which ($>98\%$) are blazars: AGN with their jets pointed towards our line of sight. Blazars are divided according to the equivalent width of resonant emission lines in their optical spectra. Sources with broad emission lines are classified as Flat Spectrum Radio Quasars (FSRQs), whereas sources with weak or no emission lines are classified as BL Lacertae objects (BLLs). Measuring the redshift (z) of blazars has been a cumbersome and observationally expensive endeavor. The situation is further complicated by the absence of emission lines in the most numerous class of $\gamma$-ray loud blazars, i.e., BL Lacs. As a result, out of the 2863 sources of the Fourth AGN Fermi-LAT catalog (4LAC, Ajello et al. (2020)), only 1591 have redshift estimates, spanning $z=[0,3]$ but mostly concentrated below $z=2$. $\gamma$-Ray loud blazars with redshift estimates are relevant for our comprehension of the origin of the Extragalactic Background Light (EBL), which in turn lets us probe the cosmic evolution of blazars (e.g., Singal et al. (2012), Singal et al. (2014), Singal (2015), Singal et al. (2013a), Chiang et al. (1995), Ackermann et al. (2015), Singal et al.
(2013b), Marcotulli et al. (2020)), the intergalactic magnetic field (e.g., Venters & Pavlidou, 2013), the star formation rate history of our universe (e.g., Fermi-LAT Collaboration et al., 2018), as well as constraints on cosmological parameters (e.g., Domínguez et al., 2019). The difficulty in spectroscopically measuring redshift in a significant fraction of BL Lacs and the importance of identifying high-$z$ blazars has led to the development of photometric estimation techniques (photo-z, e.g., Kaur et al., 2017, 2018; Rajagopal et al., 2020; Carrasco et al., 2015; Krakowski et al., 2016; Nakoneczny et al., 2019). However, works using such methods typically produce redshift estimates for only $\sim 6-13\%$ of their sample, making alternative methods necessary. Machine learning (ML) methods for obtaining photo-z estimates for AGN are becoming increasingly important in the era of big data Astronomy (e.g., D’Isanto & Polsterer, 2018; Brescia et al., 2013, 2019; Ilbert et al., 2008; Hildebrandt et al., 2010). Here we focus on the $\gamma$-ray emitting AGN population in the 4LAC. In the current literature, multiple works exist which focus on extracting reliable photometric redshifts of AGNs (Cavuoti et al., 2014; Fotopoulou & Paltani, 2018; Logan & Fotopoulou, 2020; Yang et al., 2017; Zhang et al., 2019; Curran, 2020; Nakoneczny et al., 2020; Pasquet-Itam & Pasquet, 2018; Jones & Singal, 2017). In the blazar literature, considerable effort has also been devoted to classifying blazars of uncertain type (e.g., Chiaro et al., 2016; Kang et al., 2019) and unidentified Fermi objects (e.g., Liodakis & Blinov, 2019). Although these papers convey useful information about the algorithms that work well for classifying blazars, so far no analysis has been performed regarding the prediction of the redshifts of $\gamma$-ray loud blazars. Thus, we will tackle this problem by using machine and statistical learning algorithms.
We apply multiple ML algorithms, such as LASSO (Least Absolute Shrinkage and Selection Operator), XGBoost (Extreme Gradient Boosting), RandomForest, and BayesGLM (Bayesian generalized linear model). We follow the approach used in Dainotti et al. (2019), where some of us used the SuperLearner package to aggregate the results from multiple algorithms and predict the redshifts of $\gamma$-ray bursts. The results of this study considerably increase the number of blazars with inferred redshifts, so that we can finally obtain a more complete sample of $\gamma$-ray loud AGNs. As a result, this work will enable us to address some crucial questions about the luminosity function and density evolution of $\gamma$-ray loud AGNs. In Section 2, we discuss the data and predictors used. In Section 3, we outline the ML methods used, the selection of the best predictors and algorithms, and the validation of our results. In Section 4, we present the results obtained in this analysis. In Section 5, we draw our conclusions and discuss future perspectives. ## 2 The sample Fermi-LAT has been continuously monitoring the sky in the 50 MeV to 1 TeV range since 2008. The $\gamma$-ray properties used in this work are obtained from the 4LAC catalog (Ajello et al., 2020). It contains 2863 sources, 658 of which are FSRQs, 1067 are BL Lacs, 1074 are blazars of uncertain type, and the remaining 64 sources are classified as radio galaxies, narrow-line Seyfert 1s (NLSY1), and other non-blazar AGNs. Out of the 2863 sources, 1591 have a measured redshift, whose distribution is shown in Fig. 1. For completeness, we have also included non-BLL and non-FSRQ sources in the initial scatter matrix plot in Fig. 3 to show how the variables in the sample are distributed. In the generalization set, however, we predict the redshift only for the BLLs. Figure 1: The redshift distribution of the entire 4LAC catalog before selection cuts and outliers removal.
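The aggregation strategy described above can be sketched with scikit-learn's stacking regressor. This is a simplified stand-in for the SuperLearner R package (not the authors' exact configuration), run on synthetic data in place of the 4LAC predictors:

```python
# Minimal stacking sketch: LASSO, random-forest, and gradient-boosting
# base learners combined by a linear meta-learner, as a stand-in for the
# SuperLearner ensemble. The data are synthetic (11 fake predictors).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=730, n_features=11, noise=10.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("lasso", LassoCV(cv=5)),
                ("rf", RandomForestRegressor(n_estimators=200,
                                             random_state=0)),
                ("gbm", GradientBoostingRegressor(random_state=0))],
    final_estimator=LinearRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(f"held-out R^2: {stack.score(X_te, y_te):.2f}")
```

Each base learner is fit with internal cross-validation, and the meta-learner weights their out-of-fold predictions, which is the same design principle SuperLearner follows.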
Unfortunately, not all of the 1591 $\gamma$-ray AGNs can be used to train our model. A significant number of them have incomplete observational data, meaning we face the problem of missing values in several parameters. Thus, we perform cuts in the data set to remove incomplete data points, leaving us with 1169 $\gamma$-ray AGNs out of 2863. These consist of 661 BLLs, 309 FSRQs, 177 unclassified AGNs, and 22 AGNs belonging to other categories. This set is split into training and generalization sets, the former consisting of the $\gamma$-ray AGNs that have observed spectroscopic redshifts, and the latter of the $\gamma$-ray AGNs for which the redshift has not been measured. Our training set consists of 793 $\gamma$-ray AGNs, made up of 422 BLLs, 308 FSRQs, 41 unclassified, and 22 other-category AGNs. The 22 other-category $\gamma$-ray AGNs in our training set consist of 2 NLSY1 sources, 3 compact steep-spectrum radio sources (CSS), 13 radio galaxies (RDG), and 2 sources classified as non-blazar AGNs. They are shown in Fig. 3. After we perform the cuts related to the missing data, we are left with 730 $\gamma$-ray AGNs. Similarly, our generalization set consists of 376 $\gamma$-ray AGNs, of which 239 are BLLs, 1 is an FSRQ, and 136 are unclassified AGNs. After we perform the cuts in the generalization set, we are left with 239 BLLs. Due to their dominant presence, we perform our predictions only for BLLs and remove the 136 uncategorized AGNs. In the scatter matrix plot of Fig. 6, however, we show in black the only FSRQ from the generalization set. BL Lacs and FSRQs can easily be separated, as we did by introducing categorical variables into Superlearner. We stress that this is an important point, because it means that the quality of the predictions will most probably differ, especially if the fractions of BL Lacs in the training sample and in the full population are very different.
This is expected; as we have already mentioned in the introduction, this could be the case because of the difficulty of obtaining their spectroscopic redshifts. We also would like to stress that, due to the paucity of the other classes, the categorical variables have been limited to BLLs and FSRQs.

Figure 2: The histogram distribution of the redshift of our training set in the $\frac{1}{z+1}$ scale.

Regarding the predictors, 4LAC contains 13 photometric variables along with the spectroscopic redshift and names of the AGNs. It also includes the g-band magnitudes for individual sources from Gaia (Jordi et al., 2010). Some of the variables are used in their logarithmic form, since they span several orders of magnitude, and we predict the redshift in the $\frac{1}{z+1}$ scale (see Fig. 2). Out of these 13 variables, we take into consideration 11. We exclude the fractional variability, due to the incompleteness of the AGN sample, and Log$\nu$f$\nu$, as it is a second-order variable depending on Log$\nu$. The definitions and explanations for the 11 variables are given below.

* LogFlux - Logarithm in base 10 of the integral photon flux, in photons cm$^{-2}$ s$^{-1}$, from 1 to 100 GeV.
* LogEnergy_Flux - Logarithm in base 10 of the energy flux, in erg cm$^{-2}$ s$^{-1}$, in the 100 MeV - 100 GeV range, obtained by the spectral fitting in this range.
* LogSignificance - The source detection significance, in Gaussian sigma units, in the range from 50 MeV to 1 TeV.
* LogVariability_Index - The sum of the log(likelihood) differences between the flux fitted in each time interval and the average flux, over the 50 MeV to 1 TeV range.
* Log Highest_Energy - Measured in GeV, it is the energy of the highest-energy photon detected for each source, selected from the lowest instrumental background noise data, with an associated probability of more than 95%.
* Log$\nu$ - Logarithm in base 10 of the synchrotron peak frequency in the observer frame, measured in Hz.
* PL_Index - The photon index when fitting the spectrum with a power law, in the energy range from 50 MeV to 1 TeV.
* LogPivot_Energy - The energy, in MeV, at which the error in the differential photon flux is minimal, derived from the likelihood analysis in the range from 100 MeV to 1 TeV.
* LP_Index - The photon index at the pivot energy ($\alpha$) when fitting the spectrum (100 MeV to 1 TeV) with a log parabola.
* LP_$\beta$ - The spectral parameter ($\beta$) when fitting the spectrum (50 MeV to 1 TeV) with a log parabola.
* Gaia_G_Magnitude - The Gaia magnitude at the g-band provided by the 4LAC, taken from the Gaia survey.

Figure 3: The full scatter matrix plot of all the variables defined above, before feature selection. Here InvRedshift denotes the $\frac{1}{z+1}$-scaled data.

## 3 Methodology

In this section, we describe in detail the methodology adopted for this study: the choice of the transformations adopted, the variable selection, the methods considered singularly, namely Big LASSO (a computationally efficient implementation of LASSO), XGBoost, Random Forest, and Bayes GLM, and the SuperLearner algorithm used to create the ensemble leading to the final prediction (see Sec. 3.4). The statistical parameters used in order to compare our results with those of others in the field are: bias, $\sigma_{NMAD}$ (normalized median absolute deviation), Pearson correlation $r$, RMSE (root mean square error), and standard deviation ($\sigma$). We quote the measured values of these parameters for $\Delta z_{norm}$ and $\Delta z$, where $\Delta z_{norm}=\frac{z_{spec}-z_{pred}}{1+z_{spec}}$ and $\Delta z=z_{spec}-z_{pred}$. As shown in the scatter matrix of Fig. 3, we can see the presence of multiple correlated variables, such as PL_Index and LP_Index, LogEnergyFlux and LogFlux, and LogFlux and LogSignificance.
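As a hedged illustration, the statistical parameters defined above ($\Delta z_{norm}$, bias, RMSE, $\sigma_{NMAD}$) can be computed as in the following pure-Python sketch; the function names are ours, not taken from the analysis code used in the paper:

```python
import math

def _median(vals):
    # Median of a list (average of the two central values for even length)
    s = sorted(vals)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def delta_z_norm(z_spec, z_pred):
    # Normalized residuals: (z_spec - z_pred) / (1 + z_spec)
    return [(zs - zp) / (1.0 + zs) for zs, zp in zip(z_spec, z_pred)]

def bias(res):
    # Mean of the residuals
    return sum(res) / len(res)

def rmse(res):
    # Root mean square of the residuals
    return math.sqrt(sum(r * r for r in res) / len(res))

def sigma_nmad(res):
    # Normalized median absolute deviation: 1.4826 * median(|r - median(r)|)
    med = _median(res)
    return 1.4826 * _median([abs(r - med) for r in res])
```

The same functions apply unchanged to $\Delta z$ by passing the raw residuals instead of the normalized ones.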
Hence, we deploy a feature selection method, LASSO, which naturally reduces the number of correlated variables, although it does not completely eliminate all of them. The procedure consists of two main parts, as presented in the flowchart in Fig. 4. The first step is to clean our data source by eliminating data points with missing variables and then to prune our feature set with the LASSO algorithm. After this, the selected variables are used to train our model. We split our data into a train and test set composed of 657 $\gamma$-ray AGNs and a validation set composed of 73 $\gamma$-ray AGNs. We divide the sample by taking as the validation set the last 10% of the $\gamma$-ray AGNs observed. This choice is equivalent to taking the validation set randomly, since there is no preferential order in redshift when we choose the validation set. This is just for one test; as we show in Sec. 3.4, we also apply the 10-fold cross-validation (hereafter called 10fCV) 100 times to avoid choosing a validation sample that may not be representative of the whole sample. We use SuperLearner, which includes the optimized XGBoost, Random Forest, Bayes GLM, and Big LASSO. Details of this optimization are given in Sec. 3.3. After training this ensemble on our data, we obtain our trained model, which leads us to the predictions of the redshifts.

Figure 4: Methodology flowchart: the rectangular boxes represent data sets, the parallelograms the $\gamma$-ray AGN categories, the rhombus indicates functions performed, rounded rectangles indicate the ML algorithms used, the green lines show the direction of the input, orange lines the output, and blue lines indicate the splits and changes in the data set. The color coding indicates the following: yellow indicates the data with spectroscopic z, orange the ones without spectroscopic z, green the results, and blue the intermediate steps or data sets.
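A minimal sketch of the split described above; the counts 657/73 and the last-10% rule come from the text, while the function name is ours:

```python
def split_train_validation(sources, val_frac=0.10):
    """Hold out the last val_frac of the (already ordered) sample as validation.

    The text argues this is equivalent to a random split, since the
    observation order carries no preferential trend in redshift.
    """
    n_val = round(len(sources) * val_frac)
    return sources[:-n_val], sources[-n_val:]
```

Applied to the 730 $\gamma$-ray AGNs with complete data, this reproduces the 657/73 split quoted above.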
### 3.1 Feature selection

We apply the LASSO method to prune our features and obtain a more effective subset for redshift prediction. The LASSO algorithm uses a shrinkage method for linear regression, requiring the $\ell^{1}$ norm (the sum of the absolute values of the components of the solution vector) to be less than or equal to a positive number known as the tuning parameter ($\lambda$). This penalization allows the model to select a subset of features and discard the rest by setting their coefficients to 0 (Tibshirani, 1996). The tuning parameter determines the amount of shrinkage applied to the estimated vector. As a consequence, the model is easier to interpret, with a smaller number of features, and usually has a smaller prediction error than the full model. The prediction error is the RMSE between the predicted and the observed redshifts, which is minimized during the one hundred times 10fCV training. As a measure of the prediction errors we quote the RMSE value, as well as the $\sigma_{NMAD}$. For our analysis, we use the GLMNET function with the LASSO selection feature (Hastie et al., 2017; Tibshirani et al., 2012). We pick the $\lambda.1se$ value, which is the largest $\lambda$ for which the cross-validated error is within one standard error of the minimum (Friedman et al., 2010a), and its corresponding coefficients for the features. The coefficients assigned by LASSO to each of them are displayed in Fig. 5, and we choose only the features with non-zero coefficients. To better visualize the parameter space of these features, we plot them in the scatter matrix plot shown in Fig. 6, along with the generalization set. The LASSO feature selection shows that some of the variables that were strongly correlated are naturally eliminated, but we are still left with two correlated variables: LogEnergyFlux and LogSignificance. This means that for LASSO both features are relevant.
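To make the penalization concrete, here is a minimal coordinate-descent LASSO in pure Python. It is a didactic stand-in for the GLMNET fit (it does not implement the $\lambda.1se$ rule or the warm-started $\lambda$ path), not the code used in this work:

```python
def soft_threshold(rho, lam):
    # Soft-thresholding operator: shrinks rho toward zero by lam
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by coordinate descent."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / z if z else 0.0
    return b
```

With a sufficiently large $\lambda$, the soft-thresholding step sets the coefficients of weak features exactly to zero, which is the selection mechanism exploited in Fig. 5.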
Since LogSignificance provides information on the detectability of the $\gamma$-ray AGNs, which is relevant to the final prediction of the redshift, we decided to retain it. Moreover, from a statistical point of view, it is not necessary to remove correlated variables, since the aim here is to reach a greater accuracy in the prediction of the redshift. Nevertheless, we have shown in the Appendix (see Fig. 17) that the results do not change at the level of $1\%$ for $\sigma_{NMAD}$ (normalized median absolute deviation), RMSE, and correlation when we manually discard this variable.

Figure 5: The coefficients assigned to the features by LASSO at the $\lambda.1se$ value. We keep only the features with non-zero coefficients.

Figure 6: The full symmetric scatter matrix plot shows the response (in our case the InvRedshift) and predictor variables. The different $\gamma$-ray AGN categories are color-coded according to the legend displayed on the plot. The values in parentheses indicate the number of $\gamma$-ray AGNs present in the data set.

Figure 7: The scatter matrix plot for BLLs in the generalization and training sets. The generalization-set BLLs are shown in blue, while the training-set BLLs are shown in red.

In addition, we clarify that we performed the analysis with both $\log_{10}(1+z)$ and $\frac{1}{z+1}$, the distribution of the latter being shown in Fig. 2. The choice of transformation arises from the fact that the results related to $\frac{1}{z+1}$ present the smallest $\sigma_{NMAD}$ and smaller $\Delta z_{norm}$ (normalized variation in redshift), thus leading us to use this transformation.

### 3.2 The ML algorithms used in our analysis

By adopting an ML approach, we leverage the built-in algorithms that learn from the training set, and we test our predictions on the test set. We employ the trained models to predict the redshift of sources for which the redshift has not been measured.
These optimized methods are combined into an ensemble using the SuperLearner package, providing us with a better prediction than any single algorithm. The ML algorithms used here are summarized in the following itemized points:

* Regression trees build the predictor by partitioning the data based on the values of the independent variables and averaging the values of the dependent variable. Examples of tree-based methods are XGBoost and Random Forest; indeed, both algorithms utilize multiple regression trees to increase their predictive power.
* The Random Forest algorithm generates multiple independent regression trees and averages them to obtain a more accurate prediction (Breiman, 2001; Valencia et al., 2019; Green et al., 2019; Miller et al., 2015). An extremely difficult task is choosing the optimal depth of such a tree, namely deciding the number of partition levels. In gradient boosting, the final predictor is built as a weighted sum of simple tree predictors. Compared to the Random Forest method, the regression trees are not generated independently but built on each other using the residuals from the previous step, until the culmination of trees forms a stronger regression model.
* The XGBoost algorithm is an improvement of the gradient boosting method (Chen & Guestrin, 2016; Friedman et al., 2000; Friedman, 2001, 2002), and it also leverages weak predictors. It uses a more regularized model formalization to control overfitting, and thus gives better performance.
* Big LASSO is a computationally efficient implementation of the LASSO algorithm in R (Zeng & Breheny, 2017). It allows us to compute and analyze big multidimensional data sets quickly and efficiently.
* Bayes GLM is a Bayesian inference approach to the generalized linear model.
It determines the most likely estimate of the response variable (in our case the redshift) given the particular set of predictors and the prior distribution on the set of regression parameters (the Maximum A Posteriori estimator, MAP). It works on the Fisher principle: “what value of the unknown parameter is most likely to generate the observed data”. The BayesGLM method is more numerically and computationally stable than normal GLM models. It employs a Student-t prior distribution for the regression coefficients. Then, given the observed data, the likelihood function for these parameters is calculated. The likelihood function and priors are combined to produce the posterior distributions, from which we obtain the MAP estimators of the desired parameters (Birnbaum, 1962; Hastie & Tibshirani, 1987, 1990; Friedman et al., 2010b).

### 3.3 Optimizing Algorithms

Figure 8: Variation of the RMSE and correlation coefficient versus the number of trees, for different depths (upper panels) and shrinkage coefficients (lower panels). It should be noted that these results are obtained after performing 10fCV on our data set.

For the XGBoost algorithm, we have the option to vary the number of regression trees, the depth, and the learning rate (the so-called shrinkage coefficient, which shrinks the predictions of a tree to prevent over-fitting). We tune these to best fit our data without over- or under-fitting. In Fig. 8, the top left and right panels show the variation of the root mean square error (RMSE) and correlation, respectively, as a function of the number of trees in the model. The RMSE is minimized and the correlation maximized at a depth of 5 and 500 trees. However, since depths 4 and 5 give very similar results, to avoid the risk of over-fitting usually associated with a higher depth we choose a maximum depth of 4 and proceed to test the model performance while varying the learning rate, see the bottom panels of Fig. 8.
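The tuning described here (and for Random Forest below) is a grid search over hyperparameters scored by k-fold CV RMSE. A generic pure-Python sketch of that loop, with a hypothetical fit_predict hook standing in for the actual XGBoost/RF training call:

```python
import math
import random
from itertools import product

def kfold_indices(n, k=10, seed=42):
    # Random partition of range(n) into k complementary folds
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_rmse(X, y, params, fit_predict, k=10):
    # k-fold CV: train on k-1 folds, accumulate squared error on the held-out fold
    folds = kfold_indices(len(y), k)
    sq, cnt = 0.0, 0
    for f, test_idx in enumerate(folds):
        tr = [i for g, fold in enumerate(folds) if g != f for i in fold]
        pred = fit_predict(params, [X[i] for i in tr], [y[i] for i in tr],
                           [X[i] for i in test_idx])
        sq += sum((y[i] - p) ** 2 for i, p in zip(test_idx, pred))
        cnt += len(test_idx)
    return math.sqrt(sq / cnt)

def tune(X, y, grid, fit_predict, k=10):
    # Exhaustive grid search (e.g. trees x depth x shrinkage); lowest CV RMSE wins
    names = sorted(grid)
    best = None
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = cv_rmse(X, y, params, fit_predict, k)
        if best is None or score < best[1]:
            best = (params, score)
    return best
```

In our analysis the grid covers the number of trees, the depth, and the shrinkage coefficient; the sketch above only illustrates the selection logic.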
The optimal learning rate in our case is 0.01. In the bottom left panel of Fig. 8, we plot the RMSE variation, and on the right the Pearson correlation coefficient ($r$). In summary, our final optimized XGBoost model consists of 500 trees, with a depth of 4 and a shrinkage coefficient of 0.01. A similar analysis is performed for Random Forest as well. We tune the number of trees, the depth, and the maximum number of nodes based on which model has the lowest RMSE and the maximum correlation value. We start with the default value for the number of variables that are randomly sampled (from here on denoted as mtry), which is 2. We vary the number of trees and the maximum number of nodes. The RMSE and correlation variations are shown in the top left and right plots of Fig. 9, respectively. We observe that a value of 200 for the maximum number of nodes gives the least RMSE and the maximum correlation at 400 trees. Next, we keep the maxnodes parameter constant and vary the mtry value from 2 to 4. The RMSE and correlation plots are shown in the bottom panels of Fig. 9. Among the different values of mtry tested, we see that mtry=2 gives us the best results in terms of the highest correlation coefficient and the smallest RMSE. Furthermore, the number of trees is selected to be 600: this gives the second smallest RMSE, but since this region also corresponds to the plateau of the correlation coefficient (see the bottom left panel of Fig. 9), 600 is the most favored value. In addition, when the RMSE is similar, as for 600 and 900 trees, we prefer the smaller number of trees to prevent overfitting. In the case of BayesGLM, there are no tuneable hyperparameters, unlike for XGBoost and RF. Instead, we specify a formula based on which the redshift is predicted. The formula used is a linear combination of all the features we consider: $\frac{1}{z_{i}+1}=f(\sum K_{i})$ (1) Here $K$ belongs to the set of features described in Sec. 3.1 and presented in Fig.
5, and $i$ denotes each $\gamma$-ray AGN in the training set used in the model fitting. The Big LASSO algorithm is an extension of LASSO. Hence, its optimization is done identically, i.e., its $\lambda$ hyperparameter is tuned based on its internal CV so as to obtain the model with the least RMSE. As a result, there is no need for us to explicitly handle its optimization.

Figure 9: The panels show the Random Forest optimization plots. The upper left and right panels present the RMSE and correlation vs. the number of trees, respectively. This is performed with a fixed value of mtry=2 and different values of RF maxnodes=(50, 100, 150, 200), color-coded with red, blue, black, and green, respectively. The bottom left and right panels present the same plots as the upper panels, but with a fixed value of maxnodes=200 and with mtry=2, 3, 4 indicated with red, blue, and black, respectively.

Since every ML method has its advantages in a given parameter space, and in our case in different redshift ranges, we leverage each of the methods by using SuperLearner, described in the next subsection.

### 3.4 SuperLearner

In our approach, we have three different types of sets: the training, the test, and the generalization sets. The training set is used to train the model based on the observed variables for which we already know the response variable; the test set is used to validate the accuracy of the model; and the generalization set is the one for which the redshift is unknown and the ML algorithm is applied to infer this information. First, we use LASSO and select important features based on the data from the training set. Then, we construct the prediction model using the SuperLearner ensemble algorithm, which includes the optimized XGBoost, Random Forest, Bayes GLM, and Big LASSO. In our case, since the test set is never used during training, it serves as a validation data set.
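The core of the ensemble step is a convex combination of the base learners' held-out CV predictions, with non-negative weights summing to 1. A toy pure-Python stand-in for the SuperLearner weight fit (the R package solves this constrained least-squares problem far more efficiently than this coarse grid search):

```python
import itertools
import math

def ensemble_weights(preds, y, step=0.05):
    """Grid search over convex weights (each >= 0, summing to 1).

    preds: one list of held-out CV predictions per base learner.
    Returns (weights, rmse) minimizing the ensemble RMSE on y.
    """
    m = len(preds)
    ticks = [i * step for i in range(int(round(1.0 / step)) + 1)]
    best = None
    for w in itertools.product(ticks, repeat=m):
        if abs(sum(w) - 1.0) > 1e-9:
            continue  # keep only weight vectors on the unit simplex
        err = math.sqrt(sum(
            (yi - sum(wj * pj[i] for wj, pj in zip(w, preds))) ** 2
            for i, yi in enumerate(y)) / len(y))
        if best is None or err < best[1]:
            best = (w, err)
    return best
```

Learners whose predictions add nothing receive weight 0, which mirrors how SuperLearner down-weights the least informative algorithms in the ensemble.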
SuperLearner (Van der Laan et al., 2007) is an algorithm that utilizes k-fold CV to estimate the performance of ML algorithms. It creates an optimal weighted average of the input models, i.e., an ensemble. Namely, the SuperLearner provides coefficients that reflect the relative importance of each learner against the others in the ensemble. Besides this feature, Superlearner can test the predictive power of multiple ML models or the same model, but with different settings. The weights of the algorithms always sum up to 1 and are always equal to or greater than 0. Using these coefficients, we can group the highest weighted algorithms into an ensemble and improve the prediction more than any single algorithm (Polley & Van der Laan, 2010). We use the functions implemented in the statistical software R, particularly the SuperLearner package. In 10fCV the dataset is randomly partitioned into 10 complementary subsets. The SuperLearner is trained on 9 of these subsets and the resulting model is employed to infer the values in the remaining subset, which plays the role of the test set. The process is iterated 10 times, with each subset playing the role of the test set. The SuperLearner parameters are automatically set to optimize the prediction for all test sets (i.e., all data points). Following statistical practice, we repeat this whole procedure 100 times to make the prediction less dependent on the selection of the specific random partition of the dataset. Thus, our predictions result as the average of 100 independent SuperLearner predictions. This allows for stabilization and de-randomization of our results. Given the paucity of our dataset, this is a crucial step in analyzing the performance of our model. ## 4 Results Our final training set consists of 657 $\gamma$-ray AGNs with observed redshifts. We separate 73 $\gamma$-ray AGNs as a validation set that is not used for any training (see Fig. 4). Figure 10: The left panel shows the observed vs. 
predicted redshift in the $\frac{1}{z+1}$ scale, while the right panel shows the observed vs. predicted redshifts in the linear scale.

Figure 11: In all panels the results are obtained with the one hundred 10fCV. The top left and right panels show the histogram of the RMSE and the relative influence of our chosen predictors, respectively. The bottom left and right panels show the NMAD distribution and the linear correlation distribution, respectively.

In Fig. 10, the panels show the correlation plot between the observed and predicted redshift in the $\frac{1}{z+1}$ scale (left panel) and in the linear scale (right panel). The blue lines indicate the 2$\sigma$ cones for each of the plots, where the $\sigma$ is calculated in the $\frac{1}{z+1}$ scale as follows: $\frac{1}{z_{p}+1}=\frac{1}{z_{s}+1}\pm 2\sigma,$ where $z_{s}$ is the spectroscopic redshift and $z_{p}$ is the photometric redshift. Due to the choice of our scaling, the 2$\sigma$ line is not straight in the linear scale and is given by the following formula: $z_{p}=z_{s}\left[\frac{1\pm 2\sigma(z_{p}+1)}{1\mp 2\sigma}\right]\pm\frac{2\sigma}{1\mp 2\sigma}.$ We obtain a Pearson correlation $r=0.71$ in the linear scale, with $\sigma_{NMAD}(\Delta z_{norm})=0.192$ and $\sigma_{NMAD}(\Delta z)=0.287$. We obtain a low bias for $\Delta z_{norm}$ at $11.6\times 10^{-4}$ and for $\Delta z$ at $8.5\times 10^{-2}$. We also have a low percentage of catastrophic outliers, at 5% of our total sample; “catastrophic outliers” is the ML nomenclature for such outliers (Jones & Singal, 2020). More specifically, these catastrophic outliers are the $\gamma$-ray AGNs for which $|{\Delta z}|>2\sigma$, and which thus lie outside the cones presented in Fig. 10. In the upper panels of Fig. 11, we present the distribution of our linear-scale RMSE and the relative influence of the features in our data over the one hundred 10f nested CV runs, in the upper left and right panels, respectively. In the bottom panel of Fig.
11, the NMAD and the differential distribution of the correlation coefficient are shown in the left and right panels, respectively. We note here that in our analysis the redshift of $\gamma$-ray AGNs is not just an effect of the distance-brightness relation, which is due to selection biases (see Singal et al. 2013b, Singal et al. 2012, Singal et al. 2014, Singal 2015, and Singal et al. 2013a, as we have discussed in the introduction). Indeed, very recent studies (Qu et al. 2019; Zeng et al. 2021) have been performed on the 4LAC catalog to evaluate the dependence of the BLL luminosity on the redshift. For completeness, we also present the results from a sample that is not used in the CV step at all, alongside the prediction of the model on an internal test set, in Fig. 13. With this validation set, we have a catastrophic outlier percentage of 7%, comparable with the previous values. In the upper left panel of Fig. 12, we show the histogram of $\Delta z$, with the red line indicating the bias and the blue lines the $\pm 1\sigma$; in the upper right panel of Fig. 12, we present the histogram of $\Delta z_{norm}$, with the red line indicating the normalized bias and the blue lines the normalized $\pm 1\sigma$. We present the residual plot in the bottom right panel of Fig. 14. The lack of any increasing or decreasing trend between the residuals and the fitted values is evidence of the goodness of our fit. Furthermore, the $R^{2}$ value for our result is 0.508, and the interquartile range (IQR) value for $\Delta z$ is 0.39. Additionally, we compare our results with other works in the field, such as Richards et al. (2008) (Type-1 broad-line quasars from SDSS), Laurino et al. (2011) (optical galaxies and quasars from SDSS), Ball et al. (2008) (main-sample galaxies, luminous red galaxies, and quasars from SDSS and GALEX), and Brescia et al. (2013) (quasars from SDSS+GALEX+WISE+UKIDSS). The comparisons are shown in Table 1.
Experiment | Bias ($\Delta z_{norm}$) | Sigma ($\Delta z_{norm}$) | NMAD ($\Delta z_{norm}$)
---|---|---|---
SuperLearner | 0.001 | 0.19 | 0.19
Brescia et al. 2013 (best case) | 0.004 | 0.069 | 0.029
Laurino et al. | 0.095 | 0.16 | …
Ball et al. | 0.095 | 0.18 | …
Richards et al. | 0.115 | 0.28 | …

Table 1: Comparison of our results with those of other ML-based photometric redshift estimation techniques. The empty spaces indicate a lack of available data for those cases.

We stress that even though our results do not always achieve a more precise prediction than some of the cases shown in Table 1, they are still comparable to them, and we need to take into account that our training set is less than half the size of the samples investigated in the mentioned papers. Hence, these results highlight that further enlargements and enhancements of the 4LAC data will produce more precise results in the near future.

Figure 12: The differential distributions of the frequencies of $\Delta z$ and $\Delta z_{norm}$ are shown in the left and right panels, respectively. The blue lines indicate the $\sigma$ value and the red lines the bias. The bottom plots show the box-plot representations of the above frequency histograms, respectively.

Figure 13: The correlation of the validation-set predicted $\frac{1}{z+1}$ vs. the observed $\frac{1}{z+1}$ (upper left panel) and the predicted z vs. the observed one (upper right panel).

Figure 14: The top left and right panels show the boxplot of the NMAD values for the internal SuperLearner test set, and the boxplot of the RMSE values for ten internal SuperLearner test sets, respectively. The bottom left panel shows the $\Delta$z distribution for the ten SuperLearner test sets. Bottom right panel: the residuals vs. the SuperLearner predictions for each of the test sets.

### 4.1 Bias correction

As can be seen from the left panel of Fig. 10, the higher-redshift AGNs are predicted at lower values.
This is a clear signature of our predictions being biased. To correct for this, we fit a linear model between the observed and predicted redshifts in the $\frac{1}{z+1}$ scale. We fit linear models for BLLs and FSRQs separately, shown by the cyan and purple dashed lines in the left panel of Fig. 15. The black dotted line represents the linear fit for BLLs and FSRQs together. We can clearly see that the fitted lines deviate from the 1:1 line.

Figure 15: Left panel: linear regression fitting between the predicted and observed redshifts. The cyan and purple lines show the linear fits for BLLs and FSRQs, respectively. Right panel: plot of binned $\frac{1}{z+1}$ vs. mean bias.

The average bias for BLLs and FSRQs is $3.2\times 10^{-3}$ and $-3.2\times 10^{-3}$, respectively. The bias corrections for BLLs and FSRQs follow this equation: $U_{prediction}=a*U_{observed}+b,$ (2) where $U_{prediction}=\frac{1}{Z_{predictions}+1}$, $U_{observed}=\frac{1}{Z_{observed}+1}$, and $a$ and $b$ are the slope and the intercept of the linear fit, respectively. We obtain different values of $a$ and $b$ for BLLs and FSRQs. These quantify the bias present in our analysis. For BLLs: $a=0.29$ and $b=0.51$. For FSRQs: $a=0.29$ and $b=0.35$.

### 4.2 Prediction on the generalization set

Our initial aim, as already indicated in the introduction, is to increase the number of 4LAC $\gamma$-ray AGNs that have estimates of the redshift. Based on the results shown in the previous section, we have so far obtained a trained model that enables predictions for 4LAC $\gamma$-ray AGNs that fall within its trained parameter space. Indeed, for the generalization set, it is of crucial importance to ensure that the generalization-set parameter space overlaps with our training set as much as possible. We start with a great advantage with this data set, since based on the scatter matrix plot in Fig.
6 we can observe that there is a significant overlap between the training set (red and green data points for BLLs and FSRQs, respectively) and the generalization set (blue and black points for BLLs and FSRQs, respectively). Hence, the trained model has the advantage of extrapolating less when predicting the redshift of the generalization set. For the generalization set, we decide to retain $\gamma$-ray AGNs based on the condition that the values of their predictors should fall within the maximum and minimum values of the corresponding predictors in the training set. This way, we can achieve more reliable redshift predictions with minimal extrapolation. To better evaluate how the generalization set overlaps with the training set, we present a scatter matrix plot in Fig. 7, showing the distribution of the very same seven predictors chosen by the LASSO feature selection in Fig. 6. The blue points belong to the new, trimmed generalization set, and as we can see, all the points fall well within the training-set data points, shown by the red points. After we perform these cuts in the parameter space, we are left with 232 $\gamma$-ray AGNs, which is 97% of the total number. These 232 $\gamma$-ray AGNs are all BLLs. We would like to clarify here that the objects in the generalization sample that are classified as BCU, or uncategorized, are excluded when we perform our predictions. We also exclude the single FSRQ that we have in our generalization set, so as to focus solely on BLLs for our predictions. Thus, the trimming of the variables does not influence the total number of redshifts we predict. We present the results of our analysis in Fig. 16. As shown in our previous results (see Fig. 10), 95% of our predictions fall within the 2$\sigma$ error bars. We expect a similar scenario for the predictions on the generalization set. Here, the blue histogram bars represent the median of the predictions on the generalization set, not taking into account the 2$\sigma$ errors.
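A minimal sketch of the bias correction of Eq. (2): a least-squares fit of $U_{prediction}=a\,U_{observed}+b$ in the $\frac{1}{z+1}$ scale, and its inversion to de-bias a raw prediction. The coefficients in the usage below are illustrative, not the paper's fitted values:

```python
def linear_fit(u_obs, u_pred):
    # Least-squares slope a and intercept b for u_pred = a * u_obs + b (Eq. 2)
    n = len(u_obs)
    mx = sum(u_obs) / n
    my = sum(u_pred) / n
    sxx = sum((x - mx) ** 2 for x in u_obs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(u_obs, u_pred))
    a = sxy / sxx
    return a, my - a * mx

def debias(z_pred, a, b):
    # Invert Eq. (2): map a raw predicted redshift back toward the observed scale
    u = (1.0 / (z_pred + 1.0) - b) / a
    return 1.0 / u - 1.0
```

Because $a<1$ in the fits, the inversion stretches the compressed range of raw predictions back over the full redshift interval, which is why the corrected predictions extend from 0 to 3.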
We performed the Kolmogorov-Smirnov (KS) test to evaluate whether the extracted redshift distribution comes from the observed redshift distribution in the training set. As a result, we obtained that the null hypothesis that the two distributions come from the same parent population is rejected at the level of less than $10^{-16}$%. Since we are not taking into account the error bars, the KS test thus indicates that the two distributions are different. We decided to investigate this issue by performing the KS test again on the individual distributions of the variables, and we confirm that the null hypothesis of similarity is rejected there as well. Thus, it is not surprising that the two redshift distributions are not similar. Nevertheless, we do not necessarily expect the distributions of the redshift to be similar from a statistical point of view, since selection biases are at play and it is possible, as mentioned earlier, that we observe the faintest $\gamma$-ray AGNs at low redshift and the brightest $\gamma$-ray AGNs at higher redshift. Our model, without accounting for the bias correction, predicts the redshift for BLLs between 0.5 and 1. With the application of the bias correction, the predicted redshifts are extended to cover the whole interval between 0 and 3, which better resembles the distribution of true redshifts. When the originally predicted redshift (the SuperLearner prediction) is close to 0.5, we are at the borders of the generalization limits, namely close to the intercept value $b$, and cannot predict the true redshift well.

Figure 16: The differential distribution of the predicted redshift of 232 BLLs from the generalization set (blue histogram) vs. the training set (orange data points). The upper panel shows the distribution in the linear scale, while the bottom panel shows the distribution in the $\frac{1}{z+1}$ scale.

To be more specific, our sample contains FSRQs and BL Lacs in similar numbers (655 FSRQs and 686 BL Lacs).
However, it is easier to measure the redshift in FSRQs, given their prominent broad emission lines. Given the observational difficulties in measuring redshifts for BL Lacs, the sources in our study might not be a representative sample of the BL Lac population. There is a non-zero probability for sources to be misclassified, or even for the $\gamma$-ray source to be mis-associated with a counterpart. Moreover, our sample contains only 60 non-blazar $\gamma$-ray AGNs, whose $\gamma$-ray properties potentially evolve differently with redshift. All of the above may hamper the accuracy of the ML models. However, given the improvement in localization accuracy, the number of sources, and the number of non-blazar $\gamma$-ray AGNs (a factor of two improvement) between the 3LAC and 4LAC (as well as earlier catalogs), future Fermi catalogs will allow us to address further the shortcomings of our current sample.

## 5 Conclusion

In this work, we have crafted a methodology to predict the redshift of $\gamma$-ray loud AGNs from the 4LAC catalog, using their observed $\gamma$-ray properties. We used categorical variables to distinguish among the $\gamma$-ray AGN types and the LASSO algorithm to select the most predictive variables. We select the ML models based on the coefficients of predictive power obtained with SuperLearner, after having performed the optimization of the models. We trained several ML algorithms on these properties by using SuperLearner and used the trained models to predict the $\gamma$-ray AGN redshifts. By computing the relative influence of these observed properties, we also determine which of them are the best predictors. The application of these methods to the 4LAC $\gamma$-ray AGNs, for the BLL sources for which the redshift is unknown, increases the size of the data set of $\gamma$-ray AGNs with known redshift by 61%, thus allowing us to reach a larger sample.
This new data set will have the great advantage of being complete to a higher percentage for a given flux limit. This enlarged sample of $\gamma$-ray AGNs, in turn, will allow us to determine the luminosity function, its evolution, and the density evolution of $\gamma$-ray AGNs with improved accuracy. With a sample of 657 $\gamma$-ray AGNs with measured redshifts, we have shown that the Superlearner method can provide predicted redshifts that correlate with the observed redshifts to a high degree of accuracy. After performing one hundred rounds of 10-fold (10fCV) nested CV, we obtain an average Pearson correlation coefficient $r=0.77$ in the $\frac{1}{z+1}$ scale, RMSE$=0.12$, and a bias of $5.4\times 10^{-4}$; in the $z$ scale we instead obtain $r=0.71$, RMSE$(\Delta z_{norm})=0.43$, bias$(\Delta z_{norm})=1.2\times 10^{-3}$, and $\sigma_{NMAD}=0.192$. We then predict the redshift of 232 BLLs that do not have an observed redshift and plot them against the observed redshift distribution. Most $\gamma$-ray AGNs without a measured redshift are predicted to lie between $0.18\leq z\leq 1.02$. Previous work utilizing ML algorithms focused primarily on the classification of $\gamma$-ray AGNs. Currently, to the best of our knowledge, no work in the blazar literature attempts to estimate the redshift using observed $\gamma$-ray characteristics. This is a pioneering work in $\gamma$-ray AGN redshift estimation and will hopefully usher in follow-up studies that can improve our predictive capabilities even further. ## 6 Appendix In this Appendix, we show that the models used together in an ensemble perform better than the individual methods. In Table 2 we show the RMSE, linear correlation, bias, and NMAD scores of the individual algorithms used in the ensemble and the final Superlearner ensemble score. Based on the RMSE and the linear correlation values, we can clearly see that the Superlearner ensemble performs better. 
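At its core, a Superlearner-style ensemble is a convex combination of base-learner predictions whose weights minimize a (cross-)validated risk. The toy sketch below is illustrative only (the actual package fits weights across all learners by CV risk minimization); it finds the best blending weight for two base learners by grid search:

```python
def stack_two(pred_a, pred_b, y_true, steps=1000):
    """Weight w in [0, 1] minimizing the squared error of the
    blended prediction w*pred_a + (1-w)*pred_b (grid search)."""
    def risk(w):
        return sum((w * a + (1 - w) * b - y) ** 2
                   for a, b, y in zip(pred_a, pred_b, y_true))
    return min((i / steps for i in range(steps + 1)), key=risk)
```

When one learner is strictly better the weight collapses onto it; otherwise the blend can beat both individual learners, which is the pattern reported for the ensemble in Table 2.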
The individual model scores presented here are 10fCV scores, and we ran them with the same optimization parameters shown in Sec. 3.3.

Algorithm | Root Mean Square Error | Linear correlation | Bias ($\Delta z_{norm}$) ($\times 10^{-4}$) | NMAD $\Delta z_{norm}$
---|---|---|---|---
SuperLearner | 0.014 | 0.71 | 11.6 | 0.19
XGB | 0.015 | 0.70 | 22.6 | 0.19
RF | 0.015 | 0.70 | 15 | 0.20
BigLasso | 0.02 | 0.69 | 2.2 | 0.19
BayesGLM | 0.02 | 0.69 | 8.6 | 0.19

Table 2: The 10fCV risk estimates of the individual algorithms and of the Superlearner ensemble. Our choice of the $\frac{1}{z+1}$ scaling for the redshift instead of $\log(z+1)$ is based on the results presented in Table 3, obtained after performing a 10fCV with the two different scalings.

Scaling | Mean square error | Linear correlation | Bias ($\Delta z_{norm}$) ($\times 10^{-4}$) | NMAD $\Delta z_{norm}$
---|---|---|---|---
$\log(z+1)$ | 0.427 | 0.70 | 223 | 0.2
$\frac{1}{z+1}$ | 0.435 | 0.71 | 11.6 | 0.19

Table 3: The MSE, correlation, bias, and NMAD for the two different redshift scalings. We show the one hundred 10fCV results for the RMSE, the NMAD distribution of the normalized $\Delta z$, and the linear correlation. For completeness of the discussion, we show the results when we exclude LogSignificance from our analysis; see Fig. 17. Figure 17: The linear scale correlation plot when LogSignificance is not included. Next, we present the results when we use only a single variable, LogEnergyFlux, for the prediction with our ensemble, in Fig. 18. Figure 18: Correlation plot in linear scale. The values of the statistical parameters are shown on the plots themselves. It is clear that when we use only one predictor, even though it has a high relative influence (the flux), the prediction we achieve for the redshift is poor compared to the prediction we obtain with the full set of LASSO-selected predictors. 
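The columns of Tables 2 and 3 can be reproduced from predicted and observed redshifts as follows; a minimal sketch in which the residual convention $\Delta z_{norm} = (z_{pred} - z_{obs})/(1 + z_{obs})$ and the NMAD constant 1.4826 are my assumptions about the standard definitions used:

```python
import math

def redshift_metrics(z_pred, z_obs):
    """Pearson r, RMSE, mean bias, and sigma_NMAD of the
    normalized residuals dz = (z_pred - z_obs) / (1 + z_obs)."""
    n = len(z_pred)
    dz = [(p - o) / (1 + o) for p, o in zip(z_pred, z_obs)]
    rmse = math.sqrt(sum(d * d for d in dz) / n)
    bias = sum(dz) / n
    med = sorted(dz)[n // 2]                      # median (odd n assumed)
    nmad = 1.4826 * sorted(abs(d - med) for d in dz)[n // 2]
    mp, mo = sum(z_pred) / n, sum(z_obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(z_pred, z_obs))
    r = cov / math.sqrt(sum((p - mp) ** 2 for p in z_pred) *
                        sum((o - mo) ** 2 for o in z_obs))
    return r, rmse, bias, nmad
```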
Additionally, we show the results obtained when using only our two most predictive features, i.e., LP_beta and LogPivotEnergy, in Fig. 19. Figure 19: The linear correlation plot when using only LP_beta and LogPivotEnergy in our ensemble. These two features have the highest relative influence in our feature set, but using them alone does not lead to results as accurate as those obtained with the entire feature set. This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. M.G.D. thanks Trevor Hastie for the interesting discussion on overfitting problems. We also thank Raymond Wayne for the initial computation and discussions about balanced sampling techniques, which will be implemented in subsequent papers. This research was supported by the Polish National Science Centre grant UMO-2018/30/M/ST9/00757 and by Polish Ministry of Science and Higher Education grant DIR/WK/2018/12. Finally, we would like to thank Sarthak Das and Subham Kedia for their help in performing the 10-fold CV analysis of the Superlearner algorithm.
# QCD in the cores of neutron stars††thanks: Presented at Quark Matter 2022 Oleg Komoltsev Faculty of Science and Technology, University of Stavanger, 4036 Stavanger, Norway ###### Abstract I discuss why state-of-the-art perturbative QCD calculations of the equation of state at large chemical potential, which are reliable at asymptotically high densities, constrain the same equation of state at neutron-star densities. I describe how these theoretical calculations affect the EOS at lower density. I argue that ab-initio calculations in QCD offer significant information about the equation of state of neutron-star matter, which is complementary to the current astrophysical observations. ## 1 Introduction The equation of state (EOS) of dense matter at zero temperature is a necessary input for neutron-star (NS) physics. Theoretical calculations of the EOS can be performed only in the two opposite limits of low and high density. In the low-density limit the matter can be described within chiral effective field theory (CET) [1, 2]. Those calculations are reliable up to around the nuclear saturation density $n_{s}=0.16\,\textrm{fm}^{-3}$. On the other hand, we can access the EOS using perturbative Quantum Chromodynamics (pQCD) at asymptotically high densities, above $\sim 40n_{s}$ [3, 4]. Central densities of maximally massive neutron stars are around $4-8n_{s}$, which is not reachable within either CET or pQCD. Therefore, we have no tools to compute the EOS of the cores of NSs from first principles. However, we can obtain empirical access to the cores of NSs using recent astrophysical observations. The most important probes of NS physics are the discovery of massive NSs [5, 6, 7], mass-radius measurements [8, 9], and gravitational-wave and multi-messenger astronomy [10, 11]. 
Utilizing all the constraints coming from astrophysical observations as well as from first-principle calculations dramatically narrows down the range of possible EOSs, which allows us to use the densest objects in the Universe to independently test various beyond-standard-model scenarios and/or general relativity. The majority of EOS studies extrapolate the CET EOS up to NS densities of 5-10$n_{s}$ and condition it on the observational inputs. The results differ from those of works that include the high-density limit and interpolate across the intervening two orders of magnitude in density. The qualitative difference is a softening of the EOS around $\epsilon\sim 750~\textrm{MeV/fm}^{3}$, which can be interpreted as quark-matter cores inside the most massive NSs [12]. In this work I answer why and how the pQCD input offers significant information about the EOS at NS densities. I find that the pQCD input propagates non-trivial constraints all the way down to 2.2$n_{s}$ using solely thermodynamic stability, consistency, and causality [13]. In addition, the complementarity of the pQCD input to the astrophysical observations was studied in [14]. I show that pQCD is responsible for the softening of the EOS at NS densities; it is therefore essential to include the pQCD input in any inference study of the EOS. ## 2 Setup All technical details as well as analytical formulas are presented in [13]. In this section I describe the conditions I use, in particular stability, consistency, and causality, and the resulting propagation of the pQCD input down to lower densities. Let us start with the baryon density $n$ as a function of the chemical potential $\mu$, as shown in fig. 1(a). The goal is to find all possible lines that connect the endpoint of the CET results (dark blue line in the bottom left corner) with the first point of the pQCD calculations (purple line in the upper right corner) using three conditions. Figure 1: (a) Baryon density as a function of chemical potential. 
Simultaneous fulfilment of thermodynamic consistency, stability, and causality narrows down the allowed region (white area) of the EOS. (b) Zero-temperature EOSs from the CompOSE database plotted with the allowed area (gray shape) arising from the new constraints in $\epsilon-p-n$ space. Consistent/in tension/not consistent means that an EOS is consistent with the integral constraints for all/some/none of the X values in the range [1,4]. The first condition is thermodynamic stability, which implies concavity of the grand canonical potential, $\partial^{2}_{\mu}\Omega(\mu)\leq 0$. At zero temperature $\Omega(\mu)=-p(\mu)$, which implies that the number density is a monotonically increasing function of the chemical potential, $\partial_{\mu}n(\mu)\geq 0$. The second condition is causality: the sound speed cannot exceed the speed of light, $c^{2}_{s}\leq 1$. This provides a constraint on the first derivative of the number density with respect to the chemical potential, $c^{-2}_{s}=\frac{\mu}{n}\frac{\partial n}{\partial\mu}\geq 1.$ (1) For each point on the $\mu-n$ plane we can calculate the least allowed slope coming from causality, which is represented by the arrows in fig. 1(a). This cuts the upper (lower) region of the plane, because any point in the area above (below) the orange line $c^{2}_{s}=1$ cannot be connected to pQCD (CET) in a causal way. The third condition is thermodynamic consistency. In addition to $n$ and $\mu$, we need to match the pressure $p$ at the low- and high-density limits. The pressure is given by the integral of the number density: $\int^{\mu_{\rm QCD}}_{\mu_{\rm CET}}n(\mu)d\mu=p_{\rm QCD}-p_{\rm CET}=\Delta p.$ (2) This implies that the area under the curve for any EOS is fixed by our input parameters. For each arbitrary point $(\mu_{0},n_{0})$ we can construct the EOS that maximizes/minimizes the area under the curve, $\Delta p_{max/min}(\mu_{0},n_{0})$, shown as a green/blue dashed line in fig. 1(a). 
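For a tabulated candidate $n(\mu)$, the three conditions can be checked numerically; a minimal sketch (piecewise-linear curve, trapezoidal integration, and an illustrative tolerance, all my own choices):

```python
def eos_consistent(mu, n, dp_target, tol=1e-9):
    """Check stability (n nondecreasing), causality
    (c_s^{-2} = (mu/n) * dn/dmu >= 1 on each segment), and the
    integral constraint  int n dmu = dp_target  for a
    piecewise-linear candidate n(mu)."""
    area = 0.0
    for i in range(len(mu) - 1):
        dmu, dn = mu[i + 1] - mu[i], n[i + 1] - n[i]
        if dn < 0:                     # stability violated
            return False
        if dmu > 0:                    # vertical jumps are allowed
            mu_mid = 0.5 * (mu[i] + mu[i + 1])
            n_mid = 0.5 * (n[i] + n[i + 1])
            if (mu_mid / n_mid) * (dn / dmu) < 1 - tol:
                return False           # causality violated
        area += 0.5 * (n[i] + n[i + 1]) * dmu
    return abs(area - dp_target) <= tol * max(1.0, abs(dp_target))
```

A curve along the $c_s^2=1$ line passes all three checks, a decreasing $n(\mu)$ fails stability, and a too-shallow slope fails causality.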
If $\Delta p_{max}(\mu_{0},n_{0})<\Delta p$, then any EOS that goes through the point $(\mu_{0},n_{0})$ does not have enough area under the curve. This discards the region in the lower right corner of fig. 1(a), below the red line labeled “integral constraints”. If $\Delta p_{min}(\mu_{0},n_{0})>\Delta p$, then any EOS that goes through the point $(\mu_{0},n_{0})$ has too much area under the curve. This cuts the area in the upper left corner, above the red line. The integral constraints can be obtained without any assumption about the interpolation function, in a completely general and analytical way. Figure 2: Constraints on the $\epsilon-p$ plane coming from the low- and high-density limits. Shapes outlined by a solid black line are the allowed areas for fixed number density without pQCD. Blue shapes are the allowed regions after imposing the pQCD input. We can map the allowed region from the $\mu-n$ plane to the $\epsilon-p$ plane. The result of this mapping is shown in fig. 2. The green envelope corresponds to the white area in fig. 1(a) restricted by causality and the integral constraints. The shapes of the allowed region with and without pQCD are shown for fixed number densities $n$ = 2, 3, 5 and 10$n_{s}$. This explicitly shows how the pQCD input can propagate information down to lower densities, starting from 2.2$n_{s}$. And, strikingly, at 5$n_{s}$ it excludes 75% of the otherwise allowed area. Using the new constraints we can check the consistency of publicly available EOSs. Results for all zero-temperature EOSs in $\beta$-equilibrium from the public CompOSE database [15, 16] are shown in fig. 1(b). Almost all of the EOSs become inconsistent with the pQCD input at some density within the provided range. ## 3 Bayesian inference of EOS With the construction described above we can propagate information from ab-initio QCD calculations down to NS densities, where we already have constraints from astrophysical observations. 
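The extremal areas $\Delta p_{max/min}$ admit closed forms, because the extremal curves follow $c_s^2=1$ segments (along which $n\propto\mu$, since $\frac{\mu}{n}\frac{dn}{d\mu}=1$) joined by vertical first-order-transition jumps. The sketch below is my reconstruction of that construction, not the exact expressions of [13]:

```python
def dp_extremes(mu0, n0, muL, nL, muH, nH):
    """Extremal areas under a stable, causal n(mu) curve running
    from (muL, nL) to (muH, nH) through (mu0, n0).  Extremal
    curves follow c_s^2 = 1 lines n(mu) = k*mu, joined by
    vertical (first-order transition) jumps."""
    dp_max = (n0 / (2 * mu0)) * (mu0**2 - muL**2) \
           + (nH / (2 * muH)) * (muH**2 - mu0**2)
    dp_min = (nL / (2 * muL)) * (mu0**2 - muL**2) \
           + (n0 / (2 * mu0)) * (muH**2 - mu0**2)
    return dp_min, dp_max

def point_allowed(mu0, n0, muL, nL, muH, nH, dp):
    """(mu0, n0) survives the integral constraint iff
    dp_min <= dp <= dp_max."""
    lo, hi = dp_extremes(mu0, n0, muL, nL, muH, nH)
    return lo <= dp <= hi
```

A sanity check: for a point lying on the single causal line connecting the two endpoints, $\Delta p_{min}=\Delta p_{max}$, so only the exact $\Delta p$ of that line is allowed.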
To understand whether the new constraints from pQCD go beyond the constraints coming from the NS measurements, we construct a Bayesian-inference framework. This was done in [14], where we generate a large ensemble of EOSs using Gaussian-process regression. We anchor the ensemble to the CET calculations and extrapolate it up to 10$n_{s}$, where we impose the pQCD input as the blue shape from fig. 2. We then condition the ensemble sequentially on the astrophysical observations. With this setup we can turn the pQCD input on and off in order to study its effect on the posterior after imposing the astrophysical observations. The results are presented in fig. 3. The reduction of the pressure (green arrow in the right plot), which is caused by the QCD input, happens before the density reaches its maximal central value. In other words, the prediction of the QCD input is a softening of the EOS that happens inside the most massive neutron stars. Figure 3: The left plot shows a sample of 10k EOSs. The coloring represents the likelihood after imposing all observations as well as the pQCD input. The right plot shows 67%-credible intervals conditioned on the different astrophysical observations and the high-density limit. The gray band shows the 67%-credible interval for the maximal central energy density reached in NSs. ## 4 Conclusion In this work, I show how QCD calculations at asymptotically high densities can propagate information down to lower densities using solely thermodynamic consistency, stability, and causality. This information offers significant constraints on the EOS at NS densities, which are complementary to the current astrophysical observations. In addition, I show that the prediction of the QCD input is a softening of the EOS in the most massive NSs. An easy-to-use python script to check the consistency of an EOS with the pQCD input is provided on GitHub [17]. 
In order to achieve an accurate determination of the EOS it is crucial to utilize all available controlled measurements and theoretical calculations. This strategy either helps us to understand the matter of the densest objects in the Universe or reveals a discrepancy between the different inputs, which would allow us to use NSs as a tool for fundamental discoveries. ## References * [1] I. Tews, T. Krüger, K. Hebeler, and A. Schwenk, “Neutron matter at next-to-next-to-next-to-leading order in chiral effective field theory,” _Phys. Rev. Lett._ , vol. 110, no. 3, p. 032504, 2013. * [2] C. Drischler, K. Hebeler, and A. Schwenk, “Chiral interactions up to next-to-next-to-next-to-leading order and nuclear saturation,” _Phys. Rev. Lett._ , vol. 122, no. 4, p. 042501, 2019. * [3] T. Gorda, A. Kurkela, P. Romatschke, M. Säppi, and A. Vuorinen, “Next-to-next-to-next-to-leading order pressure of cold quark matter: Leading logarithm,” _Phys. Rev. Lett._ , vol. 121, no. 20, p. 202701, 2018. * [4] T. Gorda, A. Kurkela, R. Paatelainen, S. Säppi, and A. Vuorinen, “Soft Interactions in Cold Quark Matter,” _Phys. Rev. Lett._ , vol. 127, no. 16, p. 162003, 2021. * [5] P. B. Demorest, T. Pennucci, S. M. Ransom, M. S. E. Roberts, and J. H. T. Hessels, “A two-solar-mass neutron star measured using Shapiro delay,” _Nature_ , vol. 467, pp. 1081–1083, 2010. * [6] J. Antoniadis _et al._ , “A massive pulsar in a compact relativistic binary,” _Science_ , vol. 340, no. 6131, p. 1233232, 2013. * [7] E. Fonseca _et al._ , “Refined Mass and Geometric Measurements of the High-mass PSR J0740+6620,” _Astrophys. J. Lett._ , vol. 915, no. 1, p. L12, 2021. * [8] M. C. Miller _et al._ , “The radius of PSR J0740+6620 from NICER and XMM-Newton data,” _Astrophys. J. Lett._ , vol. 918, no. 2, p. L28, 2021. * [9] T. E. Riley _et al._ , “A NICER view of the massive pulsar PSR J0740+6620 informed by radio timing and XMM-Newton spectroscopy,” _Astrophys. J. Lett._ , vol. 918, no. 2, p. L27, 2021. * [10] B. P. 
Abbott _et al._ , “GW170817: Observation of gravitational waves from a binary neutron star inspiral,” _Phys. Rev. Lett._ , vol. 119, no. 16, p. 161101, 2017. * [11] ——, “Multi-messenger observations of a binary neutron star merger,” _Astrophys. J. Lett._ , vol. 848, no. 2, p. L12, 2017. * [12] E. Annala, T. Gorda, A. Kurkela, J. Nättilä, and A. Vuorinen, “Evidence for quark-matter cores in massive neutron stars,” _Nat. Phys._ , vol. 16, no. 9, pp. 907–910, 2020. * [13] O. Komoltsev and A. Kurkela, “How Perturbative QCD Constrains the Equation of State at Neutron-Star Densities,” _Phys. Rev. Lett._ , vol. 128, no. 20, p. 202701, 2022. * [14] T. Gorda, O. Komoltsev, and A. Kurkela, “Ab-initio QCD calculations impact the inference of the neutron-star-matter equation of state,” 4 2022. * [15] S. Typel, M. Oertel, and T. Klähn, “CompOSE CompStar online supernova equations of state harmonising the concert of nuclear physics and astrophysics compose.obspm.fr,” _Phys. Part. Nucl._ , vol. 46, no. 4, pp. 633–664, 2015. * [16] M. Oertel, M. Hempel, T. Klähn, and S. Typel, “Equations of state for supernovae and compact stars,” _Rev. Mod. Phys._ , vol. 89, no. 1, p. 015007, 2017. * [17] “https://github.com/OKomoltsev/QCD-likelihood-function.”
where $\tilde{c}$ denotes the $|\mathcal{Z}|$ components of $c$ corresponding to the columns of $A$ placed in $\tilde{A}$, and $\tilde{c}_c$ are the remaining entries $c_g$ of $c$. Then $\alpha = {\tilde{A}}^{'-1}\tilde{c}$, which can be seen by left-multiplying the above equation by the $|\mathcal{Z}| \times |\mathcal{G}|$ matrix $[{\tilde{A}}^{'-1},\mathbf{0}^{|\mathcal{Z}| \times (|\mathcal{G}|-|\mathcal{Z}|)}]$. Intuitively, the system $A'\alpha = c$ is over-determined, so we only need the components $\tilde{c}$ of $c$ to uniquely determine the vector $\alpha$. Now consider the case in which $|\mathcal{G}| < |\mathcal{Z}|$, so that the system $A'\alpha = c$ is underdetermined. Suppose for now that the rank of $A$ is $|\mathcal{G}|$, so that it has full column rank. One solution $\alpha$ can then be obtained by writing $A=\begin{bmatrix} \tilde{A} \\ {\tilde{A}_c} \end{bmatrix}$ where $\tilde{A}$ is an invertible $|\mathcal{G}| \times |\mathcal{G}|$ matrix representing $|\mathcal{G}|$ linearly independent rows of $A$. Now consider $\alpha = \begin{pmatrix} \tilde{A}^{'-1}c\\ \mathbf{0}^{(|\mathcal{Z}|-|\mathcal{G}|) \times 1} \end{pmatrix}$, where $\tilde{A}^{'-1}c$ is a $|\mathcal{G}|$-component vector. This represents one solution to $A'\alpha = c$ because $$A'\begin{pmatrix} \tilde{A}^{'-1}c\\ \mathbf{0}^{(|\mathcal{Z}|-|\mathcal{G}|) \times 1} \end{pmatrix} = [\tilde{A}', \tilde{A}_c'] \begin{pmatrix} \tilde{A}^{'-1}c\\ \mathbf{0}^{(|\mathcal{Z}|-|\mathcal{G}|) \times 1} \end{pmatrix} = \tilde{A}'\tilde{A}^{'-1}c = c $$ We can combine the constructions in the two special cases considered above to relax any assumptions about the cardinality of $\mathcal{Z}$ and $\mathcal{G}$ or the rank of $A$. Let the rank of $A$ be $k \le \min\{|\mathcal{Z}|, |\mathcal{G}|\}$. 
Write $A = A_k[I_k,M]$ where $A_k$ is a $|\mathcal{Z}| \times k$ matrix composed of $k$ linearly independent columns of $A$, and $M$ is a $k \times (|\mathcal{G}|-k)$ matrix that expresses the remaining $(|\mathcal{G}|-k)$ columns of $A$ as linear combinations of the columns of $A$ represented in $A_k$. Write $c = \begin{pmatrix} \tilde{c}_k\\ \tilde{c}_c \end{pmatrix}$ where $\tilde{c}_k$ collects the corresponding $k$ components of $c$. Note that if $c' = \alpha'A$ has a solution, then $c' = \tilde{c}_k'[I_k,M]$, since $c' = (\alpha' A_k)[I_k,M]$ where the $k$ components of $c'$ corresponding to the columns in $A_k$ are $\alpha' A_k$, so $\tilde{c}_k'=\alpha' A_k$. Now split the rows of $A_k$ as $A_k = \begin{bmatrix} \tilde{A} \\ {\tilde{A}_c} \end{bmatrix}$ where $\tilde{A}$ is a square invertible $k \times k$ matrix representing $k$ linearly independent rows of $A_k$ and $\tilde{A}_c$ is $(|\mathcal{Z}|-k) \times k$. Now $\alpha' = [\tilde{c}_k'\tilde{A}^{-1}, \mathbf{0}^{1 \times (|\mathcal{Z}|-k)}]$ represents a solution to $c'=\alpha'A$ because $$[\tilde{c}_k' \tilde{A}^{-1}, \mathbf{0}^{1 \times (|\mathcal{Z}|-k)}] A = [\tilde{c}_k' \tilde{A}^{-1}, \mathbf{0}^{1 \times (|\mathcal{Z}|-k)}] \begin{bmatrix} \tilde{A} \\ {\tilde{A}_c} \end{bmatrix}[I_k,M] = \tilde{c}_k'[I_k,M] = c' $$ In all of the three cases considered above, we can write any non-zero elements $\alpha_z$ of a vector $\alpha$ yielding a binary combination as components $x_z$ of $x=M^{-1}b$, where $M$ is an invertible $n \times n$ binary matrix (i.e. having entries of $0$ or $1$), and $b$ an $n$-component binary vector. Equivalently, $x$ represents the unique solution to $Mx=b$. Cramer's rule for such a solution establishes that the $x_z$ can be written as $$x_z = \frac{det(M_z)}{det(M)}$$ where $M_z$ is the matrix that replaces column $z$ of the matrix $M$ with the vector $b$. Since both $M$ and $b$ have binary entries, the matrix $M_z$ is always binary as well. 
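The Cramer's-rule representation is easy to verify on a small binary system; a sketch using exact rational arithmetic (the example matrix is mine, not taken from the text):

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices considered here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer(M, b):
    """Solve Mx = b exactly via x_z = det(M_z) / det(M), where
    M_z replaces column z of M with b."""
    d = det(M)
    return [Fraction(det([row[:z] + [b[i]] + row[z + 1:]
                          for i, row in enumerate(M)]), d)
            for z in range(len(M))]
```

For instance, the binary system $M=\begin{pmatrix}1&1\\0&1\end{pmatrix}$, $b=(1,1)'$ yields $x=(0,1)'$, so $det(M_z)=0$ is indeed attained, as the proposition notes.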
The result now follows as stated in Proposition <ref>, since $0$ is always a possible value of $det(M_z)$. §.§ Proof of Theorem <ref> §.§.§ Setup and notation Let $\mathcal{Y} \subseteq \mathbbm{R}$ be the support of $Y$. For any $y \in \mathcal{Y}, z \in \mathcal{Z}$ and $t \in \mathcal{T}$, let $F_{(YD)|Z}(y,t|z) := E[\mathbbm{1}(Y_i \le y)\mathbbm{1}(T_i=t)|Z_i=z]$. The strategy will be based on the fact that knowing the distribution of $(Y_i,T_i,Z_i)$ is equivalent to knowing $F_{(YD)|Z}(y,t|z)$ for all $(y,t,z)$ along with the distribution of the instruments $\mathcal{P}_Z$. For any response type $g \in \mathcal{G}$, we use the notation $T_g(z)$ to denote the common counterfactual treatment functions among units $i$ of that response type: $G_i=g$. By the law of iterated expectations and Assumption 1: \begin{align}F_{(YD)|Z}(y,t|z) &= E[\mathbbm{1}(Y_i \le y)\mathbbm{1}(T_i=t)|Z_i=z] = \sum_{g: T_g(z)=t} P(G_i=g)E[\mathbbm{1}(Y_i(t) \le y)|G_i=g] \nonumber\\ &= \sum_{g: T_g(z)=t} P(G_i=g) F_{Y(t)|G}(y|g) := \sum_{g \in \mathcal{G}} A^{[t]}_{zg} \cdot P(G_i=g) F_{Y(t)|G}(y|g) \label{Aeq} \end{align} where $A^{[t]}$ is the $|\mathcal{Z}| \times |\mathcal{G}|$ matrix with typical entry $[A^{[t]}]_{zg} = \mathbbm{1}(T_g(z)=t)$. Before proceeding, I state a Lemma that will be useful in what follows: If $\mu_g^t$ is outcome-agnostic identified given instrument support $\mathcal{Z}$, it remains outcome-agnostic identified using data from $Z_i \in \mathcal{Z}_0$, where $\mathcal{Z}_0 \subseteq \mathcal{Z}$ is a subset of instrument values for which the rows of $A^{[t]}$ for $z \in \mathcal{Z}_0$ are linearly independent of one another. Suppose that $A^{[t]}$ does not have full row rank. This implies that for some $\mathcal{Z}_0 \subset \mathcal{Z}$, the remaining rows of $A^{[t]}$ for $z \notin \mathcal{Z}_0$ can be written as linear combinations of the rows of $A^{[t]}$ for $z \in \mathcal{Z}_0$. 
Take one such $z^* \notin \mathcal{Z}_0$, and accordingly let $$A^{[t]}_{z^*,g} = \sum_{z \in \mathcal{Z}_0} \lambda_z A^{[t]}_{z,g} \quad \textrm{ for all } g \in \mathcal{G}$$ for some coefficients $\lambda_z$. Note then that Eq. (<ref>) implies that \begin{align*} F_{(YD)|Z}(y,t|z^*) &= P(Y_i \le y \textrm{ and } T_i=t|Z_i=z^*)\\ &= P(Y_i(t) \le y \textrm{ and } A^{[t]}_{z^*,G_i}=1|Z_i=z^*)\\ &= P(Y_i(t) \le y \textrm{ and } A^{[t]}_{z^*,G_i}=1)\\ &= P(Y_i(t) \le y \textrm{ and } \sum_{z \in \mathcal{Z}_0} \lambda_z A^{[t]}_{z,G_i}=1) \end{align*} where the RHS of the last line does not depend in any way on the distribution of observables for $i$ such that $Z_i=z^*$. Thus, $F_{(YD)|Z}(y,t|z^*)$ adds no information that is not contained in $F_{(YD)|Z}(y,t|z)$ for $z \in \mathcal{Z}_0$. If $\mu_g^t$ is outcome-agnostic identified, it must be using the distribution $\mathcal{P}_{YTZ|Z \in \mathcal{Z}_0}$ rather than the full unconditional distribution $\mathcal{P}_{YTZ}$. Lemma <ref> implies that we can without loss of generality take $A^{[t]}$ to have full row rank. Lemma <ref> extends an observation of <cit.> that one can remove any row of $A^{[t]}$ that is an exact copy of another row (i.e. all selection groups behave the same for two instrument values), so that there is a direct redundancy over instrument values. §.§.§ Outcome-agnostic identification Now define $\mathbf{F}_{(YD)|Z}(y)$ to be a $|\mathcal{T}|\cdot|\mathcal{Z}| \times 1$ vector of $F_{(YD)|Z}(y,t|z)$ over $z$ and $t$ and $\mathbf{G}^*(y)$ to be a $|\mathcal{T}|\cdot |\mathcal{G}|$-component vector of $P(G_i=g)\cdot F_{Y(t)|G}(y|g)$ over $g$ and $t$, for a fixed $y$. Now let $\mathbf{G}^*$ represent the whole vector-valued function $\mathbf{G}^*: \mathcal{Y} \rightarrow \mathbbm{R}^{|\mathcal{T}|\cdot|\mathcal{G}|}$, and define $\mathbf{F}_{(YD)|Z}$ similarly as the function $\mathcal{Y} \rightarrow \mathbbm{R}^{|\mathcal{T}|\cdot|\mathcal{Z}|}$ yielding the vector $\mathbf{F}_{(YD)|Z}(y)$. 
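The without-loss-of-generality reduction to full row rank can be carried out mechanically; a sketch that greedily keeps a maximal linearly independent set of rows of $A^{[t]}$ (exact rational Gaussian elimination; the helper is my own, not from the text):

```python
from fractions import Fraction

def independent_rows(A):
    """Indices of a maximal linearly independent set of rows of A.
    Each new row is reduced against the (already reduced) rows
    kept so far; it is independent iff a nonzero entry survives."""
    basis, keep = [], []
    for i, row in enumerate(A):
        v = [Fraction(x) for x in row]
        for b in basis:
            p = next(j for j, x in enumerate(b) if x != 0)  # pivot col
            c = v[p] / b[p]
            v = [x - c * y for x, y in zip(v, b)]
        if any(x != 0 for x in v):
            basis.append(v)
            keep.append(i)
    return keep
```

Rows not returned (duplicated instrument values in particular) can be dropped without losing identifying information, as in Lemma <ref>.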
In this notation $\mathcal{P}_Z$ and $\mathbf{F}_{(YD)|Z}$ encode the entire distribution of observables $(Y,T,Z)$ while $\mathcal{P}_Z$ and $\mathbf{G}^*$ encode the entire distribution of model primitives $(Y(t),G,Z)$. The relationship between the two can be characterized by writing Eq. (<ref>) as: \begin{equation} \label{Aeq2} \mathbf{F}_{(YD)|Z} = \mathcal{A} \circ \mathbf{G}^* \end{equation} where $\mathcal{A}$ is the linear map from functions $\mathcal{Y} \rightarrow \mathbbm{R}^{|\mathcal{T}|\cdot |\mathcal{G}|}$ to functions $\mathcal{Y} \rightarrow\mathbbm{R}^{|\mathcal{T}|\cdot|\mathcal{Z}|}$ defined by: $$ \left[\mathcal{A} \circ \bm{\mu}(y)\right]_{tz} = \sum_{g} A^{[t]}_{z,g} \cdot \bm{\mu}(y)_{t g} $$ holding for each $y$, for any vector-valued function $\bm{\mu}: \mathcal{Y} \rightarrow\mathbbm{R}^{|\mathcal{T}|\cdot|\mathcal{G}|}$. Let $\theta_c=\mathbbm{E}[Y_i(t)|c(G_i)=1]$ be the parameter of interest. Note that $\theta_c$ can also be written as a linear map applied to the function $\mathbf{G}^*$. In particular $\theta_c = \Theta_c \circ \mathbf{G}^*$, where for any function $\bm{\mu}: \mathcal{Y} \rightarrow \mathbbm{R}^{|\mathcal{T}|\cdot|\mathcal{G}|}$, $\Theta_c \circ \bm{\mu}$ is the scalar: \begin{equation} \label{thetaeq} \sum_{g \in \mathcal{G}} \frac{c_g}{P(c(G_i)=1)} \cdot \int_\mathcal{Y} y\cdot d\bm{\mu}(y)_{t,g} \end{equation} Now define the sets $$\mathcal{S}:= \{\bm{\mu}: \mathcal{A} \circ \bm{\mu} = \mathbf{F}_{(YD)|Z}\}$$ $$\mathcal{R}:= \{\bm{\mu}: [\bm{\mu}(y)]_{tg}/P(G_i=g) \textrm{ is a proper CDF for each }t \in \mathcal{T} \textrm{ and }g \textrm{ for which } P(G_i=g)>0\}$$ Note that the sets $\mathcal{S}$ and $\mathcal{R}$ as well as the map $\Theta_c$ depend on the distribution $\mathcal{P}_{latent}$ (through $\mathbf{F}_{(YD)|Z}$, through the $P(G_i=g)$, and through $P(c(G_i)=1)$, respectively). 
Let us denote this dependence by $\mathcal{S}(\mathcal{P}_{obs})$, $\mathcal{R}(\mathcal{P}_{G})$ and $\Theta_c(\mathcal{P}_{G})$, though I will later leave this dependence implicit to ease notation. Definition <ref> of outcome-agnostic identification from Section <ref>, translated into this notation, says that $$\left\{\Theta_c(\mathcal{P}_G) \circ \bm{\mu}: \bm{\mu} \in \mathcal{R}(\mathcal{P}_G) \textrm{ and } \bm{\mu} \in \mathcal{S}(\mathcal{P}_{obs}) \textrm{ and } \mathcal{P}_G \in \mathscr{P}_G \right\} \textrm{ is a singleton for all } \mathcal{P}_{obs} \in \mathscr{P}_{obs}(\mathcal{G})$$ where \begin{align*} \mathscr{P}_{obs}(\mathcal{G}) &= \{\phi(\mathcal{P}_{latent} \times \mathcal{P}_Z): \mathcal{P}_{latent} \in \mathscr{P}(\mathcal{G}), \mathcal{P}_Z \in \mathscr{P}_Z\}\\ &= \{(\mathcal{A} \circ \bm{\mu},\, \mathcal{P}_Z): \bm{\mu} \in \mathcal{R}(\mathcal{P}_G), \mathcal{P}_{G} \in \mathscr{P}_G \textrm{ such that } supp\{\mathcal{P}_G\} \subseteq \mathcal{G}, \mathcal{P}_Z \in \mathscr{P}_Z\} \end{align*} Putting these together, we have that $\theta_c$ is outcome-agnostic identified if and only if $$ \left\{\Theta_c \circ \bm{\mu}\right\}_{\bm{\mu} \in (\mathcal{S} \cap \mathcal{R})} \textrm{ is a singleton for all } \mathcal{P}_{latent} \textrm{ with } supp\{\mathcal{P}_{G}(\mathcal{P}_{latent})\} \subseteq \mathcal{G}\vspace{.3cm}$$ §.§.§ A candidate for $\mathbf{G}^*$ that recovers observables Consider the vector-valued function $\mathbf{G}$, where the $t,g$ component of $\mathbf{G}(y)$ is: $$\left[\mathbf{G}(y)\right]_{t,g}:= \begin{cases} P(G_i=g)\cdot F_{Y(t)|G}(y|g) & \textrm{ if } \max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0\\ \sum_z [(A^{[t]})^+]_{g,z} \cdot F_{(YD)|Z}(y,t|z) & \textrm{ if } \max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=1 \end{cases} $$ and $(A^{[t]})^+$ indicates the Moore-Penrose pseudoinverse of the matrix $A^{[t]}$. 
The reason for separating out the two cases is that if there exists a group $g$ that acts as a “never-taker” with respect to treatment $t$, such that $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0$, then this corresponds to a column of all zeros in $A^{[t]}$. A property of the Moore-Penrose inverse is that if column $g$ of $A^{[t]}$ is all zeros, then the corresponding row $g$ of $(A^{[t]})^+$ is also all zeros (see e.g. <cit.>), which would leave $\left[\mathbf{G}(y)\right]_{t,g}=0$ for all $y$ if we simply defined $\left[\mathbf{G}(y)\right]_{t,g}=\sum_z [(A^{[t]})^+]_{g,z} F_{(YD)|Z}(y,t|z)$ for all $t,g$. This would make it impossible for $\mathbf{G}$ to represent a possible candidate for $\mathbf{G}^*$. The above construction avoids this problem by simply replacing such problematic combinations of $(g,t)$ with the actual $[\mathbf{G}^*(y)]_{t,g}$. Note that if the first case holds for all $g \in \mathcal{G}$, then the matrix $A^{[t]}$ is simply the zero matrix, and outcome agnostic identification cannot hold, by Lemma <ref>. Thus, we can continue under the assumption that the second case holds for at least some $g \in \mathcal{G}$. We first observe that $\mathbf{G}$ “recovers observables”, by which I mean that $\mathcal{A} \circ \mathbf{G} = \mathbf{F}_{(YD)|Z}$ and hence $\mathbf{G} \in \mathcal{S}$. 
To see this, note that: \begin{align*}[\mathcal{A} \circ \mathbf{G}(y)]_{t,z} &= \sum_{g} A^{[t]}_{z,g}\left[\mathbf{G}(y)\right]_{t,g}\\ &=\cancel{\sum_{g: \max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0} A^{[t]}_{z,g}\cdot P(G_i=g)\cdot F_{Y(t)|G}(y|g)}\\ &\hspace{1in} + \sum_{g: \max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=1}\sum_{z'} A^{[t]}_{z,g}[(A^{[t]})^+]_{g,z'} F_{(YD)|Z}(y,t|z')\\ &=\sum_{g,z'} A^{[t]}_{z,g}[(A^{[t]})^+]_{g,z'} F_{(YD)|Z}(y,t|z')\\ &= \sum_{z'} [A^{[t]}(A^{[t]})^+]_{z,z'} F_{(YD)|Z}(y,t|z')= [F_{(YD)|Z}(y)]_{tz} \end{align*} where the second and third equalities use that $A^{[t]}_{z,g} = 0$ for all $z$, if $g$ is such that $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0$. The final equality follows from $A^{[t]}(A^{[t]})^+ = I_{|\mathcal{Z}|}$, which in turn follows from $(A^{[t]})^+ = {A^{[t]}}'({A^{[t]}}{A^{[t]}}')^{-1}$ since we can by Lemma <ref> assume that $A^{[t]}$ has full row rank. $\mathbf{G}$ may however not be in $\mathcal{R}$, as its definition above does not ensure that each implied $F_{Y(t)|G}(y|g)$ is weakly increasing in $y$ with a limit of unity as $y \uparrow \infty$. Note that $[\mathbf{G}(y)]_{t,g}/P(G_i=g)$ does have the other two properties of a CDF: right-continuity and a left limit of zero. To see this, substitute Eq. (<ref>) into the definition of $\mathbf{G}$, to rewrite as: \begin{equation} \label{eq:Fstarnew} \left[\mathbf{G}(y)\right]_{t,g}:= \begin{cases} P(G_i=g)\cdot F_{Y(t)|G}(y|g) & \textrm{ if } \max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0\\ \sum_{g'} [(A^{[t]})^+ A^{[t]}]_{g,g'}\cdot P(G_i=g')\cdot F_{Y(t)|G}(y|g') & \textrm{ if } \max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=1 \end{cases} \end{equation} Right continuity of each element of $\mathbf{G}(y)$ in $y$ follows from right-continuity of the $F_{Y(t)|G}(y|g')$. 
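The two pseudoinverse facts used above — that $A^{[t]}(A^{[t]})^+ = I$ under full row rank via $(A^{[t]})^+ = {A^{[t]}}'({A^{[t]}}{A^{[t]}}')^{-1}$, and that a zero column of $A^{[t]}$ yields a zero row of $(A^{[t]})^+$ — can be checked exactly on a small example. This is an illustrative stdlib-only sketch (helper names are my own), hand-rolling the pseudoinverse for a two-row matrix:

```python
# Sketch: exact check of A A+ = I (full row rank) and the zero-column ->
# zero-row property of the Moore-Penrose inverse, using Fractions so there
# is no floating-point slack.
from fractions import Fraction

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def inv2(M):
    """Inverse of a 2x2 matrix, with exact rational entries."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def pinv_full_row_rank(A):
    """A+ = A'(AA')^{-1}, valid when A (here 2 x n) has full row rank."""
    At = transpose(A)
    return matmul(At, inv2(matmul(A, At)))

A = [[1, 0, 0],   # third column all zeros: a "never-taker" group for this t
     [1, 1, 0]]
Ap = pinv_full_row_rank(A)

assert matmul(A, Ap) == [[1, 0], [0, 1]]   # A A+ = I_{|Z|}
assert Ap[2] == [0, 0]                     # zero column -> zero row of A+
```

The zero-column property is immediate here because the formula transposes $A$ before multiplying, but it holds for the Moore-Penrose inverse in general.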
Note that $\lim_{y \downarrow -\infty} \left[\mathbf{G}(y)\right]_{t,g} = 0$ follows from each of the CDFs $F_{(YD)|Z}$ approaching zero as $y \downarrow -\infty$, given that the components of $A^{[t]}$ and $P(G_i=g)$ are finite. Let $\beta_{t,g} := \lim_{y \uparrow \infty} \left[\mathbf{G}(y)\right]_{t,g}$. For any $t,g$ such that $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0$, it follows from the definition of $\mathbf{G}$ that $\beta_{t,g}=P(G_i=g)$, since each of the $F_{Y(t)|G}(y|g)$ are valid CDFs. For the other $t,g$, use (<ref>) to see that \begin{align*} \beta_{t,g}&=\lim_{y \uparrow \infty} \sum_{g'} [(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g')\cdot F_{Y(t)|G}(y|g') = \sum_{g'} [(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g')\\ &= [(A^{[t]})^+ A^{[t]}P]_{g} \end{align*} where $P$ is a vector of $P(G_i=g)$ for all $g \in \mathcal{G}$. The following definition will be useful in the construction below. Pick any fixed $g^* \in \mathcal{G}$, and let us define a vector valued function $\mathbf{D}: \mathcal{Y}\rightarrow \mathbbm{R}^{|\mathcal{T}|\cdot |\mathcal{G}|}$ with components: \begin{equation} \label{eq:D} [\mathbf{D}(y)]_{t,g} := (P_g-\beta_{t,g})\cdot F_{Y(t)|G}(y|g^*)= [\left\{I-(A^{[t]})^+ A^{[t]}\right\}P]_{g}\cdot F_{Y(t)|G}(y|g^*) \end{equation} §.§.§ A class of alternative candidates that also recover observables Now let us define for any $\lambda \in [0,1]$ the convex combination of $\mathbf{G}+\mathbf{D}$ and $\mathbf{G}^*$: \begin{equation} \label{eq:Galpha} \mathbf{G}^{\lambda} := \lambda\left(\mathbf{G} + \mathbf{D} \right) + (1-\lambda)\mathbf{G}^* = \mathbf{G}^*+\lambda\left\{\mathbf{G}-\mathbf{G}^*+\mathbf{D}\right\} \end{equation} Our first observation will be that $\mathcal{A} \circ \mathbf{G}^{\lambda} = \mathbf{F}_{(YD)|Z}$, and thus $\mathbf{G}^{\lambda} \in \mathcal{S}$. 
To see this, note that: \begin{align*}[\mathcal{A} \circ \mathbf{G}^\lambda(y)]_{t,z} &= [\mathcal{A} \circ \mathbf{G}(y)]_{t,z} + \lambda \cdot [\mathcal{A} \circ \left\{\mathbf{G}-\mathbf{G}^*+\mathbf{D}\right\}(y)]_{t,z}\\ &=[F_{(YD)|Z}(y)]_{t,z} + \cancel{\lambda \cdot [\mathcal{A} \circ\mathbf{G}(y)]_{t,z}-\lambda \cdot [\mathcal{A} \circ\mathbf{G}^*(y)]_{t,z}} + \lambda \cdot [\mathcal{A} \circ\mathbf{D}(y)]_{t,z}\\ &=[F_{(YD)|Z}(y)]_{t,z}+\lambda \cdot \sum_{g,g'} A^{[t]}_{z,g} \cdot [I-(A^{[t]})^+ A^{[t]}]_{g,g'}\cdot P(G_i=g')\cdot F_{Y(t)|G}(y|g^*)\\ &=[F_{(YD)|Z}(y)]_{t,z}+\lambda \cdot \sum_{g'} [\cancel{A^{[t]}(I-(A^{[t]})^+ A^{[t]})}]_{z,g'}\cdot P(G_i=g')\cdot F_{Y(t)|G}(y|g^*) = [F_{(YD)|Z}(y)]_{t,z} \end{align*} since $\mathcal{A} \circ\mathbf{G}^*=\mathcal{A} \circ\mathbf{G}$ and $A^{[t]}(A^{[t]})^+ A^{[t]}=A^{[t]}$. Now, we verify that for a small enough $\lambda$, $\mathbf{G}^{\lambda}$ yields $F_{Y(t)|G}(y|g)$ that satisfy the properties of a CDF and hence $\mathbf{G}^{\lambda} \in \mathcal{R}$. First, note that $\left[\mathbf{G}^{\lambda}(y)\right]_{t,g}$ is right-continuous in $y$, since each of $[\mathbf{G}(y)]_{t,g}$, $[\mathbf{G}^*(y)]_{t,g}$, and $[\mathbf{D}(y)]_{t,g}$ are. 
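The invariance of observables to $\lambda$ boils down to the identity $A^{[t]}\left(I-(A^{[t]})^+ A^{[t]}\right)=0$: the perturbation direction $\mathbf{G}-\mathbf{G}^*+\mathbf{D}$ lies, component by component in $g$, in the null space of $A^{[t]}$. A minimal stdlib-only check of that identity (illustrative only; matrix and helper names are my own, reusing the binary-IV $A^{[1]}$ from earlier):

```python
# Sketch: verify A (I - A+ A) = 0 exactly, so any perturbation of G* along
# rows of I - A+ A is invisible to the observable map A.
from fractions import Fraction

def mm(X, Y):  # exact matrix product
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[0, 0, 1, 1],    # A^[1] over (never-taker, complier, always-taker, defier)
     [0, 1, 1, 0]]
At = [list(r) for r in zip(*A)]
(a, b), (c, d) = mm(A, At)                      # AA' (2x2, invertible here)
det = Fraction(a * d - b * c)
AAt_inv = [[d / det, -b / det], [-c / det, a / det]]
Ap = mm(At, AAt_inv)                            # A+ = A'(AA')^{-1}

n = len(A[0])
P = mm(Ap, A)                                   # the projection A+ A
I_minus = [[(1 if i == j else 0) - P[i][j] for j in range(n)] for i in range(n)]

assert mm(A, I_minus) == [[0] * n, [0] * n]     # A annihilates I - A+ A
```

So adding $\lambda$ times any vector of the form $(I-(A^{[t]})^+A^{[t]})v$ to the candidate leaves $\mathcal{A}\circ\mathbf{G}^\lambda$ unchanged for every $\lambda$.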
We also have that $\lim_{y \downarrow -\infty} \left[\mathbf{G}^{\lambda}(y)\right]_{t,g}=0$, since $$\lim_{y \downarrow -\infty} \left[\mathbf{G}(y)\right]_{t,g}=\lim_{y \downarrow -\infty} \left[\mathbf{G}^{*}(y)\right]_{t,g}=\lim_{y \downarrow -\infty} \left[\mathbf{D}(y)\right]_{t,g}=0$$ Note as well that \begin{align*} \lim_{y \uparrow \infty} \left[\mathbf{G}^{\lambda}(y)\right]_{t,g} &= \lim_{y \uparrow \infty} \left[\mathbf{G}^{*}(y)\right]_{t,g}+\lambda \cdot \lim_{y \uparrow \infty} \left[\left\{\mathbf{G}-\mathbf{G}^*+\mathbf{D}\right\}(y)\right]_{t,g} \\ &=P_g+\lambda \cdot \left\{\lim_{y \uparrow \infty} \left[\mathbf{G}(y)\right]_{t,g}-\lim_{y \uparrow \infty} \left[\mathbf{G}^{*}(y)\right]_{t,g}+\lim_{y \uparrow \infty} \left[\mathbf{D}(y)\right]_{t,g}\right\}\\ &=P_g+\lambda \cdot \left\{\beta_{t,g}-P_g+(P_g-\beta_{t,g})\cdot 1\right\} = P_g \end{align*} matching the correct normalization of the true $\lim_{y \uparrow \infty} \left[\mathbf{G}^{*}(y)\right]_{t,g} = P_g \cdot \lim_{y \uparrow \infty} F_{Y(t)|G=g}(y) = P_g$. It only remains to be seen that for a small enough value of $\lambda$, $ \left[\mathbf{G}^{\lambda}(y)\right]_{t,g}$ is weakly increasing in $y$. This is where Assumption REG will prove useful. Given Assumption REG, $\left[\mathbf{G}^{\lambda}(y)\right]_{t,g}$ is non-decreasing in $y$ for any $\lambda \in (0,\bar{\lambda}]$, where $\bar{\lambda} = \frac{\underbar{L}}{2|\mathcal{G}|\cdot \bar{L}} >0$. Given Proposition <ref>, we have shown that for $\lambda \le \bar{\lambda}$, $\mathbf{G}^\lambda \in \mathcal{R}$ and hence $\mathbf{G}^\lambda \in (\mathcal{S} \cap \mathcal{R})$. §.§.§ Outcome agnostic identification implies $c \in rs(A^{[t]})$ Thus, identification of $\theta_c$ requires that for any such $\lambda>0$ that is small enough that $\mathbf{G}^\lambda \in (\mathcal{S} \cap \mathcal{R})$, it holds that $\Theta_c \circ \mathbf{G}^\lambda = \Theta_c \circ \mathbf{G}^*$. This in turn requires, by Eq. 
(<ref>), that $\Theta_c \circ \left\{\mathbf{G}-\mathbf{G}^*+\mathbf{D}\right\}=0$. Now: \begin{align} \Theta_c &\circ \left\{\mathbf{G}-\mathbf{G}^*+\mathbf{D}\right\} =\frac{1}{P(c(G_i)=1)} \sum_g c_g \cdot \left\{\int_\mathcal{Y} y\cdot d\mathbf{G}(y)_{t,g}-\int_\mathcal{Y} y\cdot d\mathbf{G}^*(y)_{t,g}+\int_\mathcal{Y} y\cdot d\mathbf{D}(y)_{t,g}\right\} \nonumber\\ &=-\frac{1}{P(c(G_i)=1)} \sum_g c_g \sum_{g'} [I-(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g')\cdot \mathbbm{E}[Y_i(t)|G_i=g'] \nonumber\\ &\hspace{.5in}+\frac{1}{P(c(G_i)=1)} \sum_g c_g \sum_{g'} [I-(A^{[t]})^+ A^{[t]}]_{g,g'}\cdot P(G_i=g') \cdot \mathbbm{E}[Y_i(t)|G_i=g^*] \nonumber\\ &=-\frac{1}{P(c(G_i)=1)} \sum_{g'} [c'(I-(A^{[t]})^+ A^{[t]})]_{g'} \cdot P(G_i=g')\cdot \mathbbm{E}[Y_i(t)|G_i=g'] \nonumber\\ &\hspace{1in}+\frac{1}{P(c(G_i)=1)} \sum_{g'} [c'(I-(A^{[t]})^+ A^{[t]})]_{g'}\cdot P(G_i=g')\cdot \mathbbm{E}[Y_i(t)|G_i=g^*] \nonumber\\ &=-\frac{1}{P(c(G_i)=1)} \sum_{g'} [c'(I-(A^{[t]})^+ A^{[t]})]_{g'} \cdot P(G_i=g')\cdot \left\{ \mathbbm{E}[Y_i(t)|G_i=g']-\mathbbm{E}[Y_i(t)|G_i=g^*]\right\} \label{eq:finalcondtion} \end{align} For this to be true for all joint distributions $\mathcal{P}_{latent}$ compatible with $\mathcal{G}$, we must have $[c'(I-(A^{[t]})^+ A^{[t]})]_{g'}=0$ for all $g' \in \mathcal{G}$, i.e. $c$ is in the rowspace of $A^{[t]}$. Otherwise, if $c'(I-(A^{[t]})^+ A^{[t]}) = \tilde{c}'$ for some non-zero vector $\tilde{c}$, we could always construct a $\mathcal{P}_{latent}$ yielding a non-zero vector of $P(G_i=g')\cdot \left\{ \mathbbm{E}[Y_i(t)|G_i=g']-\mathbbm{E}[Y_i(t)|G_i=g^*]\right\}$ across $g'$ that is parallel to $\tilde{c}$ in $\mathbbm{R}^{|\mathcal{G}|}$, such that $\sum_{g'} \tilde{c}_{g'} \cdot P(G_i=g')\cdot \left\{ \mathbbm{E}[Y_i(t)|G_i=g']-\mathbbm{E}[Y_i(t)|G_i=g^*]\right\} \ne 0$. Deriving Eq. (<ref>): Finally, let us see explicitly that if $c$ lies in the row-space of $A^{[t]}$, we can write $\theta_c$ in the form of Eq. (<ref>). 
Given the definition of $\mathbf{G}$, this allows us to express $\theta_c=\mathbbm{E}[Y_i(t)|c(G_i)=1]$ as: \begin{align} \theta_c &=\sum_{g} \frac{c_g}{\mathbbm{E}[c_{G_i}]} \int y \, d\left\{\sum_{z} [(A^{[t]})^+]_{g,z} F_{(YD)|Z}(y,t|z)\right\} \nonumber\\ &=\sum_{g} \frac{c_g}{\mathbbm{E}[c_{G_i}]}\sum_{z} [(A^{[t]})^+]_{g,z} P(T_i=t|Z_i=z)\int y \, dF_{Y|Z,T}(y|z,t) \nonumber\\ &=\sum_{z} \left(\sum_g [(A^{[t]})^+]_{g,z} \frac{c_g}{\mathbbm{E}[c_{G_i}]}\right) \cdot \mathbbm{E}[Y_iD^{[t]}_i|Z_i=z] \nonumber\\ &=\frac{\sum_{z} \left(\sum_g [(A^{[t]})^+]_{g,z} \cdot c_g \right) \cdot \mathbbm{E}[Y_iD^{[t]}_i|Z_i=z]}{\mathbbm{E}[c_{G_i}]} \label{eq:necc} \end{align} The numerator of (<ref>) takes the form of a linear combination in which $K = |\mathcal{Z}|$ with coefficients $\alpha_k = \sum_g [(A^{[t]})^+]_{g,z_k} \cdot c_g$, for an arbitrary ordering of the points $z_1 \dots z_K$ in $\mathcal{Z}$. It only remains to be shown that $(t,\alpha)$ is a binary combination, and that furthermore $\mathbbm{E}[c_{G_i}] = \sum_{k} \alpha_{k} \cdot \mathbbm{E}[D^{[t]}_i(z_k)] = \sum_{k} \alpha_{k} \cdot \mathbbm{E}[D^{[t]}_i|Z_i=z_k]$. This, by Proposition <ref>, then establishes the final result that $\mathbbm{E}[Y_i(t)|c_{G_i}=1] = \mathbbm{E}[Y_i(t)|C_i^{[\alpha,t]}]$. For $(t,\alpha)$ to be a binary combination we must first verify that $\sum_{k} \alpha_{k} \cdot D^{[t]}_i(z_k) \in \{0,1\}$ for all $i$. Note that for any $z$, $D^{[t]}_i(z) = [A^{[t]}]_{z,G_i}$ by the definition of $A^{[t]}$. Thus: \begin{align*} \sum_{k} \alpha_{k} \cdot D^{[t]}_i(z_k) &= \sum_{z} \left(\sum_g [(A^{[t]})^+]_{g,z} \cdot c_g \right) \cdot [A^{[t]}]_{z,G_i}\\ &= \sum_g c_g \cdot [(A^{[t]})^+(A^{[t]})]_{g,G_i} = [[(A^{[t]})^+(A^{[t]})]' c]_{G_i} = [[(A')(A')^+]c]_{G_i} \end{align*} where we let $A'$ denote the transpose of $A^{[t]}$ (dropping the $t$ index for clarity). 
Since $c$ lies in the row space of $A^{[t]}$ (equivalently, $c$ lies in the column space of $A'$), we have that $[(A')(A')^+]c = c$. Thus we have by the above that $\sum_{k} \alpha_{k} \cdot D^{[t]}_i(z_k) = c_{G_i}=\mathbbm{1}[c(G_i)=1]$, establishing that $\sum_{k} \alpha_{k} \cdot D^{[t]}_i(z_k) \in \{0,1\}$ for all $i$, and also that $P[c(G_i)=1] = \sum_{k} \alpha_{k} \cdot \mathbbm{E}[D^{[t]}_i(z_k)] = \sum_{k} \alpha_{k} \cdot \mathbbm{E}[D^{[t]}_i|Z_i=z_k]$. §.§.§ Proof of Proposition <ref> The key to monotonicity will be to choose $\lambda$ small enough that any decrease with $y$ in the perturbation term of $\mathbf{G}^{\lambda}$ is dominated by the corresponding increase in $\mathbf{G}^*$, so that each $\left[\mathbf{G}^{\lambda}\right]_{t,g}$ is weakly increasing. Note that $\left[\mathbf{G}^\lambda(y)\right]_{t,g}$ is non-decreasing in $y$ if for any $y'>y$: $$\lim_{y' \downarrow y} \left[\mathbf{G}^\lambda(y')\right]_{t,g} - \left[\mathbf{G}^\lambda(y)\right]_{t,g}\ge 0$$ i.e. that \begin{equation} \label{eq:ineq} \lim_{y' \downarrow y} \left[\mathbf{G}^*(y')\right]_{t,g} - \left[\mathbf{G}^*(y)\right]_{t,g}\ge \lambda\cdot \{\lim_{y' \downarrow y} \left[(\mathbf{G}^*-\mathbf{G})(y')-(\mathbf{G}^*-\mathbf{G})(y)\right]_{t,g} - (\lim_{y' \downarrow y} \left[\mathbf{D}(y')\right]_{t,g} - \left[\mathbf{D}(y)\right]_{t,g})\} \end{equation} Let us turn first to $\left[(\mathbf{G}^*-\mathbf{G})(y)\right]_{t,g}$. Fix a $g$ and $t$, and any $y' > y$. Then, by (<ref>): \begin{equation} \label{eq:Fstarnew2} \left[\mathbf{G}(y')\right]_{t,g}-\left[\mathbf{G}(y)\right]_{t,g}= \begin{cases} P(G_i=g)\cdot \left\{F_{Y(t)|G}(y'|g)-F_{Y(t)|G}(y|g)\right\}\\ \sum_{g'} [(A^{[t]})^+ A^{[t]}]_{g,g'}\cdot P(G_i=g')\cdot \left\{F_{Y(t)|G}(y'|g')-F_{Y(t)|G}(y|g')\right\} \end{cases} \end{equation} where the first line indicates the case that $g$ is such that $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0$, and the second that $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=1$. 
Thus $\left[(\mathbf{G}^*-\mathbf{G})(y')\right]_{t,g}-\left[(\mathbf{G}^*-\mathbf{G})(y)\right]_{t,g}$ is equal to $0$ if $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=0$, and $$\sum_{g'} [I-(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g')\cdot \left\{F_{Y(t)|G}(y'|g')-F_{Y(t)|G}(y|g')\right\}$$ if $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=1$. Thus we have by REG that if $\left[(\mathbf{G}^*-\mathbf{G})(y')\right]_{t,g}\ne \left[(\mathbf{G}^*- \mathbf{G})(y)\right]_{t,g}$: \begin{align*} &\left|\left[(\mathbf{G}^*-\mathbf{G})(y')\right]_{t,g}-\left[(\mathbf{G}^*-\mathbf{G})(y)\right]_{t,g}\right|\\ &= \left|\sum_{g'} [I-(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g')\cdot \left\{F_{Y(t)|G}(y'|g')- F_{Y(t)|G}(y|g')\right\}\right|\\ &= \left\{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)\right\}\cdot \left|\sum_{g'} [I-(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g') \cdot \frac{F_{Y(t)|G}(y'|g')- F_{Y(t)|G}(y|g')}{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)}\right|\\ & \le \left\{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)\right\}\cdot |\mathcal{G}|^{1/2} \cdot \sqrt{\sum_{g'} P(G_i=g')^2\cdot \left(\frac{F_{Y(t)|G}(y'|g')- F_{Y(t)|G}(y|g')}{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)}\right)^2}\\ & \le \left\{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)\right\}\cdot |\mathcal{G}| \cdot \max_{g'} P(G_i=g') \cdot \max_{g'} \left|\frac{F_{Y(t)|G}(y'|g')- F_{Y(t)|G}(y|g')}{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)}\right|\\ & \le \left\{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)\right\}\cdot |\mathcal{G}| \cdot \max_{g'} \left|\frac{F_{Y(t)|G}(y'|g')- F_{Y(t)|G}(y|g')}{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)}\right| \end{align*} using that $[I-(A^{[t]})^+ A^{[t]}]$ is a projection (so that $|[I-(A^{[t]})^+ A^{[t]}]v| \le |v|$ for any vector $v \in \mathbbm{R}^{|\mathcal{G}|}$) and by the Cauchy-Schwarz inequality. Let $\delta_t^*(y):=\lim_{y' \downarrow y} \left\{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)\right\}$, which is guaranteed to exist by right continuity of $F_{Y(t)|G}(y|g^*)$. 
Then, by REG: $$\lim_{y' \downarrow y} \left|\left[(\mathbf{G}^*-\mathbf{G})(y')\right]_{t,g}-\left[(\mathbf{G}^*-\mathbf{G})(y)\right]_{t,g}\right| \le \delta_t^*(y) \cdot |\mathcal{G}| \cdot \bar{L}$$ Now consider $\left[\mathbf{D}(y)\right]_{t,g}$. Fix a $g$ and $t$, and any $y' > y$. Similarly, we have that \begin{align*} \left|\left[\mathbf{D}(y')\right]_{t,g}-\left[\mathbf{D}(y)\right]_{t,g}\right|&= \left|\sum_{g'} [I-(A^{[t]})^+ A^{[t]}]_{g,g'} \cdot P(G_i=g')\cdot \{F_{Y(t)|G}(y'|g^*)-F_{Y(t)|G}(y|g^*)\}\right|\\ &\le \{F_{Y(t)|G}(y'|g^*)-F_{Y(t)|G}(y|g^*)\} \cdot |P|\\ &\le \{F_{Y(t)|G}(y'|g^*)-F_{Y(t)|G}(y|g^*)\} \cdot |\mathcal{G}| \end{align*} So, using Assumption REG: $$ \lim_{y' \downarrow y} \left|\left[\mathbf{D}(y')\right]_{t,g}-\left[\mathbf{D}(y)\right]_{t,g}\right| \le \delta_t^*(y) \cdot |\mathcal{G}|\cdot \bar{L}$$ We can thus put an upper bound on the RHS of (<ref>): $$\lambda\cdot \{\lim_{y' \downarrow y} \left[(\mathbf{G}^*-\mathbf{G})(y')-(\mathbf{G}^*-\mathbf{G})(y)\right]_{t,g} - (\lim_{y' \downarrow y} \left[\mathbf{D}(y')\right]_{t,g} - \left[\mathbf{D}(y)\right]_{t,g})\} \le 2\lambda \cdot \delta_t^*(y) \cdot |\mathcal{G}|\cdot \bar{L}$$ Meanwhile, by REG: \begin{align*} \lim_{y' \downarrow y} \left\{\left[\mathbf{G}^*(y')\right]_{t,g} - \left[\mathbf{G}^*(y)\right]_{t,g}\right\} & = \lim_{y' \downarrow y} \{F_{Y(t)|G}(y'|g^*)-F_{Y(t)|G}(y|g^*)\} \cdot \lim_{y' \downarrow y} \frac{F_{Y(t)|G}(y'|g)- F_{Y(t)|G}(y|g)}{F_{Y(t)|G}(y'|g^*)- F_{Y(t)|G}(y|g^*)}\\ &\ge \delta_t^*(y) \cdot \underbar{L} \end{align*} Thus inequality (<ref>) holds provided that $$\delta_t^*(y) \cdot \underbar{L} \ge 2\lambda \cdot \delta_t^*(y) \cdot |\mathcal{G}|\cdot \bar{L} \quad \iff \quad \lambda \le \frac{\underbar{L}}{2|\mathcal{G}|\cdot \bar{L}}$$ A visualization of the intuition behind this result is depicted in Figure <ref>. Depiction of the monotonicity result. 
Blue sinusoidal function depicts an example of a $\left[(\mathbf{G}+\mathbf{D})(y)\right]_{t,g}$ that is not weakly increasing. Orange curve depicts $\left[\mathbf{G}^*(y)\right]_{t,g}$, which is weakly increasing. Black curve depicts $\left[\mathbf{G}^\lambda(y)\right]_{t,g}$, which is a linear combination of the blue and orange functions with weights $\lambda=0.1$ and $1-\lambda = 0.9$, respectively. This value of $\lambda$ is small enough that the black curve is weakly increasing everywhere. § EXTENDED ANALYSIS OF IDENTIFICATION UNDER NSOG It is known that unconditional means $\mathbbm{E}[Y_i(t)]$ of a given potential outcome $Y_i(t)$ can be point-identified, given an order condition on the instruments, under an assumption of “no-selection on gains” (NSOG) (see e.g. <cit.> for versions of this result).[<cit.> does not use the terminology of NSOG, but rather “constant average treatment effects”.] Note that identification of $\mathbbm{E}[Y_i(t)]$ and $\mathbbm{E}[Y_i(t')]$ implies identification of unconditional average treatment effects $\mathbbm{E}[Y_i(t')-Y_i(t)]$ as well. NSOG says that treatment effects are mean independent of actual treatment, given any realization of the instruments: [NSOG (no selection on gains)] For any $t,t',t_1,t_2 \in \mathcal{T}$ and $z \in \mathcal{Z}$: $$\mathbbm{E}[Y_i(t')-Y_i(t)|T_i=t_1,Z_i=z] = \mathbbm{E}[Y_i(t')-Y_i(t)|T_i=t_2,Z_i=z]$$ NSOG implies that if we consider any fixed treatment value $0 \in \mathcal{T}$, then $\mathbbm{E}[Y_i(t')-Y_i(0)|T_i=t,Z_i=z] = \mathbbm{E}[Y_i(t')-Y_i(0)|Z_i=z]$ for any $t,z$, which coupled with independence (<ref>) in turn implies that $\mathbbm{E}[Y_i(t')-Y_i(0)|T_i=t,Z_i=z] = \mathbbm{E}[Y_i(t')-Y_i(0)] := \Delta_{t'}$, where note that $\Delta_{t'}$ does not depend on $z$ or $t$. This normalization against an arbitrary treatment $0 \in \mathcal{T}$ allows us to carry around one less index in our expressions. 
§.§ Identification under NSOG This subsection shows that $\mathbbm{E}[Y_i(t)]$ can be identified for each $t \in \mathcal{T}$ under NSOG given rich enough support of the instruments. The proof essentially follows that of <cit.>, which adapts an argument from <cit.> to cases in which the treatments $\mathcal{T}$ are not necessarily ordered. NSOG implies that with probability one: $$\mathbbm{E}[Y_i(t)-Y_i(0)|T_i,Z_i=z] = \Delta_t$$ for any $t$ and thus letting $t=T_i$: $$\mathbbm{E}[Y_i(T_i)-Y_i(0)|T_i,Z_i=z] = \Delta_{T_i} = \sum_{t \in \mathcal{T}} \mathbbm{1}(T_i=t) \cdot \Delta_t$$ again with probability one. Averaging over the conditional distribution of $T_i$ given $Z_i=z$, we have $$\mathbbm{E}[Y_i-Y_i(0)|Z_i=z] = \sum_{t \in \mathcal{T}} P(T_i=t|Z_i=z) \cdot \Delta_t$$ To now see that $\mathbbm{E}[Y_i(t)]$ can be identified under NSOG given rich enough instrument support, let us assume that $|\mathcal{Z}| \ge |\mathcal{T}|$ and suppose that there exists a set of $|\mathcal{T}|$ instrument values $\tilde{\mathcal{Z}} \subseteq \mathcal{Z}$ such that the $|\mathcal{T}| \times |\mathcal{T}|$ matrix $\Sigma$ with entries $\Sigma_{zt} = P(Z_i=z, T_i=t)$ over all $z \in \tilde{\mathcal{Z}}$ is invertible, with $P(Z_i = z) > 0$ for each $z \in \tilde{\mathcal{Z}}$. 
The above equation can be re-written $$\mathbbm{E}[\{Y_i-Y_i(0)\} \cdot \mathbbm{1}(Z_i=z)] = \sum_{t \in \mathcal{T}} \Sigma_{zt} \cdot \Delta_t$$ for each $z \in \tilde{\mathcal{Z}}$, or equivalently \begin{align*} \mathbbm{E}[Y_i \cdot \mathbbm{1}(Z_i=z)] &= P(Z_i=z)\cdot \mathbbm{E}[Y_i(0)] + \sum_{t \in \mathcal{T}} \Sigma_{zt} \cdot \Delta_t\\ &= P(Z_i=z)\cdot \mathbbm{E}[Y_i(0)] + \sum_{t \in \mathcal{T}, t \ne 0} \Sigma_{zt} \cdot \Delta_t\\ &= P(Z_i=z,T_i=0)\cdot \mathbbm{E}[Y_i(0)] + \sum_{t \in \mathcal{T}, t \ne 0} \Sigma_{zt} \cdot \mathbbm{E}[Y_i(0)] + \sum_{t \in \mathcal{T}, t \ne 0} \Sigma_{zt} \cdot \Delta_t\\ &= P(Z_i=z,T_i=0)\cdot \mathbbm{E}[Y_i(0)] + \sum_{t \in \mathcal{T}, t \ne 0} \Sigma_{zt} \cdot \mathbbm{E}[Y_i(t)]\\ & = \sum_{t \in \mathcal{T}} \Sigma_{zt} \cdot \mathbbm{E}[Y_i(t)] \end{align*} using that $\Delta_0 = 0$ in the second equality. This yields a system of $|\mathcal{T}|$ equations in the $|\mathcal{T}|$ unknowns $\mathbbm{E}[Y_i(t)]$ with identified coefficients $\Sigma_{zt}$. Given that $\Sigma$ is invertible, we have then that $$\mathbbm{E}[Y_i(t)] = \sum_{z \in \tilde{\mathcal{Z}}} \Sigma^{-1}_{tz} \cdot \mathbbm{E}[Y_i \cdot \mathbbm{1}(Z_i=z)]$$ §.§ How Theorem <ref> does not cover NSOG Since the result of the last section makes no assumption about which response groups can show up in the population, it is compatible with any selection model $\mathcal{G} \subseteq \{0,1\}^{\mathcal{T}^\mathcal{Z}}$, including for example the full powerset $\{0,1\}^{\mathcal{T}^\mathcal{Z}}$ of possible selection groups $\mathcal{T}^\mathcal{Z}$. Whatever $\mathcal{G}$ is, unconditional means like $\mathbbm{E}[Y_i(t)]$ correspond to the choice $c = (1, \dots, 1)'$ in $\mathbbm{R}^{|\mathcal{G}|}$. As long as $\mathcal{G}$ allows never-takers with respect to treatment $t$, this choice of $c$ will not lie in the rowspace of $A^{[t]}$ and hence $[c'(I-(A^{[t]})^+ A^{[t]})]_{g'}$ in the final step of the proof of Theorem <ref>, Eq. 
<ref>, will be non-zero for at least some $g' \in \mathcal{G}$. The unrestricted selection model $\mathcal{G} = \{0,1\}^{\mathcal{T}^\mathcal{Z}}$ has the property that there exist never-takers with respect to any given $t \in \mathcal{T}$. Recall that Eq. (<ref>) says that if $\mu_c^t$ is outcome-agnostic identified, then: \begin{equation} \label{eq:finalcondition2} \sum_{g'} [c'(I-(A^{[t]})^+ A^{[t]})]_{g'} \cdot P(G_i=g')\cdot \left\{ \mathbbm{E}[Y_i(t)|G_i=g']-\mathbbm{E}[Y_i(t)|G_i=g^*]\right\} = 0 \end{equation} where $g^*$ is an arbitrary fixed selection group in $\mathcal{G}$. Yet, as the previous subsection showed, point identification of $\mathbbm{E}[Y_i(t)]$ under NSOG is possible, even though the coefficients on $P(G_i=g')\cdot \left\{ \mathbbm{E}[Y_i(t)|G_i=g']-\mathbbm{E}[Y_i(t)|G_i=g^*]\right\}$ in the above are not all equal to zero. Note that one way to satisfy (<ref>) without $[c'(I-(A^{[t]})^+ A^{[t]})]_{g'}=0$ for all $g'$ is to assume that $\mathbbm{E}[Y_i(t)|G_i=g']=\mathbbm{E}[Y_i(t)|G_i=g^*]$ for all $g' \in \mathcal{G}$. However, this is a much stronger assumption than NSOG, and in fact entirely rules out endogeneity of treatment, in the sense that a simple difference in means $\mathbbm{E}[Y_i|T_i=t'] - \mathbbm{E}[Y_i|T_i=t]$ would then be equal to the average treatment effect $\mathbbm{E}[Y_i(t')-Y_i(t)]$ between $t'$ and $t$. In the case of NSOG, identification of the $\mathbbm{E}[Y_i(t)]$ remains consistent with the proof of Theorem <ref> in a less direct way. Recall the set $\mathcal{R}$ from the proof of Theorem <ref>, which consists of all vector valued functions $\bm{\mu}: \mathcal{Y} \rightarrow \mathbbm{R}^{|\mathcal{T}| \cdot |\mathcal{G}|}$ that yield proper CDF functions $[\bm{\mu}(y)]_{tg}/P(G_i=g)$ for response groups $g$ that occur in the population. However, some of the $\bm{\mu} \in \mathcal{R}$ may nevertheless violate NSOG, even though they produce valid CDFs. 
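The rowspace condition can be made concrete with a small numerical aside (illustrative only, not from the paper): in the binary-instrument setting with response types (never-taker, complier, always-taker), $c=(1,1,1)'$ (targeting $\mathbbm{E}[Y_i(1)]$) fails the condition, while $c=(0,1,0)'$ (targeting compliers) passes it, and its instrument weights $\alpha$ are $(-1,+1)$, consistent with the familiar Wald/LATE numerator. Helper names are my own:

```python
# Sketch: check c in rowspace(A^[1]) via the projection A+ A, and recover
# the alpha weights alpha_z = sum_g [A+]_{g,z} c_g from the text.
from fractions import Fraction

def mm(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[0, 0, 1],   # z = 0: only always-takers have T = 1
     [0, 1, 1]]   # z = 1: compliers and always-takers have T = 1
At = [list(r) for r in zip(*A)]
(a, b), (c_, d) = mm(A, At)
det = Fraction(a * d - b * c_)
Ap = mm(At, [[d / det, -b / det], [-c_ / det, a / det]])  # A+ = A'(AA')^{-1}
proj = mm(Ap, A)                                          # projection A+ A

def in_rowspace(c):
    return mm(proj, [[x] for x in c]) == [[x] for x in c]

assert not in_rowspace([1, 1, 1])   # E[Y(1)]: blocked by never-takers
assert in_rowspace([0, 1, 0])       # E[Y(1) | complier]: passes

alpha = [sum(Ap[g][z] * cg for g, cg in enumerate([0, 1, 0])) for z in range(2)]
assert alpha == [-1, 1]             # weights on E[Y_i D^[1]_i | Z_i = z]
```

The failed projection for $c=(1,1,1)'$ returns $(0,1,1)'$: the never-taker coordinate cannot be reached from the rows of $A^{[1]}$, which is exactly the obstruction discussed above.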
Intuitively, the set $\mathcal{R}$ allowed in this proof may be “too big” if the assumption of NSOG is additionally maintained. Let us see this possibility by considering the construction $\mathbf{G}^\lambda$ used in the proof of Theorem <ref> as a candidate value for $\mathbf{G}^*$, the vector of true potential outcome CDFs scaled by the selection group probabilities, i.e. $[\mathbf{G}^*(y)]_{t,g}:=P(G_i=g)\cdot F_{Y(t)|G}(y|g)$. Consider any $g \in \mathcal{G}$ such that $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=t)=1$ and $\max_{z \in \mathcal{Z}} \mathbbm{1}(T_g(z)=0)=1$, where $0 \in \mathcal{T}$ is the arbitrary reference treatment. Then if $\mathbf{G}^*=\mathbf{G}$ (corresponding to the limit $\lambda \rightarrow 0$) we have by Eq. (<ref>) that: $$P(G_i=g)\cdot \mathbbm{E}[Y_i(t)|G_i=g] = \sum_{g'} [(A^{[t]})^+A^{[t]}]_{g,g'}\cdot P(G_i=g')\cdot \mathbbm{E}[Y_i(t)|G_i=g']$$ and similarly for treatment $0$, so that \begin{align*} \mathbbm{E}[Y_i(t)|G_i=g, Z_i=z] = \mathbbm{E}[Y_i(t)|G_i=g]= \sum_{g'} \frac{P(G_i=g')}{P(G_i=g)}\cdot[(A^{[t]})^+A^{[t]}]_{g,g'}\cdot \mathbbm{E}[Y_i(t)|G_i=g'] \end{align*} using independence (<ref>), and similarly for treatment $0$. Therefore, for any $t_1 \in \mathcal{T}$: \begin{align} \mathbbm{E}[&Y_i(t)-Y_i(0)|T_i=t_1, Z_i=z] = \mathbbm{E}[Y_i(t)-Y_i(0)|A^{[t_1]}_{z,G_i} = 1, Z_i=z] \nonumber \\ &= \sum_g P(G_i=g|A^{[t_1]}_{z,G_i} = 1) \cdot \mathbbm{E}[Y_i(t)-Y_i(0)|G_i=g, Z_i=z] \nonumber \\ &= \frac{1}{P(T_i=t_1|Z_i=z)}\sum_{g:A^{[t_1]}_{zg}=1} P(G_i=g)\cdot \mathbbm{E}[Y_i(t)-Y_i(0)|G_i=g, Z_i=z] \nonumber \\ &= \frac{1}{P(T_i=t_1|Z_i=z)}\sum_{g} A^{[t_1]}_{zg}\cdot P(G_i=g)\cdot \{\mathbbm{E}[Y_i(t)|G_i=g, Z_i=z]-\mathbbm{E}[Y_i(0)|G_i=g, Z_i=z]\} \nonumber \\ &= \frac{1}{P(T_i=t_1|Z_i=z)} \sum_g A^{[t_1]}_{zg} \cdot \sum_{g'} P(G_i=g')\cdot \left\{[(A^{[t]})^+A^{[t]}]_{g,g'}\cdot \mathbbm{E}[Y(t)|G_i=g']\right. 
\nonumber \\ &\hspace{3.11in} \left.-[(A^{[0]})^+A^{[0]}]_{g,g'}\cdot \mathbbm{E}[Y(0)|G_i=g']\right\} \label{eq:nsogcondition} \end{align} where the third line uses that \begin{align*} P(A^{[t_1]}_{z,G_i} = 1) &= \sum_{g} P(G_i=g)\cdot A^{[t_1]}_{z,g} = \sum_{g} P(G_i=g)\cdot \mathbbm{1}(T_g(z)=t_1)\\ &= \sum_{g} P(G_i=g)\cdot P(T_i(z)=t_1|G_i=g,Z_i=z) = P(T_i=t_1|Z_i=z) \end{align*} by independence Eq. (<ref>). Defining $y^{[t]}$ to be a vector of $P(G_i=g) \cdot \mathbbm{E}[Y_i(t)|G_i=g]$ across all $g \in \mathcal{G}$, we can write the above as $$\mathbbm{E}[Y_i(t)-Y_i(0)|T_i=t_1, Z_i=z] = \frac{[A^{[t_1]}\{(A^{[t]})^+A^{[t]} y^{[t]}-(A^{[0]})^+A^{[0]} y^{[0]}\}]_z}{[A^{[t_1]}P]_z}$$ under $\mathbf{G}^*=\mathbf{G}$, where $P$ is a vector of $P(G_i=g)$ for $g \in \mathcal{G}$ introduced previously in the proof of Theorem <ref>. For NSOG to be satisfied, the RHS of Eq. (<ref>) must be equal to $\Delta_t$, regardless of the value of $t_1$ or $z$. This requires $$ \frac{[A^{[t_1]}\{(A^{[t]})^+A^{[t]} y^{[t]}-(A^{[0]})^+A^{[0]} y^{[0]}\}]_z}{[A^{[t_1]}P]_z} = \Delta_t$$ for all $t_1$ and $z$. It is unclear if this equation can ever be satisfied unless $y^{[t]}_g=0$ for all $t$ and $g$ (in which case $\Delta_t=0$ and the equation is satisfied). Even if the strong condition that $\mathbbm{E}[Y_i(t)|G_i=g]$ be the same for all $g$ were to hold, then $y^{[t]}_g = \mathbbm{E}[Y_i(t)]\cdot P_g$ for all $g$ and the equation above would reduce to $$ \frac{[A^{[t_1]}(A^{[t]})^+A^{[t]}P]_z}{[A^{[t_1]}P]_z} \cdot \mathbbm{E}[Y_i(t)]-\frac{[A^{[t_1]}(A^{[0]})^+A^{[0]}P]_z}{[A^{[t_1]}P]_z} \cdot \mathbbm{E}[Y_i(0)] = \mathbbm{E}[Y_i(t)]-\mathbbm{E}[Y_i(0)]$$ This would be true if $A^{[t_1]} = A^{[t]} = A^{[0]}$, but that can never occur for $t \ne 0$ unless treatments $t$ or $0$ are never chosen by any selection group (since $[A^{[t]}+A^{[0]}]_{zg} \le 1$; a given individual cannot take two treatments under the same instrument value).
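The $\Sigma$-inversion identification argument for NSOG can be illustrated with a small numerical sketch. All numbers below are hypothetical, chosen only so that the $2 \times 2$ system is invertible; the point is that the unconditional means $\mathbbm{E}[Y_i(t)]$ fall out of observable moments by linear algebra alone:

```python
# Sketch (made-up numbers): binary treatment and instrument. Under NSOG the
# moments E[Y_i 1(Z_i = z)] satisfy a linear system with matrix
# Sigma_{zt} = P(Z_i = z, T_i = t); inverting it recovers E[Y_i(t)].
from fractions import Fraction

# Assumed true primitives (illustrative only):
EY = [Fraction(1), Fraction(3)]                # E[Y_i(0)], E[Y_i(1)]
Sigma = [[Fraction(2, 5), Fraction(1, 10)],    # P(Z=0, T=0), P(Z=0, T=1)
         [Fraction(1, 10), Fraction(2, 5)]]    # P(Z=1, T=0), P(Z=1, T=1)

# Implied observable moments: E[Y 1(Z=z)] = sum_t Sigma_{zt} E[Y(t)]
m = [sum(Sigma[z][t] * EY[t] for t in range(2)) for z in range(2)]

# Invert the 2x2 system to recover E[Y(t)] from observables alone:
(a, b), (c, d) = Sigma
det = a * d - b * c
Sigma_inv = [[d / det, -b / det], [-c / det, a / det]]
recovered = [sum(Sigma_inv[t][z] * m[z] for z in range(2)) for t in range(2)]
assert recovered == EY
```

With $|\mathcal{T}|$ treatments the same computation uses a general $|\mathcal{T}| \times |\mathcal{T}|$ inverse; invertibility of $\Sigma$ is precisely the rank condition assumed in the subsection above.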
John Adams Institute for Accelerator Science, Blackett Laboratory, Imperial College London, UK # An Overview of Recent Progress in Laser Wakefield Acceleration Experiments S.P.D. Mangles ###### Abstract The goal of this paper is to examine experimental progress in laser wakefield acceleration over the past decade (2004–2014), and to use trends in the data to understand some of the important physical processes. By examining a set of over 50 experiments, various trends concerning the relationship between plasma density, accelerator length, laser power and the final electron beam energy are revealed. The data suggest that current experiments are limited by dephasing and that they typically require some pulse evolution to reach the trapping threshold. Keywords: Laser wakefield accelerators; plasma accelerators; laser-plasma acceleration. ## 0.1 Introduction This paper is a summary of a lecture given at the CERN Accelerator School 2015 on plasma-based wakefield accelerators. Its purpose is to provide an overview of recent experimental progress in laser wakefield acceleration, concentrating on the energy frontier and on general trends that can be observed in the data produced by the many groups around the world contributing to this growing field. There are now more than 20 active laboratories performing laser wakefield acceleration experiments. We will not detail the key results from each of these laboratories in this paper, although we will examine data from various published experiments and use the trends in the data to try to gain some understanding of the underlying physical processes. Laser wakefield accelerators were proposed 36 years ago in the seminal 1979 paper by Tajima and Dawson [1], and experiments in laser wakefield acceleration were undertaken as soon as laser pulses of sufficiently short duration and high power became available, thanks to the development of the laser technique called ‘chirped pulse amplification’ [2]. 
Some of the early work in what we call laser wakefield acceleration (where the pulse duration, $\tau_{\mathrm{L}}$, is comparable to the plasma period $2\pi/\omega_{\mathrm{p}}$) occurred in the 1990s (Refs. [3, 4]) using picosecond glass lasers. The field passed a major milestone in the early 2000s as high-power (${\sim}10$ TW) femtosecond laser pulses, using titanium sapphire laser systems, became available. One major result from that era occurred when three groups from the UK, USA and France all demonstrated that laser wakefield accelerators could produce electron beams with well-defined energies [5, 6, 7]. The electron beams produced by these ${\sim}10$ TW lasers had energies of ${\sim}100$ MeV and were produced in plasmas with a density of ${\sim}10^{19}~\mathrm{cm}^{-3}$ that were only ${\sim}1$ mm long. One of the key challenges that has driven progress in the field of laser wakefield acceleration is the maximum achievable beam energy, and this paper will concentrate on this challenge. There have, of course, been many other significant areas of experimental progress, including: improving beam stability, especially by controlling injection (Refs. [8, 9]); diagnosis of wakefield accelerators (Refs. [10, 11]) and the resulting improvements in our understanding of the underlying processes; and the application of laser wakefield accelerators for a range of applications, perhaps most notably their use as novel sources of X-radiation (Refs. [12, 13, 14]). Details of the progress in these areas are outside the scope of this paper. Figure 1: Reported electron beam energies from laser wakefield experiments at various laboratories over the last decade; data from Refs. [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. 
This paper will examine how the maximum achievable beam energy has progressed over the last decade, and will use a set of published results [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60] to try to understand some of the physics behind this progress. The progress in the maximum beam energy in laser wakefield accelerator experiments has been rapid, as shown in Fig. 1, from a maximum beam energy of $0.2$ GeV reported in 2002 [61] to the current record of $4$ GeV from the group at the Lawrence Berkeley National Laboratory [58] achieved in 2014 – an increase by a factor of 20 in just over a decade. It should be noted that this is by no means an exhaustive list of all published experiments in laser wakefield accelerators (there are just 52 publications in this dataset, whereas a literature search for papers on ‘laser wakefield’ will find over 1000 papers). ## 0.2 Overall trends in laser wakefield acceleration experiments The rapid progress shown in Fig. 1 is impressive. But how has it been achieved? Over the same period of time, short-pulse (${\approx}30$ fs) laser systems have become more powerful. Figure 2 shows that there is a clear trend: higher-power lasers are capable of producing higher-energy electron beams. However, these gains were not achieved by simply increasing the laser power; the researchers behind these experiments have also had to find the optimum conditions for their experiments. Key parameters involved in this optimization include the operating plasma density, the length of the accelerator and the laser intensity. This section will examine the data from various experiments and compare them with predicted trends, to see whether those predictions and the underlying physical processes are confirmed. 
Figure 2: Variation of reported electron beam energy with laser power from various experiments; data from Refs. [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. ### 0.2.1 Accelerator length and operating plasma density Let us first consider the density of the plasma accelerator. The energy gained by a particle of charge $q$ in an accelerating structure is simply proportional to the product of the electric field and the length of the accelerator, $d$: $W\simeq qEd$, where $E$ is the average accelerating electric field experienced by the particle. One of the key physical limitations in a laser wakefield accelerator is dephasing. The electrons trapped in a wake are highly relativistic ($\gamma\gg 1$), so they travel at a speed approaching that of light in vacuum ($v_{\mathrm{e}}\to c$), but the phase speed of the wake is determined by the speed of the laser pulse that drives the plasma wave. A simple expression for the speed of a laser pulse in a plasma can be found by using the standard expression for the group velocity of an electromagnetic wave in a plasma, $\frac{v_{\mathrm{g}}}{c}=\sqrt{1-\frac{n_{\mathrm{e}}}{n_{\mathrm{c}}}}\approx 1-\frac{1}{2}\frac{n_{\mathrm{e}}}{n_{\mathrm{c}}}\ ,$ (1) where $n_{\mathrm{e}}$ is the electron density of the plasma, $n_{\mathrm{c}}$ is the critical density for propagation of the electromagnetic wave (reached when the plasma frequency, $\omega_{\mathrm{p}}$, equals the frequency of the electromagnetic wave, $\omega_{0}$) and it is assumed that $n_{\mathrm{e}}\ll n_{\mathrm{c}}$. The wake’s phase speed is therefore slightly, but significantly, less than $c$; crucially, the lower the plasma density, the faster the phase velocity. 
Because of this difference between the electron velocity and the wake phase velocity, electrons in a laser-driven wake will outrun the wake (note that dephasing does not occur in wakefield accelerators driven by highly relativistic charged particle beams, as both the accelerated and driver beams are highly relativistic). If the electron is injected at the start of the accelerating phase of the plasma wave and then outruns the wave by half a plasma wavelength ($\lambda_{\mathrm{p}}/2=\pi c/\omega_{\mathrm{p}}$), it can no longer gain energy from the plasma wave. If the electron has an initial velocity $v_{\mathrm{e}}=\beta_{\mathrm{e}}c$ and the plasma wave has a phase velocity $v_{\phi}=\beta_{\phi}c$, then the time it takes for this to occur is $t_{\mathrm{d}}=\lambda_{\mathrm{p}}/(2c(\beta_{\mathrm{e}}-\beta_{\phi}))$. The dephasing length is then the distance that the electron travels in this time. Since $\beta_{\mathrm{e}}\to 1$ and $\beta_{\phi}\simeq 1-\frac{1}{2}({n_{\mathrm{e}}}/{n_{\mathrm{c}}})$, this reduces to $L_{\rm dephasing}\simeq\frac{n_{\mathrm{c}}}{n_{\mathrm{e}}}\lambda_{\mathrm{p}}\propto n_{\mathrm{e}}^{-\frac{3}{2}}\ .$ (2) It is interesting to see how the lengths of the accelerators in the set of experiments vary, and how this compares with what we might expect if dephasing is important. These data are shown in Fig. 3. The top panel shows how the reported electron beam energy varies with the length of the wakefield accelerator. There is a clear correlation – the higher electron energies are achieved with longer accelerators, as we might expect. The bottom panel of Fig. 3 shows how the length of the accelerator and the plasma density at which it was operating are related. The line on this curve is the simple expression for the dephasing length, Eq. (2). Figure 3: Top: Variation of reported electron beam energy with accelerator length. Bottom: Relationship between operating plasma density and accelerator length. 
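As a rough numerical check of Eq. (2), the short sketch below evaluates the dephasing length for a few representative densities, assuming an 800 nm titanium sapphire drive laser (the densities are illustrative and are not taken from any specific experiment):

```python
import math

# Physical constants (SI)
c = 2.998e8       # speed of light [m/s]
eps0 = 8.854e-12  # vacuum permittivity [F/m]
e = 1.602e-19     # elementary charge [C]
m_e = 9.109e-31   # electron mass [kg]

def plasma_frequency(n_e):
    """Electron plasma frequency omega_p [rad/s] for density n_e [m^-3]."""
    return math.sqrt(n_e * e**2 / (eps0 * m_e))

def critical_density(wavelength=0.8e-6):
    """Critical density n_c [m^-3] for a given laser wavelength [m]."""
    omega0 = 2 * math.pi * c / wavelength
    return eps0 * m_e * omega0**2 / e**2

def dephasing_length(n_e, wavelength=0.8e-6):
    """Eq. (2): L_dephasing ~ (n_c/n_e) * lambda_p, scaling as n_e^(-3/2)."""
    lambda_p = 2 * math.pi * c / plasma_frequency(n_e)
    return (critical_density(wavelength) / n_e) * lambda_p

# Densities spanning the 2004-era mm-scale accelerators to cm-scale ones
for n_cm3 in (1e19, 1e18, 7e17):
    L = dephasing_length(n_cm3 * 1e6)  # convert cm^-3 -> m^-3
    print(f"n_e = {n_cm3:.0e} cm^-3 -> L_dephasing ~ {L * 100:.2f} cm")
```

At ${\sim}10^{19}~\mathrm{cm}^{-3}$ this gives a dephasing length of around 2 mm, consistent with the ${\sim}1$ mm accelerators of the 2004 experiments, while densities below $10^{18}~\mathrm{cm}^{-3}$ permit several centimetres of acceleration.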
The line shows the expression for the dephasing length, Eq. (2). Data from Refs. [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. The maximum electric field that a plasma wave can support increases with plasma density, since it scales as $E_{\mathrm{max}}\simeq m_{\mathrm{e}}c\,\omega_{\mathrm{p}}/e\propto\sqrt{n_{\mathrm{e}}}\ .$ (3) The maximum energy that can be gained by an electron in a plasma wave as a function of plasma density is therefore expected to be $W(n_{\mathrm{e}})\simeq E_{\max}L_{\mathrm{dephasing}}\propto\frac{1}{n_{\mathrm{e}}}\ .$ (4) Figure 4 shows how the plasma density and beam energy vary in the set of experiments. The line in Fig. 4 is simply $W/(m_{\mathrm{e}}c^{2})=\kappa\,n_{\mathrm{c}}/n_{\mathrm{e}}$ and shows good agreement with the entire dataset for $\kappa=1$. The scaling laws in Ref. [62], by Lu et al., for the blow-out or ‘bubble’ regime of wakefield accelerators, suggest that the scaling law should be $W(n_{\mathrm{e}},a_{0})\simeq\frac{2}{3}a_{0}\frac{n_{\mathrm{c}}}{n_{\mathrm{e}}}m_{\mathrm{e}}c^{2}\propto\frac{a_{0}}{n_{\mathrm{e}}}\ ,$ (5) where $a_{0}=eA_{0}/(m_{\mathrm{e}}c)=eE_{0}/(m_{\mathrm{e}}\omega_{0}c)$ is the normalized peak vector potential (or strength parameter) of the laser pulse. This scaling predicts that the beam energy should be proportional not only to $1/n_{\mathrm{e}}$ but also to the laser strength, $a_{0}$. The experiments shown correspond to a wide range of initial laser intensities (corresponding to $a_{0}=0.5$–$4.0$), yet they do not appear to show a dependence on $a_{0}$. Figure 4: Variation of reported electron beam energy with the density in the accelerator. The line shows the relation $W/(m_{\mathrm{e}}c^{2})=\kappa\,n_{\mathrm{c}}/n_{\mathrm{e}}$ with $\kappa=1$. Data from Refs. 
[5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. One possible reason for this apparent discrepancy is that the initial value of $a_{0}$ is not the value of $a_{0}$ that determines the wake amplitude. It is well known that laser pulses can undergo significant evolution once they enter the plasma. The processes of self-focusing, self-compression and photon acceleration [63] can all act to change $a_{0}$ as the pulse propagates. Together, these processes can be termed the ‘self-evolution’ of the laser pulse. One interpretation of the experimental data is, therefore, that the process of self-evolution has proceeded until $a_{0}\approx 3$ for all of the data shown. Why would this be the case? One reasonable hypothesis is that each point in the dataset corresponds to the maximum energy achieved during a particular experiment and that this will occur at (or at least close to) the lowest density at which that experiment can trap and accelerate electrons. Many experiments operate by fixing the laser power and plasma length while varying the plasma density, for reasons of experimental simplicity. When an experiment is conducted in this manner, there will be a minimum density at which electron beams are trapped and accelerated (the trapping threshold). Because self-evolution happens less quickly and less severely at lower densities, the maximum $a_{0}$ that is reached inside the accelerator will decrease with decreasing plasma density. Therefore, the maximum achieved electron beam energy will correspond to the minimum laser strength required to produce trapping. The fact that the experimental dataset matches the non-linear wakefield scaling only if $a_{0}\approx 3$ suggests that the minimum $a_{0}$ required for trapping is $a_{0}\approx 3$. 
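The dephasing-limited scaling $W/(m_{\mathrm{e}}c^{2})=\kappa\,n_{\mathrm{c}}/n_{\mathrm{e}}$ discussed above is easy to evaluate numerically. A short sketch, assuming an 800 nm drive laser and $\kappa=1$ (the densities chosen are illustrative):

```python
import math

# Physical constants (SI)
c, eps0 = 2.998e8, 8.854e-12
e, m_e = 1.602e-19, 9.109e-31
mec2_MeV = 0.511  # electron rest energy [MeV]

def critical_density(wavelength=0.8e-6):
    """Critical density n_c [m^-3] for a given laser wavelength [m]."""
    omega0 = 2 * math.pi * c / wavelength
    return eps0 * m_e * omega0**2 / e**2

def dephasing_limited_energy_MeV(n_e_cm3, kappa=1.0):
    """W = kappa * (n_c/n_e) * m_e c^2, the line drawn in Fig. 4."""
    return kappa * critical_density() / (n_e_cm3 * 1e6) * mec2_MeV

for n in (1e19, 3e18, 7e17):
    print(f"n_e = {n:.0e} cm^-3 -> W ~ {dephasing_limited_energy_MeV(n):.0f} MeV")
```

With $\kappa=1$ this recovers the ${\sim}100$ MeV beams observed at ${\sim}10^{19}~\mathrm{cm}^{-3}$ in the 2004 experiments and GeV-scale energies once the density drops below $10^{18}~\mathrm{cm}^{-3}$.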
### 0.2.2 Laser spot size and matched guiding The experimental data clearly show that higher-power lasers are required to achieve higher electron beam energies. But what are the physical processes behind this trend? It was argued above that the experimental trends are consistent with there being a minimum value of $a_{0}\approx 3$, which is needed for trapping, and that this value is reached because of the way the pulse evolves as it propagates. Consider the relationship between the intensity, $I$ ($\propto a_{0}^{2}$), and power, $P$, of a laser pulse. Since $I=P/A$, where $A\propto w^{2}$ is the focal spot area and $w$ is the laser spot size, we have $P\propto a_{0}^{2}w^{2}\,.$ (6) The fact that higher-power lasers are needed to reach the $a_{0}\approx 3$ threshold at lower densities therefore implies that the spot size that these laser pulses produce after evolution is larger. Pulse evolution is a result of the feedback between the refractive index gradient associated with the plasma wave and the laser pulse; it is mediated by the plasma itself. Lower-density plasmas therefore have a lesser effect on the laser pulse – resulting in slower evolution – and, crucially, this affects the properties that the pulse obtains as a result of self-evolution. One important concept that arises from this is that of the _matched_ spot size – one where the self-focusing caused by the plasma balances the natural diffraction of the laser pulse and stable propagation occurs. In the blow-out regime (where the laser pulse expels practically all the electrons from inside the bubble), the transverse density profile of the bubble is approximately zero and flat inside the bubble, with very steep walls at the edges. 
The refractive index of an underdense plasma is $\eta\approx 1-n_{\mathrm{e}}/(2n_{\mathrm{c}})$, so the refractive index profile of this bubble is similar to that of a single-mode optical fibre: we have a ‘core’ of a certain diameter (the bubble diameter) surrounded by ‘cladding’ (the bubble sheath) of a lower refractive index. The guided mode in such an optical fibre has a transverse size that is approximately equal to the size of the core, so we expect that in a laser wakefield accelerator we will get stable propagation (no spot size oscillations) when the transverse size of the laser pulse is approximately equal to that of the bubble. Of course, one main difference between the bubble and an optical fibre is that the size of the bubble is determined by the properties of the laser pulse itself. If the laser spot is too small, then it will drive a bubble that is larger than the laser spot. This over-sized bubble will support a larger mode and the laser pulse will expand, which in turn reduces the bubble size. The radius of the bubble can be found by balancing the forces on an electron at the edge of the bubble – the laser’s ponderomotive force and the force due to the electric field inside the bubble; when the bubble is approximately the same size as the laser spot, the pulse is matched. An expression for the matched spot size, $w_{\mathrm{m}}$, is $w_{\mathrm{m}}\simeq 2\frac{c}{\omega_{\mathrm{p}}}\sqrt{a_{0}}\ ,$ (7) where the numerical factor of two was found through particle-in-cell simulations [62]. This expression is particularly useful for finding the correct initial parameters of the accelerator: for a given laser system, one should first determine the spot size at which the threshold $a_{0}\simeq 3$ is reached; this expression can then be used to determine the correct operating plasma density. 
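That recipe can be sketched numerically. The snippet below evaluates Eq. (7), assuming an 800 nm drive laser and the $a_{0}\simeq 3$ trapping threshold argued for above; the densities scanned are illustrative:

```python
import math

# Physical constants (SI)
c, eps0 = 2.998e8, 8.854e-12
e, m_e = 1.602e-19, 9.109e-31

def matched_spot_um(n_e_cm3, a0=3.0):
    """Eq. (7): w_m ~ 2 (c/omega_p) sqrt(a0), returned in microns."""
    omega_p = math.sqrt(n_e_cm3 * 1e6 * e**2 / (eps0 * m_e))
    return 2 * (c / omega_p) * math.sqrt(a0) * 1e6

# Given a laser that reaches a0 ~ 3 at a particular focal spot size,
# scan density to find where that spot would be matched.
for n in (1e19, 3e18, 1e18):
    print(f"n_e = {n:.0e} cm^-3 -> matched spot w_m ~ {matched_spot_um(n):.1f} um")
```

Lower operating densities thus demand larger matched spots (roughly micron-scale at $10^{19}~\mathrm{cm}^{-3}$, tens of microns at $10^{18}~\mathrm{cm}^{-3}$), and hence, via Eq. (6), more laser power to hold $a_{0}\simeq 3$ over the larger spot.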
However, when self-focusing plays an important role, we need an expression for the matched spot size that depends not on the initial laser intensity but rather on the value of $a_{0}$ that the pulse reaches after self-focusing. We can use the fact that the ratio of the laser power, $P_{\mathrm{L}}$, to the critical power for self-focusing, $P_{\mathrm{c}}$, can be written as $\frac{P_{\mathrm{L}}}{P_{\mathrm{c}}}=\frac{1}{32}\frac{\omega_{p}^{2}}{c^{2}}\,{a_{0}^{2}w^{2}}\ ,$ (8) and the fact that $a_{0}^{2}w^{2}$ is constant during focusing (assuming that self-focusing happens more rapidly than any pulse compression) to eliminate $a_{0}$ from Eq. (7). This results in the following expression for the matched spot size: $w_{\mathrm{m}}\simeq 2\sqrt{2}\frac{c}{\omega_{\mathrm{p}}}\left(\frac{P_{\mathrm{L}}}{P_{\mathrm{c}}}\right)^{\frac{1}{6}}\ .$ (9) Note that $2w_{\mathrm{m}}\approx\lambda_{\mathrm{p}}$, as long as $P_{\mathrm{L}}$ is not many times greater than $P_{\mathrm{c}}$. It is interesting to examine how the spot size used in experiments compares with this matched spot size. Figure 5 shows the variation in the initial (vacuum) laser focal spot size with the operating plasma density in the experiments in the dataset. There is a clear overall trend towards larger initial spots at lower densities (and therefore higher electron beam energies), and the spot sizes used are reasonably close to $\lambda_{\mathrm{p}}$ (shown as a solid blue line in Fig. 5). However, given that the evidence suggests that most of these experiments reached $a_{0}\approx 3$ as a result of pulse evolution, it is also interesting to compare the initial spot size with the expected matched spot size for $a_{0}\approx 3$ (shown as a dashed red line in Fig. 5). Most of the experiments are clearly operating at an initial spot size significantly larger than this matched spot size, and none of the selected experiments operate with a spot size below this. 
This suggests either that most experiments are operating with too large an initial spot size, wasting accelerator length and laser energy while the laser pulse self-focuses, or that there is some experimental advantage in starting at a spot size larger than the matched spot size and letting pulse evolution happen. Figure 5: Variation of initial laser focal spot size with operating plasma density in various laser wakefield acceleration experiments. Solid blue line, plasma wavelength, $\lambda_{\mathrm{p}}$; red dashed line, matched spot size assuming $a_{0}\approx 3$. Data from Refs. [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. ### 0.2.3 To guide or not to guide? It is well known that a tightly focused laser pulse will quickly diffract in a vacuum. Effectively, the pulse can only remain intense over a distance of about a Rayleigh length, $z_{\mathrm{R}}=\pi w_{0}^{2}/\lambda$. The accelerator lengths used in laser wakefield experiments are typically much longer than this, and some sort of ‘guiding’ is therefore needed to keep the laser intensity sufficiently high to drive a wake throughout the structure. There are two principal techniques to achieve this guiding, both of which rely on creating a waveguide structure to counteract diffraction. This requires that the transverse plasma density profile has a minimum on-axis. Such a density minimum is naturally created by the laser pulse itself – that is, the bubble itself acts as a waveguide. Alternatively, a preformed density minimum can be created, for example using a capillary discharge [24]; however, using a preformed waveguide brings significant complexity to an experiment and restricts diagnostic access (e.g. for wake-imaging diagnostics based on ultrashort probes [11]). 
However, the bubble cannot self-guide the very front slice of the laser pulse, since the density minimum is not formed immediately. Because of this, external channels are expected to be more efficient: they allow the laser to propagate over greater distances at high $a_{0}$. With these points in mind, it is interesting to see whether the experimental evidence supports the use of external waveguides. In Fig. 6, the data are sorted into self-guided and externally guided experiments. The top panel of Fig. 6 appears to show that there is no real advantage to using external guiding structures; the electron beam energy, as a function of plasma density, follows the same trend for both subsets of the data, and both are limited by dephasing. However, the bottom panel of Fig. 6 reveals the distinct advantage that experiments in externally guided structures have over self-guided ones. The highest electron energy achieved at a given laser power is almost always from an externally guided experiment; the self-guided experiments at the same power tend to produce lower-energy electron beams. Figure 6: Variation of reported electron beam energy from various experiments. Top: Variation of beam energy with plasma density. Bottom: Variation of beam energy with laser power. Filled circles, experiments in preformed guiding structures; open squares, experiments without preformed guiding structures. Data from Refs. [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. Are these data sufficient to suggest that external guiding structures improve the efficiency of laser wakefield accelerators, owing to reduced energy losses as the pulse propagates? Or is something more subtle occurring? 
In one experiment with an external waveguide using a ${\simeq}20$ TW laser [30], differences between the plasma density measured during a high-intensity laser shot and plasma density measurements made offline suggested that, under the conditions in which electron beams were observed, the laser pulse caused additional ionization of the plasma. As the plasma used in that experiment was formed from hydrogen, which is readily ionized, this suggested that high-$Z$ impurities from the walls of the capillary were being ionized by the main pulse and injected into the plasma wave. Since that work, a number of groups have proceeded to exploit this ionization injection mechanism to reduce the threshold for injection and increase the beam charge (Refs. [36, 39]). There are also various other injection techniques, including injection due to propagation in a density gradient [35, 9] and colliding pulse injection [8]. In Fig. 7, the variation of electron beam energy with laser power is plotted again, this time with the data divided into subsets based on injection type. As it is not known whether the majority of capillary-discharge-based experiments rely on self-injection or whether ionization injection plays a role, as in Ref. [30], these experiments have been placed in their own subset. Some of the ionization injection experiments also produce higher electron beam energies for a given laser power than self-injection experiments, and lie on the upper curve of the entire dataset, just as the capillary discharge dataset does. This might suggest that the injection mechanism is more important than the guiding mechanism in determining the maximum energy that can be achieved from a given laser power. This makes sense: the value that $a_{0}$ reaches after self-evolution decreases with decreasing laser power, but alternative injection mechanisms should lower the value that $a_{0}$ needs to reach in order for injection to occur. 
However, injection mechanisms other than ionization injection seem to perform similarly to self-injection in terms of the electron energy that can be obtained for a given laser power. At present, the evidence is still inconclusive; this is clearly a matter that requires further study. Figure 7: Variation of reported electron beam energy from various experiments as a function of laser power and for different injection mechanisms. Filled black circles, capillary discharge experiments; black squares, ionization injection; green diamonds, density down-ramp injection; cyan triangles, colliding pulse injection; open black squares, self-injection. Data from Refs. [5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 8, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 9, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. ## 0.3 Future directions Having examined the trends in experimental data from laser wakefield experiments over the last decade, we find that there is one overriding message. To keep pushing the electron energy achievable from a single stage of a laser wakefield accelerator to ever higher values clearly requires operation at lower densities and over longer distances, and such experiments will need more powerful lasers. However, it should be noted that laser power can be increased in two ways: either by increasing the laser energy or by decreasing the pulse duration. Most of the data presented were obtained using laser pulses with a duration of ${\sim}30$ fs, so the trends in laser power are dominated by the pulse energy. Simulations demonstrate that the pulse length in a laser wakefield accelerator should not be too short. Energy is lost predominantly from the front of the laser pulse (this is the part of the laser that is doing work on the plasma), so the shorter the laser pulse, the more quickly it runs out of energy. 
As new laser facilities come online around the world, it is worth remembering that the shortest pulse is not necessarily the best for driving laser wakefield accelerators. Pulse durations where $c\tau_{\mathrm{L}}$ is comparable to the plasma wavelength, $\lambda_{\mathrm{p}}$, drive efficient wakes and allow the pump depletion length to match the dephasing length [62]. Pulses significantly shorter than $\lambda_{\mathrm{p}}/2$ expend their energy before the maximum energy can be reached. The experimental trends show that most experiments are attaining electron beam energies that are limited by dephasing. If the community is to keep up the impressive pace it has sustained over the last decade, then methods to overcome this limitation will become increasingly important, especially as further increases in laser power become ever more expensive. As a result, techniques to overcome dephasing, such as quasi-phase matching [64], staging [65] and density tapering [66], will all become important areas of research. The final point that we would like to make is that the laser systems currently used to drive laser wakefield accelerators are woefully inefficient. Titanium sapphire lasers have a ‘wall-plug’ efficiency of ${\sim}0.1\%$, and they currently run at very low repetition rates compared with a conventional accelerator. As laser wakefield accelerators push the energy frontier and strive to become workhorses for applications, it will become increasingly important that both repetition rate and efficiency are properly considered and significantly improved. Much more efficient laser architectures are available, including thin-disc [67] and fibre [68] lasers, that can also readily operate at much higher repetition rates. However, these systems do not yet have the capability to produce laser pulses with the high energy and short pulse duration needed to drive current laser wakefield acceleration experiments. 
Innovative solutions, including the use of coherent [69] and incoherent [70] combinations of many low-energy laser pulses to drive wakefield experiments, or the use of trains of many low-energy pulses to resonantly drive a high amplitude wakefield [71], may well form the most promising routes to high repetition rate, high-efficiency laser wakefield accelerators suitable for particle physics experiments or light-source based applications. ## References * [1] T. Tajima and J.M. Dawson, Phys. Rev. Lett. 43(4) (1979) 267. http://dx.doi.org/10.1103/PhysRevLett.43.267 * [2] D. Strickland and G. Mourou, Opt. Commun. 55(6) (1985), 447–449. http://dx.doi.org/10.1016/0030-4018(85)90151-8 * [3] K. Nakajima et al., Phys. Scr. T52 (1994) 61. http://dx.doi.org/10.1088/0031-8949/1994/T52/009 * [4] F. Amiranoff et al., Phys. Rev. Lett. 81(5) (1998) 995. http://dx.doi.org/10.1103/PhysRevLett.81.995 * [5] S.P.D. Mangles et al., Nature 431 (2004) 535–538. http://dx.doi.org/10.1038/nature02939 * [6] C.G.R. Geddes et al., Nature 431 (2004) 538–541. http://dx.doi.org/10.1038/nature02900 * [7] J. Faure et al., Nature 431 (2004) 541–544. http://dx.doi.org/10.1038/nature02963 * [8] J. Faure et al., Nature 444 (2006) 737–739. http://dx.doi.org/10.1038/nature05393 * [9] A.J. Gonsalves et al., Nat. Phys. 7 (2011) 862–866. http://dx.doi.org/10.1038/nphys2071 * [10] N.H. Matlis et al., Nat. Phys. 2 (2006) 749–753. http://dx.doi.org/10.1038/nphys442 * [11] A. Sävert et al., Phys. Rev. Lett. 115(5) (2015) 055002. http://dx.doi.org/10.1103/PhysRevLett.115.055002 * [12] A. Rousse et al., Phys. Rev. Lett. 93(13) (2004) 135005. http://dx.doi.org/10.1103/PhysRevLett.93.135005 * [13] S. Kneip et al., Nat. Phys. 6 (2010) 980–983. http://dx.doi.org/10.1038/nphys1789 * [14] M. Fuchs et al., Nat. Phys. 5 (2009) 826–829. http://dx.doi.org/10.1038/nphys1404 * [15] E. Miura et al., Appl. Phys. Lett. 86(25) (2005) 251501. http://dx.doi.org/10.1063/1.1949289 * [16] H. Kotaki et al., Laser Phys. 
16(7) (2006) 1107–1110. http://dx.doi.org/10.1134/S1054660X06070140 * [17] M. Mori et al., Phys. Lett. A 356(2) (2006) 146–151. http://dx.doi.org/10.1016/j.physleta.2006.06.001 * [18] S. Masuda et al., J. Phys. IV France 133 (2006) 1127–1129. http://dx.doi.org/10.1051/jp4:2006133229 * [19] C.-T. Hsieh et al., Phys. Rev. Lett. 96(9) (2006) 095001. http://dx.doi.org/10.1103/PhysRevLett.96.095001 * [20] B. Hidding et al., Phys. Rev. Lett. 96(10) (2006) 105004. http://dx.doi.org/10.1103/PhysRevLett.96.105004 * [21] T. Hosokai et al., Phys. Rev. E 73(3) (2006) 036407. http://dx.doi.org/10.1103/PhysRevE.73.036407 * [22] S.P.D. Mangles et al., Phys. Rev. Lett. 96(21) (2006) 215001. http://dx.doi.org/10.1103/PhysRevLett.96.215001 * [23] S.A. Reed et al., Appl. Phys. Lett. 89(23) (2006) 231107. http://dx.doi.org/10.1063/1.2400400 * [24] W.P. Leemans et al., Nat. Phys. 2 (2006) 696–699. http://dx.doi.org/10.1038/nphys418 * [25] S. Masuda et al., Phys. Plasmas 14(2) (2007) 023103. http://dx.doi.org/10.1063/1.2434248 * [26] T. Ohkubo et al., Phys. Rev. ST Accel. Beams 10(3) (2007) 031301. http://dx.doi.org/10.1103/PhysRevSTAB.10.031301 * [27] S. Karsch et al., New J. Phys. 9 (2007) 415. http://dx.doi.org/10.1088/1367-2630/9/11/415 * [28] S.P.D. Mangles et al., Phys. Plasmas 14(5) (2007) 056702. http://dx.doi.org/10.1063/1.2436481 * [29] A. Gamucci et al., IEEE Trans. Plasma Sci. 36(4) (2008) 1699–1706. http://dx.doi.org/10.1109/TPS.2008.2000898 * [30] T.P. Rowlands-Rees et al., Phys. Rev. Lett. 100(10) (2008) 105005. http://dx.doi.org/10.1103/PhysRevLett.100.105005 * [31] N.A.M. Hafz et al., Nat. Photonics 2 (2008) 571. http://dx.doi.org/10.1038/nphoton.2008.155 * [32] K. Schmid et al., Phys. Rev. Lett. 102(12) (2009) 124801. http://dx.doi.org/10.1103/PhysRevLett.102.124801 * [33] S. Kneip et al., Phys. Rev. Lett. 103(3) (2009) 035002. http://dx.doi.org/10.1103/PhysRevLett.103.035002 * [34] D.H. Froula et al., Phys. Rev. Lett. 103(21) (2009) 215006. 
http://dx.doi.org/10.1103/PhysRevLett.103.215006 * [35] K. Schmid et al., Phys. Rev. ST Accel. Beams 13(9) (2010) 091301. http://dx.doi.org/10.1103/PhysRevSTAB.13.091301 * [36] A.E. Pak et al., Phys. Rev. Lett. 104(2) (2010) 025003. http://dx.doi.org/10.1103/PhysRevLett.104.025003 * [37] T.P.A. Ibbotson et al., New J. Phys. 12 (2010) 045008. http://dx.doi.org/10.1088/1367-2630/12/4/045008 * [38] C.E. Clayton et al., Phys. Rev. Lett. 105(10) (2010) 105003. http://dx.doi.org/10.1103/PhysRevLett.105.105003 * [39] C. McGuffey et al., Phys. Rev. Lett. 104(2) (2010) 025004. http://dx.doi.org/10.1103/PhysRevLett.104.025004 * [40] H. Lu et al., Appl. Phys. Lett. 99(9) (2011) 091502. http://dx.doi.org/10.1063/1.3626042 * [41] S. Fourmaux et al., New J. Phys. 13 (2011) 033017. http://dx.doi.org/10.1088/1367-2630/13/3/033017 * [42] J.S. Liu et al., Phys. Rev. Lett. 107(3) (2011) 035001. http://dx.doi.org/10.1103/PhysRevLett.107.035001 * [43] B.B. Pollock et al., Phys. Rev. Lett. 107(4) (2011) 045001. http://dx.doi.org/10.1103/PhysRevLett.107.045001 * [44] O. Lundh et al., Nat. Phys. 7 (2011) 219–222. http://dx.doi.org/10.1038/nphys1872 * [45] P. Brijesh et al., Phys. Plasmas 19(6) (2012) 063104. http://dx.doi.org/10.1063/1.4725421 * [46] M.Z. Mo et al., Appl. Phys. Lett. 100(7) (2012) 074101. http://dx.doi.org/10.1063/1.3685464 * [47] S. Kneip et al., Phys. Rev. ST Accel. Beams 15(2) (2012) 021302. http://dx.doi.org/10.1103/PhysRevSTAB.15.021302 * [48] R. Weingartner et al., Phys. Rev. ST Accel. Beams 15(11) (2012) 111302. http://dx.doi.org/10.1103/PhysRevSTAB.15.111302 * [49] F. Albert et al., Phys. Rev. Lett. 111(23) (2013) 235004. http://dx.doi.org/10.1103/PhysRevLett.111.235004 * [50] P.A. Walker et al., New J. Phys. 15 (2013) 045024. http://dx.doi.org/10.1088/1367-2630/15/4/045024 * [51] M.Z. Mo et al., Appl. Phys. Lett. 102(13) (2013) 134102. http://dx.doi.org/10.1063/1.4799280 * [52] S. Corde et al., Nat. Commun. 4 (2013) 1501. 
http://dx.doi.org/10.1038/ncomms2528 * [53] S. Chen et al., Phys. Rev. Lett. 110(15) (2013) 155003. http://dx.doi.org/10.1103/PhysRevLett.110.155003 * [54] H.T. Kim et al., Phys. Rev. Lett. 111(16) (2013) 165002. http://dx.doi.org/10.1103/PhysRevLett.111.165002 * [55] X. Wang et al., Nat. Commun. 4(1988) (2013). http://dx.doi.org/10.1038/ncomms2988 * [56] G. Sarri et al., Phys. Rev. Lett. 113 (2014) 224801. http://dx.doi.org/10.1103/PhysRevLett.113.224801 * [57] N.D. Powers et al., Nat. Photonics 8 (2014) 28–31. http://dx.doi.org/10.1038/nphoton.2013.314 * [58] W.P. Leemans et al., Phys. Rev. Lett. 113(24) (2014) 245002. http://dx.doi.org/10.1103/PhysRevLett.113.245002 * [59] K. Khrennikov et al., Phys. Rev. Lett. 114(19) (2015) 195003. http://dx.doi.org/10.1103/PhysRevLett.114.195003 * [60] M. Schnell et al., J. Plasma Phys. 81(04) (2015) 475810401. http://dx.doi.org/10.1017/S0022377815000379 * [61] V. Malka et al., Science 298 (5598) (2002) 1596–1600. http://dx.doi.org/10.1126/science.1076782 * [62] W. Lu et al., Phys. Rev. ST Accel. Beams 10(6) (2007) 061301. http://dx.doi.org/10.1103/PhysRevSTAB.10.061301 * [63] W.B. Mori, IEEE J. Quant. Electron. 33(11) (1997) 1942–1953. http://dx.doi.org/10.1109/3.641309 * [64] S.J. Yoon et al., Phys. Rev. ST Accel. Beams 15(8) (2012) 081305. http://dx.doi.org/10.1103/PhysRevSTAB.15.081305 * [65] D. Kaganovich et al., Phys. Plasmas 12(10) (2005) 100702. http://dx.doi.org/10.1063/1.2102727 * [66] P. Sprangle et al., Phys. Rev. E 63(5) (2001) 056405. http://dx.doi.org/10.1103/PhysRevE.63.056405 * [67] A. Giesen and J. Speiser, IEEE J. Sel. Top. Quant. Electron. 13(3) (2007) 598. http://dx.doi.org/10.1109/JSTQE.2007.897180 * [68] C. Jauregui, J. Limpert and A. Tünnermann, Nat. Photonics 7 (2013) 861–867. http://dx.doi.org/10.1038/nphoton.2013.273 * [69] G. Mourou et al., Nat. Photonics 7 (2013) 258–261. http://dx.doi.org/10.1038/nphoton.2013.75 * [70] C. Benedetti et al., Phys. Plasmas 21(5) (2014) 056706. 
http://dx.doi.org/10.1063/1.4878620 * [71] S.M. Hooker et al., J. Phys. B: At. Mol. Opt. Phys. 47(23) (2014) 234003. http://dx.doi.org/10.1088/0953-4075/47/23/234003
# On the Klein-Gordon Gürses-oscillators and pseudo-Gürses-oscillators: vorticity-energy correlations and spacetime associated degeneracies Omar Mustafa<EMAIL_ADDRESS>Department of Physics, Eastern Mediterranean University, G. Magusa, north Cyprus, Mersin 10 - Turkey. ###### Abstract We discuss KG-oscillators in the (1+2)-dimensional Gürses spacetime and under position-dependent mass (PDM) settings. We observe that the KG-Gürses oscillators are introduced as a byproduct of the very nature of the Gürses spacetime structure. We report that the energy levels of such KG-Gürses oscillators admit vorticity-energy correlations as well as spacetime associated degeneracies (STADs). We discuss the KG-Gürses oscillator results reported by Ahmed Ahmed1 2019 , pinpoint his improper treatment of this model, and show that his results should be redirected to those reported in this study. Moreover, we introduce a new set of KG pseudo-Gürses oscillators that admits isospectrality and invariance with the KG-Gürses oscillators and inherits the same vorticity-energy correlations as well as STADs. PACS numbers: 05.45.-a, 03.50.Kk, 03.65.-w Keywords: Klein-Gordon oscillators, Gürses spacetime, position-dependent mass, vorticity-energy correlations, spacetime associated degeneracies. ## I Introduction Klein-Gordon (KG) and Dirac oscillators Moshinsky 1989 ; Bruce 1993 ; Dvoeg 1994 ; Mirza 2004 have received much attention over the years. KG-oscillators have been studied in Gödel-type spacetimes (e.g., Moshinsky 1989 ; Bruce 1993 ; Dvoeg 1994 ; Das 2008 ; Carvalho 2016 ; Garcia 2017 ; Vitoria 2016 ), in cosmic string spacetime and Kaluza-Klein theory backgrounds (e.g., Ahmed1 2021 ; Boumal 2014 ), in Minkowski spacetime with a space-like dislocation Mustafa1 2022 , in Som-Raychaudhuri spacetime Wang 2015 , and in (1+2)-dimensional Gürses spacetime backgrounds (e.g., Gurses 1994 ; Ahmed1 2019 ; Mustafa2 2022 ). 
The KG-oscillators in a (1+2)-dimensional Gürses spacetime described by the metric $ds^{2}=-dt^{2}+dr^{2}-2\Omega r^{2}dtd\theta+r^{2}\left(1-\Omega^{2}r^{2}\right)d\theta^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu};\text{ }\mu,\nu=0,1,2,$ (1) were investigated by Ahmed Ahmed1 2019 , using $a_{{}_{0}}=b_{{}_{0}}=e_{{}_{0}}=1$, $b_{{}_{1}}=c_{{}_{0}}=\lambda_{{}_{0}}=0$, and vorticity $\Omega=-\mu/3$, in the Gürses metric $ds^{2}=-\phi dt^{2}+2qdtd\theta+\frac{h^{2}\psi-q^{2}}{a_{{}_{0}}}d\theta^{2}+\frac{1}{\psi}dr^{2}$ (2) (i.e., as in Eq.(5) of Gurses 1994 ), where $\phi=a_{{}_{0}},\,\psi=b_{{}_{0}}+\frac{b_{{}_{1}}}{r^{2}}+\frac{3\lambda_{{}_{0}}}{4}r^{2},\,q=c_{{}_{0}}+\frac{e_{{}_{0}}\mu}{3}r^{2},\,h=e_{{}_{0}}r,\,\lambda_{{}_{0}}=\lambda+\frac{\mu^{2}}{27}.$ (3) In this note, we shall show that there are more quantum mechanical features embedded in the spectroscopic structure of the KG-oscillators in the background of such a Gürses spacetime metric (1) than those reported by Ahmed Ahmed1 2019 , should this model be properly addressed. Throughout this note, such KG-oscillators shall be called KG-Gürses oscillators. We organize the current note in the following manner. In section 2, we revisit KG-oscillators in the (1+2)-dimensional Gürses spacetime of (1) and present them in a more general form that includes position-dependent mass (PDM, which is a metaphoric notion) settings along with Mirza-Mohadesi’s KG-oscillator Mirza 2004 recipe. We observe that the KG-Gürses oscillators are introduced as a byproduct of the very nature of the Gürses spacetime structure. This motivates us to first elaborate and discuss, in section 3, the effects of Gürses spacetime on the energy levels of the KG-Gürses oscillators, without the KG-oscillator prescription of Mirza-Mohadesi Mirza 2004 . Therein, we report that such KG-Gürses oscillators admit vorticity-energy correlations as well as spacetime associated degeneracies (STADs). 
In section 4, we discuss Ahmed’s model Ahmed1 2019 that includes the Mirza-Mohadesi Mirza 2004 recipe and pinpoint Ahmed’s Ahmed1 2019 improper treatment of the model at hand. We consider the PDM KG-Gürses oscillators in section 5. We discuss and report KG pseudo-Gürses oscillators in section 6, where we observe that they admit isospectrality and invariance with the KG-Gürses oscillators and inherit the same vorticity-energy correlations as well as STADs. Our concluding remarks are given in section 7. ## II KG-Gürses oscillators and PDM settings The covariant and contravariant metric tensors corresponding to the (1+2)-dimensional Gürses spacetime of (1), respectively, read $g_{\mu\nu}=\left(\begin{tabular}[]{ccc}$-1\vskip 3.0pt plus 1.0pt minus 1.0pt$&$0$&$-\Omega r^{2}$\\\ $0$&$1\vskip 3.0pt plus 1.0pt minus 1.0pt$&$0$\\\ $-\Omega r^{2}$&$\,0$&$\,r^{2}\left(1-\Omega^{2}r^{2}\right)$\end{tabular}\right)\Longleftrightarrow g^{\mu\nu}=\left(\begin{tabular}[]{ccc}$\left(\Omega^{2}r^{2}-1\right)$&$0\vskip 3.0pt plus 1.0pt minus 1.0pt$&$-\Omega$\\\ $0$&$1\vskip 3.0pt plus 1.0pt minus 1.0pt$&$0$\\\ $-\Omega$&$\,0$&$\,1/r^{2}$\end{tabular}\right)\text{ };\text{ \ }\det\left(g_{\mu\nu}\right)=-r^{2}.$ (4) Then the corresponding KG-equation is given by $\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Psi\right)=m^{2}\Psi.$ (5) However, we shall now use the momentum operator $p_{\mu}\longrightarrow p_{\mu}+i\mathcal{F}_{\mu},$ (6) so that it incorporates the KG-oscillator prescription of Mirza-Mohadesi Mirza 2004 as well as the position-dependent mass (PDM) settings proposed by Mustafa Mustafa1 2022 , where $\mathcal{F}_{\mu}=\left(0,\mathcal{F}_{r},0\right)$, with $\mathcal{F}_{r}=\eta r;$ $\eta=m\omega,$ as in Ahmed1 2019 , or $\mathcal{F}_{r}=\eta r+g^{\prime}\left(r\right)/4g\left(r\right)$ to also include PDM settings as in Mustafa Mustafa1 2022 . 
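As a quick consistency check on Eq. (4), the quoted contravariant tensor can be verified symbolically to be the inverse of $g_{\mu\nu}$, with $\det\left(g_{\mu\nu}\right)=-r^{2}$. A minimal sympy sketch (the variable names are ours):

```python
import sympy as sp

r, W = sp.symbols('r Omega', positive=True)

# Covariant Gurses metric of Eq. (4), coordinate order (t, r, theta)
g_lo = sp.Matrix([[-1, 0, -W*r**2],
                  [0,  1,  0],
                  [-W*r**2, 0, r**2*(1 - W**2*r**2)]])

# Contravariant metric quoted in Eq. (4)
g_up = sp.Matrix([[W**2*r**2 - 1, 0, -W],
                  [0, 1, 0],
                  [-W, 0, 1/r**2]])

det_g = sp.simplify(g_lo.det())                   # expected: -r**2
inverse_ok = sp.simplify(g_lo*g_up) == sp.eye(3)  # expected: True
```

With these identities in hand, Eq. (5) expands directly into the wave operator used below.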
With this choice, Ahmed’s model is retrieved when the positive-valued scalar multiplier $g\left(r\right)=1$. Nevertheless, the reader should be aware that the regular momentum operator $p_{\mu}$ is replaced by the PDM-momentum operator $p_{\mu}+i\mathcal{F}_{\mu}$ to describe PDM KG-particles in general (for more details on this issue the reader is advised to refer to Mustafa1 2022 ). Under such assumptions, the KG-equation (5) would transform into $\frac{1}{\sqrt{-g}}\left(\partial_{\mu}+\mathcal{F}_{\mu}\right)\left[\sqrt{-g}g^{\mu\nu}\left(\partial_{\nu}-\mathcal{F}_{\nu}\right)\Psi\right]=m^{2}\Psi,$ (7) which consequently yields $\left\\{-\partial_{t}^{2}+\left(\Omega\,r\,\partial_{t}-\frac{1}{r}\partial_{\theta}\right)^{2}+\partial_{r}^{2}+\frac{1}{r}\partial_{r}-M\left(r\right)-m^{2}\right\\}\Psi=0,$ (8) where $M\left(r\right)=\frac{\mathcal{F}_{r}}{r}+\mathcal{F}_{r}^{\prime}+\mathcal{F}_{r}^{2}.$ (9) We now substitute $\Psi\left(t,r,\theta\right)=\exp\left(i\left[\ell\theta-Et\right]\right)\psi\left(r\right)=\exp\left(i\left[\ell\theta-Et\right]\right)\frac{R\left(r\right)}{\sqrt{r}}$ (10) to imply $R^{\prime\prime}\left(r\right)+\left[\lambda-\frac{\left(\ell^{2}-1/4\right)}{r^{2}}-\tilde{\omega}^{2}r^{2}-\tilde{M}\left(r\right)\right]R\left(r\right)=0,$ (11) where $\ell=0,\pm 1,\pm 2,\cdots$ is the magnetic quantum number, $\tilde{M}\left(r\right)=-\frac{3}{16}\left(\frac{g^{\prime}\left(r\right)}{g\left(r\right)}\right)^{2}+\frac{1}{4}\frac{g^{{}^{\prime\prime}}\left(r\right)}{g\left(r\right)}+\frac{1}{4}\frac{g^{\prime}\left(r\right)}{rg\left(r\right)}+\frac{1}{2}\frac{g^{\prime}\left(r\right)}{g\left(r\right)}\eta r,$ (12) and $\lambda=E^{2}-2\,\Omega\,\ell\,E-2\eta-m^{2}\text{ ; \ }\tilde{\omega}^{2}=\Omega^{2}E^{2}+\eta^{2}.$ (13) It is obvious that we retrieve Ahmed’s model Ahmed1 2019 when $g\left(r\right)=1$. 
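The route from (8) to (11) hinges on the algebraic identity $M\left(r\right)=2\eta+\eta^{2}r^{2}+\tilde{M}\left(r\right)$, with the constant $2\eta$ absorbed into $\lambda$ and the harmonic term $\eta^{2}r^{2}$ into $\tilde{\omega}^{2}$ of (13). This identity can be verified symbolically; a sketch (variable names are ours):

```python
import sympy as sp

r, eta = sp.symbols('r eta', positive=True)
g = sp.Function('g', positive=True)(r)

# PDM interaction F_r = eta*r + g'(r)/(4 g(r)), as introduced below Eq. (6)
F = eta*r + sp.diff(g, r)/(4*g)

# M(r) of Eq. (9)
M = F/r + sp.diff(F, r) + F**2

# tilde-M(r) of Eq. (12)
Mt = (-sp.Rational(3, 16)*(sp.diff(g, r)/g)**2
      + sp.diff(g, r, 2)/(4*g)
      + sp.diff(g, r)/(4*r*g)
      + sp.diff(g, r)*eta*r/(2*g))

# Claimed split: M(r) = 2*eta + eta^2 r^2 + Mt(r)
residual = sp.simplify(sp.expand(M - (2*eta + eta**2*r**2 + Mt)))  # expected: 0
```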
Moreover, we observe that the KG-Gürses oscillators are introduced as a byproduct of the very nature of the Gürses spacetime structure. This motivates us to first elaborate and discuss the effects of Gürses spacetime on the energy levels of the KG-Gürses oscillators, without the KG-oscillator prescription of Mirza-Mohadesi Mirza 2004 (i.e., with $\eta=0$). ## III KG-Gürses oscillators: vorticity-energy correlations and spacetime associated degeneracies It is obvious that KG-Gürses oscillators are introduced by the very structure of Gürses spacetime. That is, for $\eta=0$ and $g\left(r\right)=1$ our KG-equation (11) collapses into the two-dimensional Schrödinger oscillator $R^{\prime\prime}\left(r\right)+\left[\lambda-\frac{\left(\ell^{2}-1/4\right)}{r^{2}}-\Omega^{2}E^{2}r^{2}\right]R\left(r\right)=0,$ (14) which admits exact textbook solvability so that the eigenvalues and radial eigenfunctions, respectively, read $\lambda=2\left|\Omega E\right|\left(2n_{r}+\left|\ell\right|+1\right)$ (15) and $R\left(r\right)\sim r^{\left|\ell\right|+1/2}\exp\left(-\frac{\left|\Omega E\right|r^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\Omega E\right|r^{2}\right)\Longleftrightarrow\psi\left(r\right)\sim r^{\left|\ell\right|}\exp\left(-\frac{\left|\Omega E\right|r^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\Omega E\right|r^{2}\right),$ (16) where $L_{n_{r}}^{\left|\ell\right|}\left(\left|\Omega E\right|r^{2}\right)$ are the generalized Laguerre polynomials. Now with the help of (13) and (15) we obtain $E^{2}-2\,\Omega\,\ell\,E-m^{2}=2\left|\Omega E\right|\left(2n_{r}+\left|\ell\right|+1\right).$ (17) Figure 1: The energy levels for KG-Gürses oscillators of (20) and (21) are plotted with $m=1$ (a) for $n_{r}=0,\,\ell=0,\pm 1,\pm 2$, and (b) for $n_{r}=3$, $\ell=0,\pm 1,\pm 2,\pm 3$. 
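The textbook spectrum (15) can be cross-checked by diagonalizing a finite-difference version of the radial equation (14) at a fixed $w=\left|\Omega E\right|$; a numerical sketch (the grid parameters are ours):

```python
import numpy as np

def radial_spectrum(w, ell, n_levels=3, rmax=10.0, npts=1500):
    """Lowest eigenvalues lambda of -R'' + [(ell^2 - 1/4)/r^2 + w^2 r^2] R = lambda R,
    i.e. Eq. (14) at fixed w = |Omega E|, via central finite differences with
    Dirichlet boundaries.  Expected (Eq. (15)): lambda = 2*w*(2*n_r + |ell| + 1)."""
    r = np.linspace(0.0, rmax, npts + 2)[1:-1]   # interior grid points
    h = r[1] - r[0]
    V = (ell**2 - 0.25)/r**2 + w**2*r**2
    H = (np.diag(2.0/h**2 + V)
         + np.diag(-np.ones(npts - 1)/h**2, 1)
         + np.diag(-np.ones(npts - 1)/h**2, -1))
    return np.linalg.eigvalsh(H)[:n_levels]
```

For $w=1$, $\ell=1$ the returned values lie close to $4,8,12$, i.e., $2w\left(2n_{r}+2\right)$, confirming (15) numerically.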
Equation (17) should be dealt with diligently and rigorously, as mandated by the very nature of $\left|\Omega E\right|=\Omega_{\pm}E_{\pm}\geq 0$ or $\left|\Omega E\right|=-\Omega_{\mp}E_{\pm}\geq 0$ (which secures the finiteness and square integrability of the radial wavefunction (16)), where $\Omega_{\pm}=\pm\left|\Omega\right|$ and $E_{\pm}=\pm\left|E\right|$. That is, for $\left|\Omega E\right|=\Omega_{\pm}E_{\pm}$ in (17) we obtain $E_{\pm}^{2}-2\,\Omega_{\pm}E_{\pm}\,\tilde{n}_{+}-m^{2}=0;\;\tilde{n}_{+}=2n_{r}+\left|\ell\right|+\ell\,+1,$ (18) and for $\left|\Omega E\right|=-\Omega_{\mp}E_{\pm}$ we get $E_{\pm}^{2}+2\,\Omega_{\mp}E_{\pm}\,\tilde{n}_{-}\,-m^{2}=0;\;\tilde{n}_{-}=2n_{r}+\left|\ell\right|-\ell\,+1,$ (19) which allows us to cast $E_{\pm}=\Omega_{\pm}\,\tilde{n}_{+}\pm\sqrt{\Omega^{2}\tilde{n}_{+}^{2}+m^{2}}\Rightarrow\left\\{\begin{tabular}[]{l}$E_{+}=\Omega_{\pm}\,\tilde{n}_{+}+\sqrt{\Omega^{2}\tilde{n}_{+}^{2}+m^{2}}$\\\ $E_{-}=\Omega_{\pm}\,\tilde{n}_{+}-\sqrt{\Omega^{2}\tilde{n}_{+}^{2}+m^{2}}$\end{tabular}\right.,$ (20) for $\left|\Omega E\right|=\Omega_{\pm}E_{\pm}$ and $E_{\pm}=-\Omega_{\mp\,}\tilde{n}_{-}\,\pm\sqrt{\Omega^{2}\tilde{n}_{-}^{2}+m^{2}}\Rightarrow\left\\{\begin{tabular}[]{l}$E_{+}=-\Omega_{-\,}\tilde{n}_{-}\,+\sqrt{\Omega^{2}\tilde{n}_{-}^{2}+m^{2}}$\\\ $E_{-}=-\Omega_{+\,}\tilde{n}_{-}\,-\sqrt{\Omega^{2}\tilde{n}_{-}^{2}+m^{2}}$\end{tabular}\right..$ (21) Consequently, one may rearrange such energy levels and cast them so that $E_{\pm}^{\left(\Omega_{+}\right)}=\pm\left|\Omega\right|\,\tilde{n}_{\pm}\pm\sqrt{\Omega^{2}\tilde{n}_{\pm}^{2}+m^{2}},$ (22) for positive vorticity, and $E_{\pm}^{\left(\Omega_{-}\right)}=\pm\left|\Omega\right|\,\tilde{n}_{\mp}\pm\sqrt{\Omega^{2}\tilde{n}_{\mp}^{2}+m^{2}}$ (23) for negative vorticity. 
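The closed forms above can be checked directly against Eq. (17); a numerical sketch for positive vorticity (helper names and parameter values are ours):

```python
import math

def residual_eq17(E, Omega, m, n_r, ell):
    """Left-hand side minus right-hand side of Eq. (17)."""
    return E**2 - 2*Omega*ell*E - m**2 - 2*abs(Omega*E)*(2*n_r + abs(ell) + 1)

def levels_positive_vorticity(Omega, m, n_r, ell):
    """E_+ and E_- branches of Eq. (22) for Omega = +|Omega| > 0."""
    n_p = 2*n_r + abs(ell) + ell + 1   # n-tilde-plus of Eq. (18)
    n_m = 2*n_r + abs(ell) - ell + 1   # n-tilde-minus of Eq. (19)
    E_plus = Omega*n_p + math.sqrt(Omega**2*n_p**2 + m**2)
    E_minus = -Omega*n_m - math.sqrt(Omega**2*n_m**2 + m**2)
    return E_plus, E_minus
```

Both branches make `residual_eq17` vanish to machine precision for any $n_{r}$ and $\ell$, confirming that the sign bookkeeping in (18)-(23) is consistent.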
Notably, we observe that $\tilde{n}_{\pm}\left(\ell=\pm\ell\right)=\tilde{n}_{\mp}\left(\ell=\mp\ell\right)$, which would in effect introduce the so-called vorticity-energy correlations so that $E_{\pm}^{\left(\Omega_{+}\right)}\left(\ell=\pm\ell\right)=E_{\pm}^{\left(\Omega_{-}\right)}\left(\ell=\mp\ell\right)$. We have, therefore, four branches of energy levels so that the upper half (above the $E=0$ line) is represented by $E_{+}$ and the lower half (below the $E=0$ line) is represented by $E_{-}$ in the correlations mentioned above. Yet for massless KG-Gürses oscillators we obtain $E_{\pm}^{\left(\Omega_{+}\right)}=\pm 2\left|\Omega\right|\,\tilde{n}_{\pm}$ and $E_{\pm}^{\left(\Omega_{-}\right)}=\pm 2\left|\Omega\right|\,\tilde{n}_{\mp}$. Moreover, in Figures 1(a) and 1(b) we observe yet a new type of degeneracy in each branch of the energy levels (i.e., in each quarter of the figures). That is, states with the quantum number $\tilde{n}_{+}=2n_{r}+\left|\ell\right|+\ell\,+1$ collapse onto the $\ell=0$ state for $\forall\ell=-\left|\ell\right|$ and states with $\tilde{n}_{-}=2n_{r}+\left|\ell\right|-\ell\,+1$ collapse onto the $\ell=0$ state for $\forall\ell=+\left|\ell\right|$. This type of degeneracy is introduced by the structure of spacetime (Gürses spacetime is used here) and therefore should be called, hereinafter, spacetime associated degeneracies (STADs). ## IV KG-Gürses plus Mirza-Mohadesi’s oscillators We now consider KG-Gürses plus Mirza-Mohadesi’s oscillators with $\eta\neq 0$ and $g\left(r\right)=1$. 
In this case, our KG-equation (11) collapses again into the two-dimensional Schrödinger oscillator $R^{\prime\prime}\left(r\right)+\left[\lambda-\frac{\left(\ell^{2}-1/4\right)}{r^{2}}-\tilde{\omega}^{2}r^{2}\right]R\left(r\right)=0,$ (24) which admits exact textbook solvability so that the eigenvalues and radial eigenfunctions, respectively, read $\lambda=2\left|\tilde{\omega}\right|\left(2n_{r}+\left|\ell\right|+1\right)=2\left|\Omega E\right|\sqrt{1+\frac{\eta^{2}}{\Omega^{2}E^{2}}}\left(2n_{r}+\left|\ell\right|+1\right)$ (25) and $R\left(r\right)\sim r^{\left|\ell\right|+1/2}\exp\left(-\frac{\left|\tilde{\omega}\right|r^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\tilde{\omega}\right|r^{2}\right)\Longleftrightarrow\psi\left(r\right)\sim r^{\left|\ell\right|}\exp\left(-\frac{\left|\tilde{\omega}\right|r^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\tilde{\omega}\right|r^{2}\right).$ (26) Then, equation (13) along with (25) implies $E^{2}-2\Omega E\ell-2\left|\Omega E\right|\sqrt{1+\frac{\eta^{2}}{\Omega^{2}E^{2}}}\left(2n_{r}+\left|\ell\right|+1\right)-\left(m^{2}+2\eta\right)=0.$ (27) It is obvious that for $\eta=0$ in (27) one would exactly obtain the results for the KG-Gürses oscillators discussed above. In Figure 2(a), we notice that the vorticity-energy correlations as well as STADs are now only partially valid because of the energy shifts introduced by Mirza-Mohadesi’s Mirza 2004 parameter $\eta$. In Figures 2(b) and 2(c) we can clearly observe such shifts in each quarter of the figures. That is, quarters 1 and 2 are for $\Omega=\Omega_{+}=+\left|\Omega\right|$ (i.e., for $E_{\pm}^{\left(\Omega_{+}\right)}$), and 3 and 4 are for $\Omega=\Omega_{-}=-\left|\Omega\right|$ (i.e., for $E_{\pm}^{\left(\Omega_{-}\right)}$). 
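Since Eq. (27) is implicit in $E$ (its right-hand side itself contains $\left|\Omega E\right|$), the levels have to be obtained numerically, e.g. by bracketing and bisection; a sketch (helper names and parameter values are ours):

```python
import math

def lhs_eq27(E, Omega, eta, m, n_r, ell):
    """Left-hand side of Eq. (27); note that
    2|Omega E| sqrt(1 + eta^2/(Omega^2 E^2)) = 2 sqrt(Omega^2 E^2 + eta^2)."""
    return (E**2 - 2*Omega*ell*E
            - 2*math.sqrt(Omega**2*E**2 + eta**2)*(2*n_r + abs(ell) + 1)
            - (m**2 + 2*eta))

def positive_root(Omega, eta, m, n_r, ell, hi=1e3, tol=1e-10):
    """Bisect on (0, hi): lhs_eq27(0) < 0 while lhs_eq27 -> +infinity for large E."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if lhs_eq27(mid, Omega, eta, m, n_r, ell) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

For $\eta=0$ this reproduces the KG-Gürses branch $E_{+}=\left|\Omega\right|\tilde{n}_{+}+\sqrt{\Omega^{2}\tilde{n}_{+}^{2}+m^{2}}$, and a nonzero $\eta$ pushes the positive branch upward, consistent with the shifts seen in Figures 2(b) and 2(c).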
At this point, it should be pointed out that equation (27) was improperly treated by Ahmed Ahmed1 2019 , as he expressed the energies in terms of $\tilde{\omega}$ where $\tilde{\omega}=\sqrt{\Omega^{2}E^{2}+\eta^{2}}$ (see (16) vs (21) with (22) and (16) vs (35) with (36) of Ahmed1 2019 ). That is, the energies are expressed in terms of the energies themselves, and his results, from his equation (21) to the end of his paper, are misleading and incorrect. His results should therefore be redirected to those reported in the current note. Figure 2: The energy levels for KG-Gürses oscillators of (27) are plotted with $m=1$ (a) for $\eta=5$, $n_{r}=1$, $\ell=0,\pm 1,\pm 2$, (b) for $n_{r}=2$, $\ell=1$, $\eta=0,1,3,6,9$ and (c) for $n_{r}=2$, $\ell=-2$, $\eta=0,1,3,6,9$. ## V PDM KG-Gürses oscillators In this section we consider PDM settings for KG-Gürses oscillators, where $g\left(r\right)=\exp\left(2\beta r^{2}\right);\;\beta\geq 0$. Under such settings, KG-equation (11) reads $R^{\prime\prime}\left(r\right)+\left[\lambda-\frac{\left(\ell^{2}-1/4\right)}{r^{2}}-\tilde{\Omega}^{2}r^{2}\right]R\left(r\right)=0,$ (28) with $\lambda=E^{2}-2\,\Omega\,\ell\,E-2\beta-m^{2}\text{ ; \ }\tilde{\Omega}^{2}=\Omega^{2}E^{2}+\beta^{2}.$ (29) In this case, the eigenvalues and radial wavefunctions, respectively, read $\lambda=2\left|\tilde{\Omega}\right|\left(2n_{r}+\left|\ell\right|+1\right)=2\left|\Omega E\right|\sqrt{1+\frac{\beta^{2}}{\Omega^{2}E^{2}}}\left(2n_{r}+\left|\ell\right|+1\right),$ (30) and $R\left(r\right)\sim r^{\left|\ell\right|+1/2}\exp\left(-\frac{\left|\tilde{\Omega}\right|r^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\tilde{\Omega}\right|r^{2}\right)\Longleftrightarrow\psi\left(r\right)\sim r^{\left|\ell\right|}\exp\left(-\frac{\left|\tilde{\Omega}\right|r^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\tilde{\Omega}\right|r^{2}\right).$ (31) Consequently, the energies are given by $E^{2}-2\,\Omega\,\ell\,E-2\beta-m^{2}=2\left|\Omega E\right|\sqrt{1+\frac{\beta^{2}}{\Omega^{2}E^{2}}}\left(2n_{r}+\left|\ell\right|+1\right).$ (32) Obviously, the effect of $\beta$ on the energy levels is the same as that of Mirza-Mohadesi’s Mirza 2004 parameter $\eta$. This would suggest that Mirza-Mohadesi’s oscillators Mirza 2004 may very well be considered a special case of PDM KG-oscillators. ## VI KG pseudo-Gürses oscillators: vorticity-energy correlations and spacetime associated degeneracies We now consider a spacetime described by the metric $ds^{2}=-dt^{2}+g\left(r\right)\,dr^{2}-2\Omega Q\left(r\right)r^{2}dtd\theta+Q\left(r\right)r^{2}\left(1-\Omega^{2}Q\left(r\right)r^{2}\right)d\theta^{2}.$ (33) Next, let us introduce a transformation of the radial part so that $\rho=\sqrt{Q\left(r\right)}r=\int\sqrt{g\left(r\right)}dr\Rightarrow\sqrt{g\left(r\right)}=\sqrt{Q\left(r\right)}\left[1+\frac{Q^{\prime}\left(r\right)}{2Q\left(r\right)}r\right],$ (34) where $\mathbb{R}\ni\left(\rho,r\right)\in\left[0,\infty\right)$, and hence $Q\left(r\right)\in\mathbb{R}$ is a positive-valued dimensionless scalar multiplier (so is $g\left(r\right)$). 
In this case, our spacetime metric (33) now reads $ds^{2}=-dt^{2}+\,d\rho^{2}-2\Omega\rho^{2}dtd\theta+\rho^{2}\left(1-\Omega^{2}\rho^{2}\right)d\theta^{2}.$ (35) This metric looks very much like that of Gürses (1), and consequently the KG-equation (14) that describes KG-Gürses oscillators is indeed invariant and isospectral with the corresponding KG pseudo-Gürses oscillator equation $R^{\prime\prime}\left(\rho\right)+\left[\lambda-\frac{\left(\ell^{2}-1/4\right)}{\rho^{2}}-\Omega^{2}E^{2}\rho^{2}\right]R\left(\rho\right)=0.$ (36) Hence, our KG pseudo-Gürses oscillators share the same energies as the KG-Gürses oscillators of (22) and (23) (discussed in section 3), so that $E_{\pm}^{\left(\Omega_{+}\right)}=\pm\left|\Omega\right|\,\tilde{n}_{\pm}\pm\sqrt{\Omega^{2}\tilde{n}_{\pm}^{2}+m^{2}},$ (37) for positive vorticity, and $E_{\pm}^{\left(\Omega_{-}\right)}=\pm\left|\Omega\right|\,\tilde{n}_{\mp}\pm\sqrt{\Omega^{2}\tilde{n}_{\mp}^{2}+m^{2}}$ (38) for negative vorticity. However, the radial wavefunctions are now given by $R\left(\rho\right)\sim\rho^{\left|\ell\right|+1/2}\exp\left(-\frac{\left|\Omega E\right|\rho^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\Omega E\right|\rho^{2}\right)\Longleftrightarrow\psi\left(\rho\right)\sim\rho^{\left|\ell\right|}\exp\left(-\frac{\left|\Omega E\right|\rho^{2}}{2}\right)L_{n_{r}}^{\left|\ell\right|}\left(\left|\Omega E\right|\rho^{2}\right).$ (39) The following notes on our spacetime metric (33) are in order. (a) The spacetime metric (35) closely resembles the Gürses spacetime of (1) and should therefore be called, hereinafter, pseudo-Gürses spacetime. 
(b) If we set $\Omega=-\mu/3$, $a_{{}_{0}}=1$ in $\phi=a_{{}_{0}},\,\psi=b_{{}_{0}}+\frac{b_{{}_{1}}}{\rho^{2}}+\frac{3\lambda_{{}_{0}}}{4}\rho^{2},\,q=c_{{}_{0}}+\frac{e_{{}_{0}}\mu}{3}\rho^{2},\,h=e_{{}_{0}}\rho,\,\lambda_{{}_{0}}=\lambda+\frac{\mu^{2}}{27},$ (40) of (3) and use $Q\left(r\right)=e_{{}_{0}}+\frac{3c_{{}_{0}}}{\mu r^{2}}\Longleftrightarrow g\left(r\right)=\frac{\mu e_{{}_{0}}^{2}r^{2}}{\mu e_{{}_{0}}r^{2}+3c_{{}_{0}}},$ (41) (where the parametric values are adjusted so that $\left(Q\left(r\right),g\left(r\right)\right)\in\mathbb{R}$ are positive-valued functions, i.e., $c_{{}_{0}}<0$ ), we obtain $q=c_{{}_{0}}+\frac{e_{{}_{0}}\mu}{3}r^{2},\;\psi=\frac{1}{e_{{}_{0}}}+\frac{3c_{{}_{0}}}{\mu e_{{}_{0}}^{2}r^{2}}.$ (42) This is yet another feasible structure for the Gürses spacetime of (2) and (3), with $b_{{}_{0}}=\frac{1}{e_{{}_{0}}},\,b_{{}_{1}}=\frac{3c_{{}_{0}}}{\mu e_{{}_{0}}^{2}},\,\lambda_{{}_{0}}=0,\text{ }h=e_{{}_{0}}r.$ (43) (c) As long as condition (34) is satisfied, all KG pseudo-Gürses oscillators (including the one in (b) above) in the spacetime model of (35) admit isospectrality and invariance with the KG-Gürses oscillators (14) and inherit the same vorticity-energy correlations, so that $E_{\pm}^{\left(\Omega_{+}\right)}\left(\ell=\pm\ell\right)=E_{\pm}^{\left(\Omega_{-}\right)}\left(\ell=\mp\ell\right)$, as well as the spacetime associated degeneracies discussed in section 3. ## VII Concluding remarks In the current proposal, we revisited KG-oscillators in the (1+2)-dimensional Gürses spacetime of (1) so that PDM settings and Mirza-Mohadesi’s KG-oscillators Mirza 2004 are included. We have observed that KG-Gürses oscillators are introduced as a byproduct of the very nature of the Gürses spacetime structure. This has, in turn, motivated us to first elaborate and discuss the effects of Gürses spacetime on the energy levels of the KG-Gürses oscillators. 
We have found that such KG-Gürses oscillators admit vorticity-energy correlations as well as spacetime associated degeneracies (STADs) (documented in Figures 1(a) and 1(b)). However, for KG-Gürses plus Mirza-Mohadesi’s oscillators we have observed that the vorticity-energy correlations as well as STADs are only partially valid because of the energy shifts introduced by Mirza-Mohadesi’s Mirza 2004 parameter $\eta$ (documented in Figures 2(a), 2(b), and 2(c)). Nevertheless, this model was studied by Ahmed Ahmed1 2019 , whose treatment we have shown to be improper and whose results are incorrect. Consequently, his reported results (starting from his equation (21) to the end of his paper) should be redirected to the ones reported in the current study. Moreover, we have shown that the PDM settings may very well have the same effect on the spectrum as that reported for KG-Gürses plus Mirza-Mohadesi’s oscillators. Yet, a new set of the so-called KG pseudo-Gürses oscillators is introduced and is shown to be invariant and isospectral with the KG-Gürses oscillators. Therefore, such KG pseudo-Gürses oscillators would inherit the vorticity-energy correlations as well as the STADs of the KG-Gürses oscillators. Data Availability Statement Authors can confirm that all relevant data are included in the article and/or its supplementary information files. The author confirms that there are no online supplementary files (for web only publication) in this article. ## References * (1) M. Moshinsky, J. Phys. A: Math. Gen. 22 (1989) L817. * (2) S. Bruce, P. Minning, Nuovo Cimento II A 106 (1993) 711. * (3) V. V. Dvoeglazov, Nuovo Cimento II A 107 (1994) 1413. * (4) B. Mirza, M. Mohadesi, Commun. Theor. Phys. 42 (2004) 664. * (5) S. Das, G. Gegenberg, Gen. Rel. Grav. 40 (2008) 2115. * (6) J. Carvalho, A. M. de M. Carvalho, E. Cavalcante, C. Furtado, Eur. Phys. J. C 76 (2016) 365. * (7) G. Q. Garcia, J. R. de S. Oliveira, K. Bakke, C. Furtado, Eur. Phys. J. Plus 132 (2017) 123. * (8) R. L. L. Vitoria, K. Bakke, Eur. 
Phys. J. Plus 131 (2016) 36. * (9) F. Ahmed, Gravitation and Cosmology 27 (2021) 292. * (10) A. Boumali, N. Messai, Can. J. Phys. 92 (2014) 1460. * (11) Z. Wang, Z. Long, C. Long, M. Wu, Eur. Phys. J. Plus 130 (2015) 36. * (12) O. Mustafa, Ann. Phys. 446 (2022) 169124. * (13) M. Gürses, Class. Quantum Grav. 11 (1994) 2585. * (14) F. Ahmed, Ann. Phys. 404 (2019) 1. * (15) O. Mustafa, Eur. Phys. J. C 82 (2022) 82.
# Multiple magnetic transitions, metamagnetism and large magnetoresistance in GdAuGe single crystals D. Ram Department of Physics, Indian Institute of Technology, Kanpur 208016, India J. Singh Department of Physics, Indian Institute of Technology Hyderabad, Kandi, Medak 502 285, Telangana, India M. K. Hooda Department of Physics, Indian Institute of Technology, Kanpur 208016, India K. Singh Institute of Low Temperature and Structure Research, Polish Academy of Sciences, ul. Okolna 2, 50-422 Wroclaw, Poland V. Kanchana <EMAIL_ADDRESS>Department of Physics, Indian Institute of Technology Hyderabad, Kandi, Medak 502 285, Telangana, India D. Kaczorowski <EMAIL_ADDRESS>Institute of Low Temperature and Structure Research, Polish Academy of Sciences, ul. Okolna 2, 50-422 Wroclaw, Poland Z. Hossain <EMAIL_ADDRESS>Department of Physics, Indian Institute of Technology, Kanpur 208016, India ###### Abstract We report the physical properties of GdAuGe single crystals, which were grown using Bi flux. The powder x-ray diffraction data show that the compound crystallizes in the hexagonal NdPtSb-type structure (space group P6$_3$mc). Magnetization measurements performed for the field configurations H $\parallel$ c and H $\perp$ c show that GdAuGe orders antiferromagnetically at the Néel temperature T$_N$ = 17.2 K. Around this temperature, heat capacity and electrical resistivity data exhibit a prominent anomaly due to the antiferromagnetic (AFM) transition. In addition to the AFM phase transition, the magnetization data for H $\parallel$ c display signatures of field-induced metamagnetic (MM) transitions below T$_N$. The critical fields for these transitions range from 0.2 to 6.2 T. The critical fields for the MM transitions decrease with increasing temperature and approach zero as the temperature approaches T$_N$. For instance, for the high-field MM transition, the critical field changes from 6.2 T at 1.7 K to 1.8 T at 16 K. 
Interestingly, the magnetoresistance (MR) data (for H $\parallel$ c) record a sharp increase at the critical fields that coincide with those seen in the magnetization data, tracking the presence of the MM transitions. MR is positive and large ($\approx$ 169% at 9 T and 2 K) at low temperatures. Above T$_N$, MR becomes small and switches to negative values. Hall resistivity data reveal the predominance of hole charge carriers in the system. In addition, we observe the emergence of a step-like feature in the Hall resistivity data within the field range of the second MM transition, and a significantly large anomalous Hall conductivity of $\sim$ 1270 $\Omega^{-1}$ cm$^{-1}$ at 2 K. The H$-$T phase diagram constructed from our detailed magnetization and magnetotransport measurements reveals multiple intricate magnetic phase transitions. The electronic and magnetic structure of GdAuGe are also thoroughly investigated using first-principles methods. The electronic band structure calculations reveal that GdAuGe is a Dirac nodal-line semimetal. ## I Introduction Ternary rare-earth intermetallic compounds continue to receive the attention of the scientific community because of their complex relationships between composition and structure, and their interesting magnetic, thermodynamic and transport properties [1, 2, 3, 4, 5, 6]. These compounds possess a wide range of magnetic characteristics, ranging from simple diamagnetic behavior to very complex magnetic phases, depending upon the degree of hybridization between 4f and conduction electrons [5]. The majority of these compounds display either antiferromagnetic (AFM) or ferromagnetic (FM) ordering due to the long-range nature of the dominant Ruderman-Kittel-Kasuya-Yosida interactions present in them [5, 6, 7, 8, 9]. The application of an external magnetic field to the AFM ground states of some rare-earth compounds is observed to disrupt their low magnetization state, leading to metamagnetic (MM) transitions [5, 10, 11]. 
These transitions occur in both strongly and weakly anisotropic magnetic structures and are very sensitive to crystalline electric field (CEF) effects, which constrain the magnetic moments along a specific axis [5, 7, 10, 11, 12]. For example, GdAgSi exhibits one MM transition at 4 K with a critical field of $\sim$ 0.29 T [11]. Furthermore, these intermetallic compounds become even more interesting when the interplay between magnetism and novel electronic states generates new exotic quantum states and intriguing physical properties such as quantum critical behavior [13, 14] and unconventional superconductivity [15, 16]. Among them, Eu- and Gd-based intermetallic compounds attract more attention due to the oxidation states Gd$^{3+}$ and Eu$^{2+}$ (the most stable oxidation state of Eu) having the electron configuration 4f$^7$ with a half-filled f shell, resulting in a quenched orbital momentum and very weak spin-orbit coupling (SOC). The f-electron systems having magnetic frustration can also give rise to a skyrmion phase or a noncollinear spin texture with nonzero scalar spin chirality [$\chi_{s}$ = S$_i$·(S$_j$ $\times$ S$_k$) $\neq$ 0, where S$_i$, S$_j$, and S$_k$ are the three nearest spins]. These can act as a fictitious magnetic field on the conduction electrons, giving rise to the topological Hall effect (THE) [17, 18, 19]. For example, Gd$_2$PdSi$_3$ is a centrosymmetric triangular lattice with AFM ordering; it exhibits an intrinsic THE arising from a skyrmion phase under magnetic field [20]. It is further interesting to note that the GdAgGe single crystals previously investigated by us do not show any MM character up to a field of 7 T, but GdAgGe is a topological nodal-line system containing drumhead surface states in the k$_z$ = 0 plane, protected by inversion symmetry [21]. Thus, it becomes important to examine whether it is possible to induce MM transitions and novel electronic states in rare-earth based germanide systems by substituting transition metals. 
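The scalar spin chirality invoked above is a simple triple product of neighboring spins; a minimal numerical illustration (the spin configurations below are ours, chosen only to show when $\chi_{s}$ vanishes):

```python
import numpy as np

def scalar_spin_chirality(Si, Sj, Sk):
    """chi_s = Si . (Sj x Sk) for three neighboring spins."""
    return float(np.dot(Si, np.cross(Sj, Sk)))

# Coplanar 120-degree order on a triangle: chi_s = 0, hence no THE contribution
s1 = np.array([1.0, 0.0, 0.0])
s2 = np.array([-0.5,  np.sqrt(3)/2, 0.0])
s3 = np.array([-0.5, -np.sqrt(3)/2, 0.0])

def cant(s, cz=0.5):
    """Tilt a spin out of plane (umbrella / skyrmion-like texture) and renormalize."""
    v = s + np.array([0.0, 0.0, cz])
    return v/np.linalg.norm(v)
```

Applying `cant` to all three spins makes the configuration noncoplanar and $\chi_{s}$ nonzero, which is the geometric ingredient behind the fictitious magnetic field and the THE.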
In this context, the equiatomic rare-earth gold-germanide (RAuGe) series is of interest. The compounds of this series crystallize in a non-centrosymmetric hexagonal structure with space group P63mc, in which two-dimensional infinite chains of [AuGe] polyanions are separated by rare-earth ions [3]. Detailed studies of the magnetic and physical properties of polycrystalline RAuGe compounds have already been reported in the literature [3, 4, 22, 23, 24, 25]. Within this series, HoAuGe and NdAuGe display MM transitions at 2 K with critical fields of 0.4 and 3 T, respectively [4, 22]. Here, we focus on another member of the RAuGe series, GdAuGe. Polycrystalline GdAuGe was reported to order antiferromagnetically at 16.9 K [25, 26]. In this report, we study the anisotropic magnetic and electronic transport properties of GdAuGe single crystals in magnetic fields up to 9 T, together with a detailed analysis of the electronic structure using first-principles calculations. Our study of GdAuGe single crystals reveals an AFM ground state with TN = 17.2 K for fields both perpendicular and parallel to the crystallographic c axis. A field applied parallel to the c axis induces MM transitions below the AFM ordering temperature, which correlate well with the magnetotransport properties of the compound. Furthermore, we present first-principles calculations of the magnetic ground state and the topological character of the compound. 

## II Experimental details 

Single crystals of GdAuGe were synthesized using Bi as an external flux. The starting elements Gd (99.9%, Alfa Aesar), Au (99.99%, Alfa Aesar), Ge (99.999%, Alfa Aesar), and Bi (99.999%, Alfa Aesar) were taken in a molar ratio of 1:1:1:10. The constituent elements were placed in an alumina crucible, which was then transferred to a silica quartz tube. The tube was sealed under a partial pressure of argon gas. 
In the next step, the sealed assembly was heated to 1050 ∘C and held there for 24 h to obtain a homogeneous solution. Subsequent slow cooling to 680 ∘C at a rate of 2.5 ∘C/h produced very shiny plate-like single crystals with a typical size of 5 $\times$ 3 $\times$ 0.4 mm$^{3}$ (shown in the bottom inset of Fig. 1(b)), which were separated from the Bi flux by centrifuging. Figure 1: (a) Rietveld-refined powder XRD pattern of crushed single crystals of GdAuGe at room temperature. The blue line represents the difference between the observed intensity (red solid circles) and the calculated intensity (solid black line). The olive vertical lines represent the positions of the Bragg peaks. (b) The single-crystal XRD pattern of GdAuGe, showing only (00$l$) reflections. The upper inset shows the rocking curve of the (004) peak. The lower inset shows an optical image of the crystals. The phase purity and orientation of the as-grown crystals were analyzed by x-ray diffraction (XRD) using a PANalytical X’Pert PRO diffractometer with Cu Kα1 radiation. The XRD patterns of powdered crystals and of a representative single crystal recorded at room temperature are shown in Figs. 1(a) and 1(b), respectively. They confirm the single-phase growth of the compound, which crystallizes in the hexagonal structure with space group P63mc (No. 186). The lattice parameters ($a$ = $b$ = 4.4281 Å and $c$ = 7.4262 Å) obtained from Rietveld refinement are in good agreement with previously reported data [25, 26, 27]. The presence of only (00$l$) peaks in the single-crystal diffraction pattern shows that the crystallographic $c$ axis is perpendicular to the flat plane of the crystal. The upper inset of Fig. 1(b) presents the rocking curve of the (004) peak with a full width at half maximum (FWHM) of $\Delta\theta$ = 0.024∘, indicating the high quality of the single crystals used. 
The chemical composition of the crystals was further confirmed by energy-dispersive x-ray spectroscopy using a JEOL JSM-6010LA scanning electron microscope. Electrical resistivity and magnetoresistance measurements were performed in a Quantum Design physical property measurement system (PPMS) using the standard four-probe method. Heat capacity measurements were performed by the conventional relaxation method on the same PPMS platform. The magnetic susceptibility and magnetization were measured down to 1.7 K using a Quantum Design magnetic property measurement system. Within density functional theory (DFT) [28, 29], first-principles calculations were carried out using the projector augmented wave approach [30] as implemented in the Vienna ab initio simulation package [31, 32]. The generalised gradient approximation (GGA) with the Perdew-Burke-Ernzerhof parametrization [33] was utilised to account for exchange-correlation effects. A Hubbard U parameter (GGA+U) of 6 eV was used to treat the correlation effects of the Gd-$f$ states [34, 35]. The calculations were performed with a plane-wave energy cutoff of 600 eV, and the energy convergence criterion was set to 10$^{-8}$ eV. The geometry optimization was performed on a 2 $\times$ 2 $\times$ 2 supercell using a 16 $\times$ 16 $\times$ 8 k-mesh generated by the Monkhorst-Pack method [36]. Figure 2: (a) Temperature-dependent magnetic susceptibility of GdAuGe measured under an applied magnetic field of $\mu_{0}$H = 0.1 T for H $\parallel$ c and H $\perp$ c in ZFC and FC modes. (b) The inverse magnetic susceptibility as a function of temperature for H $\parallel$ c and H $\perp$ c. The solid orange lines show the Curie-Weiss fit above 50 K. 
## III Results and discussion 

### III.1 Magnetic properties 

Figure 2(a) presents the temperature (T) dependence of the magnetic susceptibility ($\chi$) measured under zero-field-cooling (ZFC) and field-cooling (FC) conditions in a constant magnetic field of 0.1 T applied perpendicular and parallel to the crystallographic c axis. A maximum in $\chi$(T) is visible at TN = 17.2 K for both field configurations, indicative of AFM ordering in the compound and marking the boundary between the AFM and paramagnetic (PM) phases. This value is very close to the previously reported TN for GdAuGe [25, 26, 27]. Note that $\chi$(T) bifurcates between the ZFC and FC measurements below 15 K for H $\perp$ c. This points to spin reorientations in the compound below 15 K, which also corroborates the second anomaly in the heat capacity data. The magnetic anisotropy of the system in the AFM region is low, as evident from Fig. 2(b), and becomes insignificant in the PM region (above TN). Above 50 K, the data plotted as inverse magnetic susceptibility ($\chi^{-1}$) vs. T in Fig. 2(b) follow the Curie-Weiss formula $\chi(T)=C/(T-\Theta)$, where $C$ and $\Theta$ are the Curie constant and Curie-Weiss temperature, respectively. Least-squares fitting yields $\Theta$ $\approx$ -4.7 and -6.9 K for H $\perp$ c and H $\parallel$ c, respectively. The estimated effective magnetic moments of $\mu_{eff}$ = 7.76 (for H $\perp$ c) and 7.80 $\mu_{B}$/Gd (for H $\parallel$ c) are in close agreement with the theoretical value of 7.94 $\mu_{B}$ expected for the free Gd3+ ion. Figure 3: (a) Temperature-dependent magnetic susceptibility of GdAuGe measured under different applied magnetic fields of $\mu_{0}$H = 0.01, 0.5, 1, 2, 3, 4, 5, 5.5, 6.0, 6.5, and 7.0 T for H $\parallel$ c. (b) Magnified view of the magnetic susceptibility curves at the higher fields of $\mu_{0}$H = 5.0, 5.5, 6.0, 6.5, and 7.0 T. 
The arrows are guides to the various magnetic transitions: the AFM TN (magenta) and the field-induced anomalies Tm1 (blue) and Tm2 (red). Figure 4: (a) Isothermal magnetization of GdAuGe at several temperatures, T = 1.7, 4, 6, 8, 10, 12, 14, and 16 K, for H $\parallel$ c. The top inset of (a) shows the magnetic isotherms in the temperature range T = 18$-$30 K. The bottom inset of (a) presents the magnetic field dependence of the magnetization measured at T = 1.7 K for H $\perp$ c. (b) Magnetic field dependence of the differential magnetization at various temperatures for H $\parallel$ c. The dotted arrows mark the critical fields Hc1 and Hc2 at various temperatures, corresponding to the two MM transitions observed in GdAuGe. The inset of (b) shows a zoomed view of the low-field magnetization at 1.7 K in both field directions. Next, we study the $\chi_{c}$(T) behavior under different applied magnetic fields (0.01$-$7 T) along the c axis. The data measured under various magnetic fields are shown in Figs. 3(a) and 3(b). At very low field, $\mu_{0}$H $\sim$ 0.01 T, a very sharp peak, a typical characteristic of AFM ordering, is observed at TN $\sim$ 17.2 K (Fig. 3(a)). With further increase in field, this peak is suppressed in magnitude, broadens, and shifts toward lower temperatures. Above 0.5 T, we observe the onset of field-induced anomalies in addition to the AFM transition. These anomalies shift toward lower temperatures with increasing field. Figure 3(b) presents a magnified view of these anomalies, along with the AFM transition, at higher fields. The peaks of the anomalies are marked by arrows for clarity. It is interesting to note that the AFM peak becomes almost flat at $\mu_{0}$H $\sim$ 7 T, and the curvature of $\chi$ vs. T tends toward that of an FM state through these field-induced anomalies. 
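The Curie-Weiss analysis described above can be reproduced with a short least-squares fit. The sketch below generates synthetic $\chi$(T) data from the reported H $\parallel$ c values ($\Theta$ $\approx$ -6.9 K, $\mu_{eff}$ $\approx$ 7.80 $\mu_{B}$/Gd) rather than from the raw measurements, and recovers $C$, $\Theta$, and $\mu_{eff}$ = $\sqrt{8C}$ $\mu_{B}$ in molar CGS units:

```python
import numpy as np

# Curie-Weiss sketch: recover C and Theta from chi(T) = C/(T - Theta), then
# mu_eff = sqrt(8*C) mu_B (molar CGS convention). Inputs reuse the values
# reported for H || c to build synthetic data; they are not raw measurements.

theta_true = -6.9             # Curie-Weiss temperature (K)
mu_eff_true = 7.80            # mu_B / Gd
C_true = mu_eff_true**2 / 8   # Curie constant (emu K / mol)

T = np.linspace(50, 300, 120)            # fit only above 50 K, as in the text
chi = C_true / (T - theta_true)

# chi^{-1} = T/C - Theta/C is linear in T, so a degree-1 polyfit suffices
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit
mu_eff_fit = np.sqrt(8.0 * C_fit)

print(f"Theta = {theta_fit:.1f} K, mu_eff = {mu_eff_fit:.2f} mu_B/Gd")
```

The same two-parameter fit applied to the measured $\chi^{-1}$(T) above 50 K is what yields the quoted $\Theta$ and $\mu_{eff}$ values.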
Further, we measured the isothermal magnetization for H $\parallel$ c between 1.7 and 30 K in magnetic fields up to 7 T, as shown in Fig. 4(a). The magnetization increases monotonically with no sign of saturation, reaching $\sim$ 2.02 $\mu_{B}$/Gd at 7 T, which is much smaller than the 7 $\mu_{B}$ expected for the free Gd3+ ion. The magnetic isotherms further reveal sudden changes in slope at two critical fields below 18 K, indicating the emergence of two MM transitions in GdAuGe. We also measured the in-plane magnetization (H $\perp$ c), which did not reveal any MM transition, apart from negligible hysteresis at low magnetic fields (see the insets of Figs. 4(a) and 4(b)). Next, we determine the MM critical fields at various temperatures using the maxima of the field-dependent differential magnetization curves, as shown in Fig. 4(b). At 1.7 K, the critical fields are $\mu_{0}$Hc1 $\sim$ 0.8 and $\mu_{0}$Hc2 $\sim$ 6.2 T, and they decrease with increasing temperature. This trend is in accordance with the spin-flop transition expected in antiferromagnets [37] and has been observed in other Gd-based compounds such as Gd2Te3 [38]. Above 16 K, the MM transitions disappear completely and the compound enters the PM state. Figure 5: (a) Temperature-dependent heat capacity (Cp) of a GdAuGe single crystal. The solid blue line represents the Debye-Einstein model fit to the experimental data. The inset shows a magnified view of the Cp behavior at low temperatures. (b) Magnetic entropy of GdAuGe as a function of temperature. Figure 6: Temperature-dependent electrical resistivity of a GdAuGe single crystal (a) without and (b) with magnetic field. The current is applied along the ab plane of the crystal. (c) Transverse magnetoresistance as a function of magnetic field at different temperatures below TN (main panel) and above TN (inset) for H $\parallel$ c. The arrows and dotted line mark different phase transitions. 
The origin of the MM transitions in our data remains unclear, as in other rare-earth silver and gold germanide systems. In general, a number of factors, such as strong magnetocrystalline anisotropy, CEF effects, and competition between long-range FM and AFM interactions, contribute to the MM transitions observed in rare-earth compounds [5, 7, 10]. In the present case, the magnetocrystalline anisotropy is small and CEF effects are minimal, given that the Gd3+ ions are in the symmetric $^{8}$S$_{7/2}$ state [7]. Recently, multiple MM transitions observed in CeRh3Si2 were explained using an Ising model that generates a series of commensurate and incommensurate phases, leading to metamagnetism-like features [39]. A transition from a commensurate to an incommensurate phase was reported in isostructural HoAuGe, which shows an MM transition at $\sim$ 0.4 T at 2 K [22]. Such a possibility in GdAuGe is a subject for future investigation. 

### III.2 Heat capacity and entropy 

Figure 5(a) shows the heat capacity (Cp) of a GdAuGe single crystal measured in the T range 2$-$300 K. A broad peak in the low-temperature Cp data near TN is consistent with the AFM ordering observed in the magnetic measurements. The size of this peak ($\Delta$Cp) is $\sim$ 7.2 J/mol K, almost half the value predicted by mean-field theory for an amplitude-modulated magnetic structure. A close-up view shows that the peak is split into two parts, at TN = 16.8 and 14.9 K (see the inset of Fig. 5(a)). Two magnetic transitions in GdAuGe based on Cp(T) data were reported earlier [25, 27] and were suggested to be associated with spin-reorientation processes in the compound [25]. Furthermore, we observe a broad hump around 6 K in the low-temperature Cp data. This kind of broad hump has been reported in other Gd-based compounds; for example, in GdCu2Si2 it is observed at $\sim$ 3 K and is associated with the splitting of the (2J+1)-fold degenerate multiplet in the ordered state [40, 41]. 
Figure 7: (a) The Hall resistivity measured at various temperatures from 2 to 100 K. The inset displays the magnitude of the anomalous Hall resistivity at 2 K. In panels (b) and (c), the solid black lines represent the fits of Eq. (4) to the experimental data at 2 and 4 K. Panel (d) shows the temperature variation of the ordinary and anomalous Hall coefficients. (e) The anomalous Hall resistivity $\rho_{xy}^{A}$ ($\displaystyle\star$) and the conductivities $\sigma_{xy}^{A}$ ($\displaystyle\blacksquare$) and $\sigma_{xy}^{A^{\prime}}$ ($\displaystyle\blacktriangleright$) as functions of temperature. At 300 K, the Cp value approaches $\sim$ 73.74 J/mol K, close to the Dulong-Petit limit of 3nR $\approx$ 74.8 J/mol K. Attempts to determine the electronic specific-heat coefficient $\gamma$ from the low-temperature Cp data fail due to the nonlinearity caused by the magnetic anomalies. The Cp data above TN are well described by the expression $C_{p}(T)=\gamma T+qC_{D}(T)+(1-q)C_{E}(T)$ (1) where $q$ is a weight factor, and $C_{D}$(T) and $C_{E}$(T) are the Debye and Einstein contributions, respectively, defined as $C_{D}(T)=9nR\left(\frac{T}{\Theta_{D}}\right)^{3}\int_{0}^{\Theta_{D}/T}\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}dx$ (2) and $C_{E}(T)=3nR\left(\frac{\Theta_{E}}{T}\right)^{2}\frac{e^{\Theta_{E}/T}}{(e^{\Theta_{E}/T}-1)^{2}}$ (3) where $\Theta_{D}$ and $\Theta_{E}$ are the Debye and Einstein temperatures, respectively. The fit yields $\gamma$ = 3.9 mJ/mol K$^{2}$, $\Theta_{D}$ = 307 K, $\Theta_{E}$ = 90 K, and $q$ = 0.58. The magnetic entropy (shown in Fig. 5(b)) is calculated using the formula S${}_{m}=\int\frac{C_{m}}{T}dT$, where the magnetic contribution Cm is obtained by subtracting the lattice part from the experimental data using Eq. (1). The entropy Sm released at TN is slightly lower than the theoretical value S = $R$ln(2J+1) = 17.3 J/mol K for Gd3+ with J = 7/2. Sm(T) reaches $R$ln8 at 24 K and then saturates above 28 K. 
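The Debye-Einstein model of Eqs. (1)-(3) can be evaluated directly with the fitted parameters quoted above. The sketch below checks that Cp(300 K) approaches the Dulong-Petit limit 3nR (with n = 3 atoms per formula unit) and prints the full Gd3+ magnetic entropy Rln8:

```python
import numpy as np
from scipy.integrate import quad

# Evaluate Eq. (1) with the fitted parameters reported in the text:
# gamma = 3.9 mJ/mol K^2, Theta_D = 307 K, Theta_E = 90 K, q = 0.58,
# and n = 3 atoms per formula unit (Gd, Au, Ge).

R = 8.314       # gas constant, J / mol K
n = 3           # atoms per formula unit
gamma = 3.9e-3  # J / mol K^2
theta_D, theta_E, q = 307.0, 90.0, 0.58

def C_debye(T):
    # Eq. (2): Debye contribution
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    val, _ = quad(integrand, 0.0, theta_D / T)
    return 9 * n * R * (T / theta_D)**3 * val

def C_einstein(T):
    # Eq. (3): Einstein contribution
    y = theta_E / T
    return 3 * n * R * y**2 * np.exp(y) / (np.exp(y) - 1.0)**2

def C_p(T):
    # Eq. (1): total lattice + electronic heat capacity
    return gamma * T + q * C_debye(T) + (1 - q) * C_einstein(T)

print(f"C_p(300 K) = {C_p(300.0):.1f} J/mol K")    # approaches Dulong-Petit
print(f"3nR        = {3 * n * R:.1f} J/mol K")
print(f"R ln 8     = {R * np.log(8):.2f} J/mol K")  # full Gd3+ entropy, J = 7/2
```

The model value at 300 K lands within about 1 J/mol K of the measured 73.74 J/mol K, consistent with the quality of fit described in the text.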
The slightly higher saturation value of the entropy is due to incomplete subtraction of the phonon contribution [42]. 

### III.3 Magnetotransport 

The electrical resistivity $\rho$ as a function of T, measured along the ab plane of the crystal, is shown in Fig. 6(a). The investigated crystal shows a room-temperature resistivity of around 206 $\mu$$\Omega$ cm and a residual resistivity ratio ($\rho_{300K}$/$\rho_{2K}$) $\approx$ 11.33, comparable to the values reported in the literature for other Gd-based ternary intermetallic compounds [43, 44]. The $\rho$(T) exhibits typical metallic behavior: it decreases systematically with decreasing T until it registers a sharp drop near the magnetic transition temperature. The sharp drop at TN = 17.2 K results from a substantial reduction in spin-disorder scattering and corroborates the results of the magnetic and heat capacity measurements. Furthermore, we also measured the temperature-dependent electrical resistivity with magnetic field H $\parallel$ c, as shown in Fig. 6(b). As the field strength increases, the AFM transition TN shifts to lower temperatures, as marked with black arrows. Above 1 T, a second anomaly appears at Tm1, which changes little with magnetic field. The values of TN and Tm1 are consistent with the $\chi_{c}$(T) data discussed above. The transverse magnetoresistance (MR) measured for H $\parallel$ c in the T range 2$-$40 K is shown in Fig. 6(c). The MR is positive for T $\leq$ 14 K and becomes negative for T $\geq$ 16 K, as observed in AFM systems. In weak fields, it follows an H$^{1.3}$ field dependence and reaches only about 7% at 0.8 T. For fields higher than 0.8 T, a weak anomaly is visible in the MR data, after which the MR increases sublinearly, reaching $\approx$ 123% at 2 K until the onset of another anomaly at 6.2 T. In the vicinity of this anomaly, the MR suddenly jumps to $\approx$ 160% and tends to saturate above 7 T. 
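The weak-field power-law behavior of the MR quoted above can be extracted with a log-log fit. The sketch below uses synthetic data built from the quoted exponent (1.3) and the $\sim$ 7% value at 0.8 T, simply to illustrate the fitting procedure rather than to reproduce the measured curves:

```python
import numpy as np

# Weak-field MR power-law sketch: MR(%) = (rho(H) - rho(0))/rho(0) * 100
# is assumed to follow A * H^m below the first critical field. Synthetic
# data are anchored to the quoted exponent m = 1.3 and MR(0.8 T) ~ 7%.

m_true = 1.3
A = 7.0 / 0.8**m_true            # % per T^m, so that MR(0.8 T) = 7%
H = np.linspace(0.05, 0.8, 40)   # T, weak-field regime below Hc1
MR = A * H**m_true

# A power law is linear in log-log coordinates: log MR = m log H + log A
m_fit, logA_fit = np.polyfit(np.log(H), np.log(MR), 1)
mr_at_08 = np.exp(logA_fit) * 0.8**m_fit

print(f"fitted exponent m = {m_fit:.2f}, MR(0.8 T) = {mr_at_08:.1f}%")
```

Applied to the measured MR(H) below 0.8 T, the same log-log regression is what identifies the H$^{1.3}$ dependence.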
The anomalies in the MR data occur in the vicinity of the critical fields at which the MM transitions appear in the magnetic isotherms, clearly indicating that they are related to the MM transitions in the compound. Positive MR below TN is naturally expected for an AFM phase; however, the large positive MR associated with the MM transitions in GdAuGe contrasts with the small negative MR observed in several rare-earth compounds [5, 45]. Ideally, the application of a magnetic field reduces the electrical resistivity of a ferromagnet or a paramagnet, leading to negative MR. Nevertheless, numerous cases of this type of sudden enhancement in MR have been reported in the literature [5, 46, 47, 48, 49]. The MR value of our crystal is almost twice the value of $\sim$ 82% observed for TbAgGe crystals [5] and comparable to the value reported for EuAg4As2 single crystals [49]. With increasing T, the sharp steps in the MR due to the MM transitions gradually diminish and disappear completely above TN, where the MR becomes negative. Figure 8: (a) The crystal structure of GdAuGe. (b) The irreducible Brillouin zone of the bulk along with the (001) projected surface. Figure 9: (a)$-$(h) FM and AFM configurations for the 2 $\times$ 2 $\times$ 2 supercell with Gd spins. Here, AFM1, AFM3, and AFM4 are A-, C-, and G-type, respectively, whereas the other configurations are stripe-type AFM. Red and green arrows denote spin-up and spin-down, respectively. Figure 7(a) displays the Hall resistivity ($\rho_{xy}$) of single-crystalline GdAuGe, measured within the ab plane over the T range 2 to 100 K. The $\rho_{xy}$ increases continuously with increasing magnetic field in a slightly nonlinear manner, and its value remains positive throughout the temperature range, indicating that holes are the majority charge carriers. 
Moreover, we estimate the carrier concentration and mobility to be approximately 2.69 $\times$ 10$^{20}$ cm$^{-3}$ and 167 cm$^{2}$ V$^{-1}$ s$^{-1}$, respectively, from the slope of a linear fit to the 100 K data set. At low temperatures (below 16 K), $\rho_{xy}$ exhibits a step-like increase around the critical field range where we observed the signatures of the MM transitions in the magnetization and MR data. We regard this step-like feature in $\rho_{xy}$ as part of the anomalous Hall resistivity ($\rho_{xy}^{A}$). To calculate the magnitude of $\rho_{xy}^{A}$, we adopt the method used in Ref. [50], as shown in the inset of Fig. 7(a). The $\rho_{xy}^{A}$ at 2 K is around 3.61 $\mu\Omega$ cm, and its magnitude decreases with increasing T, reaching $\sim$ 1.2 $\mu\Omega$ cm at 12 K (see Fig. 7(e)). In general, the total Hall resistivity including the $\rho_{xy}^{A}$ term is given by the expression $\rho_{xy}=\rho_{xy}^{O}+\rho_{xy}^{A}=R_{0}H+R_{s}\mu_{0}M,$ (4) where R0 and Rs are the ordinary and anomalous Hall coefficients, respectively, and M(H) is the isothermal magnetization as a function of field. In our data, it is difficult to separate the $\rho_{xy}^{O}$ and $\rho_{xy}^{A}$ contributions, as the magnetic moments do not saturate up to 7 T. Therefore, we have fitted the experimental data up to $\mu_{0}H=$ 7 T using Eq. (4). The results for the 2 and 4 K data sets are displayed in Figs. 7(b) and 7(c), respectively. The estimated values of $R_{0}$ and $R_{s}$ are presented in Fig. 7(d); the values of $R_{s}$ are significantly larger than those of $R_{0}$. Next, we present the anomalous Hall conductivity (AHC), $\sigma_{xy}^{A}$ = $\rho_{xy}^{A}$/($\rho_{xx}^{2}+\rho_{xy}^{2}$), in Fig. 7(e); the AHC decreases with increasing temperature. At 2 K, its value is about 1270 $\Omega^{-1}$ cm$^{-1}$, which is of the same order as reported for the AFM topological systems DyPtBi [51] and TbPtBi [52]. 
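Two quick consistency checks on the Hall analysis can be scripted. Note that $\rho_{xx}$ at the transition field is not quoted directly in the text, so the value used below is an illustrative estimate assembled from $\rho$(300 K), the residual resistivity ratio, and the MR near Hc2; it therefore reproduces only the order of magnitude of the AHC, not the exact reported number:

```python
# Order-of-magnitude checks on the Hall analysis (units: Ohm cm, cm^3/C).

e = 1.602e-19  # elementary charge, C

# Ordinary Hall coefficient from the quoted carrier density, R0 = 1/(n e)
n = 2.69e20                 # carriers per cm^3, from the 100 K linear fit
R0 = 1.0 / (n * e)          # cm^3 / C
print(f"R0 = {R0:.3e} cm^3/C")

# AHC via sigma^A_xy = rho^A_xy / (rho_xx^2 + rho_xy^2). The rho_xx below is
# an ASSUMED estimate: rho(2 K) = rho(300 K)/RRR, scaled by the ~160% MR
# observed near Hc2; it is not a value reported in the paper.
rho_xy_A = 3.61e-6                     # Ohm cm, anomalous part at 2 K
rho_xx = (206e-6 / 11.33) * (1 + 1.6)  # Ohm cm, illustrative estimate
sigma_xy_A = rho_xy_A / (rho_xx**2 + rho_xy_A**2)
print(f"sigma^A_xy ~ {sigma_xy_A:.0f} Ohm^-1 cm^-1")  # order 10^3
```

The resulting AHC is of order 10$^{3}$ $\Omega^{-1}$ cm$^{-1}$, the same order as the $\sim$ 1270 $\Omega^{-1}$ cm$^{-1}$ quoted above; the residual spread reflects the assumed $\rho_{xx}$.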
To further check the consistency of the calculated AHC, we have estimated the AHC ($\sigma_{xy}^{A^{\prime}}$) using $R_{s}$ and the change in magnetization around the MM transition. The obtained values of $\sigma_{xy}^{A^{\prime}}$ are quite close to those calculated directly from the $\rho_{xy}$(H) curves (see Fig. 7(e)). 

### III.4 Electronic structure 

Figure 10: (a) Total and projected density of states of GdAuGe. (b) Electronic band structure along the $\Gamma$-$M$-$K$-$\Gamma$-$A$-$L$-$H$-$A$ path without SOC. (c) The orbital-decomposed electronic band structure without SOC. (d) The electronic band structure with SOC. The inset shows the Dirac points DP1, DP2, and DP3. Figure 11: (a) and (c) Illustration of the nodal lines, where $a$, $b$, $c$, $d$ are equally spaced points between $M$ and $K$ in the kz = 0 plane, and $e$, $f$, $g$, $h$ between $L$ and $H$ in the kz = 0.5 plane, respectively. (b) and (d) Electronic band structures along the k-paths indicated in (a) and (c), respectively. Iso-energy Fermi contours in the (e) kz = 0 and (f) kz = 0.5 planes, which show the nodal lines. The unit cell of GdAuGe consists of six atoms, with the Gd, Au, and Ge atoms occupying Wyckoff positions 2a, 2b, and 2b, respectively. As shown in Fig. 8(a), it crystallizes in a hexagonal structure with the space group $P$63${mc}$ (186). Along with three vertical mirror planes, $\widetilde{M}$${}_{x\bar{y}}$ = {M${}_{x\bar{y}}$$|$00$\frac{1}{2}$}, $\widetilde{M}$2xy = {M2xy$|$00$\frac{1}{2}$}, and My, the structure has a threefold rotational symmetry C3z and a twofold screw rotational symmetry S2z = {C2z$|$00$\frac{1}{2}$}. In Fig. 8(b), we display the (001) surface Brillouin zone (BZ) beside the bulk BZ. In order to investigate the possible magnetic configurations, we have examined the FM and seven AFM (A-, C-, G-, and stripe-type) spin configurations in a 2 $\times$ 2 $\times$ 2 supercell. The magnetic configurations considered are shown in Fig. 9. 
The calculated ground-state energy for each configuration is presented in Table 1, which shows that the AFM5 configuration yields the lowest energy. The AFM5 configuration exhibits AFM coupling along the a axis and FM coupling along the b and c axes. To further pin down the spin orientation in the AFM5 case, we have calculated the ground-state energies for different spin alignments, namely [001], [010], [100], [011], [101], [110], and [111]. The computed ground-state energy differences are given in Table 2. The minimum ground-state energy is found for the [100] spin configuration. A similar AFM5 magnetic structure has also been reported in the isostructural compounds RAuGe (R = Tb$-$Er), where the magnetic moments are inclined with respect to the c axis [22, 23, 3]. The inclination angle of the magnetic moment decreases with increasing number of 4f electrons: for example, the TbAuGe moment is inclined at an angle of 65∘ to the c axis, while the ErAuGe moment lies along the c axis. Following this trend in the magnetic structures of the isostructural RAuGe compounds, the magnetic moments of GdAuGe are likely to be aligned in the ab plane, as suggested by our DFT calculations. However, this cannot be completely ascertained, as our experimental data indicate that the moments are preferentially aligned along the c axis. Further, CEF effects are absent in GdAuGe, unlike in the isostructural RAuGe compounds, where they could affect the orientation of the magnetic moments [23]. Our DFT results are valid at T = 0 K. Furthermore, we have used a fixed value of U (= 6 eV) in the absence of an experimentally determined value, so the correlation effects may not be captured fully. These inherent limitations might be responsible for the difference between the theoretically predicted magnetic structure and the magnetic measurements. 
To determine the precise orientation of the Gd spins in GdAuGe and to resolve the discrepancy between our theoretical calculations and experimental observations, further investigations are required, especially using microscopic techniques. Table 1: Calculated energies of different magnetic configurations (in meV), with the reference energy set to 0 meV. Configuration | Energy (meV) | Configuration | Energy (meV) ---|---|---|--- FM | 11.93 | AFM4 | 3.59 AFM1 | 5.72 | AFM5 | 0.00 AFM2 | 5.71 | AFM6 | 3.59 AFM3 | 0.02 | AFM7 | 2.79 Table 2: Calculated energies of different spin configurations in the AFM5 case, with the reference energy set to 0 $\mu$eV. Configuration | [001] | [010] | [100] | [011] | [101] | [110] | [111] ---|---|---|---|---|---|---|--- Energy ($\mu$eV) | 49.79 | 21.56 | 0.00 | 34.79 | 24.12 | 20.11 | 15.09 Figure 12: The H$-$T phase diagram of GdAuGe when the magnetic field is applied along the c axis. Notations are defined in the main text. The dark cyan dotted line illustrates the fit of the molecular-field-theory equation H = H0[1$-$TN(H)/TN(H = 0)]$^{1/2}$ to the experimental data. Dotted gray lines are guides to the eye. Furthermore, the total density of states (DOS) and projected density of states (PDOS) were calculated for the AFM5 case to illustrate the contributions of the Gd, Au, and Ge atoms; the results are displayed in Fig. 10(a). The valence-band region receives comparable contributions from the Gd, Au, and Ge atoms, whereas the conduction region is dominated by Gd in both spin channels. Moreover, GdAuGe has a small DOS at the Fermi level, confirming the semimetallic nature of the compound. We have also investigated the electronic band structure. Figure 10(b) shows the electronic band structure with the spin-up (red) and spin-down (blue) channels. The band structure exhibits band-crossing points near the Fermi level in both the kz = 0 and kz = 0.5 planes, which might give rise to nodal lines. 
To determine the non-trivial nature of these bands, we calculated the orbital-decomposed band structure (Fig. 10(c)), which shows that the Gd-$d$ and Ge-$p$ states are the main contributors to the band-crossing points. From Fig. 10(c), we observe two crossing points in the kz = 0 plane. Band inversion between the Gd-$d$ and Ge-$p$ states is seen at one crossing point, reflecting its non-trivial nature, whereas the other crossing point lacks band inversion and is trivial. Similarly, band inversion can also be seen in the kz = 0.5 plane. Notably, each band in the kz = 0.5 plane is twofold degenerate due to the anticommutation relation between the My and S2z symmetries [53], which leads to a four-fold degeneracy at the crossing point and hints at the presence of a Dirac nodal line. To analyze these band crossings, we have performed detailed calculations of the band structures along the $\Gamma$-$M/a/b/c/d/K$ paths (see Fig. 11(a)) as well as the $A$-$L/e/f/g/h/H$ paths (see Fig. 11(c)) and found that Dirac-type band crossings appear along all of the above-mentioned paths (see Figs. 11(b) and 11(d)), reflecting the occurrence of two $\Gamma$-centered nodal rings protected by the My symmetry and one $A$-centered Dirac nodal ring protected by the My and S2z symmetries. Furthermore, we have confirmed the presence of the nodal lines through the iso-energy Fermi contours shown in Figs. 11(e) and 11(f). With the inclusion of SOC, gaps open at the crossing points, as shown in Fig. 10(d). In addition, there exist multiple Dirac points (DP1, DP2, and DP3) along the $A$-$L$ path, shown in the inset of Fig. 10(d). The Dirac points DP1, DP2, and DP3 are generated by the My and non-symmorphic (S2z) symmetries. 

### III.5 H-T phase diagram 

Based on the experimental data presented above, we have constructed the H$-$T phase diagram for H $\parallel$ c, which is depicted in Fig. 12. 
The phase boundaries are determined from the peak positions of the derivatives of the $\chi_{c}$(T), M(H), $\rho_{xx}$(H), and $\rho_{xx}$(T) data. The resulting phase diagram shows four distinct regions in the magnetically ordered state. The first region, labeled AFM I, corresponds to the AFM phase. The magnetic structure in this region below TN is collinear AFM at low fields, as evidenced by our electronic band structure calculations (see Fig. 9(f)) and by the experimentally observed value of $\chi_{c}$(1.7 K)/$\chi_{c}$(TN) $\approx$ 0.5 (refer to Fig. 2(a)). As the field strength increases, we move into region AFM II. In this region, the AFM spins tend to align along the direction of the external magnetic field and become partially flopped above the critical field Hc1 deduced from the M(H) measurements. At Hc1, the first spin-flop transition occurs, followed by a second spin-flop transition at the critical field Hc2. Above Hc2 and below the mean-field fitting line, the system is in region AFM III, which likely corresponds to an incommensurate magnetic structure. Both Hc1 and Hc2 decrease with increasing T. The phase boundary of Hc2 nearly overlaps with that of TN in the T range 12$-$16 K. Below 12 K, however, it is well separated from TN, and its values derived from the magnetization and transport measurements are in good agreement. Such complex magnetic phases are also observed in the isostructural RAuGe (Tb$-$Er) compounds [23, 22, 3]. We further note that TN shifts toward lower temperatures with increasing field, like Hc1 and Hc2. Its trend is in good accord with that predicted by the molecular-field-theory equation H = H0[1$-$TN(H)/TN(H = 0)]$^{1/2}$, where H0 is the critical field strength required to completely destroy the AFM phase transition [54]. The fit of this equation yields H0 = 8.57 T and TN(H = 0) = 17.15 K. 
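The molecular-field phase boundary above can be inverted to read off TN at any given field. The sketch below simply evaluates the fitted relation with H0 = 8.57 T and TN(0) = 17.15 K, together with its inverse:

```python
import numpy as np

# Molecular-field phase boundary H = H0 * sqrt(1 - T_N(H)/T_N(0)), using the
# fitted values from the text. Inverting gives T_N(H) = T_N(0)*(1 - (H/H0)^2),
# a convenient closed form for reading the H-T diagram.

H0, TN0 = 8.57, 17.15  # critical field (T) and zero-field T_N (K)

def TN_of_H(H):
    return TN0 * (1.0 - (H / H0)**2)

def H_of_TN(TN):
    return H0 * np.sqrt(1.0 - TN / TN0)

for H in (0.0, 1.0, 4.0, 7.0):
    print(f"H = {H:.1f} T -> T_N = {TN_of_H(H):.2f} K")
```

By construction, TN is fully suppressed (TN = 0) at H = H0 = 8.57 T, consistent with H0 being the field needed to destroy the AFM transition.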
A further increase in field drives the system from region AFM III into the polarized paramagnetic (PPM) phase, in which a large fraction of the magnetic moments are aligned along the field. This region displays two field-induced anomalies, Tm1 and Tm2, in the vicinity of TN; these anomalies vary only weakly in temperature with increasing field. As the temperature surpasses Tm1, the compound transitions to the PM state. 

## IV Conclusions 

We have investigated the magnetic, thermodynamic, and magnetotransport properties of GdAuGe single crystals grown using Bi flux. The magnetic susceptibility measurements for H $\parallel$ c and H $\perp$ c revealed an AFM ground state in the compound with TN = 17.2 K. The anomalies observed near TN in the heat capacity data and a sharp drop in the electrical resistivity below TN further confirmed the AFM ordering. The magnetization data for H $\parallel$ c showed two successive MM transitions at T = 1.7 K, with critical fields of $\sim$ 0.82 and 6.2 T. The magnetotransport data recorded for H $\parallel$ c near the critical fields of the MM transitions showed unexpectedly large and positive transverse MR (169% at 9 T and 2 K) for temperatures below TN. At higher temperatures, the MR decreases and becomes negative in the PM regime. A large anomalous Hall conductivity of $\sim$ 1270 $\Omega^{-1}$ cm$^{-1}$ was observed near Hc2 at 2 K. The H$-$T phase diagram for H $\parallel$ c, constructed from the magnetization and magnetotransport measurements, unveiled multiple magnetic phase transitions, including a collinear AFM ground state, two successive spin-flop transitions, and a polarized paramagnetic state associated with the two field-induced magnetic anomalies. The electronic band structure analysis shows the presence of two nodal rings in the kz = 0 plane and a Dirac nodal ring in the kz = 0.5 plane, which makes GdAuGe a Dirac nodal-line semimetal. 
## V Acknowledgment We acknowledge IIT Kanpur and the Department of Science and Technology, India, [Order No. DST/NM/TUE/QM-06/2019 (G)] for financial support. J.S and V.K. acknowledge the National Supercomputing Mission (NSM) for providing computing resources of ‘PARAM SEVA’ at IIT, Hyderabad. V.K. would like to acknowledge DST-FIST (SR/FST/PSI-215/2016) for the financial support. J.S. was supported through a CSIR scholarship. K.S. and D.K. acknowledge financial support from the National Science Centre (Poland) under Research Grant No. 2021/41/B/ST3/01141. ## References * Szytuła [1999] A. Szytuła, Croatica chemica acta 72, 171 (1999). * Lotfi _et al._ [2022] S. Lotfi, R. Arrieta, G. G. C. Peterson, P. Delgado, and J. Brgoch, ACS Organic & Inorganic Au 2, 318 (2022). * Baran _et al._ [2000a] S. Baran, M. Hofmann, B. Penc, M. Ślaski, A. Szytuła, and A. Zygmunt, Physica B: Condensed Matter 276-278, 656 (2000a). * Bashir _et al._ [2014] A. K. H. Bashir, M. B. Tchoula Tchokonté, J. L. Snyman, B. M. Sondezi, and A. M. Strydom, Journal of Applied Physics 115, 17E134 (2014). * Morosan _et al._ [2004] E. Morosan, S. Bud’ko, P. Canfield, M. Torikachvili, and A. Lacerda, Journal of Magnetism and Magnetic Materials 277, 298 (2004). * Goruganti _et al._ [2008] V. Goruganti, K. D. D. Rathnayaka, J. H. Ross, Y. Öner, C. S. Lue, and Y. K. Kuo, Journal of Applied Physics 103, 073919 (2008). * Morosan _et al._ [2005] E. Morosan, S. L. Bud’ko, and P. C. Canfield, Phys. Rev. B 72, 014425 (2005). * Jensen and Mackintosh [1991] J. Jensen and A. R. Mackintosh, _Rare earth magnetism_ (Clarendon Press Oxford, 1991). * Malick _et al._ [2022] S. Malick, J. Singh, A. Laha, V. Kanchana, Z. Hossain, and D. Kaczorowski, Phys. Rev. B 105, 045103 (2022). * Stryjewski and Giordano [1977] E. Stryjewski and N. Giordano, Advances in Physics 26, 487 (1977). * Baran _et al._ [2000b] S. Baran, M. Hofmann, J. Leciejewicz, B. Penc, M. Ślaski, A. Szytuła, and A. 
Zygmunt, Journal of Magnetism and Magnetic Materials 222, 277 (2000b). * Ram _et al._ [2023a] D. Ram, S. Malick, Z. Hossain, and D. Kaczorowski, Phys. Rev. B 108, 024428 (2023a). * Gegenwart _et al._ [2008] P. Gegenwart, Q. Si, and F. Steglich, Nature Physics 4, 186 (2008). * Löhneysen _et al._ [2007] H. v. Löhneysen, A. Rosch, M. Vojta, and P. Wölfle, Rev. Mod. Phys. 79, 1015 (2007). * Paramanik _et al._ [2013] U. B. Paramanik, D. Das, R. Prasad, and Z. Hossain, Journal of Physics: Condensed Matter 25, 265701 (2013). * Pfleiderer [2009] C. Pfleiderer, Rev. Mod. Phys. 81, 1551 (2009). * Schulz _et al._ [2012] T. Schulz, R. Ritz, A. Bauer, M. Halder, M. Wagner, C. Franz, C. Pfleiderer, K. Everschor, M. Garst, and A. Rosch, Nature Physics 8, 301 (2012). * Xiao _et al._ [2019] X. Xiao, L. Peng, X. Zhao, Y. Zhang, Y. Dai, J. Guo, M. Tong, J. Li, B. Li, W. Liu, J. Cai, B. Shen, and Z. Zhang, Applied Physics Letters 114, 142404 (2019). * Ueda _et al._ [2012] K. Ueda, S. Iguchi, T. Suzuki, S. Ishiwata, Y. Taguchi, and Y. Tokura, Phys. Rev. Lett. 108, 156601 (2012). * Kurumaji _et al._ [2019] T. Kurumaji, T. Nakajima, M. Hirschberger, A. Kikkawa, Y. Yamasaki, H. Sagayama, H. Nakao, Y. Taguchi, T. hisa Arima, and Y. Tokuras, Science 365, 914 (2019). * Ram _et al._ [2023b] D. Ram, J. Singh, M. K. Hooda, O. Pavlosiuk, V. Kanchana, Z. Hossain, and D. Kaczorowski, Phys. Rev. B 107, 085137 (2023b). * Gibson _et al._ [2001] B. J. Gibson, R. Pöttgen, W. Schnelle, B. Ouladdiaf, and R. K. Kremer, Journal of Physics: Condensed Matter 13, 2593 (2001). * Baran _et al._ [2001] S. Baran, M. Hofmann, G. Lampert, N. Stüsser, A. Szytuła, D. Többens, P. Smeibidl, and S. Kausche, Journal of Magnetism and Magnetic Materials 236, 293 (2001). * Penc _et al._ [1999] B. Penc, S. Baran, M. Ślaski, and A. Szytuła, Journal of Alloys and Compounds 282, L6 (1999). * Gibson _et al._ [1996] B. J. Gibson, W. Schnelle, R. Pöttgen, K. Bartkowski, and R. K. 
Kremer, Czechoslovak Journal of Physics 46, 2573 (1996). * Pöttgen _et al._ [1998] R. Pöttgen, G. Kotzyba, and E. A. Görlich, Journal of Solid State Chemistry 141, 352 (1998). * Kurumaji _et al._ [2023] T. Kurumaji, M. Gen, S. Kitou, K. Ikeuchi, M. Nakamura, A. Ikeda, and T. hisa Arima, Journal of Alloys and Compounds 947, 169475 (2023). * Hohenberg and Kohn [1964] P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964). * Kohn and Sham [1965] W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965). * Blöchl [1994] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994). * Kresse and Furthmüller [1996] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996). * Kresse and Joubert [1999] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999). * Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). * Petersen _et al._ [2006] M. Petersen, J. Hafner, and M. Marsman, Journal of Physics: Condensed Matter 18, 7021 (2006). * Li _et al._ [2015] Z. Li, H. Su, X. Yang, and J. Zhang, Phys. Rev. B 91, 235128 (2015). * Monkhorst and Pack [1976] H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976). * Arantes _et al._ [2018] F. R. Arantes, D. Aristizábal-Giraldo, S. H. Masunaga, F. N. Costa, F. F. Ferreira, T. Takabatake, L. Mendon ça Ferreira, R. A. Ribeiro, and M. A. Avila, Phys. Rev. Materials 2, 044402 (2018). * Muthuselvam _et al._ [2019] I. P. Muthuselvam, R. Nehru, K. R. Babu, K. Saranya, S. N. Kaul, S.-M. Chen, W.-T. Chen, Y. Liu, G.-Y. Guo, F. Xiu, and R. Sankar, Journal of Physics: Condensed Matter 31, 285802 (2019). * Amorese _et al._ [2022] A. Amorese, D. Khalyavin, K. Kummer, N. B. Brookes, C. Ritter, O. Zaharko, C. B. Larsen, O. Pavlosiuk, A. P. Pikul, D. Kaczorowski, M. Gutmann, A. T. Boothroyd, A. Severing, and D. T. Adroja, Phys. Rev. B 105, 125119 (2022). * Blanco _et al._ [1991] J. A. Blanco, D. Gignoux, and D. Schmitt, Phys. Rev. B 43, 13145 (1991). * Bouvier _et al._ [1991] M. Bouvier, P. Lethuillier, and D. 
Schmitt, Phys. Rev. B 43, 13137 (1991). * Xie _et al._ [2021] W. Xie, S. S. Luo, H. Su, X. Y. Zheng, Z. Y. Nie, M. Smidman, T. Takabatake, and H. Q. Yuan, Phys. Rev. B 104, 174425 (2021). * Mukhopadhyay _et al._ [2021] A. Mukhopadhyay, K. Singh, S. Sen, K. Mukherjee, A. K. Nayak, and N. Mohapatra, Journal of Physics: Condensed Matter 33, 435804 (2021). * Talik _et al._ [2006] E. Talik, J. Kusz, W. Hofmeister, M. Matlak, M. Skutecka, and M. Klimczak, Journal of Alloys and Compounds 423, 47 (2006). * Bud’ko _et al._ [1999] S. Bud’ko, Z. Islam, T. Wiener, I. Fisher, A. Lacerda, and P. Canfield, Journal of Magnetism and Magnetic Materials 205, 53 (1999). * Hossain _et al._ [2000] Z. Hossain, S. Hamashima, K. Umeo, T. Takabatake, C. Geibel, and F. Steglich, Phys. Rev. B 62, 8950 (2000). * Laha and Hossain [2018] A. Laha and Z. Hossain, Journal of Magnetism and Magnetic Materials 465, 654 (2018). * Jammalamadaka _et al._ [2009] S. N. Jammalamadaka, N. Mohapatra, S. D. Das, and E. V. Sampathkumaran, Phys. Rev. B 79, 060403(R) (2009). * Zhu _et al._ [2020a] Q. Zhu, L. Li, Z. Yang, Z. Lou, J. Du, J. Yang, B. Chen, H. Wang, and M. Fang, Science China Physics, Mechanics & Astronomy 64, 227011 (2020a). * Zhou _et al._ [2023] H. Zhou, M. Shi, Y. Huang, W. Ma, X. Xu, J. Wang, and S. Jia, Phys. Rev. Mater. 7, 024404 (2023). * Zhang _et al._ [2020] H. Zhang, Y. L. Zhu, Y. Qiu, W. Tian, H. B. Cao, Z. Q. Mao, and X. Ke, Phys. Rev. B 102, 094424 (2020). * Zhu _et al._ [2020b] Y. Zhu, B. Singh, Y. Wang, C.-Y. Huang, W.-C. Chiu, B. Wang, D. Graf, Y. Zhang, H. Lin, J. Sun, A. Bansil, and Z. Mao, Phys. Rev. B 101, 161105(R) (2020b). * Yu _et al._ [2022] Z.-M. Yu, Z. Zhang, G.-B. Liu, W. Wu, X.-P. Li, R.-W. Zhang, S. A. Yang, and Y. Yao, Science Bulletin 67, 375 (2022). * Morrish [1965] A. H. Morrish, _The Physical Principles of Magnetism_ (1965).
# MHD turbulence formation in solar flares: 3D simulation and synthetic observations W. Ruan Centre for mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven, Belgium<EMAIL_ADDRESS>L. Yan Key Laboratory of Earth and Planetary Physics, Institute of Geology and Geophysics, Chinese Academy of Sciences R. Keppens Centre for mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven, Belgium (Accepted Oct 17, 2022) ###### Abstract Turbulent plasma motion is common in the universe, and invoked in solar flares to drive effective acceleration leading to high energy electrons. Unresolved mass motions are frequently detected in flares from extreme ultraviolet (EUV) observations, which are often regarded as turbulence. However, how this plasma turbulence forms during the flare is still largely a mystery. Here we successfully reproduce observed turbulence in our 3D magnetohydrodynamic simulation where the magnetic reconnection process is included. The turbulence forms as a result of an intricate non-linear interaction between the reconnection outflows and the magnetic arcades below the reconnection site, in which the shear-flow driven Kelvin-Helmholtz Instability (KHI) plays a key role for generating turbulent vortices. The turbulence is produced above high density flare loops, and then propagates to chromospheric footpoints along the magnetic field as Alfvénic perturbations. High turbulent velocities above 200 km s-1 can be found around the termination shock, while the low atmosphere reaches turbulent velocities of 10 km s-1 at a layer where the number density is about 1011 cm-3. The turbulent region with maximum non-thermal velocity coincides with the region where the observed high-energy electrons are concentrated, demonstrating the potential role of turbulence in acceleration. 
Synthetic views in EUV and fitted Hinode-EIS spectra show excellent agreement with observational results. An energy analysis demonstrates that more than 10% of the reconnection downflow kinetic energy can be converted to turbulent energy via KHI.

journal: ApJ

## 1 Introduction

Solar flares actively drive space weather, which affects interplanetary space and the Earth environment and atmosphere through energetic particles, strong EUV/X-ray emissions and associated coronal mass ejections (CMEs). In a solar flare event, up to $10^{33}$ erg of free energy stored in the solar magnetic field can be released via the magnetic reconnection process, and up to half of this energy is used to produce energetic electrons (Aulanier et al., 2013; Aschwanden et al., 2017). Magnetic reconnection can develop at current sheets, marking locations where a component of the magnetic field reverses (Parker, 1963). This changes the configuration of magnetic field lines, converts magnetic energy to thermal and kinetic plasma energy, and produces fast flows that leave the reconnection site along the current sheet (for a recent review, see Yamada et al. 2010). In a typical flare event, reconnection is generally believed to occur at coronal heights, and the downward reconnection outflows lead to – and interact with – magnetic arcades in lower regions of the corona. These arcades are filled with hot and dense ionized gas, and they become bright in soft X-ray (SXR) images and specific EUV passbands (Shibata et al., 1995). Information on energetic electrons produced in solar flares is often derived from hard X-rays (HXR, photons with energy $>20$ keV), as energetic photons are believed to be produced by electrons via the bremsstrahlung mechanism (e.g. Kontar et al., 2011). Strong and isolated HXR sources are often observed (1) at the footpoints of the hot and dense arcade flare loops at chromospheric heights (0.5 - 3 Mm above the photosphere) and (2) near the top of the flare loops (e.g. 
Masuda et al., 1994; Tomczak & Ciborski, 2007; Su et al., 2013). Hence, a large number of energetic electrons with energy $>20$ keV must be produced during these explosive events (Hudson & Ryan, 1995; Tomczak, 2001; Krucker et al., 2008). In the study of solar flares, promising mechanisms to produce energetic electrons are turbulence acceleration, as well as shock acceleration and direct current (DC) electric field acceleration (Aschwanden, 2005). For the former mechanism, unresolved mass motions with velocities exceeding 100 km s-1 have been discovered in flares for decades (e.g. Doschek et al., 1980; Gabriel et al., 1981; Antonucci et al., 1982). These unresolved mass motions are often thought of as turbulence, although it is impossible to know from remote observations whether typical energy cascades are happening. Recent observations with higher resolution data demonstrate that this plasma turbulence is not really localized, but covers the entire flare region, including the flare loop top, legs, footpoints and the region above the looptop (e.g. Doschek et al., 2014; Kontar et al., 2017; Jeffrey et al., 2018; Stores et al., 2021). Peaks in turbulent velocity tend to show up above the high density flare loops, reaching 100-200 km s-1, while footpoints have lower turbulent velocities of a few tens of km s-1. Turbulence has been frequently reproduced in 2D simulations, but the turbulence obtained there is more localized (e.g. Fang et al., 2016; Ruan et al., 2018; Ye et al., 2019; Wang et al., 2022). Wang et al. (2022) invoke the Kelvin-Helmholtz instability in their 2D settings to explain the observed wiggling of the current sheet above. 
In earlier contributions to solar flare research, we investigated the interplay between 2D magnetohydrodynamic flare evolutions coupled to analytic prescriptions of the accelerated electron beams, reproducing both HXR source regions (Ruan et al., 2020), and we evolved the flare far into the postflare regime to reproduce flare-driven coronal rain (Ruan et al., 2021). Recently, a fully 3D solar flare simulation showed that turbulence can be produced in flares, with a clear role played by a mixture of the Rayleigh-Taylor (RTI) and the Richtmyer-Meshkov instability (RMI) at the interface between the reconnection termination shock and the flare arcade, leading to finger-like supra-arcade downflows (Shen et al., 2022). Our present 3D simulation augments this result and goes one step further: we investigate the cause and consequence of the plasma turbulence in the entire flare region, successfully reproducing its observational features, including the spatial distribution and characteristic non-thermal velocities, and we make detailed comparisons to observations.

## 2 Method

The simulation is performed with the open-source MPI-AMRVAC code (Xia et al., 2018; Keppens et al., 2021). The square simulation box has a domain of -50 Mm $\leq x\leq$ 50 Mm, -50 Mm $\leq y\leq$ 50 Mm and 0 $\leq z\leq$ 100 Mm. The base resolution is $64\times 64\times 64$, but an equivalent high resolution of $1024\times 1024\times 1024$ is achieved by employing 5 levels in our adaptive mesh refinement strategy, making our smallest cells less than 100 km across (the Shen et al. (2022) grid cell size was 260 km). Gravity, thermal conduction and optically thin radiative losses are included, where the cooling curve comes from Colgan et al. (2008). 
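The quoted effective resolution follows directly from the base grid and the number of refinement levels; a minimal sketch of that arithmetic, assuming the standard factor-2 refinement per AMR level:

```python
# Effective-resolution arithmetic for the AMR grid described above, assuming
# factor-2 refinement per level (the standard MPI-AMRVAC setup).
base = 64          # base grid cells per direction
levels = 5         # number of AMR levels
effective = base * 2 ** (levels - 1)   # cells per direction at max refinement

domain_km = 100.0e3                    # 100 Mm domain extent, in km
cell_km = domain_km / effective        # smallest cell size

print(effective, f"{cell_km:.2f} km")  # 1024 cells, just under 100 km
```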
A spatially varying, but temporally invariant background heating is employed to balance the radiative loss and maintain a corona. The governing equations are identical to Ruan et al. (2020). A magnetic-field-line-based transition region adaptive conduction (TRAC) method is adopted to ensure that underresolving the sharp transition region variations – which would need better than 30 km resolution (Johnston et al., 2020; Zhou et al., 2021) – does not lead to erroneous coronal temperature and density evolutions. The initial conditions for number density, temperature and background heating are similar to our 2.5D flare simulation presented in Ruan et al. (2021), where all initial profiles are functions of height $z$ only. The initial coronal region has an electron density of order $\sim 10^{9}$ cm$^{-3}$ and a temperature of $2-3$ MK. The number density and temperature profiles are obtained from a relaxation in which the model C7 temperature profile of Avrett & Loeser (2008) and a density profile calculated from hydrostatic equilibrium are employed as initial conditions, where the number density at the bottom boundary is $3.7\times 10^{14}$ cm$^{-3}$. The initial conditions for the magnetic field are modified from those in Ruan et al. (2021) and are given by

$$B_{x}=\sqrt{B_{0}^{2}-B_{z}^{2}},\qquad(1)$$
$$B_{y}=0,\qquad(2)$$
$$B_{z}=\begin{cases}-B_{0},&y<-\lambda\\ B_{0},&y>\lambda\\ B_{0}\sin[\pi y/(2\lambda)],&\mathrm{otherwise}\end{cases}\qquad(3)$$

where $B_{0}=30$ G is the initial magnetic field strength and $\lambda=10$ Mm. We dissipate the thick current sheet into a thin current sheet in the pre-flare phase by adopting a resistivity given by

$$\eta(y,z)=\eta_{1}\exp(-y^{2}/w_{\eta y}^{2})\exp[-(z-h_{\eta 1})^{2}/w_{\eta z}^{2}],\qquad(4)$$

where $\eta_{1}=10^{-2}$, $w_{\eta y}=10\ \rm Mm$, $h_{\eta 1}=30\ \rm Mm$ and $w_{\eta z}=15\ \rm Mm$. 
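Equations (1)-(3) rotate the field direction across the current sheet while keeping its magnitude fixed; the short sketch below (an illustration, not the simulation source) evaluates the profile and checks that $|B|=B_{0}$ at every $y$:

```python
import numpy as np

# Sketch of the initial magnetic field of Eqs. (1)-(3) (illustrative, not the
# simulation source). Lengths in Mm, field in gauss.
B0 = 30.0    # G, initial field strength
lam = 10.0   # Mm, half-width of the field rotation layer (lambda in the text)

def initial_field(y):
    """Return (Bx, By, Bz) at horizontal position y, following Eqs. (1)-(3)."""
    if y < -lam:
        Bz = -B0
    elif y > lam:
        Bz = B0
    else:
        Bz = B0 * np.sin(np.pi * y / (2.0 * lam))
    Bx = np.sqrt(B0**2 - Bz**2)  # Eq. (1): guide component keeps |B| = B0
    return Bx, 0.0, Bz

# The field only rotates across the sheet; its magnitude stays B0 everywhere:
for y in (-20.0, -5.0, 0.0, 5.0, 20.0):
    Bx, _, Bz = initial_field(y)
    assert abs(np.hypot(Bx, Bz) - B0) < 1e-9
```

Because the field strength is uniform, the sheet carries no pressure imbalance and only the field direction rotates with $y$.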
Magnetic arcades are produced in the lower atmosphere during the slow dissipation of the thick current sheet, without generating strong chromospheric evaporation. Hot and dense flare loops will only be generated above the magnetic arcades in the upcoming impulsive phase, which leads to the formation of clear bright loops in the high temperature EUV images (e.g. 131 Å). The anomalous resistivity prescription is changed when a thin current sheet has formed at $t=450$ s. The new resistivity adopted then is given by

$$\eta(x,y,z)=\begin{cases}\eta_{2}+\eta_{3}\exp\{-[x^{2}+y^{2}+(z-h_{\eta 2})^{2}]/r_{\eta}^{2}\},&450\ \mathrm{s}<t\leq 480\ \mathrm{s}\\ \eta_{2},&t>480\ \mathrm{s}\end{cases}\qquad(5)$$

where $\eta_{2}=10^{-3}$, $\eta_{3}=10^{-1}$, $h_{\eta 2}=50\ \rm Mm$ and $r_{\eta}=5\ \rm Mm$. Fast reconnection is triggered by this localized strong resistivity at the center of our thinned current sheet. The magnetic field inside the initial current sheet contains an $x$ component; therefore, the earliest formed magnetic arcades (mainly located inside/below the dense flare loop) also contain an $x$ component, and hence have a shear angle with respect to the magnetic neutral line. The magnetic arcades generated in the impulsive phase lie almost in $y$-$z$ planes, as the $x$ component of the magnetic field inside the reconnection current sheet has by then been transported to the lower atmosphere and the region above the simulation box. Periodic boundaries are employed at the $x$ boundaries. Symmetric boundary conditions are adopted for density, pressure and the $x$/$z$ components of velocity/magnetic field at the $y$ boundaries, while anti-symmetric conditions are used for the $y$-components of velocity and magnetic field. Fixed boundary conditions are employed at the bottom boundary. We employ zero-gradient extrapolation in the (two-layer) ghost cells of the upper boundary for density, velocity and magnetic field. 
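The anomalous resistivity switch of Eq. (5) can be sketched as a plain function of position and time (a schematic re-implementation; note the pre-flare stage, $t \leq 450$ s, instead uses the profile of Eq. (4)):

```python
import numpy as np

# Schematic re-implementation of the anomalous resistivity of Eq. (5)
# (dimensionless eta, lengths in Mm, time in seconds). Valid for t > 450 s
# only; the pre-flare stage (t <= 450 s) uses the profile of Eq. (4) instead.
eta2, eta3 = 1.0e-3, 1.0e-1
h_eta2, r_eta = 50.0, 5.0   # Mm

def anomalous_eta(x, y, z, t):
    """Background resistivity plus a localized boost during 450 s < t <= 480 s."""
    if 450.0 < t <= 480.0:
        r2 = x**2 + y**2 + (z - h_eta2)**2
        return eta2 + eta3 * np.exp(-r2 / r_eta**2)
    return eta2

# The boost peaks at the current-sheet center, triggering fast reconnection there:
assert abs(anomalous_eta(0.0, 0.0, 50.0, 460.0) - (eta2 + eta3)) < 1e-12
assert anomalous_eta(0.0, 0.0, 50.0, 500.0) == eta2
```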
The temperature at the top ghost cells is forced to decrease at a rate $dT/dz=-T_{b}/$(50 Mm) to avoid runaway high temperatures at the boundary, where $T_{b}$ is the instantaneous local temperature at the boundary. A combination of the ‘HLL’ (initials of authors Harten, Lax and van Leer in Harten et al. (1983)) approximate Riemann solver and ‘Cada3’ (first author from Čada & Torrilhon (2009)) flux limiter is employed at the low level ($\leq 3$) grids, as the third-order limiter ‘Cada3’ achieves high accuracy at low spatial resolution. Another combination, ‘HLL’ with ‘Vanleer’ (from van Leer (1974)) limiter, is adopted at high level grids located at the low atmosphere and flaring regions, as the second-order ‘Vanleer’ limiter has better performance in high gradient regions. A strong stability preserving, three-step Runge-Kutta method is employed in time integration (Ruuth, 2006). Contribution functions from the CHIANTI atomic database have been used in synthesizing EUV emissions (Del Zanna et al., 2015). The spatial resolution of the synthesized EUV images is the same as given by the observation (using a pixel size of 435 km for image at 131 Å passband). The synthesized 255 Å spectra are assumed to have a slit width of 1450 km and a pixel resolution of 725 km $\times$ 22 mÅ. The 131 Å line is mainly related to emission by Fe VIII and Fe XXI ions with peak formation temperatures of 105.6 K and 107.0 K, and the 255 Å line is mainly associated with Fe XXIV ions with a peak formation temperature of 107.2 K (Culhane et al., 2007; Lemen et al., 2012). Scattering effects by the instrument have been included via multiplying by a Point Spread Function (PSF), where we assume the PSF is a Gaussian function of standard deviation of 1 pixel (Grigis et al., 2013). Note that only the contribution of hot plasma ($>3$ MK) is considered in synthesizing 131 Å emission, to avoid unrealistic strong emission at the low atmosphere due to underresolved transition region physics. 
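The instrument-scattering step amounts to blurring each synthetic image with a Gaussian PSF of one-pixel standard deviation; a minimal numpy sketch of such a blur follows (illustrative only; the paper's pipeline may apply the PSF differently, e.g. in Fourier space):

```python
import numpy as np

# Sketch of the instrument-scattering step: blur a synthetic image with a
# Gaussian PSF of 1-pixel standard deviation via separable convolution.
def gaussian_psf_blur(image, sigma=1.0, radius=4):
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()   # normalized: total flux is conserved
    # Gaussian kernels are separable: convolve along columns, then rows.
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, image)
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, out)
    return out

img = np.zeros((21, 21))
img[10, 10] = 1.0            # a point source
blurred = gaussian_psf_blur(img)
```

The normalized kernel guarantees that the blur redistributes the emission without changing the total flux, which is the essential property of a PSF model.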
The GOES SXR flux is calculated with the method given in Pinto et al. (2015).

## 3 Results

### 3.1 Bright EUV arcade evolution in the flaring region

Our simulation starts with a single vertical ($x-z$ oriented) current sheet all along $y=0$, which runs through the chromosphere and the corona. Magnetic reconnection happens inside this current sheet at coronal heights, leading to the formation of an extended coronal magnetic arcade system below the reconnection site. Some magnetic arcades are filled with hot ($\sim 10$ MK) and dense plasma ($\sim 10^{10}$ cm$^{-3}$) by downward flows from the reconnection site and (thermal conduction driven) upward flows from the chromosphere, respectively. This then forms high density flare loops, which are bright in images at the EUV 131 Å passband. Photons in this passband are mainly released by Fe XXI ions at temperatures of $\sim$10 MK in flare events. Figure 1 gives synthesized images at the 131 Å passband of the simulated flare loops, where, for comparison, an actual flare event observed by the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO) spacecraft is also given (Lemen et al., 2012). The orientation of the axes is shown in both top panels, where $Z$ is the vertical direction in our simulation, the bottom of the chromosphere being located at $Z=0$. We indicate a reference bar of 10 arcsec length, to better compare with observations, noting that 1 arcsec $=$ 725 km. Here we use capital letters ($X$, $Y$, $Z$) to indicate directions when the length unit is arcsec, and lower case letters ($x$, $y$, $z$) when the length unit is Mm. The hot flare loops start to form at $t\approx 7$ min due to the triggering of fast reconnection, and then the flare enters an impulsive phase. Here we focus on the generation and development of turbulence in the impulsive phase, after the formation of the flare loops. Most of the figures in our paper use the data at $t\approx 9$ min (shown with a black dashed line in Fig. 
1d), when finger-like structures have not yet appeared. The wide initial current sheet is slowly dissipated by resistivity and then becomes a thin current sheet in the period $t<7$ min. Magnetic arcades are also formed in this period, but without the generation of strong evaporations. Consequently, there are no bright 131 Å loops or strong SXR emission during this period. The finger-like structures, which are caused by RTI/RMI according to Shen et al. (2022), also appear in our simulation, but clearly at a phase later than the one we analyze in detail here (see Fig. 1d). RMI is an instability that often happens at the surface between two fluids of different densities, which can be regarded as a special type of RTI (Richtmyer, 1960; Meshkov, 1969; Zhou et al., 2021). The ‘traditional’ RTI is caused by a constant acceleration (e.g. due to external gravity), and the initial perturbation grows exponentially, whereas the RMI is the result of an impulsive acceleration (e.g. caused by a shock wave), and the initial perturbation grows linearly. Both RTI/RMI instabilities may end up in turbulent dynamics, in which additionally KHI is involved for generation of vortices. Figure 1: (a) & (b): Synthesized 131 Å views of the flare loop systems obtained in our simulation. The two views are synthesized with different line of sight (LOS) directions, where the corresponding orientation axes are given. The red curves in (b) are projections of several magnetic field lines. The approximate locations of the magnetic reconnection X-point (white cross) and the reconnection outflows (white arrows) are also given in (b). Panel (c) shows a 131 Å image of a flare event from Dec 26, 2011, where the LOS direction is similar to that in (a), as reported by Cheng & Qiu (2016). Panel (d) gives the time development of synthesized GOES SXR flux at 1-8 Å passband, where the black dashed line gives the corresponding time of the synthesized views. 
A cross-section in the $X-Z$ plane, in a zoomed-in view on the top of the arcade, shown in panel (d), confirms the RTI/RMI process from Shen et al. (2022), happening at a later time, as indicated by the blue dotted line in this panel. An animation of this figure is available. Panels (a)-(c) of the animation give synthesized 131 Å views obtained with different LOS, where the corresponding orientation axes are given. Panel (d) of the animation shows the synthesized GOES SXR flux at 1-8 Å, the same as panel (d) of this figure. The animation covers $\sim 10$ minutes of physical time starting at $t=0$ (real-time duration 14 s).

### 3.2 Where is the turbulence?

Figure 2: (a) Non-thermal velocity distribution obtained from Gaussian fitting of synthesized EIS/255 spectra, where the fitting method follows Stores et al. (2021). (b) Non-thermal velocity distribution obtained from the plasma density/velocity distribution. Cyan contours in (b) give the locations of the bright loop in Fig. 1b, where the contour lines show the intensity levels 25%, 50% and 80% of peak 131 Å flux. The regions inside the blue contour have an average magnetic field strength lower than 25 G. The green line gives the approximate location of the termination shock (also see Fig. 5c), but note that the shock locations are different at different X-slices due to interactions between reconnection outflows and magnetic arcades. Panel (c) gives the EIS/255 spectrum inside the blue box in (a) and (d) gives the spectrum inside the green box in (a) (marked with ‘+’). Solid lines show Gaussian fitting results of the spectra. Panel (e) shows a non-thermal velocity map from the EIS/255 spectra of an observed flare, where the cyan contours give the location of the hot/dense flare loop and white contours give the location of footpoints at the solar surface, as taken from Stores et al. (2021). The spatial resolution of the synthesized EIS/255 data is reduced before fitting, as in Stores et al. (2021). 
Panel (a) has a pixel size of 4 arcsec $\times$ 4 arcsec, which is close to that in panel in (e) (6 arcsec $\times$ 4 arcsec). Panel (b) has a smaller pixel size of 0.138 arcsec $\times$ 0.138 arcsec. An animation of this figure is available, showing the evolution of spatial average electron-number density in $x$-direction (panel a) and non-thermal velocity distribution obtained from plasma density/velocity distribution (panel b). The contours in the panels show the intensity levels 10%, 25%, 50% and 80% of peak 131 Å flux. The animation covers 2.37 minutes of physical time starting at t = 7.51 minutes ( real-time duration 3 s). We obtain the turbulent, non-thermal velocities from synthesized spectral profiles of the emission line at 255.1136 Å, corresponding to a peak temperature 107.2 K, as in Stores et al. (2021). The non-thermal velocity distribution derived from the spectral profiles is given in Fig. 2a, where the observational result from Stores et al. (2021) is also replicated in Fig. 2e. The observational data were obtained with the EUV Imaging Spectrometer (EIS) onboard Hinode (Culhane et al., 2007). The synthesized spectral data adopted the same spatial and wavelength resolutions as the observational data, and the instrument effect has also been included. In order to obtain a clear view on the distribution of turbulence relative to the high density flare loop, we employ a LOS parallel to our $x$ direction (i.e. perpendicular to our flare loops) when synthesizing these EIS/255 spectra. 
Figure 2a gives the non-thermal velocity distribution across the $y-z$ plane obtained from the EIS spectra, while a higher resolution equivalent, calculated from the available magnetohydrodynamic (MHD) plasma parameters (electron number density $N_{\mathrm{e}}$ and $x$-component of velocity $v_{x}$) by means of

$$v_{\mathrm{nth}}(y,z)=\sqrt{\frac{\int N_{\mathrm{e}}(x,y,z)v_{x}^{2}(x,y,z)\,\mathrm{d}x-\left[\int N_{\mathrm{e}}(x,y,z)v_{x}(x,y,z)\,\mathrm{d}x\right]^{2}/\int N_{\mathrm{e}}(x,y,z)\,\mathrm{d}x}{\int N_{\mathrm{e}}(x,y,z)\,\mathrm{d}x}}\,,\qquad(6)$$

is shown in panel b. The 131 Å contours indicate the location of the flare EUV loops. The non-thermal velocity distribution from panels a/b shows very similar features to the observational distribution in panel e, where the turbulence has a wide spatial distribution above the AIA/131 flare loops and the maximum non-thermal velocity is located above the looptop. The maximum non-thermal velocity is found in a weak magnetic field region (see panel b), which is a promising electron acceleration site, as suggested by Chen et al. (2020) and observationally demonstrated by Fleishman et al. (2022). The spectral profiles of the high non-thermal velocity regions have near-perfect Gaussian shapes, as demonstrated in Fig. 2 panels c and d, which is a distinctive feature of flare turbulence. We confirm from the high resolution non-thermal velocity map (panel b) that there are two isolated high $v_{\mathrm{nth}}$ regions above the high density flare loop, which also show up in panel a. The high non-thermal velocity below the loops in panel a is mainly caused by chromospheric evaporations related to shocks, rather than by turbulence.

### 3.3 How is the turbulence produced?

In order to investigate how the turbulence is produced, we analyze the dynamics in a slice (parallel to the $x$-$z$ plane at $y=3$ Mm or $Y\sim 4$ arcsec) that runs across a region with maximal non-thermal velocities (as indicated by the white dashed vertical line in Fig. 2b). 
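Equation (6) is simply the density-weighted standard deviation of the LOS velocity component; a small numpy sketch of that post-processing step (illustrative, not the authors' code):

```python
import numpy as np

# Minimal sketch of Eq. (6): the density-weighted standard deviation of the
# line-of-sight velocity component, integrated along x (axis 0).
def non_thermal_velocity(ne, vx):
    """Density-weighted std of vx along axis 0 (uniform cell spacing assumed,
    so the dx factors cancel)."""
    w = np.sum(ne, axis=0)
    mean_v = np.sum(ne * vx, axis=0) / w
    mean_v2 = np.sum(ne * vx**2, axis=0) / w
    return np.sqrt(mean_v2 - mean_v**2)

# A uniform flow carries no unresolved motion:
assert non_thermal_velocity(np.ones(4), np.full(4, 300.0)) < 1e-6
# Two equal-density parcels at +/-100 km/s give v_nth = 100 km/s:
assert abs(non_thermal_velocity(np.ones(2), np.array([-100.0, 100.0])) - 100.0) < 1e-9
```

Applied to 3-D arrays of $N_{\mathrm{e}}$ and $v_{x}$, this collapses the LOS axis and returns the 2-D non-thermal velocity map of panel b.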
For that vertical plane, Figure 3a (top left panel) shows the localized AIA/131 emission flux distribution as well as the ‘velocity field’ (through streamlines of $v_{x}\vec{e}_{x}+v_{z}\vec{e}_{z}$) on the slice. On its right hand side, the same panel Figure 3a gives the non-thermal velocity distribution as function of the height as obtained from Fig. 2b. It is obvious in the velocity streamline view that there are a lot of vortices around the height with maximal non-thermal velocity (i.e. at $Z\approx 35$ arcsec). Figure 3b-e (four top right panels) show in colorscales the spatial distributions of number density $N_{\mathrm{e}}$, $v_{x}$, $v_{z}$, and thermal pressure $P$ (the latter in dimensionless unit) of a selected turbulent region: namely in the red box shown in Fig. 3a. We each time overlay the streamlines as well. Vortices appear prominently near $Z\sim 35$ arcsec, where we find large shear velocities (e.g. note the velocity jump $>1000$ km s-1 in $v_{z}$ from $X\sim 21$ arcsec to $X\sim 23$ arcsec), which indicates the turbulence is produced via KHI. KHI may happen at the interface between high speed shear flows to produce vortices at their interface. This instability is often inhibited by magnetic tension in magnetized plasmas. When the flow velocity vector is parallel to the magnetic field lines, triggering KHI requires that (half) the velocity jump is larger than the local Alfvén speed (Keppens et al., 1999). Here, in the slice analysed, we find conditions that are quite different from this standard 2D situation. We have here a guiding magnetic field that is locally almost perpendicular to the $x$-$z$ plane shown here (shape of the guiding field refers to the white solid line in Fig. 4a), while there are large shears in both $v_{x}$ and $v_{z}$. As a result, KHI vortices can be produced on this $x$-$z$ plane without distorting the magnetic field too much, and hence without producing a stabilizing magnetic tension. 
Such a condition is unconditionally unstable for KHI. The relative orientation of magnetic field and flow shear is similar to the case of so-called TWIKH rolls from transverse-wave-induced KHI in coronal loops (e.g. Antolin et al., 2014; Guo et al., 2019; Shi et al., 2021). A simplified growth rate estimate of KHI in this condition, $\omega=kv_{0}$ (using equation 13.38 in Goedbloed et al., 2019), gives a growth timescale ($1/\omega$) of seconds, the same order of magnitude as the vortex generation time scale seen in our simulation. Here, we used $k=1/(\mathrm{several\ Mm})$, since our vortices have a length scale of Mm, and a half shear speed $v_{0}$ of hundreds up to one thousand km s$^{-1}$, as found in the flow distribution.

Figure 3: Color map in top left panel (a): AIA/131 emission flux distribution at the slice $\mathrm{Y}\sim 4$ arcsec, along the white dashed vertical line in Fig. 2b. At the right-hand side of panel (a) we show the non-thermal velocity. Top right panels (b)-(e): $N_{\mathrm{e}}$, $v_{x}$, $v_{z}$ and thermal pressure inside the red box in panel (a). Bottom left panels (f)&(g): $N_{\mathrm{e}}$ and $v_{z}$ at the slice marked with a green dashed line in (b)-(e). Bottom right panels (h)&(i): $N_{\mathrm{e}}$ and $v_{z}$ at the slice marked with the cyan dashed line in (b)-(e). The red vertical dashed lines in all bottom panels (f)-(i) give the location of the slice shown in the top right panels (b)-(e). Streamlines in (a)-(e) show the velocity field given by $v_{x}\vec{e}_{x}+v_{z}\vec{e}_{z}$, while those in (f)-(i) show the field given by $v_{y}\vec{e}_{y}+v_{z}\vec{e}_{z}$. The streamline width in panel (a) is proportional to $\sqrt{v_{x}^{2}+v_{z}^{2}}$ to highlight the turbulent regions, while in the other panels it is constant.

The initial shear motions come from non-linear interactions of the downward reconnection flows with the magnetic arcades above the AIA/131 loop.
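The order-of-magnitude KHI growth estimate $\omega=kv_{0}$ quoted above can be checked with one line of arithmetic (the length and speed scales below are the illustrative values mentioned in the text, not exact simulation numbers):

```python
# KHI growth timescale tau = 1/omega = 1/(k * v0) in the tension-free
# limit (cf. equation 13.38 in Goedbloed et al. 2019).
L   = 3.0e6          # vortex length scale: "several Mm", here 3 Mm (assumed)
k   = 1.0 / L        # wavenumber ~ 1/(several Mm), in m^-1
v0  = 5.0e5          # half shear speed: "hundreds of km/s", here 500 km/s
tau = 1.0 / (k * v0)
print(f"KHI growth timescale ~ {tau:.1f} s")  # ~ 6.0 s, i.e. seconds
```

This matches the seconds-scale vortex generation time seen in the simulation.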
Collisions between the downflows and the arcades lead to the formation of a termination shock and complicated reflected upflows. Their spatial distribution on the vertical $y$-$z$ plane differs for different $x$ (as shown in the bottom panels Fig. 3g & i), which enhances shear motion between downward flows and upward reflection flows. Combining all views collected in Fig. 3, we see that vortices prefer to form in $x$-$z$ planes, which are perpendicular to the local magnetic field direction. A related, detailed study on the interaction between reconnection flows and their arcades in 2D conditions is found in Ye et al. (2021). The magnetic field strength inside the flaring region is spatially dependent: the legs of the guiding line shown in Fig. 4a have an average strength of $\sim 40$ G, while the apex of the line has a weaker strength of less than 20 G due to the impact of turbulent motion. In real flares, the magnetic field can be much stronger, with field strengths higher than 100 G above the 131 Å loops. However, turbulence can still be produced with the mechanism mentioned above, as it is difficult for the magnetic field to inhibit the growth of KHI when the flow velocity vectors are perpendicular to the field lines. Note further that the turbulent upper border of the AIA/131 loops in Fig. 3a is attributed by Shen et al. (2022) to RTI/RMI effects. These authors focused on the dark fingers, also seen at an early stage in our Fig. 3a, where the connection with Supra-Arcade Downflows was made by Shen et al. (2022). The RTI/RMI region does not lead to large turbulent velocities during our simulation and is clearly situated below the region of maximal non-thermal velocities. As demonstrated in Fig. 3a, the non-thermal velocity around the upper border of the AIA/131 loops is smaller than 50 km s$^{-1}$. Our (higher resolution) simulation demonstrates that the turbulent zone is foremost located above the EUV loops (see also Fig. 2) and most likely driven by KHI mechanisms.

### 3.4 Turbulence at the lower atmosphere

Turbulence can also be found in lower atmospheric regions in our simulation. The bottom slice in the 3D view at left in Fig. 4a demonstrates the $v_{x}$ distribution in a layer at height $z=2$ Mm. Turbulent motions up to $\sim 10$ km s$^{-1}$ appear at the footpoints of the high density arcades. The vertical slice in the 3D view of Fig. 4a shows the corresponding number density distribution at $x=30$ Mm ($X\sim 41$ arcsec). The turbulent region at the $z=2$ Mm layer has an average number density of $\sim 10^{11}$ cm$^{-3}$ and an average temperature of $\sim 1$ MK. The temperature actually drops rapidly from several MK to a typical chromospheric temperature of $\sim 0.01$ MK around this height. Turbulent motions can still be found at a lower $z=1$ Mm layer, where the average number density is $\sim 10^{13}$ cm$^{-3}$, but with speeds reduced by one order of magnitude compared to those at $z=2$ Mm.

Figure 4: Horizontal slice in the left 3D view panel (a): $v_{x}$ distribution at $z=2$ Mm, while the vertical slice shows the $N_{\mathrm{e}}$ distribution at $x=30$ Mm. Right panels (b)-(e): Time-space plots of $B_{x}$, $v_{x}$, Alfvén speed $v_{\mathrm{A}}$ and $N_{\mathrm{e}}$ along the white solid (field) line shown in (a), where the minimum $s$-value is located at the left end of this line. The horizontal dashed lines in panels (b)-(e) and the vertical dotted line in Fig. 1d give the corresponding time for the 3D view in (a).

We find that the turbulence in this lower atmospheric region propagates down from higher regions, rather than being generated locally. We select a (time-invariant) curve which gives the general direction of the magnetic field (the white solid line in Fig. 4a) and study the time development of typical MHD quantities along that curve.
The time-space plots of $B_{x}$ and $v_{x}$ along the curve (top right panels) demonstrate that structures propagate from the middle of this curve to its ends. $B_{x}$ and $v_{x}$ are in anti-phase when structures propagate toward $s>0$, while they are in phase when structures propagate toward $s<0$. Since the magnetic field vector points from $s<0$ to $s>0$, such a phase relationship indicates that the structures are Alfvénic perturbations. In fact, the propagation speeds of the structures (fitted speeds are indicated in Fig. 4b) are close to the local Alfvén speeds quantified in panel (d). Sudden decreases in propagation speed around $s=30$ Mm result from a change in the local Alfvén speed, which is in turn caused by the plasma density variation due to chromospheric evaporation processes, as evident from the time-space plot of number density in panel (e). The downward propagating Alfvénic perturbations and the turbulence in the chromosphere may contribute to the generation of energetic electrons (e.g. Fletcher & Hudson, 2008). The Alfvénic perturbations may carry substantial energy to the lower atmosphere, contributing to the heating of the chromosphere and the generation of evaporation flows (e.g. Russell & Fletcher, 2013; Reep & Russell, 2016). These effects are worth investigating in future studies. The bigger picture obtained from all previous sections is as follows: turbulence is produced above the looptops due to KHI, and this turbulence propagates along magnetic field lines to lower regions, leading to a spatially distributed turbulent plasma at all heights. The arcade-shaped spatial distribution of the non-thermal velocity regions seen in Fig. 2a and b also supports this picture, in line with the arcade-shaped magnetic field that forms during the flare process.
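The phase diagnostic used above is the Walén relation for Alfvén waves, $\delta\bm{v}=\mp\delta\bm{B}/\sqrt{\mu_{0}\rho}$ for propagation along/against $\bm{B}$. A toy check with synthetic signals (illustrative only, not simulation data):

```python
import numpy as np

def walen_phase(db_x, dv_x):
    """Classify an Alfvenic perturbation by the B_x--v_x phase relation.

    Returns 'anti-phase' (propagation along B, i.e. toward s > 0 here)
    or 'in phase' (propagation against B, toward s < 0), based on the
    sign of the correlation between the two perturbation signals.
    """
    return "anti-phase" if np.sum(db_x * dv_x) < 0 else "in phase"

# Synthetic perturbation along a field-line coordinate s.
s  = np.linspace(0.0, 1.0, 200)
db = np.sin(2 * np.pi * 5 * s)
print(walen_phase(db, -db))  # anti-phase -> propagating toward s > 0
print(walen_phase(db, +db))  # in phase   -> propagating toward s < 0
```

Applied to the simulated $B_{x}$ and $v_{x}$ perturbations along the field line, this sign test reproduces the direction assignment described in the text.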
### 3.5 Turbulent Kinetic Energy

When connecting MHD turbulence with actual non-thermal particle acceleration, we should still verify whether the released magnetic energy from reconnection can be efficiently converted into turbulence energy. For KHI-driven turbulence, the energy in the turbulence comes from the kinetic energy of bulk flows, the reconnection downflows in particular. Therefore, here we compare the time-evolving turbulence energy with the time-integrated reconnection downflow kinetic energy. Turbulence energy consists of the kinetic energy of the turbulent velocity field and the magnetic energy of the turbulent magnetic field. The spatial distributions of the kinetic energy density and the magnetic energy density at $t\approx 9$ min are shown in Fig. 5a and b, respectively. The turbulent kinetic energy density is calculated from $Ev_{\mathrm{tur}}(y,z)=\int\rho(x,y,z)v_{x}^{2}(x,y,z)\mathrm{d}x-[\int\rho(x,y,z)v_{x}(x,y,z)\mathrm{d}x]^{2}/\int\rho(x,y,z)\mathrm{d}x,$ (7) and the turbulent magnetic energy density is calculated from $Eb_{\mathrm{tur}}(y,z)=\int B_{x}^{2}(x,y,z)\mathrm{d}x-[\int B_{x}(x,y,z)\mathrm{d}x]^{2}/\int\mathrm{d}x,$ (8) where $\rho$ is the density. An assumption applied in the calculation is that the turbulence is anisotropic (Alfvénic), such that the turbulent motion develops (only) freely in directions perpendicular to the (average) magnetic field, which is why we only incorporate $x$-components in both the flow and magnetic field turbulence quantifications for this $y$-$z$ view. Fig. 5a and b demonstrate that $Ev_{\mathrm{tur}}$ and $Eb_{\mathrm{tur}}$ have similar magnitudes and spatial distributions, again supporting our assumption that the turbulence is indeed Alfvénic. The time-integrated kinetic energy of the reconnection downflow is calculated by integrating the kinetic energy flux that crosses a slice at height $z=45$ Mm ($Z\sim 62$ arcsec, the black dashed line in Fig.
5c), where we do the integration from $t=450$ s, the time when the fast reconnection regime starts. The color map shows the spatially averaged vertical velocity distribution given by $\bar{v}_{z}(y,z)=\int N_{\mathrm{e}}(x,y,z)v_{z}(x,y,z)\mathrm{d}x/\int N_{\mathrm{e}}(x,y,z)\mathrm{d}x,$ (9) which gives the locations of the reconnection downflow.

Figure 5: (a) Spatial distribution of the turbulent kinetic energy. (b) Spatial distribution of the turbulent magnetic energy. (c) $x$-averaged vertical speed distribution. Taken along the black solid line in panel (c), we show in (d) the time-integrated reconnection downflow kinetic energy (black solid line) that goes through the surface. Panel (d) further shows, as a blue solid line, the instantaneous total turbulence energy, and its division into kinetic and turbulent magnetic energy. In this panel (d), the vertical red solid line corresponds to the time of panels (a), (b) and (c). As in Fig. 2, the contours in panels (a), (b) and (c) show the AIA/131 loop location.

Figure 5d compares the time-integrated reconnection downflow kinetic energy $Ek$, the instantaneous total turbulence energy $E_{\mathrm{tur}}$, and the kinetic and magnetic components of this turbulence energy ($E_{\mathrm{tur}}=Ev_{\mathrm{tur}}+Eb_{\mathrm{tur}}$). $Ev_{\mathrm{tur}}$ closely follows $Eb_{\mathrm{tur}}$ during the entire time evolution, as expected for Alfvénic motions. $E_{\mathrm{tur}}$ shows a similar tendency as $Ek$, where we find $E_{\mathrm{tur}}\approx 0.1Ek$, so roughly a 10% fraction at each time. The total, time-integrated amount transferred from the bulk kinetic energy of the reconnection downflows to turbulence energy is larger than the instantaneous turbulence energy $E_{\mathrm{tur}}$, considering that turbulence energy is continuously (transported to the chromosphere and) diffused.
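Equations (7)-(9) are the same kind of line-of-sight moments as Eq. (6) and can be evaluated analogously. A minimal sketch, with illustrative array names and a uniform grid assumed:

```python
import numpy as np

def turbulent_energy_densities(rho, v_x, b_x, dx=1.0):
    """Turbulent kinetic/magnetic energy densities, cf. Eqs. (7)-(8).

    rho, v_x, b_x : arrays of shape (nx, ny, nz) (illustrative layout);
    integrals over x become sums along axis 0. Units are the caller's.
    """
    w_rho = np.sum(rho, axis=0) * dx                       # int rho dx
    ev = np.sum(rho * v_x**2, axis=0) * dx \
         - (np.sum(rho * v_x, axis=0) * dx) ** 2 / w_rho   # Eq. (7)
    length = rho.shape[0] * dx                             # int dx
    eb = np.sum(b_x**2, axis=0) * dx \
         - (np.sum(b_x, axis=0) * dx) ** 2 / length        # Eq. (8)
    return ev, eb

def mean_vz(n_e, v_z):
    """Density-weighted average vertical velocity, cf. Eq. (9)."""
    return np.sum(n_e * v_z, axis=0) / np.sum(n_e, axis=0)
```

As a sanity check, uniform fields yield zero turbulent energy in both channels, and a constant $v_{z}$ is returned unchanged by the weighted average.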
## 4 Conclusions

The spatial distribution and the characteristic velocity of the plasma turbulence in flare observations are successfully reproduced in our 3D MHD flare simulation. The maximum turbulent velocity reached is higher than 200 km s$^{-1}$, and is located in close connection to the reconnection termination shock. Reconnection downflows collide with the magnetic arcades near the termination shock and then produce complicated reflected upflows. Non-linear interactions between the downflows and upflows lead to spatially extended regions where high speed turbulent motion forms naturally, in which KHI dominates. More than 10% of the downflow kinetic energy is converted into turbulence energy via this mechanism. The turbulence is anisotropic due to magnetic tension, as the turbulent vortices appear most clearly inside planes perpendicular to the guiding magnetic field. The locally generated turbulence can propagate along the guiding magnetic field as Alfvénic perturbations, leading to a wider spatial distribution of turbulent motions. The lower atmosphere, where the number density reaches $10^{11}$ cm$^{-3}$, still hosts turbulent motions of order 10 km s$^{-1}$. Our model applies to any flare that involves magnetic reconnection, no matter how big the flare is or how long it takes to release energy, since it is governed by the scale-invariant MHD description. Downward reconnection outflows invariably encounter lower-lying magnetic fields, by which the outflows are stopped and diverted in the direction perpendicular to the magnetic field, making the flow shear orthogonal to the field. Such a condition is a perfect environment for KHI in full 3D (and is missed in 2D settings), regardless of the magnetic field strength. KHI can be easily triggered for various reasons under such conditions, where the interaction between the downflows and reflected upflows is (only) one of them.
Our simulations used various algorithmic improvements that make it possible to resolve details down to a few hundred km, thanks to adaptive mesh refinement and the TRAC treatment to properly evolve multi-dimensional MHD from chromosphere to corona. Application of the TRAC method is not crucial for the KHI turbulence in the studied impulsive phase, but will be important for properly generating coronal rain in the gradual phase (Ruan et al., 2021). A 3D numerical study of the formation of post-flare coronal rain can serve as an interesting follow-up to this work. Then, the role of other instabilities (like RTI/RMI and thermal instabilities) in various evolutionary stages of a 3D flare loop can be clarified in detail. The maximum non-thermal velocity is located in a weak magnetic field region, where high-energy electrons are accelerated. This indicates that plasma turbulence probably plays an essential role in accelerating flare electrons. The next step of this study should address the consequences of the fluid turbulence for charged particle dynamics more seriously, either through test particle assessment of the attainable particle acceleration efficiency, or by evolving to hybrid (fluid/particle) or fully kinetic models. There is no doubt that turbulence can accelerate electrons/ions to high energies, but how much of the energy released by reconnection can go into energetic electrons/ions in this way remains an open question. According to Aschwanden et al. (2017), about half of the energy goes to energetic electrons and about 17% to energetic ions on average. Emslie et al. (2012) quote a lower acceleration efficiency, but still about 20% of the released energy would go to energetic electrons and ions. It might be difficult to achieve this in a multi-step process such as turbulence or shock acceleration (Cargill, 1996; Miller et al., 1997).
Most likely, the tens of percent energy transfer efficiency is the result of multiple acceleration mechanisms cooperating (e.g., turbulence acceleration and shock acceleration). Note that other scenarios can achieve high acceleration efficiencies, such as the acceleration at fragmented current sheets (Cargill et al., 2012). We thank the referee Peter Cargill for very constructive comments. WR was supported by a postdoctoral mandate (PDMT1/21/027) by KU Leuven. LY was supported by the Youth Innovation Promotion Association of CAS (2021064). RK is supported by Internal funds KU Leuven through the project C14/19/089 TRACESpace, and an FWO project G0B4521N. RK also received funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant agreement No. 833251 PROMINENT ERC-ADG 2018). ## References * Antolin et al. (2014) Antolin, P., Yokoyama, T., & Van Doorsselaere, T. 2014, ApJ, 787, L22 * Antonucci et al. (1982) Antonucci, E., Gabriel, A. H., Acton, L. W., et al. 1982, Sol. Phys., 78, 107 * Aschwanden (2005) Aschwanden, M. J. 2005, Physics of the Solar Corona. An Introduction with Problems and Solutions (2nd edition) * Aschwanden et al. (2017) Aschwanden, M. J., Caspi, A., Cohen, C. M. S., et al. 2017, ApJ, 836, 17 * Aulanier et al. (2013) Aulanier, G., Démoulin, P., Schrijver, C. J., et al. 2013, A&A, 549, A66 * Avrett & Loeser (2008) Avrett, E. H., & Loeser, R. 2008, ApJS, 175, 229 * Cargill (1996) Cargill, P. 1996, EOS Transactions, 77, 353 * Cargill et al. (2012) Cargill, P. J., Vlahos, L., Baumann, G., Drake, J. F., & Nordlund, Å. 2012, Space Sci. Rev., 173, 223 * Chen et al. (2020) Chen, B., Shen, C., Gary, D. E., et al. 2020, Nature Astronomy, 4, 1140 * Cheng & Qiu (2016) Cheng, J. X., & Qiu, J. 2016, ApJ, 825, 37 * Colgan et al. (2008) Colgan, J., Abdallah, J., J., Sherrill, M. E., et al. 2008, ApJ, 689, 585 * Culhane et al. (2007) Culhane, J. L., Harra, L. K., James, A. M., et al. 2007, Sol. 
Phys., 243, 19 * Del Zanna et al. (2015) Del Zanna, G., Dere, K. P., Young, P. R., Landi, E., & Mason, H. E. 2015, A&A, 582, A56 * Doschek et al. (1980) Doschek, G. A., Feldman, U., Kreplin, R. W., & Cohen, L. 1980, ApJ, 239, 725 * Doschek et al. (2014) Doschek, G. A., McKenzie, D. E., & Warren, H. P. 2014, ApJ, 788, 26 * Emslie et al. (2012) Emslie, A. G., Dennis, B. R., Shih, A. Y., et al. 2012, ApJ, 759, 71 * Fang et al. (2016) Fang, X., Yuan, D., Xia, C., Van Doorsselaere, T., & Keppens, R. 2016, ApJ, 833, 36 * Fleishman et al. (2022) Fleishman, G. D., Nita, G. M., Chen, B., Yu, S., & Gary, D. E. 2022, Nature, doi:10.1038/s41586-022-04728-8 * Fletcher & Hudson (2008) Fletcher, L., & Hudson, H. S. 2008, ApJ, 675, 1645 * Gabriel et al. (1981) Gabriel, A. H., Phillips, K. J. H., Acton, L. W., et al. 1981, ApJ, 244, L147 * Goedbloed et al. (2019) Goedbloed, H., Keppens, R., & Poedts, S. 2019, Shear flow and rotation (Cambridge University Press), 473–524 * Grigis et al. (2013) Grigis, P., Yingna, S., & Weber, M. 2013, AIA PSF characterization and image deconvolution, Tech. rep., Tech. Rep., AIA team * Guo et al. (2019) Guo, M., Van Doorsselaere, T., Karampelas, K., & Li, B. 2019, ApJ, 883, 20 * Harten et al. (1983) Harten, A., Lax, P. D., & Leer, B. v. 1983, SIAM Review, 25, 35 * Hudson & Ryan (1995) Hudson, H., & Ryan, J. 1995, ARA&A, 33, 239 * Jeffrey et al. (2018) Jeffrey, N. L. S., Fletcher, L., Labrosse, N., & Simões, P. J. A. 2018, Science Advances, 4, 2794 * Johnston et al. (2020) Johnston, C. D., Cargill, P. J., Hood, A. W., et al. 2020, A&A, 635, A168 * Keppens et al. (2021) Keppens, R., Teunissen, J., Xia, C., & Porth, O. 2021, Computers & Mathematics With Applications, 81, 316 * Keppens et al. (1999) Keppens, R., Tóth, G., Westermann, R. H. J., & Goedbloed, J. P. 1999, Journal of Plasma Physics, 61, 1 * Kontar et al. (2017) Kontar, E. P., Perez, J. E., Harra, L. K., et al. 2017, Phys. Rev. Lett., 118, 155101 * Kontar et al. (2011) Kontar, E. P., Brown, J. 
C., Emslie, A. G., et al. 2011, Space Sci. Rev., 159, 301 * Krucker et al. (2008) Krucker, S., Battaglia, M., Cargill, P. J., et al. 2008, A&A Rev., 16, 155 * Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17 * Masuda et al. (1994) Masuda, S., Kosugi, T., Hara, H., Tsuneta, S., & Ogawara, Y. 1994, Nature, 371, 495 * Meshkov (1969) Meshkov, E. 1969, Fluid Dynamics, 4, 101 * Miller et al. (1997) Miller, J. A., Cargill, P. J., Emslie, A. G., et al. 1997, J. Geophys. Res., 102, 14631 * Parker (1963) Parker, E. N. 1963, ApJS, 8, 177 * Pinto et al. (2015) Pinto, R. F., Vilmer, N., & Brun, A. S. 2015, A&A, 576, A37 * Reep & Russell (2016) Reep, J. W., & Russell, A. J. B. 2016, ApJ, 818, L20 * Richtmyer (1960) Richtmyer, R. D. 1960, Communications on Pure and Applied Mathematics, 13, 297 * Ruan et al. (2018) Ruan, W., Xia, C., & Keppens, R. 2018, A&A, 618, A135 * Ruan et al. (2020) —. 2020, ApJ, 896, 97 * Ruan et al. (2021) Ruan, W., Zhou, Y., & Keppens, R. 2021, ApJ, 920, L15 * Russell & Fletcher (2013) Russell, A. J. B., & Fletcher, L. 2013, ApJ, 765, 81 * Ruuth (2006) Ruuth, S. 2006, Mathematics of Computation, 75, 183 * Shen et al. (2022) Shen, C., Chen, B., Reeves, K. K., et al. 2022, Nature Astronomy, 6, 317 * Shi et al. (2021) Shi, M., Van Doorsselaere, T., Antolin, P., & Li, B. 2021, ApJ, 922, 60 * Shibata et al. (1995) Shibata, K., Masuda, S., Shimojo, M., et al. 1995, ApJ, 451, L83 * Stores et al. (2021) Stores, M., Jeffrey, N. L. S., & Kontar, E. P. 2021, ApJ, 923, 40 * Su et al. (2013) Su, Y., Veronig, A. M., Holman, G. D., et al. 2013, Nature Physics, 9, 489 * Tomczak (2001) Tomczak, M. 2001, A&A, 366, 294 * Tomczak & Ciborski (2007) Tomczak, M., & Ciborski, T. 2007, A&A, 461, 315 * van Leer (1974) van Leer, B. 1974, Journal of Computational Physics, 14, 361 * Čada & Torrilhon (2009) Čada, M., & Torrilhon, M. 2009, Journal of Computational Physics, 228, 4118 * Wang et al. 
(2022) Wang, Y., Cheng, X., Ren, Z., & Ding, M. 2022, ApJ, 931, L32 * Xia et al. (2018) Xia, C., Teunissen, J., El Mellah, I., Chané, E., & Keppens, R. 2018, ApJS, 234, 30 * Yamada et al. (2010) Yamada, M., Kulsrud, R., & Ji, H. 2010, Reviews of modern physics, 82, 603 * Ye et al. (2021) Ye, J., Cai, Q., Shen, C., et al. 2021, ApJ, 909, 45 * Ye et al. (2019) Ye, J., Shen, C., Raymond, J. C., Lin, J., & Ziegler, U. 2019, MNRAS, 482, 588 * Zhou et al. (2021) Zhou, Y., Williams, R. J., Ramaprabhu, P., et al. 2021, Physica D: Nonlinear Phenomena, 423, 132838 * Zhou et al. (2021) Zhou, Y.-H., Ruan, W.-Z., Xia, C., & Keppens, R. 2021, A&A, 648, A29
# Unveiling Nucleon 3D Chiral-Odd Structure with Jet Axes Wai Kin Lai<EMAIL_ADDRESS>Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Southern Nuclear Science Computing Center, South China Normal University, Guangzhou 510006, China Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA Xiaohui Liu <EMAIL_ADDRESS>Center of Advanced Quantum Studies, Department of Physics, Beijing Normal University, Beijing 100875, China Center for High Energy Physics, Peking University, Beijing 100871, China Manman Wang Center of Advanced Quantum Studies, Department of Physics, Beijing Normal University, Beijing 100875, China Hongxi Xing<EMAIL_ADDRESS>Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Southern Nuclear Science Computing Center, South China Normal University, Guangzhou 510006, China (August 27, 2024) ###### Abstract We reinterpret jet clustering as an axis-finding procedure which, along with the proton beam, defines the virtual-photon transverse momentum $q_{T}$ in deep inelastic scattering (DIS). In this way, we are able to probe the nucleon intrinsic structure using jet axes in a fully inclusive manner, similar to the Drell-Yan process. We present the complete list of azimuthal asymmetries and the associated factorization formulae at leading power for deep-inelastic scattering of a nucleon. The factorization formulae involve both the conventional time-reversal-even (T-even) jet function and the T-odd one, which have access to all transverse-momentum-dependent parton distribution functions (TMD PDFs) at leading twist. 
Since the factorization holds as long as $q_{T}\ll Q$, where $Q$ is the photon virtuality, the jet-axis probe of the nucleon structure should be feasible for machines with relatively low energies, such as the Electron-Ion Collider in China (EicC). We show that, within the winner-take-all (WTA) axis-finding scheme, the coupling between the T-odd jet function and the quark transversity or the Boer-Mulders function could induce sizable azimuthal asymmetries at the EicC, the EIC and HERA. We also give predictions for the azimuthal asymmetry of back-to-back dijet production in $e^{+}e^{-}$ annihilation at Belle and other energies.

## I Introduction

Recently, jets and jet substructure have been proposed as alternative probes for portraying the full three-dimensional (3D) image of a nucleon, enriching the content of transverse-momentum-dependent (TMD) spin physics Kang _et al._ (2017); Liu _et al._ (2019); Gutierrez-Reyes _et al._ (2019a); Arratia _et al._ (2020); Liu _et al._ (2020); Gutierrez-Reyes _et al._ (2018, 2019b); Kang _et al._ (2022, 2020a). The jet probe of the nucleon structure has been shown to give access to the TMD parton distribution functions, including the Sivers function of a transversely polarized nucleon. Conventionally, we require the jets to acquire large transverse momenta, and jets are therefore regarded as feasible only for high-energy colliders such as the LHC, but practically challenging for machines with a relatively low center-of-mass energy, such as the Electron-Ion Collider in China (EicC) Anderle _et al._ (2021), or for detectors more optimized for low energy scales, such as the EIC Comprehensive Chromodynamics Experiment (ECCE) Abdul Khalek _et al._ (2021). However, in this work, we argue that this is not the case by reinterpreting jet clustering as an axis-finding procedure to measure the virtual photon $q_{T}$, which allows an inclusive probe of TMD spin physics suitable also for low-energy machines Liu (2021).
In order to maximize the full reach of the jet probe into the complete list of nucleon spin structures, the concept of the time-reversal-odd (T-odd) jet function was proposed recently Liu and Xing (2021). The T-odd jet function couples directly to the chiral-odd nucleon parton distributions, such as the quark transversity and the Boer-Mulders function of the proton. It immediately opens up many unique opportunities for probing the nucleon intrinsic spin dynamics using jets, which were thought to be impossible. Besides, the T-odd jet function is interesting in its own right, since it could “film” the QCD non-perturbative dynamics by continuously changing the jet axis from one choice to another. In this work, we study the phenomenology of the T-odd jet function in deep-inelastic scattering (DIS) of a nucleon and in $e^{+}e^{-}$ annihilation. In Section II, we explain how the jet axis is used for measuring the photon $q_{T}$ in a fully inclusive way and argue why the jet-axis probe of the nucleon spin and the TMDs is feasible even for low-energy machines such as the Electron-Ion Collider in China (EicC) and Belle Accardi _et al._ (2022). In Section III, we briefly review the notion of the T-odd jet function. In Section IV, we give the complete list of azimuthal asymmetries in the jet-axis probe in deep-inelastic scattering of a nucleon. We give predictions for the azimuthal asymmetries associated with the couplings of the T-odd jet function with the quark transversity and the Boer-Mulders function at the EicC, the EIC Abdul Khalek _et al._ (2021), and HERA. In Section V, we study the azimuthal asymmetry of back-to-back dijet production in $e^{+}e^{-}$ annihilation, induced by the T-odd jet function. In Section VI, we give a summary and an outlook.
## II Measuring photon $q_{T}$ in DIS

All conventional probes of the TMDs and the spin structure are more or less equivalent to measuring the virtual-photon transverse momentum $q_{T}$ with respect to two pre-defined axes. For instance, in the Drell-Yan process, the incoming nucleon beams naturally set up the $\pm z$-axis and the photon transverse momentum is then straightforwardly determined. In DIS, since we only have one nucleon beam, we need another direction to define the photon $q_{T}$. Tagging a final-state hadron is a natural option for this purpose, and in this case the photon $q_{T}$ is measured with respect to the nucleon beam and the tagged hadron momentum $P_{h}$. This is nothing but semi-inclusive deep-inelastic scattering (SIDIS). Finding an axis for measuring the photon $q_{T}$ in DIS is certainly not limited to tagging hadrons; many other strategies could also serve here, such as final-state-particle clustering. The procedure follows exactly the jet clustering algorithms, but with a different emphasis: the jet clustering procedure is merely a recursive algorithm for determining the axes. Once the axes are determined, we measure the photon $q_{T}$ with respect to one of them and the proton beam to probe the nucleon structure, while disregarding the jet itself, as illustrated in Fig. 1. Therefore, the jet-axis probe is fully differential, just like SIDIS.

Figure 1: Photon $q_{T}$ by the jet axis in DIS, where one of the jet axes and the proton beam determine the ${\bar{n}}$ and $n$ directions respectively, which in turn determine the photon $q_{T}$.

Based on what we have described, we can derive the factorization formula for the jet-axis probe.
Formally, the factorization theorem reads $\displaystyle d\sigma\propto f_{i}^{U/T}(x,p_{T}^{2})\otimes{\cal J}_{1,i}(z,k_{T}^{2})+g_{i}^{U/T}(x,p_{T}^{2})\otimes{\cal J}_{T,i}(z,k_{T}^{2})\,,$ (1) where $f_{i}^{U/T}(x,p_{T}^{2})$ and $g_{i}^{U/T}(x,p_{T}^{2})$ are the proton partonic transverse-momentum distributions for parton flavor $i$, and ${\cal J}_{1,i}(z,k_{T}^{2})$ and ${\cal J}_{T,i}(z,k_{T}^{2})$ are the jet-axis-finding functions (jet functions), which encode the perturbatively calculable jet clustering procedure. The conventional jet function ${\cal J}_{1,i}(z,k_{T}^{2})$ is induced by an unpolarized quark, while a transversely polarized quark gives rise to the time-reversal-odd (T-odd) jet function ${\cal J}_{T,i}(z,k_{T}^{2})$. The detailed factorization form and the definition of the T-odd jet function ${\cal J}_{T,i}(z,k_{T}^{2})$ will be given in the following sections. Here we note that the factorization theorem holds as long as $Q\gg k_{T}$, the same requirement as for the SIDIS factorization to be valid. In this sense, just like SIDIS, the jet-axis probe is low-energy-machine friendly, and could likely be implemented at the EicC. To adapt the jet-axis-finding procedure to low-energy machines, instead of using the usual $k_{T}$-type jet algorithms that are widely used at the LHC, in DIS we default to energy-type jet algorithms, which are more suitable for clustering particles with low transverse momenta that populate the forward/backward rapidities. For instance, we can adopt the spherically-invariant jet algorithm Cacciari _et al._ (2012), defined by $\displaystyle d_{ij}={\rm min}(E^{-2}_{i},E^{-2}_{j})\frac{1-\cos\theta_{ij}}{1-\cos R}\,,\quad d_{iB}=E_{i}^{-2}\,,$ (2) where $\theta_{ij}$ is the angle between particles $i$ and $j$, while $E_{i}$ and $E_{j}$ are the energies they carry. For TMD studies, the radius parameter $R$ will be chosen such that $R\sim{\cal O}(1)\gg q_{T}/Q$.
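The measure in Eq. (2) plugs into a standard sequential-recombination loop. The sketch below is a simplified illustration with naive $E$-scheme recombination (particles assumed to have nonzero three-momentum); a real analysis would use the spherically invariant ($e^{+}e^{-}$ generalized-$k_{T}$) algorithms as implemented in FastJet:

```python
import numpy as np

def cluster(particles, R):
    """Sequential recombination with the measure of Eq. (2).

    particles : list of four-momenta (E, px, py, pz), all with E > 0
    and nonzero three-momentum.
    d_ij = min(E_i^-2, E_j^-2) (1 - cos theta_ij) / (1 - cos R)
    d_iB = E_i^-2
    At each step, the smallest distance either merges two particles
    (E-scheme: add four-momenta) or promotes one to a final jet.
    """
    ps = [np.asarray(p, dtype=float) for p in particles]
    jets = []
    while ps:
        # Smallest beam distance and its particle index.
        i_b = min(range(len(ps)), key=lambda i: ps[i][0] ** -2)
        d_min, pair = ps[i_b][0] ** -2, None
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                pi, pj = ps[i][1:], ps[j][1:]
                cos_ij = pi @ pj / (np.linalg.norm(pi) * np.linalg.norm(pj))
                d = (min(ps[i][0] ** -2, ps[j][0] ** -2)
                     * (1.0 - cos_ij) / (1.0 - np.cos(R)))
                if d < d_min:
                    d_min, pair = d, (i, j)
        if pair is None:
            jets.append(ps.pop(i_b))   # promote to a final jet
        else:
            i, j = pair                # recombine particles i and j
            merged = ps[i] + ps[j]
            ps = [p for k, p in enumerate(ps) if k not in (i, j)]
            ps.append(merged)
    return jets
```

For two nearly collinear particles plus one back-to-back particle, the collinear pair is merged first and two jets are returned, whose axes can then serve as the $\bar{n}$ direction discussed in the text.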
## III T-odd jet function

The inclusive photon $q_{T}$ cross sections with respect to the proton beam and the jet axis can be written in terms of the factorization theorem Eq. (1), derived from soft-collinear effective theory (SCET) Bauer _et al._ (2001); Bauer and Stewart (2001); Bauer _et al._ (2002a, b). The factorization theorem involves the transverse-momentum-dependent (TMD) correlator $\displaystyle{\cal J}^{ij}(z,k_{T})$ $\displaystyle=\frac{1}{2z}\sum_{X}\int\frac{dy^{+}d^{2}\bm{y}_{T}}{(2\pi)^{3}}e^{ik\cdot y}\langle 0|{\chi}^{i}_{\bar{n}}(y)|JX\rangle\langle JX|\bar{\chi}^{j}_{\bar{n}}(0)|0\rangle|_{y^{-}=0}\,,$ (3) where $\bar{n}$ is a light-like vector along the direction of the jet, and $\chi_{\bar{n}}=W^{\dagger}_{\bar{n}}\xi_{\bar{n}}$ is the product of the collinear quark field $\xi_{\bar{n}}$ and the collinear Wilson line $W^{\dagger}_{\bar{n}}$. Here, $z$ is the momentum fraction of the jet with respect to the fragmenting parton that initiates the jet, i.e. $z=P_{J}^{-}/k^{-}$, with $P_{J}$ the jet momentum that defines the jet axis, and $k$ the momentum of the fragmenting quark. The jet algorithm dependence, which determines $P_{J}$ and hence the jet axis, is implicit in Eq. (3) and can be calculated perturbatively. Conventionally, only the chiral-even Dirac structure $\not{\bar{n}}$ in Eq. (3) was considered. However, as noted in Ref. Liu and Xing (2021), in the nonperturbative regime in which $k_{T}\sim\Lambda_{\rm QCD}$, spontaneous chiral symmetry breaking leads to a nonzero component of the jet which is both time-reversal-odd (T-odd) and chiral-odd, when the jet axis differs from the direction of the fragmenting parton. Therefore, the correlator in Eq.
(3) in general is a sum of two structures: $\displaystyle{\cal J}(z,k_{T})$ $\displaystyle={\cal J}_{1}(z,k_{T}^{2})\frac{\not{\bar{n}}}{2}+i{\cal J}_{T}(z,k_{T}^{2})\frac{\not{k}_{T}\not{\bar{n}}}{2}\,,$ (4) where ${\cal J}_{1}(z,k_{T}^{2})$ is the traditional jet function, and ${\cal J}_{T}(z,k_{T}^{2})$ is the T-odd jet function. Due to its chiral-odd nature, an immediate application of the T-odd jet function is to probe the chiral-odd TMD PDFs of the nucleons in DIS, such as the Boer-Mulders function and the transversity, which were thought to be impossible to access using jets. The T-odd jet function has the following advantages:

* Universality: Like the traditional jet function, the T-odd jet function is process independent.
* Flexibility: The freedom in choosing a jet recombination scheme, and hence the jet axis, allows us to adjust the sensitivity of the jet function to different nonperturbative contributions. This provides an opportunity to “film” the QCD nonperturbative dynamics by continuously changing the axis from one definition to another.
* Perturbative predictability: Since a jet contains many hadrons, the jet function has more perturbatively calculable degrees of freedom than the fragmentation function. For instance, in the winner-take-all (WTA) scheme, for $R\sim\mathcal{O}(1)\gg|\bm{q}_{T}|/E_{J}$, the $z$-dependence in the jet function is completely determined Gutierrez-Reyes _et al._ (2019b): $\displaystyle{\cal J}(z,k_{T},R)=\delta(1-z){\mathfrak{J}}(k_{T})+{\cal O}\left(\frac{k_{T}^{2}}{E_{J}^{2}R^{2}}\right)\,.$ (5)
* Nonperturbative predictability: Similar to the study in Ref. Becher and Bell (2014), the T-odd jet function can be factorized into a product of a perturbative coefficient and a nonperturbative factor. The nonperturbative factor has an operator definition Vladimirov (2020), and as a vacuum matrix element, it can be calculated on the lattice Shanahan _et al._ (2020); Zhang _et al._ (2020).
This is unlike the TMD fragmentation function, which is an operator element with a final-state hadron tagged, making evaluation on the lattice impossible by known techniques. The T-odd jet function will show up in various jet observables which are sensitive to nonperturbative physics. In the following, we study the azimuthal asymmetries in the jet-axis probe in DIS and in back-to-back dijet production in $e^{+}e^{-}$ annihilation.

## IV Photon $q_{T}$ with respect to the jet axis in deep-inelastic scattering

Consider deep-inelastic scattering of an electron off a polarized nucleon $e^{-}(l)+N(P)\to e^{-}(l^{\prime})+J(P_{J})+X$, ($N=p,n$), in which we tag a jet and specify the jet axis with some recombination scheme. We define the ${\bm{q}}_{T}$ of the virtual photon by going to the so-called factorization frame, in which the proton beam direction and the jet axis direction are exactly opposite to each other, as shown in Fig. 2 (a). Alternatively, one can go to the gamma-nucleon system (GNS), a frame in which the virtual photon momentum and the proton beam are head-to-head (including the case of the proton being at rest), and define ${\bm{P}}_{J\perp}$ of the jet as in Fig. 2 (b). One can show that ${\bm{q}}_{T}=-{\bm{P}}_{J\perp}/z$ up to corrections of order $1/Q^{2}$. Therefore, measuring ${\bm{P}}_{J\perp}$ is equivalent to measuring ${\bm{q}}_{T}$. In the following, we will describe the kinematics in the GNS system, which is a convention commonly used in SIDIS Bacchetta _et al._ (2007). Figure 2: Axes in DIS in different frames: (a) the factorization frame, in which ${\bm{q}}_{T}$ is defined, and (b) the GNS system, in which ${\bm{P}}_{J\perp}$ is defined. Figure 3: Kinematic configuration of DIS in the GNS system. Let $M$ be the mass of the nucleon $N$ and let $q=l-l^{\prime}$ be the momentum carried by the virtual photon with virtuality $Q^{2}=-q^{2}$.
We introduce the invariant variables $\displaystyle x=\frac{Q^{2}}{2P\cdot q}\,,\quad y=\frac{P\cdot q}{P\cdot l}\,,\quad z=\frac{P\cdot P_{J}}{P\cdot q}\,,\quad\gamma=\frac{2Mx}{Q}\,.$ (6) In the nucleon rest frame, we can define the perpendicular component of any 3-vector as the component perpendicular to the virtual photon momentum, ${\bm{q}}$. Equivalently, in Lorentz invariant notations, given any 4-vector $v^{\mu}$, we define its perpendicular component by $v^{\mu}_{\perp}=g_{\perp}^{\mu\nu}v_{\nu}$, where $\displaystyle g_{\perp}^{\mu\nu}$ $\displaystyle=g^{\mu\nu}-\frac{q^{\mu}P^{\nu}+P^{\mu}q^{\nu}}{P\cdot q(1+\gamma^{2})}+\frac{\gamma^{2}}{1+\gamma^{2}}\left(\frac{q^{\mu}q^{\nu}}{Q^{2}}-\frac{P^{\mu}P^{\nu}}{M^{2}}\right)\,.$ (7) The 3-momenta ${\bm{l}}$ and ${\bm{l}}^{\prime}$ define a plane, with respect to which we can define the azimuthal angle of any 3-vector perpendicular to ${\bm{q}}$. In Lorentz invariant notations, this is equivalent to defining the azimuthal angle of any 4-vector $v^{\mu}$ by $\displaystyle\cos\phi_{v}=-\frac{l_{\mu}v_{\nu}g_{\perp}^{\mu\nu}}{\sqrt{l_{\perp}^{2}v^{2}_{\perp}}}\,,\quad\sin\phi_{v}=-\frac{l_{\mu}v_{\nu}\epsilon_{\perp}^{\mu\nu}}{\sqrt{l_{\perp}^{2}v^{2}_{\perp}}}\,,$ (8) where $\displaystyle\epsilon_{\perp}^{\mu\nu}$ $\displaystyle=\epsilon^{\mu\nu\rho\sigma}\frac{P_{\rho}q_{\sigma}}{P\cdot q\sqrt{1+\gamma^{2}}}\,,$ (9) with $\epsilon^{0123}=1$. We denote the azimuthal angles of the jet momentum $P_{J}$ and the nucleon spin $S$ by $\phi_{J}$ and $\phi_{S}$ respectively. The definitions of $P_{J\perp}$, $\phi_{J}$, and $\phi_{S}$ are depicted pictorially in Fig. 3. The nucleon spin is decomposed as the sum of a longitudinal and a perpendicular component, $\displaystyle S^{\mu}=S_{\parallel}\frac{P^{\mu}-\frac{M^{2}}{P\cdot q}q^{\mu}}{M\sqrt{1+\gamma^{2}}}+S^{\mu}_{\perp}\,,\quad S_{\parallel}=\frac{S\cdot q}{P\cdot q}\frac{M}{\sqrt{1+\gamma^{2}}}\,.$ (10) The helicity of the incoming electron is denoted by $\lambda_{e}$.
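As a concrete numerical check of these definitions, the sketch below evaluates the invariants of Eq. (6) from explicit four-momenta with the $(+,-,-,-)$ metric. The nucleon mass and the lepton and jet momenta used in the test are illustrative values of our own, not taken from any data set.

```python
import math

def mdot(a, b):
    """Minkowski product with (+,-,-,-) metric; vectors are (E, px, py, pz)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def dis_invariants(P, l, lp, PJ):
    """x, y, z, gamma of Eq. (6) from the nucleon (P), incoming lepton (l),
    scattered lepton (lp = l'), and jet (PJ) four-momenta."""
    q = tuple(a - b for a, b in zip(l, lp))   # virtual photon momentum
    Q2 = -mdot(q, q)                          # virtuality Q^2 = -q^2
    Pq = mdot(P, q)
    x = Q2 / (2.0 * Pq)
    y = Pq / mdot(P, l)
    z = mdot(P, PJ) / Pq
    M = math.sqrt(max(mdot(P, P), 0.0))
    gamma = 2.0 * M * x / math.sqrt(Q2)
    return x, y, z, gamma
```

In the nucleon rest frame only the energies enter $y$ and $z$, which makes the expected values easy to read off by hand.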
We define $\psi$ as the azimuthal angle of $\bm{l}^{\prime}$ around $\bm{l}$. The fully differential cross section has the most general form given by $\displaystyle\frac{d\sigma}{dxdyd\psi dzd\phi_{J}dP_{J\perp}^{2}}=\frac{\alpha}{xyQ^{2}}\frac{y^{2}}{2(1-\epsilon)}\left(1+\frac{\gamma^{2}}{2x}\right)\left\\{F_{UU,T}+\epsilon F_{UU,L}+\sqrt{2\epsilon(1+\epsilon)}\cos\phi_{J}F_{UU}^{\cos\phi_{J}}\right.$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+\epsilon\cos(2\phi_{J})F^{\cos 2\phi_{J}}_{UU}+\lambda_{e}\sqrt{2\epsilon(1-\epsilon)}\sin\phi_{J}F^{\sin\phi_{J}}_{LU}$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+S_{\parallel}\left[\sqrt{2\epsilon(1+\epsilon)}\sin\phi_{J}F_{UL}^{\sin\phi_{J}}+\epsilon\sin(2\phi_{J})F_{UL}^{\sin 2\phi_{J}}\right]$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+S_{\parallel}\lambda_{e}\left[\sqrt{1-\epsilon^{2}}F_{LL}+\sqrt{2\epsilon(1-\epsilon)}\cos\phi_{J}F_{LL}^{\cos\phi_{J}}\right]$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+|{\bm{S}}_{\perp}|\left[\sin(\phi_{J}-\phi_{S})\left(F_{UT,T}^{\sin(\phi_{J}-\phi_{S})}+\epsilon F_{UT,L}^{\sin(\phi_{J}-\phi_{S})}\right)\right.$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+\epsilon\sin(\phi_{J}+\phi_{S})F_{UT}^{\sin(\phi_{J}+\phi_{S})}+\epsilon\sin(3\phi_{J}-\phi_{S})F_{UT}^{\sin(3\phi_{J}-\phi_{S})}$ $\displaystyle\left.\quad\quad\quad\quad\quad\quad\quad\quad\quad+\sqrt{2\epsilon(1+\epsilon)}\sin\phi_{S}F_{UT}^{\sin\phi_{S}}+\sqrt{2\epsilon(1+\epsilon)}\sin(2\phi_{J}-\phi_{S})F_{UT}^{\sin(2\phi_{J}-\phi_{S})}\right]$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+|{\bm{S}}_{\perp}|\lambda_{e}\left[\sqrt{1-\epsilon^{2}}\cos(\phi_{J}-\phi_{S})F_{LT}^{\cos(\phi_{J}-\phi_{S})}+\sqrt{2\epsilon(1-\epsilon)}\cos\phi_{S}F_{LT}^{\cos\phi_{S}}\right.$ 
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\left.\left.+\sqrt{2\epsilon(1-\epsilon)}\cos(2\phi_{J}-\phi_{S})F_{LT}^{\cos(2\phi_{J}-\phi_{S})}\right]\right\\}\,,$ (11) where $\alpha$ is the fine structure constant and $\displaystyle\epsilon=\frac{1-y-\frac{1}{4}\gamma^{2}y^{2}}{1-y+\frac{1}{2}y^{2}+\frac{1}{4}\gamma^{2}y^{2}}\,.$ (12) The structure functions $F$’s on the right-hand side of Eq. (11) depend on $x,Q^{2},z$, and $P_{J\perp}^{2}$. The subscripts $A,B,C$ of $F_{AB,C}$ denote the polarizations of the incoming lepton, the incoming nucleon, and the virtual photon respectively, with $U$ standing for unpolarized, $T$ for transversely polarized, and $L$ for longitudinally polarized. Up to corrections of $\mathcal{O}(M^{2}/Q^{2})$, we have $\psi\approx\phi_{S}$, $\gamma^{2}\approx 0$, and the following approximations for the coefficients of the $F$’s in Eq. (11): $\displaystyle\frac{y^{2}}{2(1-\epsilon)}\approx 1-y+\frac{1}{2}y^{2},$ $\displaystyle\frac{y^{2}}{2(1-\epsilon)}\epsilon\approx 1-y,$ $\displaystyle\frac{y^{2}}{2(1-\epsilon)}\sqrt{2\epsilon(1+\epsilon)}\approx(2-y)\sqrt{1-y},$ $\displaystyle\frac{y^{2}}{2(1-\epsilon)}\sqrt{2\epsilon(1-\epsilon)}\approx y\sqrt{1-y}\,,$ $\displaystyle\frac{y^{2}}{2(1-\epsilon)}\sqrt{1-\epsilon^{2}}\approx y\left(1-\frac{1}{2}y\right)\,.$ (13) The structure functions $F$’s are convolutions of the nucleon TMD PDFs and the jet functions. As noted in Eq. (4), there are two jet functions $\mathcal{J}_{1}(z,k_{T}^{2})$ and $\mathcal{J}_{T}(z,k_{T}^{2})$ at leading power in $\Lambda_{QCD}/Q$. At leading order in $M/Q$, there are eight quark TMD PDFs for the nucleon Angeles-Martinez _et al._ (2015), each encoding a specific correlation between the quark spin and the proton spin as depicted in Table 1. Among these eight TMD PDFs, $f_{1}$, $g_{1L}$, $f^{\perp}_{1T}$, and $g_{1T}$ are chiral-even, while $h_{1}^{\perp}$, $h_{1L}^{\perp}$, $h_{1T}$, and $h_{1T}^{\perp}$ are chiral-odd. 
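The quality of these approximations is easy to check numerically. The sketch below evaluates the exact prefactors of the structure functions in Eq. (11), using $\epsilon$ from Eq. (12), and compares them with the approximate forms of Eq. (13); at $\gamma=0$ the two sets in fact coincide identically. The function names and sample values of $y$ and $\gamma$ are our own illustrative choices.

```python
import math

def eps(y, gamma):
    """Photon polarization ratio epsilon, Eq. (12)."""
    return (1.0 - y - 0.25 * gamma**2 * y**2) / \
           (1.0 - y + 0.5 * y**2 + 0.25 * gamma**2 * y**2)

def coeffs(y, gamma):
    """Exact prefactors of the structure functions in Eq. (11)."""
    e = eps(y, gamma)
    pre = y**2 / (2.0 * (1.0 - e))
    return (pre,
            pre * e,
            pre * math.sqrt(2.0 * e * (1.0 + e)),
            pre * math.sqrt(2.0 * e * (1.0 - e)),
            pre * math.sqrt(1.0 - e**2))

def coeffs_approx(y):
    """Approximate prefactors of Eq. (13), valid up to O(M^2/Q^2)."""
    return (1.0 - y + 0.5 * y**2,
            1.0 - y,
            (2.0 - y) * math.sqrt(1.0 - y),
            y * math.sqrt(1.0 - y),
            y * (1.0 - 0.5 * y))
```

The deviations at nonzero $\gamma$ scale like $\gamma^{2}$, consistent with the stated $\mathcal{O}(M^{2}/Q^{2})$ accuracy.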
For any functions $w({\bm{p}}_{T},{\bm{k}}_{T})$, $f(x,p_{T}^{2})$, and $\eta(z,k_{T}^{2})$, we define $\displaystyle{\cal C}[wf\eta]\equiv x\sum_{a}e_{a}^{2}\int d^{2}\bm{p}_{T}\int d^{2}\bm{k}_{T}\,\delta^{(2)}\left(\bm{p}_{T}-\bm{k}_{T}-\bm{P}_{J\perp}/z\right)w(\bm{p}_{T},\bm{k}_{T})f^{a}(x,p_{T}^{2})\eta^{a}(z,k_{T}^{2})\,,$ (14) where $a$ denotes a quark or antiquark flavor. At leading order in $\alpha_{s}$ and $M/Q$, the nonvanishing $F$’s are given by $\displaystyle F_{UU,T}={\cal C}[f_{1}{\cal J}_{1}]\,,$ (15) $\displaystyle F_{LL}={\cal C}[g_{1L}{\cal J}_{1}]\,,$ (16) $\displaystyle F^{\sin(\phi_{J}-\phi_{S})}_{UT,T}={\cal C}\left[-\frac{\hat{\bm{h}}\cdot\bm{p}_{T}}{M}f^{\perp}_{1T}{\cal J}_{1}\right]\,,$ (17) $\displaystyle F^{\cos(\phi_{J}-\phi_{S})}_{LT}={\cal C}\left[\frac{\hat{\bm{h}}\cdot\bm{p}_{T}}{M}g_{1T}{\cal J}_{1}\right]\,,$ (18) $\displaystyle F_{UU}^{\cos(2\phi_{J})}={\cal C}\left[-\frac{(2(\hat{\bm{h}}\cdot\bm{k}_{T})(\hat{\bm{h}}\cdot\bm{p}_{T})-\bm{k}_{T}\cdot\bm{p}_{T})}{M}h_{1}^{\perp}{\cal J}_{T}\right]\,,$ (19) $\displaystyle F_{UL}^{\sin(2\phi_{J})}={\cal C}\left[-\frac{(2(\hat{\bm{h}}\cdot\bm{k}_{T})(\hat{\bm{h}}\cdot\bm{p}_{T})-\bm{k}_{T}\cdot\bm{p}_{T})}{M}h_{1L}^{\perp}{\cal J}_{T}\right]\,,$ (20) $\displaystyle F^{\sin(\phi_{J}+\phi_{S})}_{UT}={\cal C}\left[-\hat{\bm{h}}\cdot\bm{k}_{T}h_{1}{\cal J}_{T}\right]\,,$ (21) $\displaystyle F_{UT}^{\sin(3\phi_{J}-\phi_{S})}={\cal C}\left[\frac{2(\hat{\bm{h}}\cdot\bm{p}_{T})(\bm{p}_{T}\cdot\bm{k}_{T})+\bm{p}_{T}^{2}(\hat{\bm{h}}\cdot\bm{k}_{T})-4(\hat{\bm{h}}\cdot\bm{p}_{T})^{2}(\hat{\bm{h}}\cdot\bm{k}_{T})}{2M^{2}}h_{1T}^{\perp}{\cal J}_{T}\right]\,,$ (22) where $\hat{\bm{h}}=\bm{P}_{J\perp}/|\bm{P}_{J\perp}|$ and $h_{1}=h_{1T}+\frac{\bm{p}_{T}^{2}}{2M^{2}}h_{1T}^{\perp}$ . From Eqs. (15)-(22), we see that the T-even jet function $\mathcal{J}_{1}$ couples to the chiral-even TMD PDFs, while the T-odd jet function $\mathcal{J}_{T}$ couples to the chiral-odd TMD PDFs. 
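The convolution structure of Eq. (14) can be made concrete with a toy numerical model. The sketch below assumes Gaussian transverse-momentum shapes for a single flavor (unit charge, with the overall factor of $x$ and the flavor sum dropped, and $w=1$), uses the delta function to fix $\bm{p}_{T}=\bm{k}_{T}+\bm{P}_{J\perp}/z$, and checks the grid evaluation against the closed-form Gaussian convolution. The widths are illustrative numbers, not fitted values.

```python
import math

def gauss2(kt2, width2):
    """Normalized 2-D Gaussian in transverse momentum; width2 = <k_T^2>."""
    return math.exp(-kt2 / width2) / (math.pi * width2)

def convolve_gaussians(qT, wp2, wk2, kmax=6.0, n=240):
    """Grid evaluation of C[f eta] in Eq. (14) for w = 1 and one flavor:
    the delta function sets p_T = k_T + q_T, with q_T = P_Jperp/z along x."""
    h = 2.0 * kmax / n
    total = 0.0
    for i in range(n):
        kx = -kmax + (i + 0.5) * h
        for j in range(n):
            ky = -kmax + (j + 0.5) * h
            px, py = kx + qT, ky          # p_T = k_T + P_Jperp/z
            total += gauss2(px*px + py*py, wp2) * gauss2(kx*kx + ky*ky, wk2)
    return total * h * h
```

For two normalized Gaussians the convolution is again a Gaussian with the widths added, $\exp(-q_T^2/(\langle p_T^2\rangle+\langle k_T^2\rangle))/[\pi(\langle p_T^2\rangle+\langle k_T^2\rangle)]$, which the grid result reproduces.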
With both the T-even and T-odd jet functions, one can thus access all eight TMD PDFs at leading twist.

| hadron $\backslash$ quark | unpolarized | chiral | transverse |
|---|---|---|---|
| $U$ | $f_{1}$ | | $h_{1}^{\perp}$ |
| $L$ | | $g_{1L}$ | $h^{\perp}_{1L}$ |
| $T$ | $f^{\perp}_{1T}$ | $g_{1T}$ | $h_{1T},h^{\perp}_{1T}$ |

Table 1: The eight TMD PDFs of a nucleon at leading twist.

With the known proton TMD PDFs and partial knowledge of the jet functions, we can make preliminary predictions for the azimuthal asymmetries associated with the jet-axis probe at DIS machines such as the EIC, the EicC, and HERA. As noted in Section III, the $z$-dependence of the jet functions becomes trivial with the WTA jet-axis definition. In the following, we will adopt the spherically-invariant jet algorithm Eq. (2) with $R=1$ and the WTA scheme, so that $\displaystyle{\cal J}(z,k_{T},R)$ $\displaystyle=\delta(1-z){\mathfrak{J}}(k_{T})+{\cal O}\left(\frac{k_{T}^{2}}{E_{J}^{2}R^{2}}\right)\,,$ (23) where $\displaystyle{\mathfrak{J}}(k_{T})$ $\displaystyle={J}(k_{T}^{2})\frac{\not{\bar{n}}}{2}+i{J}_{T}(k_{T}^{2})\frac{\not{k}_{T}\not{\bar{n}}}{2}\,.$ (24) We will study the azimuthal asymmetries associated with the terms $|{\bm{S}}_{\perp}|\epsilon\sin(\phi_{J}+\phi_{S})F_{UT}^{\sin(\phi_{J}+\phi_{S})}$ and $\epsilon\cos(2\phi_{J})F_{UU}^{\cos 2\phi_{J}}$ in Eq. (11). These terms probe the transversity $h_{1}$ and the Boer-Mulders function $h_{1}^{\perp}$, and can be singled out by suitably modulated cross sections.
We define a $|{\bm{P}}_{J\perp}|$-distribution of the asymmetry that probes the transversity by $\displaystyle A^{\sin(\phi_{J}+\phi_{S})}(|{\bm{P}}_{J\perp}|)$ $\displaystyle=\frac{2}{|\bm{S}_{\perp}|\int d\sigma\epsilon}\int d\sigma\sin(\phi_{J}+\phi_{S})$ (25) $\displaystyle=\frac{\langle\epsilon F_{UT}^{\sin(\phi_{J}+\phi_{S})}\rangle}{\bar{\epsilon}\langle F_{UU,T}\rangle}\,,$ (26) where by $\langle X\rangle$ we mean $\displaystyle\langle X\rangle$ $\displaystyle=\int dx\int dy\int d\phi_{S}\int dz\int d\phi_{J}\,\frac{\alpha}{xyQ^{2}}\frac{y^{2}}{2(1-\epsilon)}X\,,$ (27) and $\displaystyle\bar{\epsilon}=\frac{\int d\sigma\epsilon}{\int d\sigma}\,.$ (28) We can write $F_{UU,T}$ and $F_{UT}^{\sin(\phi_{J}+\phi_{S})}$ as $\displaystyle F_{UU,T}$ $\displaystyle=x\sum_{a}e_{a}^{2}\int\frac{d^{2}b}{(2\pi)^{2}}e^{-i{\bm{P}}_{J\perp}\cdot{\bm{b}}}\tilde{f}^{a}_{1}(x,b^{2})\tilde{J}^{a}(b^{2})\,,$ (29) $\displaystyle F_{UT}^{\sin(\phi_{J}+\phi_{S})}$ $\displaystyle=-ix\sum_{a}e_{a}^{2}\int\frac{d^{2}b}{(2\pi)^{2}}\frac{P_{J\perp}^{i}}{|\bm{P}_{J\perp}|}e^{-i{\bm{P}}_{J\perp}\cdot{\bm{b}}}\tilde{h}^{a}_{1}(x,b^{2})\partial_{b^{i}}\tilde{J}^{a}_{T}(b^{2})\,,$ (30) where we have used the Fourier transforms $\tilde{f}_{1}(x,b^{2})=\int d^{2}p_{T}\,e^{-i{\bm{p}}_{T}\cdot{\bm{b}}}f_{1}(x,p_{T}^{2})$ and $\tilde{J}(b^{2})=\int d^{2}k_{T}\,e^{-i{\bm{k}}_{T}\cdot{\bm{b}}}J(k_{T}^{2})$, and similarly for $\tilde{h}_{1}$ and $\tilde{J}_{T}$. Similar to the treatment in Ref.
Kang _et al._ (2015), we include the effect of evolution by including a Sudakov factor in $b$-space $\displaystyle\tilde{f}_{1}(x,b^{2},Q)=e^{-S_{\rm pert}-S_{\rm NP}}\tilde{f}_{1}(x,b^{2},Q_{0})\,,$ (31) $\displaystyle\tilde{J}(b^{2},Q)=e^{-S_{\rm pert}-S_{\rm NP}}\tilde{J}(b^{2},Q_{0})\,,$ (32) and similarly for $\tilde{h}_{1}$ and $\tilde{J}_{T}$, where $\displaystyle S_{\rm pert}$ $\displaystyle=\int^{Q}_{\mu_{b}}\frac{d\mu}{\mu}\,\frac{\alpha_{s}(\mu)}{2\pi}C_{F}\ln\frac{Q^{2}}{\mu^{2}}\,,$ (33) $\displaystyle S_{\rm NP}$ $\displaystyle=\frac{g_{2}}{2}\ln\left(\frac{b}{b_{*}}\right)\ln\left(\frac{Q}{Q_{0}}\right)\,,$ (34) with $\mu_{b}=1.22/b_{*}$, $b_{*}=b/\sqrt{1+b^{2}/b_{\rm max}^{2}}$, $Q_{0}=1.549$ GeV, and $g_{2}=0.84$. Here, we have adopted the expression of $S_{\rm pert}$ at leading logarithm. For the transversity, we use the fitted parametrized form of Ref. Martin _et al._ (2015). For the jet functions, although not mandatory, we apply the jet charge measurement in order to enhance flavor separation. This amounts to replacing the overall normalizations of the jet functions by the charge bins $r_{a}$, whose values for ${J}^{a}$ with jet charge $Q_{J}>0.25$ and $Q_{J}<-0.25$ have been obtained in Ref. Kang _et al._ (2020b) and will be used in this work. For the charge bins associated with $J^{a}_{T}$, we take them as a product $N_{a}r_{a}$, where $N_{a}$ is the ratio of the overall normalization of the pion Collins function $H_{1}^{\perp a}$ to the overall normalization of the fragmentation function $D_{1}^{a}$ as obtained in Ref. de Florian _et al._ (2007). For the $p_{T}^{2}$-dependence of $\tilde{J}$ and $\tilde{J}_{T}$, we use that of the fragmentation function and the Collins function of the pion as obtained in Ref. de Florian _et al._ (2007). Figure 4 (a) shows the predictions of the asymmetry $A^{\sin(\phi_{J}+\phi_{S})}(|{\bm{P}}_{J\perp}|)$ at the EIC according to Eq. (26) (solid lines) and Eq.
(25) (data points) from simulations using Pythia 8.2 Sjöstrand _et al._ (2015) with the package StringSpinner Kerbizi and Lönnblad (2021), which incorporates spin interactions in the event generator. From Fig. 4 (a), we see that the theoretical predictions for the $A^{\sin(\phi_{J}+\phi_{S})}(|{\bm{P}}_{J\perp}|)$ distribution from the factorization formula Eq. (11) roughly agree with the event generator simulations. In Figure 4 (b), we show the prediction of $A^{\sin(\phi_{J}+\phi_{S})}(|{\bm{P}}_{J\perp}|)$ with the E-scheme for the jet-axis definition from Pythia 8.2 Sjöstrand _et al._ (2015) with StringSpinner, with the same kinematic setting as Fig. 4 (a). We see that the asymmetry no longer exists in the E-scheme. This is because the asymmetry is nonvanishing only when the direction of the fragmenting parton which initiates the jet differs from that of the jet axis, which is hardly the case in the E-scheme. In this sense, by choosing different jet axes we are able to “film” the nonperturbative dynamics of QCD. Figure 4: Azimuthal asymmetry $A^{\sin(\phi_{J}+\phi_{S})}$ in the jet-axis probe at the EIC in (a) the WTA scheme and (b) the E-scheme. The data points with error bars are from event generator simulations. The solid lines in (a) are from Eq. (26).
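For orientation, the Sudakov evolution factors introduced above can be evaluated in a few lines. The sketch below integrates the leading-log $S_{\rm pert}$ of Eq. (33) numerically with a one-loop running coupling, and evaluates $S_{\rm NP}$ of Eq. (34) with the quoted $g_{2}$ and $Q_{0}$; the value of $\Lambda_{\rm QCD}$, the fixed $n_f$, and the quadrature settings are our illustrative choices rather than those of the fit.

```python
import math

CF = 4.0 / 3.0

def alpha_s(mu, nf=5, lam=0.226):
    """One-loop running coupling; lam = Lambda_QCD in GeV (assumed value)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(mu**2 / lam**2))

def S_pert(Q, mu_b, n=2000):
    """Leading-log perturbative Sudakov, Eq. (33), by the trapezoid rule
    in t = ln(mu)."""
    t0, t1 = math.log(mu_b), math.log(Q)
    s = 0.0
    for k in range(n + 1):
        t = t0 + (t1 - t0) * k / n
        mu = math.exp(t)
        f = alpha_s(mu) / (2.0 * math.pi) * CF * math.log(Q**2 / mu**2)
        s += (0.5 if k in (0, n) else 1.0) * f
    return s * (t1 - t0) / n

def S_NP(b, bstar, Q, Q0=1.549, g2=0.84):
    """Nonperturbative Sudakov, Eq. (34)."""
    return 0.5 * g2 * math.log(b / bstar) * math.log(Q / Q0)
```

As expected, the perturbative exponent is positive and grows with $Q$, so the $b$-space distributions are increasingly suppressed at higher scales.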
Likewise, we can make predictions for the asymmetry that probes the Boer-Mulders function, defined by $\displaystyle A^{\cos(2\phi_{J})}(|{\bm{P}}_{J\perp}|)$ $\displaystyle=\frac{2}{\int d\sigma\epsilon}\int d\sigma\cos(2\phi_{J})$ (35) $\displaystyle=\frac{\langle\epsilon F_{UU}^{\cos(2\phi_{J})}\rangle}{\bar{\epsilon}\langle F_{UU,T}\rangle}\,.$ (36) The structure function $F_{UU}^{\cos(2\phi_{J})}$ can be written as $\displaystyle F_{UU}^{\cos(2\phi_{J})}$ $\displaystyle=-\frac{x}{M}\sum_{a}e_{a}^{2}\int\frac{d^{2}b}{(2\pi)^{2}}e^{-i{\bm{P}}_{J\perp}\cdot{\bm{b}}}\left[\frac{2}{|{\bm{P}}_{J\perp}|^{2}}\left(P^{i}_{J\perp}\cdot\partial_{i}\tilde{h}^{\perp a}_{1}\right)\left(P^{j}_{J\perp}\cdot\partial_{j}\tilde{J}^{a}_{T}\right)-\partial_{i}\tilde{h}^{\perp a}_{1}\partial_{i}\tilde{J}_{T}\right]\,,$ (37) where $\tilde{h}_{1}^{\perp}$ and $\tilde{J}_{T}$ are the Fourier transforms of $h_{1}^{\perp}$ and $J_{T}$ respectively. We adopt the Boer-Mulders functions obtained from Ref. Barone _et al._ (2010). The predictions for $A^{\cos(2\phi_{J})}(|{\bm{P}}_{J\perp}|)$ at the EIC according to Eq. (36) are shown in Fig. 5. Figure 5: Azimuthal asymmetry $A^{\cos(2\phi_{J})}$ in the jet-axis probe at the EIC as predicted from Eq. (36). The predictions of $A^{\sin(\phi_{J}+\phi_{S})}$ and $A^{\cos(2\phi_{J})}$ at the EicC are shown in Fig. 6. As in Fig. 4 (a) and Fig. 5, the data points with error bars are from event generator simulations and the lines are from the factorization formulae. Figure 6: Azimuthal asymmetries $A^{\sin(\phi_{J}+\phi_{S})}$ (a) and $A^{\cos(2\phi_{J})}$ (b) in the jet-axis probe at the EicC. The data points with error bars are from event generator simulations and the lines are from the factorization formulae. For the sake of comparison with data in SIDIS, in Fig. 7 we show the asymmetries $A^{\sin(\phi_{J}+\phi_{S})}(|{\bm{P}}_{J\perp}|)$ (a) and $A^{\cos(2\phi_{J})}(|{\bm{P}}_{J\perp}|)$ (b) at HERA with predictions for jets from Eqs.
(26) and (36) (dashed lines), predictions for pion production from the parallels of Eqs. (26) and (36) as in Refs. Bacchetta _et al._ (2007); Barone _et al._ (2010) (solid lines), and data points for pion production from the HERMES experiment Airapetian _et al._ (2010, 2013) (data points with error bars). From Fig. 7, we see that the T-odd jet function does give azimuthal asymmetries with sizes and shapes similar to those in SIDIS, and so should be observable even at low-energy machines. Figure 7: Azimuthal asymmetries $A^{\sin(\phi_{J}+\phi_{S})}$ and $A^{\cos(2\phi_{J})}$ at HERA. The data points with error bars are from the HERMES experiment for pion production. The solid lines are predictions from the factorization formulae for pion production. The dashed lines are predictions from the factorization formulae for the jet-axis probe.

## V Back-to-back dijet production in $e^{+}e^{-}$ annihilation

The T-odd jet function will give rise to novel jet phenomena in $e^{+}e^{-}$ annihilation, which are measurable at $e^{+}e^{-}$ machines. For instance, consider back-to-back dijet production in $e^{+}e^{-}$ annihilation, as shown in Fig. 8. We define ${\bm{q}}_{T}=-{\bm{P}}_{J_{1}\perp}$. The back-to-back limit corresponds to $|{\bm{q}}_{T}|\ll\sqrt{s}R$, where $R$ is the jet radius.
The azimuthal asymmetry $A$ Kang _et al._ (2015) is given by $\displaystyle A$ $\displaystyle=2\int d\cos\theta\,\frac{d\phi_{1}}{\pi}\cos(2\phi_{1})A^{J_{1}J_{2}}\,,$ (38) where $\displaystyle A^{J_{1}J_{2}}$ $\displaystyle=1+\cos(2\phi_{1})\frac{\sin^{2}\theta}{1+\cos^{2}\theta}\frac{F_{T}}{F_{U}}\,,$ (39) with $\displaystyle F_{U}$ $\displaystyle=|\bm{q}_{T}|\,\sum_{q}e_{q}^{2}\,\int\frac{{d}^{2}b}{(2\pi)^{2}}e^{i\bm{q}_{T}\cdot\bm{b}}\tilde{J}_{1}^{q}(b^{2}){\tilde{J}}^{\bar{q}}_{1}(b^{2})\,,$ (40) $\displaystyle F_{T}$ $\displaystyle=|\bm{q}_{T}|\,\sum_{q}\,e_{q}^{2}\,\int\frac{{d}^{2}b}{(2\pi)^{2}}e^{i\bm{q}_{T}\cdot\bm{b}}\left(2\frac{q_{T}^{i}q_{T}^{j}}{|\bm{q}_{T}|^{2}}-\delta^{ij}\right)\partial_{b^{i}}\tilde{J}^{q}_{T}(b^{2})\partial_{b^{j}}\tilde{J}^{\bar{q}}_{T}(b^{2})\,.$ (41) Figure 8: Back-to-back dijet production in $e^{+}e^{-}$ annihilation. In Fig. 9, we plot the asymmetry $A$ as a function of $|\bm{q}_{T}|$ as predicted by Eq. (39) for four different values of $\sqrt{s}$. To enhance the sensitivity, we have demanded that $Q_{J}>0.25$ for one of the jets and $Q_{J}<-0.25$ for the other. The value $\sqrt{s}=\sqrt{110}$ GeV corresponds to the Belle experiment. The values $\sqrt{s}=91.2$ GeV, $165$ GeV, and $240$ GeV correspond to the $Z$-threshold, the $W$-threshold, and the $Z$-Higgs threshold respectively at LEP as well as the CEPC. One can see that the asymmetry is more significant at low-energy machines. Figure 9: Azimuthal asymmetry $A$ for dijet production in $e^{+}e^{-}$ annihilation as predicted by Eq. (39) at Belle, LEP and the CEPC, with $Q_{J}>0.25$ for one of the jets and $Q_{J}<-0.25$ for the other.

## VI Summary and outlook

In this work, we reinterpreted the jet clustering procedure as a way to define an axis, which together with the proton beam defines the transverse momentum of the virtual photon in DIS. In this way, one can use jet-axis measurements in DIS to probe the TMD PDFs of the nucleons, just as in the Drell-Yan process.
We provided the complete list of azimuthal asymmetries in the jet-axis probe in DIS at leading power. We showed that, by including the T-odd jet function in addition to the traditional one, all eight TMD PDFs of a nucleon at leading twist can be accessed by the jet-axis probe. As concrete examples, within the WTA axis scheme, we demonstrated, with both event-generator simulations and predictions from the factorization formulae, that couplings of the T-odd jet function with the quark transversity and the Boer-Mulders function give rise to sizable azimuthal asymmetries at DIS machines of various energy regimes, such as the EIC, the EicC, and HERA. We also demonstrated, with event-generator simulations, how changing the jet-axis definition drastically alters the asymmetry distributions. We also gave predictions for the azimuthal asymmetry of back-to-back dijet production in $e^{+}e^{-}$ annihilation. The T-odd jet function has opened the door to a fully comprehensive study of nucleon 3D structure with jet probes. Further theoretical and phenomenological studies of the T-odd jet function, such as high-order corrections, evaluations of the soft function on the lattice, and fits to experimental data, will empower the jet probe as a precision tool which is fully differential for the study of TMD physics. ###### Acknowledgements. W. K. L. and H. X. are supported by the National Natural Science Foundation of China under Grant No. 12022512, No. 12035007, and by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008. X. L. and M. W. are supported by the National Natural Science Foundation of China under Grant No. 12175016. W. K. L. acknowledges support by the UC Southern California Hub, with funding from the UC National Laboratories division of the University of California Office of the President.

## References

* Kang _et al._ (2017) Z.-B. Kang, X. Liu, F. Ringer, and H. Xing, JHEP 11, 068 (2017), arXiv:1705.08443 [hep-ph] .
* Liu _et al._ (2019) X. Liu, F. Ringer, W. Vogelsang, and F. Yuan, Phys. Rev. Lett. 122, 192003 (2019), arXiv:1812.08077 [hep-ph] . * Gutierrez-Reyes _et al._ (2019a) D. Gutierrez-Reyes, Y. Makris, V. Vaidya, I. Scimemi, and L. Zoppi, JHEP 08, 161 (2019a), arXiv:1907.05896 [hep-ph] . * Arratia _et al._ (2020) M. Arratia, Z.-B. Kang, A. Prokudin, and F. Ringer, Phys. Rev. D 102, 074015 (2020), arXiv:2007.07281 [hep-ph] . * Liu _et al._ (2020) X. Liu, F. Ringer, W. Vogelsang, and F. Yuan, Phys. Rev. D 102, 094022 (2020), arXiv:2007.12866 [hep-ph] . * Gutierrez-Reyes _et al._ (2018) D. Gutierrez-Reyes, I. Scimemi, W. J. Waalewijn, and L. Zoppi, Phys. Rev. Lett. 121, 162001 (2018), arXiv:1807.07573 [hep-ph] . * Gutierrez-Reyes _et al._ (2019b) D. Gutierrez-Reyes, I. Scimemi, W. J. Waalewijn, and L. Zoppi, JHEP 10, 031 (2019b), arXiv:1904.04259 [hep-ph] . * Kang _et al._ (2022) Z.-B. Kang, K. Lee, D. Y. Shao, and F. Zhao, (2022), arXiv:2201.04582 [hep-ph] . * Kang _et al._ (2020a) Z.-B. Kang, K. Lee, and F. Zhao, Phys. Lett. B 809, 135756 (2020a), arXiv:2005.02398 [hep-ph] . * Anderle _et al._ (2021) D. P. Anderle _et al._ , Front. Phys. (Beijing) 16, 64701 (2021), arXiv:2102.09222 [nucl-ex] . * Abdul Khalek _et al._ (2021) R. Abdul Khalek _et al._ , (2021), arXiv:2103.05419 [physics.ins-det] . * Liu (2021) X. Liu (Jets at EicC: jet axes for TMDs presented at EicC CDR meeting 2021, (online), 20-21 November, 2021). * Liu and Xing (2021) X. Liu and H. Xing, (2021), arXiv:2104.03328 [hep-ph] . * Accardi _et al._ (2022) A. Accardi _et al._ , in _2022 Snowmass Summer Study_ (2022) arXiv:2204.02280 [hep-ex] . * Cacciari _et al._ (2012) M. Cacciari, G. P. Salam, and G. Soyez, Eur. Phys. J. C 72, 1896 (2012), arXiv:1111.6097 [hep-ph] . * Bauer _et al._ (2001) C. W. Bauer, S. Fleming, D. Pirjol, and I. W. Stewart, Phys. Rev. D63, 114020 (2001), arXiv:hep-ph/0011336 [hep-ph] . * Bauer and Stewart (2001) C. W. Bauer and I. W. Stewart, Phys. Lett. 
B516, 134 (2001), arXiv:hep-ph/0107001 [hep-ph] . * Bauer _et al._ (2002a) C. W. Bauer, D. Pirjol, and I. W. Stewart, Phys. Rev. D65, 054022 (2002a), arXiv:hep-ph/0109045 [hep-ph] . * Bauer _et al._ (2002b) C. W. Bauer, S. Fleming, D. Pirjol, I. Z. Rothstein, and I. W. Stewart, Phys. Rev. D66, 014017 (2002b), arXiv:hep-ph/0202088 [hep-ph] . * Becher and Bell (2014) T. Becher and G. Bell, Phys. Rev. Lett. 112, 182002 (2014), arXiv:1312.5327 [hep-ph] . * Vladimirov (2020) A. A. Vladimirov, Phys. Rev. Lett. 125, 192002 (2020), arXiv:2003.02288 [hep-ph] . * Shanahan _et al._ (2020) P. Shanahan, M. Wagman, and Y. Zhao, Phys. Rev. D 102, 014511 (2020), arXiv:2003.06063 [hep-lat] . * Zhang _et al._ (2020) Q.-A. Zhang _et al._ (Lattice Parton), Phys. Rev. Lett. 125, 192001 (2020), arXiv:2005.14572 [hep-lat] . * Bacchetta _et al._ (2007) A. Bacchetta, M. Diehl, K. Goeke, A. Metz, P. J. Mulders, and M. Schlegel, JHEP 02, 093 (2007), arXiv:hep-ph/0611265 . * Angeles-Martinez _et al._ (2015) R. Angeles-Martinez _et al._ , Acta Phys. Polon. B 46, 2501 (2015), arXiv:1507.05267 [hep-ph] . * Kang _et al._ (2015) Z.-B. Kang, A. Prokudin, P. Sun, and F. Yuan, Phys. Rev. D 91, 071501 (2015), arXiv:1410.4877 [hep-ph] . * Martin _et al._ (2015) A. Martin, F. Bradamante, and V. Barone, Phys. Rev. D 91, 014034 (2015), arXiv:1412.5946 [hep-ph] . * Kang _et al._ (2020b) Z.-B. Kang, X. Liu, S. Mantry, and D. Y. Shao, Phys. Rev. Lett. 125, 242003 (2020b), arXiv:2008.00655 [hep-ph] . * de Florian _et al._ (2007) D. de Florian, R. Sassot, and M. Stratmann, Phys. Rev. D 75, 114010 (2007), arXiv:hep-ph/0703242 . * Sjöstrand _et al._ (2015) T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, Comput. Phys. Commun. 191, 159 (2015), arXiv:1410.3012 [hep-ph] . * Kerbizi and Lönnblad (2021) A. Kerbizi and L. Lönnblad, (2021), arXiv:2105.09730 [hep-ph] . * Barone _et al._ (2010) V. Barone, S. Melis, and A. Prokudin, Phys. 
Rev. D 81, 114026 (2010), arXiv:0912.5194 [hep-ph] . * Airapetian _et al._ (2010) A. Airapetian _et al._ (HERMES), Phys. Lett. B 693, 11 (2010), arXiv:1006.4221 [hep-ex] . * Airapetian _et al._ (2013) A. Airapetian _et al._ (HERMES), Phys. Rev. D 87, 012010 (2013), arXiv:1204.4161 [hep-ex] .
# Image plane detection of FRB121102 with the MeerKAT radio telescope J. C. Andrianjafy,1 N. Heeralall-Issur,1 A. A. Deshpande,2,3,4 K. Golap,5 P. Woudt,6 M. Caleb,7,8 E. D. Barr,9 W. Chen,9 F. Jankowski,10 M. Kramer,9,10 B. W. Stappers,10 and J. Wu9 1Department of Physics, University of Mauritius, Réduit 80837, Mauritius 2Raman Research Institute, Bangalore 560080, India 3Inter-University Centre for Astronomy and Astrophysics, Pune 411007, India 4Indian Institute of Technology, Kanpur 208016, India 5National Radio Astronomy Observatory, PO Box O, Socorro, NM 87801, USA 6Department of Astronomy, University of Cape Town, Private Bag X3, Rondebosch, 7701, South Africa 7Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, 2006, NSW, Australia 8ASTRO3D: ARC Centre of Excellence for All-sky Astrophysics in 3D, Canberra, 2601, ACT, Australia 9Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany 10Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract We present the analysis of radio interferometric 2-s images from a MeerKAT observation of the repeating fast radio burst FRB121102 in September 2019, during which 11 distinct pulses had previously been detected using high time and frequency resolution data cubes. In this work, we detected 6 out of the 11 bursts in the image plane at 1.48 GHz with a minimum peak signal-to-noise ratio (S/N) of 5 $\sigma$ and a fluence detection limit of $\sim$ 0.512 Jy ms. These constitute the first detections of a fast radio burst (FRB) or a radio transient using 2-s timescale images with MeerKAT data. Analysis of the fitted burst properties revealed a weighted average precision of $\sim$ 1 arcsec in the localization of the bursts.
The accurate knowledge of FRB positions is essential for identifying their host galaxies and understanding their mysterious nature, which is still unresolved to this day. We also produced 2-s images at 1.09 GHz, but these yielded no detections, which we attributed to the spectral structure of the pulses, which are mostly stronger at the upper frequencies. We also explore a new approach to difference imaging analysis (DIA) to search for transients and find that our technique has the potential to reduce the number of candidates and could be used to automate the detection of FRBs in the image plane for future MeerKAT observations. ###### keywords: radio continuum: transients – instrumentation: interferometers – techniques: image processing ††pubyear: 2022††pagerange: Image plane detection of FRB121102 with the MeerKAT radio telescope–LABEL:lastpage

## 1 Introduction

Fast radio bursts (FRBs) are the newly discovered bright $\sim$ 1 Jy, $\sim$ millisecond duration radio transients (Lorimer et al., 2007). The source of FRB emission is still of unknown origin, but the foremost leading theory suggests magnetars as their progenitors (Li et al., 2021; Scholz et al., 2020; Bochenek et al., 2020; Mereghetti et al., 2020). The mysterious nature of FRBs has attracted tremendous attention from the astronomy community, and there have been several advancements made during the past decade to understand these phenomena (for reviews, see Caleb & Keane, 2021; Petroff et al., 2019). Given their high range of dispersion measure values, FRBs are potentially invaluable tools to probe the cosmological universe, through, for example, the study of intergalactic turbulence (Zhu & Feng, 2021), the determination of the cosmic baryon density in the intergalactic medium (Macquart et al., 2020), as well as the estimation of the Hubble constant suggested by Wei et al. (2019). However, these studies can only be carried out with an accurate knowledge of the position of the FRB source and its host galaxy.
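To give a sense of the timescales that make image-plane detection with 2-s snapshots plausible, the sketch below evaluates the standard cold-plasma dispersion delay across an observing band. The dispersion constant is the standard value; the DM of $\sim$560 pc cm$^{-3}$ for FRB121102 and the 0.9-1.67 GHz band edges are approximate figures that we use here purely for illustration.

```python
K_DM = 4.148808  # ms GHz^2 pc^-1 cm^3; standard dispersion constant

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) of the low-frequency band edge relative to
    the high-frequency edge for a dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)
```

For a DM of $\sim$560 pc cm$^{-3}$ the sweep across a 0.9-1.67 GHz band comes out at roughly two seconds, comparable to the snapshot integration time considered in this work.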
One of the major findings in the FRB field was the first detection of the repeating source FRB121102 (Spitler et al., 2016; Scholz et al., 2016) with the single-dish Arecibo telescope. The repetition of the bursts allowed targeted follow-up observations with radio interferometers to measure its position to $\sim$ milliarcsecond precision using high resolution fast timescale imaging (with coordinates $\alpha=05^{\textrm{h}}31^{\textrm{m}}58.698^{\textrm{s}},\delta=33^{\circ}08^{\prime}52.586^{\prime\prime}$; Marcote et al., 2017; Chatterjee et al., 2017). As a result, Tendulkar et al. (2017) found that FRB121102 is localized in a low-mass and low-metallicity dwarf galaxy by matching the measured position with optical observations from the Gemini North telescope. Among the few hundred distinct FRBs reported in the literature (Amiri et al., 2021; Petroff et al., 2016), more than 10 have been localized to their host galaxies by combining the interferometric image plane locations of the bursts with their matched positions from telescopes operating at other wavelengths. For instance, Ravi et al. (2019) localized FRB190523 with the use of $\sim$0.5-s radio images from the Deep Synoptic Array (DSA) and the low resolution imaging spectrometer of the Keck I telescope. Similarly, the position of FRB190711 was obtained with 3.1-s radio images from the Australian Square Kilometre Array Pathfinder (ASKAP) and deep images from the Very Large Telescope (Macquart et al., 2020). In general, detecting FRBs and fast-transient-type sources through interferometric imaging can be achieved by correlating the signals from each pair of antennas in the array and averaging the correlated output, known as visibilities, over integration times short enough for the science requirements. However, searching for fast radio transients in the image plane can encounter different challenges.
The sparse (u,v) sampling over the short period of time can give rise to lower sensitivity and poor quality images. The duration of the transient pulses is often much shorter than the integration time, causing the signal to be diluted in the noisy visibility data. Another challenging factor is the high rate at which data are recorded, which makes computation expensive and often prevents real-time imaging detection. In 2019, observations of the repeating source FRB121102 were performed with the MeerKAT radio telescope in South Africa. From these observations, 11 bursts from FRB121102 were detected using high time and frequency resolution data cubes (Caleb et al., 2020) by the Meer(more) TRAnsients and Pulsars (MeerTRAP) team (Stappers, 2016). Since the positions and arrival times of these pulses are known, these detections are of crucial importance for investigating and testing the capability of the MeerKAT telescope to detect FRBs in the image plane using fast dump visibility data, which is the main focus of this paper. The paper is organized as follows: the observations are described in Section 2.1, followed by the description of our methodology, including data flagging, calibration and imaging, in Sections 2.2 and 2.3. The results are presented in Section 3 and their properties are discussed in Section 4, along with their implications for MeerKAT surveys, before we conclude in Section 5. ## 2 Methods ### 2.1 Data observations As part of a Director’s Discretionary Time (DDT) proposal, MeerKAT (Mauch et al., 2020; Jonas, 2016) carried out 3-hour observations towards the position of FRB121102 on the 6th September 2019, 10th September 2019, 6th October 2019, and 8th October 2019.
The 4 sessions were conducted in slightly different telescope configurations (number of antennas, integration time, frequency channel resolution); we therefore only describe the 10th September observation, during which FRB121102 pulses were previously detected and for which we performed our analysis. In our data, the observations were conducted using 58 out of the 64 MeerKAT dishes at a frequency centre of 1.28 GHz and a total bandwidth of 856 MHz divided into 4096 frequency channels. Although the data were initially recorded with a native resolution of 4.785 $\mu$s, the visibility data were dumped at 2 s integration time for imaging purposes. The observations of the target were separated into 12 scans of 15 minutes each. A primary calibrator, J0408-6545, was observed for 5 minutes at the beginning of the observation, as well as a complex gain calibrator, J0534+1927, which was observed in two sessions of 1 minute duration before and after the 3-hour observation of the target. To accelerate the data processing in this work (see Sections 2.2 and 2.3), we only carried out our analysis on 5-minute segments of data around the reported arrival times of the 11 bursts (Caleb et al., 2020). ### 2.2 Data editing Firstly, we corrected for the observed shift in the time stamps of the visibility data, which were offset by 2 s from their true values. This offset error has been fixed for the latest MeerKAT raw data, as discussed in Mauch et al. (2020). We decided to unflag all the data flagged by the MeerKAT online flagger 111https://skaafrica.atlassian.net/wiki/spaces/ESDKB/pages/305332225/Radio+Frequency+Interference+RFI. This procedure was applied because the effect of the online flagger on transient detection is not well studied and transient emission could be mistaken for radio frequency interference (RFI).
Due to the weak response of the receiver at the edges (Mauch et al., 2020), we trimmed 210 and 186 frequency channels at the lower and higher edges of the frequency band. These values were chosen based on manual inspection of the visibility data. Our final bandwidth for processing then ranges from 900 to 1673 MHz. To mitigate RFI, frequencies that are known to be corrupted for the MeerKAT L-band were flagged for short baselines ($<$1 km). Given the large amount of data, automated flagging algorithms are required to remove residual RFI. To this end, we adopted different strategies to flag the calibrators and target data. In the case of the calibrators, we ran a combination of automated algorithms, including sumthreshold in AOFlagger (version 3.0.0; Offringa et al., 2012), and tfcrop and rflag from the Common Astronomy Software Applications, CASA (version 6.0.0; McMullin et al., 2007). The aggressive flagging applied to the calibrators is necessary to obtain good calibration solutions. However, for the target data, we decided to avoid algorithms that flag data based on thresholding in the time direction, in order to minimize the removal of potential transient candidates that only appear for a short period of time (Cendes et al., 2018). We instead chose to use the threshold_channel_rms function in AOFlagger, which averages data over a given segment length in time and flags frequency channels in which the rms of the amplitude values exceeds a user-defined threshold (the 3$\sigma$ level in our case). This step was repeated three times, varying the segment length over decreasing time intervals (5 minutes, 1 minute and 10 seconds) to identify channels containing both steady and intermittent RFI. After these procedures, about 40% of our target data was removed. ### 2.3 Data calibration and imaging #### 2.3.1 Initial calibration Following data flagging, we performed a standard calibration procedure using CASA tools.
With a known absolute flux density ($\sim$ 17 Jy at 1.28 GHz and a spectral index of -1.179222https://github.com/IanHeywood/oxkat/blob/master/oxkat/1GC_04_casa_setjy.py), we used the primary calibrator to derive the frequency-dependent complex gain factors. We then bootstrapped its flux density to the secondary calibrator, from which time-dependent gain solutions were determined. The solutions were then applied to all datasets. Afterwards, we divided the data into 7 subbands and performed 2 rounds of phase-only self calibration in each of the subbands independently. The sky models used for self calibration were generated with the fast generic widefield imager WSCLEAN (version 2.9.0; Offringa & Smirnov, 2017; Offringa et al., 2014). The sky models from each 5-minute dataset contain sources with high signal-to-noise ratio (S/N $>450$), which is sufficient for our self calibration process to succeed. The observations did not contain a polarized calibrator with which to perform an accurate polarimetric calibration. An alternative is to use the averaged polarization calibration of the antennas obtained from other MeerKAT data, which were calibrated using a polarized calibrator, and to apply these to our measurement sets. The latter method, as described in Plavin et al. (2020), was not feasible in our case because it requires the two datasets to have the same reference antenna, and there were no available MeerKAT datasets that had polarization calibration using our preferred reference antenna (m007), which was chosen based on the S/N of the parallel-hand calibration solutions. #### 2.3.2 Peeling We identified two bright sources ($>100$ mJy/beam) dominating the FRB121102 field that exhibit direction-dependent effects towards the centre, especially in the 2-s images. Such effects are undesirable, as residual undeconvolved sidelobes from those sources can potentially mimic transients in the image plane (Frail et al., 2012; Bower et al., 2007).
Hence, we chose to remove these sources with the technique known as peeling (Noordam, 2004). There are many existing approaches (Kazemi et al., 2011; Smirnov, 2011; Intema et al., 2009) to perform peeling, but we adopted a strategy similar to the one described in Williams et al. (2019). In this method, the calibration terms $G_{bs}$ specific to the bright source to be peeled are determined, and the source can be subtracted by replacing the model column with: $MC_{cor}=\frac{1}{G_{bs}}M_{bs}(u,v),$ (1) where $MC_{cor}$ is the corrected model column and $M_{bs}(u,v)$ an approximate model of the bright source in (u,v) space. In our case, we performed a phase-only followed by a bandpass self calibration towards the source, and the inverse of the gains $G_{bs}$ was calculated by dividing the data by the corrected column. $M_{bs}(u,v)$ was generated by predicting the model image of the bright source during self calibration. The bright sources were peeled independently in each of the 7 subbands to capture the variations of $G_{bs}$ across the frequency band. We implemented these procedures using CASA and WSCLEAN tasks wrapped into customized Python scripts. Figure 1 shows a comparison of the 2-s images at 1.09 GHz (see Section 2.3.3 for the imaging method) before and after peeling, where the rms value at the position of the target went from $533$ to $410$ $\mu$Jy/beam. Figure 1: Comparison of MeerKAT 2-s images of the FRB121102 field at 1.09 GHz before (top panel) and after (bottom panel) peeling two bright sources (blue circles) using the method described in Section 2.3.2. The blue squares indicate an area of $18^{\prime}\times 18^{\prime}$ around the position of FRB121102.
A zoom-in of this region is shown in the bottom left corner of each panel. The improvement in the image quality is clearly seen, as the ripples caused by the bright sources were mitigated after peeling, along with the appearance of faint emission from real sources. #### 2.3.3 Imaging Table 1: WSCLEAN main parameters during 2-s imaging. The other parameters were set to their default values.

Parameter | Value
---|---
size | 6000
scale | 1.5 arcsec
auto-mask | 4
auto-threshold | 0.1
weight | Briggs 1
super-weight | 10
minuv-l | 100
taper-inner-tukey | 400
mgain | 0.85

To search for the bursts, we only divided the data into two frequency bands, centred around 1.09 GHz and 1.48 GHz. We briefly review the effect of this spectral window setup in Section 4.5.5. We produced Stokes I images of each integration for each 5-minute dataset with WSCLEAN using the imaging parameters in Table 1. These parameters were tuned to obtain reliable images without drastically decreasing the sensitivity. The automatic masking scheme is suitable for our science goal, as it allows deep cleaning close to the thermal noise value, but only constrained towards peaks with high S/N. A circular taper was applied to the inner edges of the (u,v) samples, as this form of tapering was observed to slightly decrease the level of sidelobes. We did not apply a primary beam correction because our target is situated at the phase centre. Figure 2 illustrates typical examples of the 2-s images of the two subbands when the bursts are not present. The rms thermal noise of the images was evaluated within a $7^{\prime}\times 7^{\prime}$ square around the FRB121102 position, using the method described in Swinbank et al. (2015), in which pixel values that are more than 4 standard deviations away from the median are masked. Overall, the mean rms is 403 $\mu$Jy/beam and 259 $\mu$Jy/beam at 1.09 and 1.48 GHz, with a similar elliptical restoring beam size of $42^{\prime\prime}\times 13^{\prime\prime}$.
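The sigma-clipped noise estimate used here — masking pixels more than 4 standard deviations from the median before measuring the rms (Swinbank et al., 2015) — can be sketched as a short iterative clipping routine. This is a minimal illustration, not the TraP implementation; the function name and the number of clipping iterations are our own choices:

```python
import numpy as np

def robust_rms(image, n_sigma=4.0, n_iter=3):
    """Estimate the background rms of an image cutout by iteratively
    discarding pixels more than n_sigma standard deviations from the
    median, so that bright sources do not bias the noise estimate."""
    data = np.asarray(image, dtype=float).ravel()
    for _ in range(n_iter):
        median = np.median(data)
        sigma = np.std(data)
        keep = np.abs(data - median) <= n_sigma * sigma
        if keep.all():
            break  # converged: no more outliers to clip
        data = data[keep]
    return float(np.std(data))
```

Applied to a cutout around the FRB position in each 2-s snapshot, a routine of this kind yields a noise estimate that is largely insensitive to the compact sources in the field.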
For each sequence of images, we evaluated the peak fluctuation over the same local region and estimated that only pixels with peak S/N $>$5$\sigma$ can be considered as potential transient emission. Figure 2: MeerKAT 2-s images of the FRB121102 field at 1.09 GHz (top) and 1.48 GHz (bottom) when the bursts are off. The blue squares indicate a $7^{\prime}\times 7^{\prime}$ area centred around the position of the FRB, and the corresponding zoomed-in view is shown in the bottom right corner of each image. For each zoom-in image, the grey ellipse in the bottom left corner indicates the synthesized beam size and the white cross in the centre indicates the reported position of FRB121102. A known $\sim$ 3 mJy compact source, J053153+3310 ($\alpha=05^{\textrm{h}}31^{\textrm{m}}53.92^{\textrm{s}},\delta=33^{\circ}10^{\prime}20.07^{\prime\prime}$), is also observed, which we used for astrometry in Sections 4.2 and 4.3. ### 2.4 FRB limits Among the main factors determining the detection of an FRB in our image plane is the intrinsic width of the bursts. Considering that the pulse widths of the 11 FRBs are much shorter than our integration time, we would expect their peak flux density to decrease due to the averaging of the FRB signal in the visibilities. Based on the framework described in Trott et al. (2013) and Rowlinson et al. (2016), the minimum FRB flux density that our 2-s images are sensitive to can be estimated by: $S_{min}=S_{snap,2s}\left(\frac{\Delta t}{w}\right),$ (2) where $S_{snap,2s}$ is the rms in one snapshot 2-s image, $\Delta t$ is 2 s and $w$ is the duration of the FRB pulses.
From equation (2), we can then define the minimum detectable fluence as: $F_{min}=S_{min}w=S_{snap,2s}\Delta t.$ (3) Taking into account the rms values measured in Section 2.3.3, and using equation (3), we calculated theoretical fluence limits of 0.80 and 0.51 Jy ms at 1.09 GHz and 1.48 GHz respectively. Using the fluence values reported in Table 1 of Caleb et al. (2020), we could then set constraints on which bursts we would expect to detect, as illustrated in Figure 3, from which we show that 5 bursts are detectable at 1.09 GHz, while 6 of them lie above the detection limit at 1.48 GHz. Figure 3: Theoretical fluence limits of detectable FRBs in the MeerKAT 2-s snapshot images produced in this work. The detection limits were calculated using equation (3), derived from the frameworks in Trott et al. (2013) and Rowlinson et al. (2016). The blue triangles are the bursts that we detected at 1.48 GHz, while the red points were not observed. These bursts represent all pulses detected in the dynamic spectrum by Caleb et al. (2020) and are referred to with the same indices, i.e., B01 refers to burst number 01. ## 3 Results ### 3.1 Burst detection After inspecting the images, we detected 6 bursts at the arrival times of B02, B03, B05, B07, B08 and B11 (labelled with the same indices as in Caleb et al. (2020)) at the position of FRB121102 at 1.48 GHz with a peak S/N above 5$\sigma$. These constitute the first detections of FRBs or any transient sources in 2-s radio images with the MeerKAT telescope. Figure 4 illustrates the 2-s images during the appearance of all the detected bursts. To estimate the integrated flux density, we fitted a Gaussian component with the task imfit in CASA in circular regions that enclose the burst structure. The measured properties of each fitted burst are shown in Table 2. None of these bursts were observed at 1.09 GHz above our peak detection threshold.
None of the bursts situated below our detection limits (see Figure 3) were detected in the two subbands, as expected. ### 3.2 Burst Positions The fitted centroid locations of the bursts are scattered within $\lesssim 2.70^{\prime\prime}$ of the milliarcsecond localization from the simultaneous observations of the European VLBI Network and the Arecibo telescope (Marcote et al., 2017). Given that the position uncertainties (see Table 2) are inversely proportional to the source S/N, we estimated that the strongest burst, B11, which is offset by $\sim 1.07^{\prime\prime}$, gives the highest confidence in our measured positions. Nevertheless, to account for all 6 detected bursts, we calculated the weighted average position of the burst emission peaks, using as weights the inverse of the uncertainties from CASA. As a result, we found a weighted average position of $\alpha=05^{\textrm{h}}31^{\textrm{m}}58.68^{\textrm{s}}\pm 0.31^{\prime\prime},\delta=33^{\circ}08^{\prime}53.61^{\prime\prime}\pm 0.69^{\prime\prime}$ and a combined offset of $\sim 1.03^{\prime\prime}$. The position offsets measured for the individual bursts are shown in Figure 5. (a) Burst 02 (b) Burst 03 (c) Burst 05 (d) Burst 07 (e) Burst 08 (f) Burst 11 Figure 4: MeerKAT 2-s images of the detected bursts (white circles) at 1.48 GHz. The grey ellipse in the bottom left corner indicates the synthesized beam size. Table 2: Properties of the bursts detected in the image plane at 1.48 GHz. The quoted arrival times are the time centre of the 2-s visibility data at the moment of detection on 10 September 2019. The peak $P_{\nu}$, integrated flux density $S_{\nu}$ and position values were measured with the CASA task imfit. The peak S/N is based on the rms value of the image during detection. The systematic offsets are obtained with the procedure described in Section 4.2.
Burst | Arrival time (UTC) | Peak flux density $P_{\nu}$ (mJy) | Flux density $S_{\nu}$ (mJy) | Peak S/N | RA (h:m:s) [J2000] | DEC (∘:′:′′) [J2000] | RA offset (′′) | DEC offset (′′)
---|---|---|---|---|---|---|---|---
02 | 03:58:31.4 | 1.85 $\pm$ 0.21 | 2.22 $\pm$ 0.46 | 7.2 | 05:31:58.70 $\pm 0.67^{\prime\prime}$ | 33:08:55.37 $\pm 2.87^{\prime\prime}$ | $+0.51\pm 0.36$ | $-0.09\pm 2.27$
03 | 03:58:33.4 | 1.41 $\pm$ 0.15 | 1.93 $\pm$ 0.14 | 5.4 | 05:31:58.61 $\pm 0.60^{\prime\prime}$ | 33:08:54.75 $\pm 3.86^{\prime\prime}$ | $-0.71\pm 0.26$ | $-1.08\pm 2.37$
05 | 04:26:10.7 | 1.58 $\pm$ 0.15 | 1.32 $\pm$ 0.30 | 6.1 | 05:31:58.80 $\pm 0.30^{\prime\prime}$ | 33:08:54.76 $\pm 2.70^{\prime\prime}$ | $+0.58\pm 0.14$ | $+2.07\pm 1.43$
07 | 05:04:37.8 | 1.51 $\pm$ 0.22 | 1.66 $\pm$ 0.48 | 5.4 | 05:31:58.69 $\pm 1.03^{\prime\prime}$ | 33:08:52.49 $\pm 4.12^{\prime\prime}$ | $-0.19\pm 0.42$ | $-1.62\pm 2.22$
08 | 05:38:38.9 | 1.67 $\pm$ 0.17 | 1.60 $\pm$ 0.48 | 6.4 | 05:31:58.53 $\pm 0.74^{\prime\prime}$ | 33:08:53.54 $\pm 2.78^{\prime\prime}$ | $-0.08\pm 0.49$ | $+0.91\pm 1.91$
11 | 06:06:04.2 | 2.90 $\pm$ 0.16 | 2.83 $\pm$ 0.35 | 11.1 | 05:31:58.64 $\pm 0.75^{\prime\prime}$ | 33:08:51.81 $\pm 1.96^{\prime\prime}$ | $+0.03\pm 0.58$ | $-0.47\pm 1.62$

Figure 5: Offsets of the fitted centroid positions of the FRB121102 bursts detected in this work (blue points) from the milliarcsecond localization with the EVN (Marcote et al., 2017). The orange point indicates the weighted average position based on the fitted position uncertainties from CASA. The horizontal lines indicate the position errors for each point. ## 4 Discussion ### 4.1 Bursts S/N In comparison to our detection limits in Section 2.4, B03 and B11 have the lowest and highest S/N, as expected. However, we notice that the fitted peak S/N for B07 in the 2-s image is among the lowest despite its high fluence value of $\sim$ 2.23 Jy ms (see Figure 3). Similarly, we measured a high S/N for B02 (7.2), although it is among the bursts with the lowest fluence.
The discrepancies between the observed and expected S/N could arise from the variation of the rms between the images. At the arrival time of B07, for example, the rms is slightly higher than the average value ($\sim 274\;\mu$Jy), causing imfit to decrease the fitted peak to $\sim 1.51$ mJy, although the peak directly extracted from the image pixel values is $1.94$ mJy. Flux density accuracy is discussed further in the next section. ### 4.2 Flux density and position uncertainty Due to the sparse (u,v) coverage of the 2 s data, the flux density and position measured from the Gaussian fitting can be affected by systematic errors due to residual calibration effects or by the fluctuations of the thermal noise in the images. We use the observed compact source, J053153+3310 (with coordinates $\alpha=05^{\textrm{h}}31^{\textrm{m}}53.92^{\textrm{s}},\delta=33^{\circ}10^{\prime}20.07^{\prime\prime}$ from Marcote et al. (2017)), with a flux density comparable to our bursts ($\sim 3$ mJy, $\sim 11\sigma$), to evaluate the flux density and position uncertainties. Given the short angular separation of the compact source from the target (offset by $\sim 100^{\prime\prime}$, which is about three synthesized beam widths), we estimated that both sources are affected by the same noise variations and share approximately the same systematic uncertainties in their fitted values. Hence, in the 2-s snapshot images of each 5-minute dataset, we fitted the compact source with imfit in the same way as the bursts. We defined its fractional flux density error as the ratio of the root mean square of the flux density variation errors to the mean flux density in each 5-minute epoch. As a result, we obtained an average fractional error of 12% in the flux density values. Furthermore, we did not observe any significant increase in the flux density of the compact source, greater than the fractional error, during the appearances of the bursts.
However, the S/N of the compact source in the B07 image decreased by 1, which indicates that the fitted properties of B07 could be underestimated. We evaluated the systematic offsets in our positions by comparing the MeerKAT position of the compact source to the VLBI observations. The magnitude of the offsets has a median of $0.45^{\prime\prime}$ in RA and $1.28^{\prime\prime}$ in Dec, with interquartile ranges of $0.57^{\prime\prime}$ and $1.62^{\prime\prime}$ respectively. The systematic offsets measured at the arrival time of each burst are shown in Table 2. ### 4.3 Astrometry The early MeerKAT data suffered from a few instrumental issues that could cause systematic inaccuracies in the astrometry (Heywood et al., 2022; Knowles et al., 2022; Mauch et al., 2020). We investigated whether the discrepancies in our burst positions were the result of these issues or due to the limited dynamic range of the 2-s images. To this end, we merged all the 5-minute chunks of data in our analysis and imaged the concatenated measurement set ($\sim$ 50 minutes of observations) to assess the position of J053153+3310 in a higher dynamic range image. We obtained a noise level of 42 $\mu$Jy, yielding a S/N of $\sim$70 for this compact source. In this deeper image, the fitted position of J053153+3310 deviates by $0.07^{\prime\prime}\pm 0.03$ in RA and $0.46^{\prime\prime}\pm 0.13$ in DEC from its catalogue position, which is smaller than the median offsets observed in the 2-s images, suggesting that the large position uncertainties are mainly of statistical origin. Further tests were performed based on a cross-correlation operation between the 2-s images, to probe whether the peak of the cross-correlated image exhibits a relative shift from the origin, which would indicate the presence of astrometric errors affecting all the sources in the field, resulting from calibration or related to beam shape differences.
Therefore, we chose one reference image at the beginning of the observation and cross-correlated it with the 2-s images prior to all the burst appearances. As a result, we did not observe a shift in the output peaks for pre-burst images near (in time) the reference image. However, a 1-pixel shift along the declination axis was always noticed for images separated by more than $\sim 1$ hour from the reference image. Given that the point spread function (PSF) major axis in our images is elongated along the declination axis, these analyses suggest an astrometric uncertainty of 1.5″ (the size of one pixel) due to the beam shape in our DEC positions. By taking into account the S/N and beam width, we estimated that the spread of the FRBs and their position uncertainties in our 2-s images are fairly comparable to expectations, and an astrometric correction is not required for the purpose of the present work. The low fractional error and position uncertainty that we obtained decrease the probability that the bursts we have detected were produced by imaging artefacts. ### 4.4 Non-detection at 1.09 GHz Despite the relatively high rms values of the images at 1.09 GHz ($\sim 400\mu$Jy), some of the bursts that we detected are still above the detection limit at this frequency, yet yielded no detection. The residual images produced at the burst arrival times were inspected visually but revealed no significant peaks above the 5$\sigma$ detection threshold. Considering that the FRB121102 pulses showed some spectral variations, we suspect that the non-detection could be explained by the fact that these bursts peak at the higher frequencies, as can be seen from their dynamic spectra in Figure 1 of Caleb et al. (2020). Spectral index measurements could further support this explanation, but the quality of the images declined rapidly when they were produced with narrower bandwidths, leading to a high level of uncertainty in the flux density estimation.
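The cross-correlation test described in Section 4.3 can be illustrated with an FFT-based correlation of two snapshot images: a correlation peak displaced from the origin indicates a bulk shift of all sources between the two images. This is an illustrative reconstruction under the assumption of equally sized images, not the actual analysis code:

```python
import numpy as np

def relative_shift(reference, image):
    """Cross-correlate two equally sized images and return the (dy, dx)
    offset of the correlation peak from the origin, in pixels.
    A non-zero offset points to a bulk astrometric shift of the field."""
    ref = reference - reference.mean()
    img = image - image.mean()
    # correlation theorem: the peak sits at n = d when image = reference shifted by d
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold the circular FFT indices into signed offsets
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

In this notation, the 1-pixel shift seen between images separated by more than an hour would appear as an offset of one pixel along the declination axis.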
### 4.5 Review of methods for future MeerKAT observations In the next few subsections, we briefly review the methodology described in Section 2 and discuss its practical implications for searching for FRBs in future MeerKAT surveys or archival data. #### 4.5.1 Flagging The flagging approach that we described in Section 2.2 efficiently removed all suspected RFI without negatively affecting the detection of the bursts. We still manually inspected the visibility data at the moments of burst appearance and flagged small residual RFI; the bursts remained visible in the images. To investigate the efficiency of our method, we applied to our data the automated flagging algorithm Tricolour333https://github.com/ratt-ru/tricolour, which is widely incorporated into some of the latest MeerKAT data reduction pipelines, such as MeerKATHI444https://meerkathi.readthedocs.io/en/latest/ (Józsa et al., 2020) or oxkat555https://ascl.net/code/v/2627 (Heywood, 2020). As a result, the S/N for B05, B08, and B07 decreased by $\sim 10\%$ and B02 was not detected. Although we tested Tricolour in its default mode, and a customized strategy might be required to optimize the algorithm, these tests show how an automated flagging technique using thresholding along the time axis can affect the detection of short transient emission. #### 4.5.2 Bright source peeling The removal of bright sources in Section 2.3.2 to minimize the sidelobe effects is a computationally intensive process. In order to assess its efficiency, we made 2-s images at the moments of burst appearance without peeling any sources. At 1.09 GHz, where the bright sources are the most dominant due to the steep slope of their spectral index, it became practically unfeasible to differentiate genuine astronomical sources, such as the compact source, from sidelobes in the form of horizontal stripes (see Figure 1), which could increase the false detection rate.
Furthermore, we also observed a decrease of $\sim 10\%$ in the S/N of the bursts at 1.48 GHz without applying the peeling. #### 4.5.3 Weighting scheme Further testing of the weight values during imaging shows that the bursts remain detected above our detection threshold from natural weighting down to a Briggs robust parameter of 0.8 (Briggs, 1995). We decided to use robust 1 because it provides more resolution without much degradation in S/N relative to the natural weighting mode. Given that a considerable number of MeerKAT dishes ($\sim 40\;\text{antennas}$) are located within the $\sim$ 1 km inner core, Briggs values that come close to the uniform weighting scheme drastically deteriorate the sensitivity due to the sparse distribution of the 2 s $(u,v)$ coverage. #### 4.5.4 Integration time The majority of MeerKAT visibility measurement sets are dumped at 8 s integration time. In order to understand the detectability of similar bursts and fast transients in the MeerKAT archival data, we performed 8-s imaging using the same parameters as in Table 1. The resulting overall image rms is $150\;\mu$Jy, yielding a fluence detection limit of $1.2$ Jy ms using equation (3). In this case, B07 and B11 are the bursts that remain above the detection limit, but we only detected B11, with a reduced peak S/N ($\sim 6.8$) compared to the 2-s image values. Figure 6 displays the detection of B11 in the 8-s image. These findings show the importance of fast imaging, and we recommend that future MeerKAT observations of FRBs be conducted using the 2-s dump rate. The computational expense of the short timescale imaging is discussed in Section 4.6.3. Figure 6: MeerKAT image of B11 using 8-s integration time. The rms in this image is 150 $\mu$Jy. #### 4.5.5 Bandwidth Instead of using the full band, we decided to split the data into two spectral windows to avoid a similar averaging issue to that discussed in Section 4.5.4.
Indeed, since the bursts do not appear across the whole band, their signals are expected to be averaged out over larger bandwidths. Nonetheless, we produced 2-s images at the arrival times of the 11 bursts by combining all the available frequencies. As a result, we did not observe an improvement in the image quality (rms $\sim 260\mu$Jy), and only B02, B03 and B11 were detected. ### 4.6 Image subtraction #### 4.6.1 Reference image Since we already know the location of our target source and the arrival times of the bursts, it was straightforward to restrict the search to a specific region of the sky. However, in a given survey of an unknown field, it is often challenging to identify a real transient source, especially at short integration times, where imaging artefacts can occasionally appear and increase the number of potential candidates. One of the most well known techniques used in transient search surveys is difference image analysis (DIA) (Tomaney & Crotts, 1996; Bond et al., 2001), in which a reference image, a model of the sky containing all steady-state sources, is subtracted from a new image. This section is not intended to derive a generalized image subtraction method for MeerKAT data, which is a complex topic that could be explored in future studies, but rather presents a tentative framework to show its capability. Advanced discussions of the subject in previous works can be found in Sánchez et al. (2019), Zackay et al. (2016), Bramich (2008) or Alard (2000). In contrast to existing methods, where the reference image is constructed only from the preceding images of a sequence, we propose a new method in which the subsequent images are also considered. Such a procedure is inspired by the moving-average technique widely known in economics (Chou).
Given an image sequence of $N$ samples $\\{I\\}=\\{I_{1},I_{2},I_{3},\ldots,I_{N}\\}$, the reference image $R_{i}$ of the $i$-th image, $I_{i}$, to be subtracted is defined as: $R_{i}=\frac{1}{2n}\sum_{\begin{subarray}{c}j=i-n\\\ j\neq i\end{subarray}}^{i+n}I_{j},$ (4) where $n$ is a tunable parameter that indicates the number of images on each side of the $i$-th image to include in the summation. We tested our method with different values of $n$ and observed that the overall burst S/N in the subtracted images typically increases by $20\%$ as $n$ goes from 1 to 5 and tends to follow a flat curve afterwards. Figure 7 illustrates the evolution of the S/N of B11 in the generated difference images for $n=1$ to $n=20$. For B11 only, we noticed a brief up-and-down trend after $n=5$ that was not seen in the S/N curves of the other bursts. Based on Figure 7, it is tempting to claim that $n=7$, which provided the maximum S/N, is the optimal value, but we verified by manual inspection that it is simply due to a small decrease of the thermal noise from the added images at that point. Moreover, keeping the value of $n$ as small as possible is essential to preserve images with similar PSFs, produced from visibilities with nearly the same $(u,v)$ sampling distribution. Figure 8 demonstrates our difference imaging procedure and its effect in a $38^{\prime}\times 38^{\prime}$ region around the burst position. The mean rms values of the subtracted images are about $3$ to $5\%$ lower than in the original images, and the bursts were still observed above $5\sigma$, except for B03 at a $3\sigma$ level. The non-detection of B03 is due to the presence of B02 when constructing the reference image at that moment, given that the appearances of the two bursts are separated by only 2 s. This shows that our proposed method is mainly limited to one-off events like most FRBs, or to repeating sources that emit pulses within a sufficiently large time interval ($\geqslant 20$ s for $n=5$). 
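Equation (4) amounts to a sliding, centered mean that excludes the frame being searched. A minimal numerical sketch (illustrative `numpy` implementation; edge frames without a full set of $2n$ neighbours are not handled):

```python
import numpy as np

def reference_image(images: np.ndarray, i: int, n: int) -> np.ndarray:
    """R_i of Eq. (4): mean of the 2n neighbouring frames of image i.

    `images` has shape (N, ny, nx); assumes n <= i <= N - 1 - n."""
    neighbours = [j for j in range(i - n, i + n + 1) if j != i]
    return images[neighbours].mean(axis=0)  # sum over 2n frames / (2n)

def difference_image(images: np.ndarray, i: int, n: int) -> np.ndarray:
    """Subtract the sliding reference image from frame i."""
    return images[i] - reference_image(images, i, n)
```

A transient present only in frame $i$ survives the subtraction, while steady-state sources cancel.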
Additionally, constant sources in the $38^{\prime}\times 38^{\prime}$ area were almost completely removed, and the occasional low-level remaining artefacts are due to unclean subtraction of extended or bright sources. We show in Figure 9 the subtracted images for all 6 bursts. Figure 7: Signal-to-noise ratio (S/N) of B11 in the subtracted images for different values of $n$ (see equation 4) used to build the reference image. (a) (b) (c) Figure 8: Illustration of the image subtraction procedure described in Section 4.6 for an area of $38^{\prime}\times 38^{\prime}$ around FRB121102. The figures show the reference image (a), and the B11 field before (b) and after (c) subtraction of the reference image. The subtracted image shows that most sources except B11 (white circle in the center) were removed. (a) Burst 02 (b) Burst 03 (c) Burst 05 (d) Burst 07 (e) Burst 08 (f) Burst 11 Figure 9: MeerKAT 2-s images of an area of $7^{\prime}\times 7^{\prime}$ around the detected bursts after applying image subtraction (see Section 4.6). #### 4.6.2 Blind search mode In order to show the potential advantages of our method in a blind search mode, we apply the procedure described in Section 4.6.1 to all 2-s images produced from each 5-minute dataset and use the automated source finder PySE (https://tkp.readthedocs.io/en/release4.0/tools/pyse.html?highlight=pyse) from the TraP pipeline (Swinbank et al., 2015) to identify the remaining point sources in the subtracted images. We set the detection threshold to 6$\sigma$ to decrease the number of candidates and mask the bright source positions to exclude possible residual effects. We constrained the search to half of each image (about $1.25^{\circ}$ around the center), where 80$\%$ of the total flux density of the image resides and where the PSF is similar. 
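Schematically, the per-image search amounts to thresholding the subtracted map at a multiple of the rms while ignoring masked bright-source pixels; the snippet below is a simplified stand-in for this step (it is not the actual PySE/TraP API, and the helper name is hypothetical):

```python
import numpy as np

def blind_search(diff_image, rms, threshold=6.0, mask=None):
    """Return (y, x) pixel positions exceeding threshold*rms in a
    difference image, optionally ignoring masked (bright-source) pixels.
    Simplified stand-in for a real source finder such as PySE."""
    data = np.where(mask, -np.inf, diff_image) if mask is not None else diff_image
    ys, xs = np.nonzero(data > threshold * rms)
    return list(zip(ys.tolist(), xs.tolist()))
```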
As a result, only B07, B08 and B11 were detected, and there were 23 false detections in a total of 1380 images, from which we calculated the fraction of false detections per synthesized beam element to be around $1.34\times 10^{-6}$. In a blind search mode, automatic searches through the difference images are often performed with the use of machine learning techniques to classify real transients from artefacts (Goldstein et al., 2015; Wright et al., 2015), but in our case, the remaining candidates can also be reduced by searching for dispersed pulses in the dynamic spectrum of the raw voltages. Furthermore, more than $60\%$ of the FRBs detected and published to date (Spanakis-Misirlis, 2022) have a fluence comparable to or even stronger than that of B11 ($\sim$ 4 Jy ms to 3500 Jy ms), and we would expect to detect them with our method. #### 4.6.3 Computational expenses Since our technique requires imaging each integration, it is important to assess the computational cost of this approach. In terms of data storage, we consider keeping the 2-s dump rate only around the time intervals where serious candidates are triggered; otherwise, the data could be averaged to the standard integration time of 8 seconds. The data rate of the 2-s visibilities is 0.55 TB per hour for MeerKAT (https://skaafrica.atlassian.net/wiki/spaces/ESDKB/pages/277315585/MeerKAT+specifications#Visibility-integration-times), hence if we keep, for instance, only a 1-minute dataset around a candidate, only about 9 GB of storage is needed. With regards to imaging, the gridding step, which is the main operation requiring heavy processing, is often implemented with a convolution kernel $5\times 5$ pixels wide, which means that 25 additions and 25 multiplications are performed for each visibility. 
One complex multiplication amounts to 7 floating-point operations, whereas an addition involves 2, from which we can estimate that the convolution of one visibility requires $9\times 50$ floating-point operations. The number of visibility points involved in a Stokes I 2-s image for the full MeerKAT array is given by $n_{\rm polarization}\times n_{\rm channels}\times n_{\rm baseline}=2\times 4096\times 2016$, thus the gridding computation needs about $7.5\times 10^{9}$ floating-point operations, which could be achieved in less than 1 second with a GPU or a multicore CPU implementation (Veenboer & Romein, 2020). ## 5 Conclusions The detection of 11 bursts from FRB121102 was reported by Caleb et al. (2020) with the MeerKAT radio telescope using high time and frequency resolution filterbank data. In this work, we investigated the ability of MeerKAT to detect these bursts in 2-s snapshot images produced from the visibility data. We detected 6 of the 11 bursts in the images above a detection threshold of $5\sigma$ at 1.48 GHz. These represent the first detections of FRBs and radio transients in MeerKAT images produced on a 2-second timescale. The 6 bursts were detected in accordance with our expectations, given a fluence detection limit of $\sim 0.512$ Jy ms at the corresponding frequency. Additionally, the analysis of their properties revealed an $\sim$ arcsec precision of their localization in the images, which is essential to tie an FRB position to its potential host galaxy. We estimated from further investigation that the detection is strongly limited by the integration time, and we recommend that future MeerKAT observations of FRBs be operated using the 2-second dump rate, which is the fastest supported integration period for this telescope. We explored a new approach to difference imaging analysis in which each image to be subtracted has its own reference image generated from its neighboring images. 
Such a method takes advantage of the similarity of the $(u,v)$ distribution in the 2-s visibility data and is well suited to detecting FRB-like transients in fast images. ## Acknowledgements This paper employs MeerKAT data resulting from a Director’s Discretionary Time (DDT) proposal (Project ID: DDT-20190905-MC-01). The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation (DSI). The authors acknowledge the contribution of all those who designed and built the MeerKAT instrument. Research reported in this paper is supported by the Newton Fund project DARA (Development in Africa with Radio Astronomy), awarded by the UK’s Science and Technology Facilities Council (STFC), grant reference ST/R001103/1. J.C.A. acknowledges the MeerTRAP team, in collaboration with the ThunderKAT programme, for allowing access to the data used in this work. MeerTRAP is a project that continuously uses the MeerKAT radio telescope to search the radio sky for pulsars and fast radio transients and to rapidly and accurately locate them. ThunderKAT is the MeerKAT Large Survey Project (LSP) for image-domain (explosive) radio transients. M.C. acknowledges support of an Australian Research Council Discovery Early Career Research Award (project number DE220100819) funded by the Australian Government and the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. M.C., F.J. and B.W.S. acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 694745). 
We acknowledge the use of the ilifu cloud computing facility - www.ilifu.ac.za, a partnership between the University of Cape Town, the University of the Western Cape, the University of Stellenbosch, Sol Plaatje University, the Cape Peninsula University of Technology and the South African Radio Astronomy Observatory. The ilifu facility is supported by contributions from the Inter-University Institute for Data Intensive Astronomy (IDIA - a partnership between the University of Cape Town, the University of Pretoria and the University of the Western Cape), the Computational Biology division at UCT and the Data Intensive Research Initiative of South Africa (DIRISA). ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding authors. ## References * Alard (2000) Alard C., 2000, Astronomy and Astrophysics Supplement Series, 144, 363 * Amiri et al. (2021) Amiri M., et al., 2021, arXiv preprint arXiv:2106.04352 * Bochenek et al. (2020) Bochenek C. D., Ravi V., Belov K. V., Hallinan G., Kocz J., Kulkarni S. R., McKenna D. L., 2020, Nature, 587, 59 * Bond et al. (2001) Bond I., et al., 2001, Monthly Notices of the Royal Astronomical Society, 327, 868 * Bower et al. (2007) Bower G. C., Saul D., Bloom J. S., Bolatto A., Filippenko A. V., Foley R. J., Perley D., 2007, The Astrophysical Journal, 666, 346 * Bramich (2008) Bramich D., 2008, Monthly Notices of the Royal Astronomical Society: Letters, 386, L77 * Briggs (1995) Briggs D. S., 1995, Ph. D. Thesis * Caleb & Keane (2021) Caleb M., Keane E., 2021, Universe, 7, 453 * Caleb et al. (2020) Caleb M., et al., 2020, Monthly Notices of the Royal Astronomical Society, 496, 4565 * Cendes et al. (2018) Cendes Y., et al., 2018, Astronomy and Computing, 23, 103 * Chatterjee et al. (2017) Chatterjee S., et al., 2017, Nature, 541, 58 * Chou (1975) Chou Y.-l., 1975, Statistical Analysis: with Business and Economic Applications * Frail et al. 
(2012) Frail D., Kulkarni S., Ofek E., Bower G., Nakar E., 2012, The Astrophysical Journal, 747, 70 * Goldstein et al. (2015) Goldstein D., et al., 2015, The Astronomical Journal, 150, 82 * Heywood (2020) Heywood I., 2020, Astrophysics Source Code Library, pp ascl–2009 * Heywood et al. (2022) Heywood I., et al., 2022, The Astrophysical Journal, 925, 165 * Intema et al. (2009) Intema H., Van der Tol S., Cotton W., Cohen A., Van Bemmel I., Röttgering H., 2009, Astronomy & Astrophysics, 501, 1185 * Jonas (2016) Jonas J., 2016, Proceedings of MeerKAT Science: On the Pathway to the SKA, pp 25–27 * Józsa et al. (2020) Józsa G. I., et al., 2020, arXiv preprint arXiv:2006.02955 * Kazemi et al. (2011) Kazemi S., Yatawatta S., Zaroubi S., Lampropoulos P., De Bruyn A., Koopmans L., Noordam J., 2011, Monthly Notices of the Royal Astronomical Society, 414, 1656 * Knowles et al. (2022) Knowles K., et al., 2022, Astronomy & Astrophysics, 657, A56 * Li et al. (2021) Li C., et al., 2021, Nature Astronomy, 5, 378 * Lorimer et al. (2007) Lorimer D. R., Bailes M., McLaughlin M. A., Narkevic D. J., Crawford F., 2007, Science, 318, 777 * Macquart et al. (2020) Macquart J.-P., et al., 2020, Nature, 581, 391 * Marcote et al. (2017) Marcote B., et al., 2017, The Astrophysical Journal Letters, 834, L8 * Mauch et al. (2020) Mauch T., et al., 2020, The Astrophysical Journal, 888, 61 * McMullin et al. (2007) McMullin J. P., Waters B., Schiebel D., Young W., Golap K., 2007, in Astronomical data analysis software and systems XVI. p. 127 * Mereghetti et al. (2020) Mereghetti S., et al., 2020, The Astrophysical Journal Letters, 898, L29 * Noordam (2004) Noordam J. E., 2004, in Ground-based telescopes. pp 817–825 * Offringa & Smirnov (2017) Offringa A., Smirnov O., 2017, Monthly Notices of the Royal Astronomical Society, 471, 301 * Offringa et al. (2012) Offringa A. R., van de Gronde J. J., Roerdink J. B. T. M., 2012, A&A, 539 * Offringa et al. 
(2014) Offringa A., et al., 2014, Monthly Notices of the Royal Astronomical Society, 444, 606 * Petroff et al. (2016) Petroff E., et al., 2016, arXiv preprint arXiv:1601.03547 * Petroff et al. (2019) Petroff E., Hessels J., Lorimer D., 2019, The Astronomy and Astrophysics Review, 27, 1 * Plavin et al. (2020) Plavin A., Cotton W D., Mauch T., 2020, Obit Development Memo Series, 62, 1 * Ravi et al. (2019) Ravi V., et al., 2019, Nature, 572, 352 * Rowlinson et al. (2016) Rowlinson A., et al., 2016, Monthly Notices of the Royal Astronomical Society, 458, 3506 * Sánchez et al. (2019) Sánchez B., et al., 2019, Astronomy and Computing, 28, 100284 * Scholz et al. (2016) Scholz P., et al., 2016, The Astrophysical Journal, 833, 177 * Scholz et al. (2020) Scholz P., Collaboration C., et al., 2020, The Astronomer’s Telegram, 13681, 1 * Smirnov (2011) Smirnov O. M., 2011, Astronomy & Astrophysics, 527, A108 * Spanakis-Misirlis (2022) Spanakis-Misirlis A., 2022, arXiv preprint arXiv:2208.03508 * Spitler et al. (2016) Spitler L., et al., 2016, Nature, 531, 202 * Stappers (2016) Stappers B., 2016, Proc. Sci.(MeerKAT2016), 10 * Swinbank et al. (2015) Swinbank J. D., et al., 2015, Astronomy and Computing, 11, 25 * Tendulkar et al. (2017) Tendulkar S. P., et al., 2017, The Astrophysical Journal Letters, 834, L7 * Tomaney & Crotts (1996) Tomaney A. B., Crotts A. P., 1996, arXiv preprint astro-ph/9610066 * Trott et al. (2013) Trott C. M., Tingay S. J., Wayth R. B., 2013, The Astrophysical Journal Letters, 776, L16 * Veenboer & Romein (2020) Veenboer B., Romein J. W., 2020, Astronomy and Computing, 32, 100386 * Wei et al. (2019) Wei J.-J., Li Z., Gao H., Wu X.-F., 2019, Journal of Cosmology and Astroparticle Physics, 2019, 039 * Williams et al. (2019) Williams P., Allers K., Biller B., Vos J., 2019, Research Notes of the AAS, 3, 110 * Wright et al. (2015) Wright D., et al., 2015, Monthly Notices of the Royal Astronomical Society, 449, 451 * Zackay et al. (2016) Zackay B., Ofek E. 
O., Gal-Yam A., 2016, The Astrophysical Journal, 830, 27 * Zhu & Feng (2021) Zhu W., Feng L.-L., 2021, The Astrophysical Journal, 906, 95
1 Graduate University for Advanced Studies (Sokendai), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan 2 Asia Pacific Center for Theoretical Physics (APCTP), Pohang 37673, Republic of Korea 3 Department of Physics, Pohang University of Science and Technology, Pohang 37673, Republic of Korea 4 Department of Physics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan # Residual flavor symmetry breaking in the landscape of modular flavor models Keiya Ishiguro1 Hiroshi Okada2,3 Hajime Otsuka4<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract We study the breaking of residual flavor symmetries realized at fixed points of the moduli space. In supersymmetric modular invariant theories, a small departure of the modulus from fixed points is required to realize fermion mass hierarchies and sizable CP-breaking effects. We investigate whether one can dynamically fix the moduli values in the vicinity of the fixed points in the context of Type IIB string theory. It is found that the string landscape prefers $|\delta\tau|\simeq 10^{-5}$ for the deviation of the complex structure modulus from all fixed points, and that CP-breaking vacua are statistically favored. To illustrate the phenomenological implications of the distributions of moduli values around fixed points, we analyze the lepton sector in a concrete $A_{4}$ modular flavor model. ## 1 Introduction Flavor symmetry is a powerful approach to understanding the flavor structure of quarks and leptons, and in addition, it provides a bridge between bottom-up and top-down approaches to model building. Indeed, when the flavor symmetry is embedded into a geometric symmetry of an extra-dimensional space, subgroups of the geometric symmetry can control the flavor structure of matter zero-modes. 
For instance, the $PSL(2,\mathbb{Z})$ modular symmetry incorporates the phenomenologically interesting non-Abelian discrete symmetries such as $S_{3},S_{4},A_{4}$ and $A_{5}$ as quotients by its principal congruence subgroups deAdelhartToorop:2011re . From the viewpoint of ultraviolet physics, it is known that the $SL(2,\mathbb{Z})$ modular group and its subgroups appearing in toroidal compactifications are connected to the flavor symmetries of matter zero-modes in heterotic orbifold models Ferrara:1989qb ; Lerche:1989cs ; Lauer:1990tm ; Baur:2019kwi ; Baur:2019iai and in Type IIB superstring theories with magnetized D-branes Kobayashi:2018rad ; Kobayashi:2018bff ; Ohki:2020bpo ; Kikuchi:2020frp ; Kikuchi:2020nxn ; Kikuchi:2021ogn ; Almumin:2021fbk . Such flavor symmetries are called modular flavor symmetries. Multi-moduli cases, such as the $Sp(2h,\mathbb{Z})$ symplectic modular symmetry, are also discussed in the context of heterotic string theory on toroidal orbifolds Baur:2020yjl and Calabi-Yau manifolds Strominger:1990pd ; Candelas:1990pi ; Ishiguro:2020nuf ; Ishiguro:2021ccl . From the phenomenological point of view, the modular flavor symmetries are attractive for predicting the masses and mixing angles of quarks and leptons for certain values of the moduli fields Feruglio:2017spp ; Kobayashi:2018vbk ; Penedo:2018nmg ; Novichkov:2018nkm ; Ding:2019xna ; Liu:2019khw ; Chen:2020udk ; Novichkov:2020eep ; Liu:2020akv ; Wang:2020lxk ; Yao:2020zml ; Ding:2020msi . The higher-dimensional operators in the Standard Model effective field theory are also controlled by the modular symmetries Kobayashi:2021pav ; Kobayashi:2022jvy , taking into account the selection rules of string theory Kobayashi:2021uam . The flavor symmetry of quarks/leptons, as well as CP symmetry, is broken only by the modulus $\tau$ parametrizing the shape of the torus. 
Note that the CP transformation is regarded as an outer automorphism of the modular group in both the single-modulus Baur:2019kwi ; Novichkov:2019sqv and multi-moduli cases Ishiguro:2021ccl . Once the modulus is fixed, there is no flavor symmetry at a generic point of the moduli space. However, there exist so-called fixed points in the fundamental region of $PSL(2,\mathbb{Z})$: $\tau=i,w,i\infty$ with $w=\frac{-1+i\sqrt{3}}{2}$, preserving $\mathbb{Z}_{2}$, $\mathbb{Z}_{3}$ and $\mathbb{Z}_{2}$ symmetries, respectively. Such fixed points play an important role in several phenomenological applications to the lepton sector Novichkov:2018ovf ; Novichkov:2018yse ; Novichkov:2018nkm ; Ding:2019gof ; Okada:2019uoy ; King:2019vhv ; Okada:2020rjb ; Okada:2020ukr ; Okada:2020brs ; Feruglio:2021dte ; Kobayashi:2021pav ; Kobayashi:2022jvy , as well as in controlling the effective action, e.g., for dark matter (DM) stability Kobayashi:2021ajl . Dynamically fixing the moduli values gives strong predictions for proposed modular flavor models. Such attempts were made in Refs. Kobayashi:2019xvz ; Kobayashi:2019uyt ; Kobayashi:2020uaj ; Ishiguro:2020tmo ; Novichkov:2022wvg . However, in most modular flavor models, a slight deviation of the moduli values from the fixed points is required to explain the observed masses and mixing angles of quarks/leptons, as recently discussed in Ref. Novichkov:2022wvg . In this paper, we adopt a top-down approach to dynamically fix the moduli values around the fixed points of the moduli space. In string theory, background fluxes can stabilize the moduli fields such that subgroups of $SL(2,\mathbb{Z})$ are realized Kobayashi:2020hoc , and the CP symmetry can be spontaneously broken Kobayashi:2020uaj ; Ishiguro:2020nuf . In addition, the flux landscape prefers the stabilization of moduli fields at fixed points with enhanced symmetries DeWolfe:2004ns ; Ishiguro:2020tmo . 
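The residual symmetries at these fixed points can be verified directly from the generators of $PSL(2,\mathbb{Z})$, $S:\tau\to-1/\tau$ and $T:\tau\to\tau+1$; a small numerical check (illustrative only):

```python
# tau = i is fixed by S: tau -> -1/tau (a Z2 in PSL(2,Z)), while
# tau = w = (-1 + i sqrt(3))/2 is fixed by ST: tau -> -1/(tau + 1),
# an order-3 element generating the Z3 residual symmetry.
S = lambda t: -1 / t
ST = lambda t: -1 / (t + 1)

tau_i = 1j
tau_w = (-1 + 1j * 3 ** 0.5) / 2

assert abs(S(tau_i) - tau_i) < 1e-12      # Z2 at tau = i
assert abs(ST(tau_w) - tau_w) < 1e-12     # Z3 at tau = w
# (ST)^3 acts as the identity on a generic point of the upper half-plane:
t = 0.3 + 1.7j
assert abs(ST(ST(ST(t))) - t) < 1e-12
```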
The purpose of this paper is to investigate the stabilization of moduli values near the fixed points and to discuss the phenomenological implications. For concreteness, we focus on Type IIB string theory on toroidal orientifolds, where the complex structure moduli determine the flavor structure of quarks and leptons. Turning on background three-form fluxes, these complex structure moduli, as well as the dilaton, are stabilized at statistically-favored symmetric points. To break the enhanced symmetries in the complex structure moduli space, we incorporate non-perturbative effects whose existence is motivated by the stabilization of the remaining volume moduli associated with the metric of the extra-dimensional space. It is then expected that these non-perturbative effects can slightly shift the values of the complex structure moduli away from the fixed points. Indeed, our systematic analysis of flux vacua with non-perturbative effects reveals that the complex structure moduli are stabilized near fixed points, with deviations whose magnitudes are controlled by the non-perturbative effects. Furthermore, we also incorporate an uplifting potential to obtain a de Sitter (dS) vacuum, as discussed in the Kachru-Kallosh-Linde-Trivedi (KKLT) scenario Kachru:2003aw . Such a supersymmetry (SUSY) breaking source also shifts the values of the complex structure moduli away from the fixed points. (Soft SUSY breaking terms will keep the modular invariance in the moduli-mediated SUSY breaking scenario Kikuchi:2022pkd , and their phenomenological aspects are discussed in Refs. Du:2020ylx ; Kobayashi:2021jqu ; Otsuka:2022rak .) It turns out that the string landscape prefers $|\delta\tau|\simeq 10^{-5}$ for the deviation of the complex structure modulus from the fixed points $\langle\tau\rangle=i,w,2i$. (Here, $\tau=i\infty$ is approximated as $\tau=2i$.) This corresponds to a specific SUSY breaking scale. 
In addition, the CP-breaking vacua are statistically favored due to the existence of non-perturbative effects as well as the uplifting source, although the number of CP-breaking vacua is statistically small within the finite number of flux vacua Ishiguro:2020tmo . These moduli values fit well with the observed masses and mixing angles in the lepton sector of a concrete $A_{4}$ modular flavor model. Furthermore, a quasi-stable dark matter (DM) candidate would be realized due to the softly broken residual flavor symmetry at the fixed points. This paper is organized as follows. After reviewing the structure of Type IIB flux vacua on $T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})$ orientifolds in section 2.1, we incorporate non-perturbative effects to stabilize the volume moduli in section 2.2. We numerically estimate the deviations of the complex structure modulus $\tau$ from the fixed points in section 2.3, taking into account SUSY breaking effects. These effects slightly break the enhanced symmetries in the moduli space of toroidal orientifolds. Given these moduli values, we study the concrete $A_{4}$ modular flavor model in section 3, with an emphasis on the lepton sector. The distributions of the $A_{4}$ model and the string landscape are compared. We summarize the paper in section 4. In Appendix A, we list the $A_{4}$ modular forms used in this paper. ## 2 Moduli distributions in the string landscape In section 2.1, we first review the flux vacua in Type IIB string theory on $T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})$ orientifolds with an emphasis on the enhanced symmetry in the complex structure moduli space. Next, we focus on non-perturbative effects, which slightly break the enhanced symmetries in the moduli space of toroidal orbifolds, as discussed in section 2.2. Finally, we plot the deviation of the complex structure modulus from the fixed points and the typical SUSY breaking scale in section 2.3. 
### 2.1 Flux vacua with enhanced symmetries In Type IIB string theory on $T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})$ orientifolds, the moduli Kähler potential and the flux-induced superpotential are given by (we follow the convention of Ref. Ishiguro:2020tmo ) $\displaystyle K$ $\displaystyle=-\ln(-i(S-\bar{S}))-2\ln{\cal V}(T)-3\ln\left(-i(\tau-\bar{\tau})\right),$ (2.1) where $S,T,\tau$ denote the dilaton, the Kähler (volume) moduli and the complex structure modulus, respectively. Here and in what follows, we adopt reduced Planck mass units unless specified otherwise, and we consider the isotropic torus $\tau=\tau_{1}=\tau_{2}=\tau_{3}$ to simplify our analysis. In Type IIB flux compactifications, one can consider the so-called Gukov-Vafa-Witten superpotential Gukov:1999ya induced by background three-form fluxes: $\displaystyle W_{\rm flux}$ $\displaystyle=a^{0}\tau^{3}-3a\tau^{2}-3b\tau- b_{0}-S\left(c^{0}\tau^{3}-3c\tau^{2}-3d\tau-d_{0}\right),$ (2.2) where $\\{a^{0},a,b,b_{0},c^{0},c,d,d_{0}\\}$ represent the three-form flux quanta in the notation of Ref. Ishiguro:2020tmo . These integers are quantized in multiples of 8 on the $T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})$ geometry. In this paper, we analyze the SUSY minima: $\displaystyle\partial_{S}W=\partial_{\tau}W=W=0\,,$ (2.3) at which the energy of the scalar potential vanishes, $V=e^{K}(K^{I\bar{J}}D_{I}W\overline{D_{J}W}-3|W|^{2})=0$ with $D_{I}W=\partial_{I}W+W\partial_{I}K$. Here, we use the so-called no-scale structure for the Kähler moduli: $K^{i\bar{j}}K_{i}K_{\bar{j}}=3$ with $K_{i}=\partial_{T^{i}}K$ and $K^{i\bar{j}}$ being the inverse of the Kähler metric. 
The SUSY conditions can be solved analytically by factorizing the superpotential as $\displaystyle W_{\rm RR}=a^{0}\tau^{3}-3a\tau^{2}-3b\tau- b_{0}=(r\tau+s)P(\tau)\,,$ (2.4) $\displaystyle W_{\rm NS}=c^{0}\tau^{3}-3c\tau^{2}-3d\tau-d_{0}=(u\tau+v)P(\tau)\,.$ (2.5) Here, $P(\tau)$ denotes a quadratic polynomial in $\tau$ with integer coefficients, and the minimum of $\tau$ is found by solving $P(\tau)=0$. Following Ref. Betzler:2019kon , we write $\displaystyle P(\tau)=l\tau^{2}+m\tau+n\,,$ (2.6) with $m^{2}-4ln<0$, which leads to the vacuum expectation value of $\tau$: $\displaystyle\langle\tau\rangle$ $\displaystyle=\frac{-m+\sqrt{m^{2}-4ln}}{2l}\quad(l,n>0)\,,$ $\displaystyle\langle\tau\rangle$ $\displaystyle=\frac{-m-\sqrt{m^{2}-4ln}}{2l}\quad(l,n<0)\,.$ (2.7) The vacuum expectation value of the dilaton field is obtained by solving $\partial_{\tau}W=0$, i.e., $\displaystyle P(\tau)\partial_{\tau}\\{(r\tau+s)-S(u\tau+v)\\}+\\{(r\tau+s)-S(u\tau+v)\\}\partial_{\tau}P(\tau)=0\,.$ (2.8) Since $\tau$ is now stabilized at Eq. (2.7), determined by $P(\tau)=0$, we require $\displaystyle\langle S\rangle=\frac{r\tau+s}{u\tau+v}\,.$ (2.9) Note that the condition $\partial_{\tau}P(\tau)=0$ would give rise to a real $\tau$, in which case the dilaton could not be stabilized. At this stage, one cannot stabilize the Kähler moduli, and additional sources such as non-perturbative effects are required. Before going into the details of the volume moduli stabilization, we also review the structure of flux vacua on the toroidal orientifold. Remarkably, the background three-form fluxes induce a net D3-brane charge: $\displaystyle N_{\rm flux}$ $\displaystyle=\frac{1}{l_{s}^{4}}\int H_{3}\wedge F_{3}=c^{0}b_{0}-d_{0}a^{0}+\sum_{i=1}^{3}(c^{i}b_{i}-d_{i}a^{i})=c^{0}b_{0}-d_{0}a^{0}+3(cb- da)\,,$ (2.10) with the string length $l_{s}$, which must be canceled on a compact manifold. 
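The vacuum values (2.7) and (2.9) can be evaluated directly. The sketch below uses illustrative integer coefficients (not actual flux data from the paper) and selects the upper-half-plane root, which is what the sign conditions in Eq. (2.7) enforce:

```python
import cmath

def tau_vev(l: int, m: int, n: int) -> complex:
    """Upper-half-plane root of P(tau) = l tau^2 + m tau + n (Eq. 2.7)."""
    assert m * m - 4 * l * n < 0, "P(tau) must have complex roots"
    disc = cmath.sqrt(m * m - 4 * l * n)
    root = (-m + disc) / (2 * l)
    return root if root.imag > 0 else (-m - disc) / (2 * l)

def dilaton_vev(tau: complex, r: int, s: int, u: int, v: int) -> complex:
    """<S> = (r tau + s)/(u tau + v) (Eq. 2.9)."""
    return (r * tau + s) / (u * tau + v)

print(tau_vev(1, 0, 1))  # the Z2 fixed point tau = i
print(tau_vev(1, 1, 1))  # the Z3 fixed point tau = w
```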
Taking into account the contributions of D3/D7-branes and O3/O7-planes, the flux-induced D3-brane charge is constrained as $\displaystyle 0\leq N_{\rm flux}\leq N_{\rm flux}^{\rm max}={\cal O}(10^{5})\,.$ (2.11) Here, we admit the F-theory extension of the Type IIB orientifolds, where the largest value of the O3-plane contribution is given by $1820448$ Candelas:1997eh ; Taylor:2015xtz . Furthermore, $N_{\rm flux}$ should be a multiple of 192 due to the fact that $\\{a^{0},a,b,b_{0},c^{0},c,d,d_{0}\\}\in 8\mathbb{Z}$. For concreteness, we focus on the vacuum structure of $\tau$, whose fixed points in the moduli space are $\tau=i,w=\frac{-1+i\sqrt{3}}{2},i\infty$, corresponding to $\mathbb{Z}_{2}$, $\mathbb{Z}_{3}$ and $\mathbb{Z}_{2}$ fixed points, respectively. The $\tau=i,w$ fixed points are statistically favored in the flux landscape, as seen in Fig. 1, where the higher the degeneracy of vacua, the darker the color. Note that one cannot realize $\tau=i\infty$, which would require infinite flux quanta and is inconsistent with the D3-brane charge cancellation condition (2.11), namely the tadpole cancellation condition. In this respect, we adopt $\tau=2i$ as an approximation of $\tau=i\infty$. Such an approximation is often used in the phenomenological analysis of modular flavor models. Figure 1: The numbers of stable vacua on the fundamental domain of $\tau$ in the case of $N_{\rm flux}^{\rm max}=192\times 1000$ Ishiguro:2020tmo . ### 2.2 Stabilization of volume moduli by non-perturbative effects In this section, we analyze the stabilization of the volume moduli along the lines of the KKLT scenario Kachru:2003aw . 
The stabilization of the volume moduli is achieved through the following non-perturbative effects: $\displaystyle W=W_{\rm flux}(\tau,S)+W_{\rm np}(S,T)\,,$ (2.12) where $\displaystyle W_{\rm np}=\sum_{m}C_{m}e^{ia_{m}T+ib_{m}S}$ (2.13) is supposed to be generated by D-brane instanton effects with $a_{m},b_{m}=2\pi$, or by strong dynamics on D7-branes wrapping a rigid cycle with $a_{m}=2\pi/N$, $N$ being the rank of the gauge group. Here and in what follows, we consider a simple setup where the volume of the internal manifold is determined by a single Kähler modulus $T$ whose Kähler potential is given by $K=-3\ln(i(\bar{T}-T))$. In the KKLT construction, the dilaton and the complex structure moduli are determined in the context of Type IIB flux compactifications, as discussed in section 2.1. Note that the vacuum expectation value of the flux superpotential vanishes in our analysis of the previous section; thereby the dilaton-dependent non-perturbative effects induce a constant term in the effective superpotential: $\displaystyle W_{\rm eff}=W_{\rm np}(\langle S\rangle,T)\,,$ (2.14) which includes the following terms required in the KKLT construction: $\displaystyle W_{\rm eff}\simeq\langle e^{ibS}\rangle+Ce^{iaT}\,.$ (2.15) Thus, the overall Kähler modulus is stabilized at $T=T_{0}$ satisfying $\displaystyle D_{T}W_{\rm eff}=\partial_{T}W_{\rm eff}+K_{T}W_{\rm eff}=0\,,$ (2.16) at which the minimum value of $T$ is estimated as $\displaystyle a\langle T\rangle\approx\ln(C/w_{0})\,,$ (2.17) with $w_{0}=\langle e^{ibS}\rangle$. Here, the origin of the small superpotential $w_{0}$ relies on the dilaton-dependent non-perturbative effects. It is also possible to realize a small flux superpotential in Type IIB/F-theory flux compactifications (see Refs. Demirtas:2019sip ; Honma:2021klo for the large complex structure regime). In what follows, the prefactor $C$ is assumed to be a constant, specifically 1. 
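The estimate (2.17) can be checked numerically. Restricting to $T=it$ with $t={\rm Im}(T)$, so that $K=-3\ln(2t)$, and taking illustrative real values $w_{0}=-10^{-4}$, $C=1$, $a=2\pi$ (phases dropped for simplicity; these numbers are not landscape data), the condition $D_{T}W_{\rm eff}=0$ reduces to a one-dimensional root-finding problem:

```python
import math

a, C, w0 = 2 * math.pi, 1.0, -1e-4  # illustrative KKLT-like parameters

def dTW(t: float) -> float:
    """D_T W_eff along T = i t: the bracket of i[aCe^{-at} + K_T * W_eff],
    with K = -3 ln(2t), reduced to a real function of t."""
    return a * C * math.exp(-a * t) + (3 / (2 * t)) * (w0 + C * math.exp(-a * t))

# simple bisection for the root of D_T W_eff = 0
lo, hi = 1.0, 3.0  # dTW(1) > 0, dTW(3) < 0 for these parameters
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if dTW(lo) * dTW(mid) <= 0:
        hi = mid
    else:
        lo = mid
t_min = 0.5 * (lo + hi)
# a * t_min comes out of the order of ln(C/|w0|), as in Eq. (2.17)
```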
So far, we have assumed that the dilaton and the complex structure moduli are stabilized by the flux compactification. However, the presence of non-perturbative effects slightly shifts their values. Indeed, the true vacuum is found by solving $\displaystyle D_{I}W=0\,,$ (2.18) which changes the moduli values obtained in the previous section. To find the slight deviation from the fixed points of the complex structure modulus, we utilize the perturbation method: the non-perturbative superpotential $W_{\rm np}$ causes a shift of the minima, $\displaystyle\tau$ $\displaystyle=\langle\tau\rangle+\delta\tau\,,$ $\displaystyle S$ $\displaystyle=\langle S\rangle+\delta S\,,$ $\displaystyle T$ $\displaystyle=\langle T\rangle+\delta T\,,$ (2.19) where the reference points $\\{\langle\tau\rangle,\langle S\rangle,\langle T\rangle\\}$ are given in Eqs. (2.7), (2.9) and (2.17), respectively. Following Ref. Abe:2006xi , we estimate the deviation to linear order. Let us consider the Kähler-invariant quantity $G=K+\ln|W|^{2}$ satisfying $G_{A}=\partial_{A}G=0$ at the SUSY minima. Here and in what follows, the index $A$ runs over both the holomorphic and anti-holomorphic fields: $\\{S,T,\tau,\bar{S},\bar{T},\bar{\tau}\\}$. From the expansion (2.19), $G_{A}$ is expanded as $\displaystyle G_{A}=G_{A}\bigl{|}_{\langle\rangle}+\delta\phi^{B}G_{AB}\bigl{|}_{\langle\rangle}+{\cal O}((\delta\phi)^{2})\,,$ (2.20) where $\bigl{|}_{\langle\rangle}$ means $\bigl{|}_{\tau=\langle\tau\rangle,S=\langle S\rangle,T=\langle T\rangle}$. Under the assumption $a,b>1$, we obtain $\displaystyle G_{IJ},G_{\bar{I}\bar{J}}\gg G_{I\bar{J}},G_{\bar{I}J}\,,$ (2.21) which implies that $G_{AB}$ and $G^{AB}$ are approximately block-diagonal in the holomorphic and anti-holomorphic parts, respectively. 
As a result, we obtain $\displaystyle\delta\phi^{I}=G^{IJ}G_{J}\bigl{|}_{\langle\rangle}+{\cal O}((\delta\phi)^{2})\,,$ (2.22) whose explicit form reads $\displaystyle\delta\tau$ $\displaystyle=W_{\rm eff}\left(-\frac{G_{S}}{W_{S\tau}}\right)\biggl{|}_{\langle\rangle}+{\cal O}(W_{\rm eff}^{2})\,,$ $\displaystyle\delta S$ $\displaystyle=\frac{W_{\rm eff}}{W_{S\tau}}\left(\frac{W_{\tau\tau}}{W_{S\tau}}G_{S}-G_{\tau}\right)\biggl{|}_{\langle\rangle}+{\cal O}(W_{\rm eff}^{2})\,,$ $\displaystyle\delta T$ $\displaystyle=\left(-\frac{G_{ST}}{G_{TT}}\right)\biggl{|}_{\langle\rangle}\delta S\,.$ (2.23) Note that the internal volume should be larger than the string length, $\displaystyle{\rm Im}(T)\gg 1\,,$ (2.24) and the string coupling should be weak, ${\rm Im}(S)>1$; thereby, the magnitude of the effective superpotential is exponentially small: $\displaystyle\langle W_{\rm eff}\rangle\simeq w_{0}+e^{iaT}<10^{-3}\,.$ (2.25) Here and in the following numerical calculations, we take $a=b=2\pi$ for concreteness. In this way, the deviation of the vacuum values $\\{\delta\tau,\delta S,\delta T\\}$ is determined by the non-perturbative effects, implying that the deviation is naturally suppressed with respect to the volume modulus. From the phenomenological point of view, such a small deviation of $\tau$ is useful for predicting the masses and mixing angles of quarks and leptons, as discussed in detail in section 3. Before going into a phenomenological analysis, we discuss the supersymmetry breaking effects in the next section. ### 2.3 Moduli values at nearby fixed points So far, we have analyzed the stabilization of the complex structure modulus, dilaton and Kähler moduli at SUSY minima. However, the obtained vacuum energy is negative, i.e., we obtain an anti-de Sitter (AdS) vacuum. To realize a de Sitter (dS) vacuum, the AdS vacuum must be uplifted.
Among several proposals for uplifting scenarios, we focus on the anti-D3-brane, as originally utilized in the KKLT scenario Kachru:2003aw . The anti-D3-brane induces a positive vacuum energy due to its SUSY breaking effect, $\displaystyle V_{\rm up}=\frac{D}{(i(\bar{T}-T))^{3}}\,,$ (2.26) where the constant $D$ is chosen to realize the present vacuum energy. Then, the effective scalar potential is described as $\displaystyle V=e^{K}\left(K^{I\bar{J}}D_{I}W\overline{D_{J}W}-3|W|^{2}\right)+V_{\rm up}\,,$ (2.27) indicating that the uplifting source further shifts the moduli values obtained in the previous section. To see the deviation of the complex structure moduli values from fixed points, we numerically calculate the deviation of $\tau$ from $\langle\tau\rangle=i,\omega,2i$ by utilizing a finite number of flux vacua with $N_{\rm flux}^{\rm max}=1000$. By solving the minimization condition of the full scalar potential $\partial_{I}V=0$ for $I=\tau,S,T$, we find deviations of the complex structure modulus from the fixed points $\delta\tau\equiv\tau-\langle\tau\rangle$ as shown in Figs. 2, 3 and 4. It turns out that the flux landscape prefers $|\delta\tau|\simeq 10^{-5}$ away from the fixed points $\langle\tau\rangle=i,\omega,2i$, but there is no sizable difference in the phase direction. This means that the CP symmetry, parametrized by $\tau\rightarrow-\bar{\tau}$, is broken at a generic point of moduli space. Furthermore, we plot the typical SUSY breaking scale, i.e., the gravitino mass $m_{3/2}=e^{K/2}W$, in the same figures. At the statistically favored moduli values $|\delta\tau|={\cal O}(10^{-5})$, the gravitino mass is $m_{3/2}={\cal O}(10^{-5})$ in the reduced Planck mass unit. Note that the small $\delta\tau$ originates from the non-perturbative effects and the uplifting source, both of which are of the same order.
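The quoted order of magnitude for $m_{3/2}=e^{K/2}W$ can be reproduced with back-of-the-envelope numbers. This rough sketch keeps only the volume-modulus part of the Kähler potential; $W\sim10^{-4}$ and ${\rm Im}\,T\sim2$ are illustrative inputs, not fitted values:

```python
# m_{3/2} = e^{K/2} |W| in reduced Planck units, keeping only
# K ⊃ -3 ln(2 Im T) and hypothetical benchmark values.
W, ImT = 1e-4, 2.0
m32 = (2 * ImT) ** -1.5 * W
print(m32)   # O(10^-5), consistent with the statistically favored vacua
```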
Figure 2: The number of vacua near $\langle\tau\rangle=i$ as a function of $|\delta\tau|$ in the left panel and ${\rm Arg}(\delta\tau)$ in the right panel, respectively. In the left panel, the absolute value of the gravitino mass is also plotted as a function of $|\delta\tau|$. Figure 3: The number of vacua near $\langle\tau\rangle=\omega$ as a function of $|\delta\tau|$ in the left panel and ${\rm Arg}(\delta\tau)$ in the right panel, respectively. In the left panel, the absolute value of the gravitino mass is also plotted as a function of $|\delta\tau|$. Figure 4: The number of vacua near $\langle\tau\rangle=2i$ as a function of $|\delta\tau|$ in the left panel and ${\rm Arg}(\delta\tau)$ in the right panel, respectively. In the left panel, the absolute value of the gravitino mass is also plotted as a function of $|\delta\tau|$. ## 3 $A_{4}$ modular flavor model To illustrate the implications of the distributions of moduli fields around fixed points, we study the phenomenology of the lepton sector in a concrete $A_{4}$ modular flavor model. ### 3.1 Setup

| | $L$ | $\\{e^{c},\mu^{c},\tau^{c}\\}$ | $N^{c}$ | $H_{u}$ | $H_{d}$ |
|---|---|---|---|---|---|
| $SU(2)_{L}$ | $\bf{2}$ | $\bf{1}$ | $\bf{1}$ | $\bf{2}$ | $\bf{2}$ |
| $U(1)_{Y}$ | $-\frac{1}{2}$ | $1$ | $0$ | $\frac{1}{2}$ | $-\frac{1}{2}$ |
| $A_{4}$ | ${\bf 3}$ | $\\{\bf 1,1^{\prime},1^{\prime}\\}$ | ${\bf 3}$ | ${\bf 1}$ | ${\bf 1}$ |
| $-k_{I}$ | $-2$ | $\\{-2,-4,-4\\}$ | $-2$ | $0$ | $0$ |

Table 1: Charge assignments under $SU(2)_{L}\times U(1)_{Y}\times A_{4}$ in the lepton and Higgs sectors, where $k_{I}$ denotes the modular weight of the matter superfields $\Phi_{I}$. For concreteness, we specify the charge assignments under $SU(2)_{L}\times U(1)_{Y}\times A_{4}$ for the lepton and Higgs sectors as summarized in Tab. 1. Here, the $A_{4}$ group arises as a finite quotient of the $SL(2,\mathbb{Z})$ modular group, which is parametrized by the modulus $\tau$. The Yukawa couplings are constructed in a modular invariant way. (For more details, see Appendix A.)
Then, we can write down the modular invariant superpotential: $\displaystyle W$ $\displaystyle=y_{e}(Y_{\bf 3}^{(4)}L)_{1}H_{d}e^{c}+\sum_{\bf r=3,3^{\prime}}y_{\mu}^{({\bf r})}(Y_{\bf r}^{(6)}L)_{1}H_{d}\mu^{c}+\sum_{\bf r=3,3^{\prime}}y_{\tau}^{({\bf r})}(Y_{\bf r}^{(6)}L)_{1}H_{d}\tau^{c}$ $\displaystyle+\sum_{\bf r=1,1^{\prime}}y_{d}^{({\bf r})}(Y_{\bf r}^{(4)}LH_{u}N^{c})_{1}+y_{d}^{({\bf 3_{S}})}(Y_{\bf 3}^{(4)}H_{u}(LN^{c})_{\bf 3_{S}})_{1}+y_{d}^{({\bf 3_{A}})}(Y_{\bf 3}^{(4)}H_{u}(LN^{c})_{\bf 3_{A}})_{1}$ $\displaystyle+\sum_{\bf r=1,1^{\prime},3}M^{({\bf r})}(Y_{\bf r}^{(4)}N^{c}N^{c})_{1},$ (3.1) where $Y_{\bf r}^{(k)}$ denotes the holomorphic modular form of weight $k$ in the ${\bf r}$ representation of the $A_{4}$ group, and $\\{y_{e},y_{\mu}^{({\bf r})},y_{\tau}^{({\bf r})},y_{d}^{({\bf r})}\\}$ are coupling parameters. Here, we introduce Majorana mass terms to realize small neutrino masses. In the following, we enumerate the mass matrices of the lepton sector. 1. Charged-lepton mass matrix After the electroweak symmetry breaking, the charged-lepton mass matrix is written as $\displaystyle(m_{l})_{LR}=\frac{v_{d}}{\sqrt{2}}\begin{pmatrix}Y_{1}^{(4)}&Y_{3}^{(6)}+\epsilon_{\mu}Y_{3^{\prime}}^{(6)}&Y_{3}^{(6)}+\epsilon_{\tau}Y_{3^{\prime}}^{(6)}\\\ Y_{3}^{(4)}&Y_{2}^{(6)}+\epsilon_{\mu}Y_{2^{\prime}}^{(6)}&Y_{2}^{(6)}+\epsilon_{\tau}Y_{2^{\prime}}^{(6)}\\\ Y_{2}^{(4)}&Y_{1}^{(6)}+\epsilon_{\mu}Y_{1^{\prime}}^{(6)}&Y_{1}^{(6)}+\epsilon_{\tau}Y_{1^{\prime}}^{(6)}\\\ \end{pmatrix}\times\begin{pmatrix}y_{e}&0&0\\\ 0&y_{\mu}^{(\bf 3)}&0\\\ 0&0&y_{\tau}^{(\bf 3)}\\\ \end{pmatrix},$ (3.2) where we introduce $\displaystyle\langle H_{d}\rangle=v_{d},\quad\epsilon_{\mu}=\frac{y_{\mu}^{(\bf 3^{\prime})}}{y_{\mu}^{({\bf 3})}},\quad\epsilon_{\tau}=\frac{y_{\tau}^{(\bf 3^{\prime})}}{y_{\tau}^{({\bf 3})}},\quad Y_{\bf 3}^{(k)}=\begin{pmatrix}Y_{1}^{(k)}\\\ Y_{2}^{(k)}\\\ Y_{3}^{(k)}\\\ \end{pmatrix},\quad Y_{\bf 3^{\prime}}^{(k)}=\begin{pmatrix}Y_{1^{\prime}}^{(k)}\\\
Y_{2^{\prime}}^{(k)}\\\ Y_{3^{\prime}}^{(k)}\\\ \end{pmatrix}.$ (3.3) The explicit modular forms are listed in Appendix A. Then, the squared charged-lepton masses are obtained from ${\rm diag}(|m_{e}|^{2},|m_{\mu}|^{2},|m_{\tau}|^{2})\equiv V^{\dagger}_{l_{L}}m_{l}^{\dagger}m_{l}V_{l_{L}}$. We numerically determine the three parameters $y_{e},y_{\mu}^{(\bf 3)},y_{\tau}^{(\bf 3)}$ to fit the three charged-lepton masses by applying the relations: $\displaystyle{\rm Tr}[m_{l}^{\dagger}m_{l}]=|m_{e}|^{2}+|m_{\mu}|^{2}+|m_{\tau}|^{2},$ (3.4) $\displaystyle{\rm Det}[m_{l}^{\dagger}m_{l}]=|m_{e}|^{2}|m_{\mu}|^{2}|m_{\tau}|^{2},$ (3.5) $\displaystyle({\rm Tr}[m_{l}^{\dagger}m_{l}])^{2}-{\rm Tr}[(m_{l}^{\dagger}m_{l})^{2}]=2(|m_{e}|^{2}|m_{\mu}|^{2}+|m_{\mu}|^{2}|m_{\tau}|^{2}+|m_{e}|^{2}|m_{\tau}|^{2}).$ (3.6) Therefore, the input parameters in the charged-lepton sector are $\epsilon_{\mu}$ and $\epsilon_{\tau}$. 2. Dirac Yukawa mass matrix $\displaystyle(m_{D})_{LN}$ $\displaystyle=\frac{v_{u}}{\sqrt{2}}\Biggl{[}\frac{y_{d}^{({\bf 3_{S}})}}{3}\begin{pmatrix}2Y_{1}^{(4)}&-Y_{3}^{(4)}&-Y_{2}^{(4)}\\\ -Y_{3}^{(4)}&2Y_{2}^{(4)}&-Y_{1}^{(4)}\\\ -Y_{2}^{(4)}&-Y_{1}^{(4)}&2Y_{3}^{(4)}\\\ \end{pmatrix}+\frac{y_{d}^{({\bf 3_{A}})}}{2}\begin{pmatrix}0&Y_{3}^{(4)}&-Y_{2}^{(4)}\\\ -Y_{3}^{(4)}&0&Y_{1}^{(4)}\\\ Y_{2}^{(4)}&-Y_{1}^{(4)}&0\\\ \end{pmatrix}$ $\displaystyle\hskip 40.0pt+y_{d}^{({\bf 1})}Y_{\bf 1}^{(4)}\begin{pmatrix}1&0&0\\\ 0&0&1\\\ 0&1&0\\\ \end{pmatrix}+y_{d}^{({\bf 1^{\prime}})}Y_{\bf 1^{\prime}}^{(4)}\begin{pmatrix}0&0&1\\\ 0&1&0\\\ 1&0&0\\\ \end{pmatrix}\Biggr{]}$ $\displaystyle=m_{d_{0}}\Biggl{[}\begin{pmatrix}2Y_{1}^{(4)}&(-1+g_{D})Y_{3}^{(4)}&-(1+g_{D})Y_{2}^{(4)}\\\ -(1+g_{D})Y_{3}^{(4)}&2Y_{2}^{(4)}&(-1+g_{D})Y_{1}^{(4)}\\\ (-1+g_{D})Y_{2}^{(4)}&-(1+g_{D})Y_{1}^{(4)}&2Y_{3}^{(4)}\\\ \end{pmatrix}$ $\displaystyle\hskip 40.0pt+h_{1}\begin{pmatrix}1&0&0\\\ 0&0&1\\\ 0&1&0\\\ \end{pmatrix}+h_{2}\begin{pmatrix}0&0&1\\\ 0&1&0\\\ 1&0&0\\\ \end{pmatrix}\Biggr{]}$
$\displaystyle\equiv m_{d_{0}}\tilde{m}_{D},$ (3.7) where we define $\displaystyle\langle H_{u}\rangle=v_{u},\quad m_{d_{0}}\equiv\frac{y_{d}^{(\bf 3_{S})}}{3\sqrt{2}}v_{u},\quad g_{D}=\frac{3y_{d}^{(\bf 3_{A})}}{2y_{d}^{(\bf 3_{S})}},\quad h_{1}=\frac{3y_{d}^{({\bf 1})}Y_{\bf 1}^{(4)}}{y_{d}^{(\bf 3_{S})}},\quad h_{2}=\frac{3y_{d}^{({\bf 1^{\prime}})}Y_{\bf 1^{\prime}}^{(4)}}{y_{d}^{(\bf 3_{S})}}.$ (3.8) 3. Majorana mass matrix $\displaystyle M_{N}$ $\displaystyle=\frac{M_{1}}{3}\begin{pmatrix}2Y_{1}^{(4)}&-Y_{3}^{(4)}&-Y_{2}^{(4)}\\\ -Y_{3}^{(4)}&2Y_{2}^{(4)}&-Y_{1}^{(4)}\\\ -Y_{2}^{(4)}&-Y_{1}^{(4)}&2Y_{3}^{(4)}\\\ \end{pmatrix}+M_{2}Y_{\bf 1}^{(4)}\begin{pmatrix}1&0&0\\\ 0&0&1\\\ 0&1&0\\\ \end{pmatrix}+M_{3}Y_{\bf 1^{\prime}}^{(4)}\begin{pmatrix}0&0&1\\\ 0&1&0\\\ 1&0&0\\\ \end{pmatrix}$ $\displaystyle=M_{0}\Biggl{[}\begin{pmatrix}2Y_{1}^{(4)}&-Y_{3}^{(4)}&-Y_{2}^{(4)}\\\ -Y_{3}^{(4)}&2Y_{2}^{(4)}&-Y_{1}^{(4)}\\\ -Y_{2}^{(4)}&-Y_{1}^{(4)}&2Y_{3}^{(4)}\\\ \end{pmatrix}+f_{1}\begin{pmatrix}1&0&0\\\ 0&0&1\\\ 0&1&0\\\ \end{pmatrix}+f_{2}\begin{pmatrix}0&0&1\\\ 0&1&0\\\ 1&0&0\\\ \end{pmatrix}\Biggr{]}$ $\displaystyle\equiv M_{0}\tilde{M}_{N},$ (3.9) where we define $\displaystyle M_{0}\equiv\frac{M_{1}}{3},\quad f_{1}=\frac{3Y_{\bf 1}^{(4)}M_{2}}{M_{1}},\quad f_{2}=\frac{3Y_{\bf 1^{\prime}}^{(4)}M_{3}}{M_{1}}.$ (3.10) Then, the active neutrino mass matrix is given by $\displaystyle m_{\nu}$ $\displaystyle\approx-m_{D}^{T}M_{N}^{-1}m_{D}=-\kappa\tilde{m}_{D}^{T}\tilde{M}_{N}^{-1}\tilde{m}_{D}=-\kappa\tilde{m}_{\nu},$ (3.11) where the dimensionful parameter $\kappa$ is defined by $\kappa\equiv m_{d_{0}}^{2}/M_{0}$. $\tilde{m}_{\nu}$ is diagonalized by a unitary matrix as $V^{\dagger}_{\nu}(\tilde{m}_{\nu}^{\dagger}\tilde{m}_{\nu})V_{\nu}={\rm diag}(\tilde{m}_{1}^{2},\tilde{m}_{2}^{2},\tilde{m}_{3}^{2})$.
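The seesaw structure of Eqs. (3.7)-(3.13) can be sketched numerically. The modular-form values and couplings below are hypothetical placeholders (the actual $Y_{i}^{(4)}$ depend on $\tau$; see Appendix A), so the output is illustrative only:

```python
import numpy as np

# Hypothetical placeholder values; the real Y_i^{(4)} are modular forms of tau.
Y1, Y2, Y3 = 1.0, -0.3, 0.05
gD, h1, h2, f1, f2 = 0.2, 0.5, 0.1, 0.8, 0.3

P1 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
P2 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)
S = np.array([[2*Y1, -Y3, -Y2], [-Y3, 2*Y2, -Y1], [-Y2, -Y1, 2*Y3]])
A = np.array([[0, Y3, -Y2], [-Y3, 0, Y1], [Y2, -Y1, 0]])

m_D = S + gD * A + h1 * P1 + h2 * P2      # tilde{m}_D of Eq. (3.7)
M_N = S + f1 * P1 + f2 * P2               # tilde{M}_N of Eq. (3.9)

m_nu = -m_D.T @ np.linalg.inv(M_N) @ m_D  # seesaw, Eq. (3.11)
m2 = np.sort(np.linalg.eigvalsh(m_nu.conj().T @ m_nu))  # tilde{m}_i^2

# Fix the overall scale kappa from the atmospheric splitting (NH), Eq. (3.12)
dm2_atm = 2.5e-3                          # eV^2, illustrative input
kappa2 = dm2_atm / (m2[2] - m2[0])
dm2_sol = kappa2 * (m2[1] - m2[0])        # output, Eq. (3.13)
print(dm2_sol)
```

This mirrors the fitting strategy of the text: the dimensionless spectrum is produced by the flavor structure, while $\kappa$ is fixed by the measured atmospheric splitting and the solar splitting comes out as a prediction.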
In this case, $\kappa$ is determined by $\displaystyle({\rm NH}):\ \kappa^{2}=\frac{|\Delta m_{\rm atm}^{2}|}{\tilde{m}_{3}^{2}-\tilde{m}_{1}^{2}},\quad({\rm IH}):\ \kappa^{2}=\frac{|\Delta m_{\rm atm}^{2}|}{\tilde{m}_{2}^{2}-\tilde{m}_{3}^{2}},$ (3.12) where $\Delta m_{\rm atm}^{2}$ is the atmospheric neutrino mass-squared difference, and NH and IH stand for normal and inverted hierarchies, respectively. The solar mass-squared difference is also found in terms of $\kappa$ as follows: $\displaystyle\Delta m_{\rm sol}^{2}={\kappa^{2}}({\tilde{m}_{2}^{2}-\tilde{m}_{1}^{2}}).$ (3.13) In our numerical analysis, we regard $\Delta m_{\rm atm}^{2}$ as an input parameter from experiments, so that $\Delta m_{\rm sol}^{2}$ becomes an output parameter once the numerical values of $(\tilde{m}_{1}^{2},\tilde{m}_{2}^{2},\tilde{m}_{3}^{2})$ are obtained. Then, one finds $U_{\rm PMNS}=V^{\dagger}_{l_{L}}V_{\nu}$, which is parametrized by three mixing angles $\theta_{ij}\ (i,j=1,2,3;\ i<j)$, one CP-violating Dirac phase $\delta_{\rm CP}$, and two Majorana phases $\\{\alpha_{21},\alpha_{31}\\}$ as follows: $U_{\rm PMNS}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta_{\rm CP}}\\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{\text{CP}}}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta_{\text{CP}}}&s_{23}c_{13}\\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{\text{CP}}}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta_{\text{CP}}}&c_{23}c_{13}\end{pmatrix}\begin{pmatrix}1&0&0\\\ 0&e^{i\frac{\alpha_{21}}{2}}&0\\\ 0&0&e^{i\frac{\alpha_{31}}{2}}\end{pmatrix},$ (3.14) where $c_{ij}$ and $s_{ij}$ stand for $\cos\theta_{ij}$ and $\sin\theta_{ij}$, respectively.
These mixings are rewritten in terms of the components of $U_{\rm PMNS}$ as follows: $\displaystyle\sin^{2}\theta_{13}=|(U_{\rm PMNS})_{13}|^{2},\quad\sin^{2}\theta_{23}=\frac{|(U_{\rm PMNS})_{23}|^{2}}{1-|(U_{\rm PMNS})_{13}|^{2}},\quad\sin^{2}\theta_{12}=\frac{|(U_{\rm PMNS})_{12}|^{2}}{1-|(U_{\rm PMNS})_{13}|^{2}}.$ (3.15) In addition, we can compute the Jarlskog invariant $J_{\rm CP}$ from the PMNS matrix elements $(U_{\rm PMNS})_{\alpha i}\equiv U_{\alpha i}$: $J_{\rm CP}=\text{Im}[U_{e1}U_{\mu 2}U_{e2}^{*}U_{\mu 1}^{*}]=s_{23}c_{23}s_{12}c_{12}s_{13}c^{2}_{13}\sin\delta_{\rm CP},$ (3.16) and the Majorana phases are also estimated in terms of other invariants $I_{1}$ and $I_{2}$ constructed from the PMNS matrix elements: $I_{1}=\text{Im}[U^{*}_{e1}U_{e2}]=c_{12}s_{12}c_{13}^{2}\sin\left(\frac{\alpha_{21}}{2}\right),\ I_{2}=\text{Im}[U^{*}_{e1}U_{e3}]=c_{12}s_{13}c_{13}\sin\left(\frac{\alpha_{31}}{2}-\delta_{\rm CP}\right).$ (3.17) Furthermore, the effective mass for the neutrinoless double beta decay is given by $\displaystyle\langle m_{ee}\rangle=\kappa|\tilde{D}_{\nu_{1}}c^{2}_{12}c^{2}_{13}+\tilde{D}_{\nu_{2}}s^{2}_{12}c^{2}_{13}e^{i\alpha_{21}}+\tilde{D}_{\nu_{3}}s^{2}_{13}e^{i(\alpha_{31}-2\delta_{\rm CP})}|\,,$ (3.18) whose value could be measured by the KamLAND-Zen experiment in the future KamLAND-Zen:2016pfg . In our numerical analysis below, we perform a $\Delta\chi^{2}$ analysis referring to Ref. Esteban:2020cvm . ### 3.2 Numerical analysis In this section, we show the allowed regions obtained from a $\chi^{2}$ analysis against the current neutrino oscillation data, where we randomly sample the input parameters within the following ranges, $\displaystyle|\delta\tau|\in[10^{-20},0.1],\ \\{\epsilon_{\mu},\epsilon_{\tau},g_{D},f_{1},f_{2},h_{1},h_{2}\\}\in[10^{-4},2],$ (3.19) where we assume all the parameters (except $\tau$) are real for simplicity.
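Equations (3.15)-(3.17) amount to reading angles and rephasing invariants off the matrix $U_{\rm PMNS}$. A self-contained numerical check (with illustrative angle values, not the model's predictions) builds $U_{\rm PMNS}$ from Eq. (3.14) and recovers them:

```python
import numpy as np

# Illustrative angles (radians) and phases; hypothetical inputs only.
th12, th23, th13, dcp = 0.59, 0.84, 0.15, 3.4
a21, a31 = 1.0, 2.0
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)
e = np.exp(1j * dcp)

# Eq. (3.14): Dirac block times the Majorana phase matrix
U = np.array([
    [c12*c13,                  s12*c13,                  s13*np.exp(-1j*dcp)],
    [-s12*c23 - c12*s23*s13*e,  c12*c23 - s12*s23*s13*e,  s23*c13],
    [s12*s23 - c12*c23*s13*e,  -c12*s23 - s12*c23*s13*e,  c23*c13],
]) @ np.diag([1, np.exp(1j*a21/2), np.exp(1j*a31/2)])

# Eq. (3.15): mixing angles from matrix elements
s13_sq = abs(U[0, 2])**2
s23_sq = abs(U[1, 2])**2 / (1 - s13_sq)
s12_sq = abs(U[0, 1])**2 / (1 - s13_sq)

# Eq. (3.16): Jarlskog invariant computed two ways
J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
J_param = s23*c23*s12*c12*s13*c13**2 * np.sin(dcp)
print(s12_sq, s23_sq, s13_sq, J, J_param)
```

Note that $J_{\rm CP}$ is insensitive to the Majorana phases, which is why they must instead be extracted through the invariants $I_{1}$ and $I_{2}$ of Eq. (3.17).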
We also take the Yukawa couplings of the SM charged leptons at the GUT scale $2\times 10^{16}$ GeV and $\Delta m_{\rm atm}^{2}$ as input parameters, where $\tan\beta=5$ is taken as a benchmark Bjorkeroth:2015ora : $\displaystyle y_{e}=(1.97\pm 0.024)\times 10^{-6},\ y_{\mu}=(4.16\pm 0.050)\times 10^{-4},\ y_{\tau}=(7.07\pm 0.073)\times 10^{-3},$ (3.20) $\displaystyle|\Delta m_{\rm atm}^{2}|=(2.431-2.598)\times 10^{-3}\ {\rm eV^{2}}\ {\rm for}\ {\rm NH},$ (3.21) $\displaystyle|\Delta m_{\rm atm}^{2}|=(2.412-2.583)\times 10^{-3}\ {\rm eV^{2}}\ {\rm for}\ {\rm IH},$ (3.22) where the charged-lepton masses are within the 1$\sigma$ region, while $\Delta m_{\rm atm}^{2}$ is within the 3$\sigma$ region. Here, the lepton masses are defined by $m_{\ell}=y_{\ell}v_{H}$ with $v_{H}=174$ GeV. Then, we keep the output data only when the $\chi^{2}$ is within 5$\sigma$, considering the four measured neutrino oscillation observables $(\Delta m^{2}_{\rm sol},\ \sin^{2}\theta_{13},\ \sin^{2}\theta_{23},\ \sin^{2}\theta_{12})$ Esteban:2018azc . Here, we do not include $\delta_{\rm CP}$ in the $\chi^{2}$ analysis due to the large uncertainty of the experimental results at the $3\sigma$ interval. In general, it is more difficult to accumulate data satisfying the neutrino oscillation constraints in the IH case, because the minimum $\chi^{2}$ is $2.7$ in NuFIT 5.0 Esteban:2018azc . #### 3.2.1 Nearby $\tau=i$ Figure 5: Each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. In Fig. 5, we show several allowed regions of $\tau$ near $\tau=i$ in the case of NH, where each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. The up-left one represents the allowed region of the imaginary part of $\tau$ in terms of the real part of $\tau$.
The up-right one demonstrates the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest active neutrino mass $m_{1}$. There are two linear correlations between them. Furthermore, the smaller $\chi^{2}$ region is localized near the smaller masses. The down-left one shows the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. Since we take all input parameters (except $\tau$) to be real values, both allowed regions are localized near $0^{\circ}$ and $180^{\circ}$. The down-right one depicts the allowed region of the Dirac phase $\delta_{\text{CP}}$ in terms of the sum of neutrino masses $\sum m_{i}$. The vertical line is the upper bound from the cosmological constraint $\sum m_{i}<0.12$ eV Planck:2018vyg . There is an intriguing tendency that the allowed region of smaller $\chi^{2}$ is localized at smaller $\sum m_{i}$, within the cosmological bound. Another feature is that the best-fit value of the Dirac CP phase, $\sim 195^{\circ}$, would be reproduced when we allow up to the $5\sigma$ interval. Figure 6: $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. In Fig. 6, we show several figures in terms of the deviation from $\tau=i$ for the same case as Fig. 5 at the $5\sigma$ interval, where each color represents $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. The up-left panel shows the same quantities as the up-right panel of Fig. 5.
It implies that smaller deviations $|\delta\tau|$ tend to be localized near the smaller masses. The up-right panel shows the same quantities as the down-left panel of Fig. 5. This panel is rather trivial: the smaller deviation is localized at $0^{\circ}$ and $180^{\circ}$, while the larger deviation departs from these two points. This directly follows from the fact that the only source of phases is $\tau$. The down-left panel shows the same quantities as the down-right panel of Fig. 5. The smaller deviation would be favored from the point of view of the cosmological bound. Figure 7: Each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. In Fig. 7, we show several allowed regions of $\tau$ near $\tau=i$ in the case of IH, where the color legend is the same as that of Fig. 5. In this case, we have found only allowed regions in the $3\sigma-5\sigma$ range. The up-left one represents the allowed region of the imaginary part of $\tau$ in terms of the real part of $\tau$. The up-right one demonstrates the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest active neutrino mass $m_{3}$. There are two correlations between them; one is a linear line and the other is a slightly curved one. The solutions tend to be localized near the smaller mass $m_{3}$, with $\langle m_{ee}\rangle\simeq 0.015,\ 0.05$ eV. The down-left one shows the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. Both allowed regions are localized near $0^{\circ}$ and $180^{\circ}$, similar to the NH case. The down-right one depicts the allowed region of the sum of neutrino masses $\sum m_{i}$ in terms of the Dirac phase $\delta_{\text{CP}}$. The vertical line is the upper bound from the cosmological constraint. $\delta_{\rm CP}$ is allowed at the points $0^{\circ}$ and $180^{\circ}$.
Meanwhile, a large part of the $\sum m_{i}$ range would be ruled out by the cosmological bound. Therefore, we would predict a narrow range of $0.1~{}{\rm eV}\leq\sum m_{i}\leq 0.12~{}{\rm eV}$ in this case. Figure 8: $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. In Fig. 8, we show several figures in terms of the deviation from $\tau=i$, where the color legend is the same as that in Fig. 6. The up-left panel corresponds to the up-right panel of Fig. 7. It implies that smaller deviations $|\delta\tau|$ tend to be localized near the smaller masses $m_{3}$. The up-right panel corresponds to the down-left panel of Fig. 7. This panel is rather trivial: the smaller deviation is localized at $0^{\circ}$ and $180^{\circ}$, while the larger deviation departs from these two points. This directly follows from the fact that the only source of phases is $\tau$. The down-left panel corresponds to the down-right panel of Fig. 7. The smaller deviation would be favored from the point of view of the cosmological bound. Finally, we discuss the ratios of the number of solutions in a corresponding range of $-{\rm Log}_{10}|\delta\tau|$ to the total number of solutions, for both the string landscape in Fig. 2 and the $A_{4}$ model within $5\sigma$. Fig. 9 indicates that both the distribution of the $A_{4}$ model with NH and that of the moduli fields in the string landscape peak around $|\delta\tau|={\cal O}(10^{-5})$, whereas such a signal is not found in the IH case. Figure 9: Ratios of the number of solutions in a corresponding range of $-{\rm Log}_{10}|\delta\tau|$ to the total number of solutions, for both the string landscape in Fig. 2 and the $A_{4}$ model within $5\sigma$.
We present the NH and IH cases in the left and right panels, respectively. #### 3.2.2 Nearby $\tau=\omega$ Figure 10: Each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. In Fig. 10, we show several allowed regions of $\tau$ near $\tau=\omega$ in the case of NH, where the color legend is the same as that in Fig. 5. The up-left one represents the allowed region of the imaginary part of $\tau$ in terms of the real part of $\tau$. The smaller $\chi^{2}$ region, denoted in blue, is closest to the fixed point $\tau=\omega$, which is an encouraging tendency. The up-right one demonstrates the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest active neutrino mass $m_{1}$. There is a broad linear correlation between them. Furthermore, all $\chi^{2}$ regions tend to span the whole range. The down-left one shows the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. Even though the whole region is allowed, there exist two islands at around $-50^{\circ}\leq\alpha_{21},\alpha_{31}\leq 50^{\circ}$. The down-right one depicts the allowed region of the Dirac phase $\delta_{\text{CP}}$ in terms of the sum of neutrino masses $\sum m_{i}$. The vertical line is the upper bound from the cosmological constraint. Below this bound, the whole range of $\delta_{\rm CP}$ is allowed. Near this bound, $\delta_{\rm CP}$ is allowed in the ranges $0^{\circ}-100^{\circ}$ and $270^{\circ}-360^{\circ}$. Furthermore, the smaller $\chi^{2}$ region tends to be localized near the cosmological bound, which enhances the testability of this case.
Figure 11: $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. In Fig. 11, we show several figures in terms of the deviation from $\tau=\omega$, where the color legend is the same as that in Fig. 6. The up-left panel corresponds to the up-right panel of Fig. 10. The up-right panel corresponds to the down-left panel of Fig. 10. The down-left panel corresponds to the down-right panel of Fig. 10. These figures show that a larger deviation, $10^{-3}\leq|\delta\tau|<10^{-1}$, is required to satisfy the neutrino oscillation data. This is not favored from the theoretical point of view, as we already discussed in Sec. 2. Figure 12: Each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. In Fig. 12, we show several allowed regions of $\tau$ near $\tau=\omega$ in the case of IH, where the color legend is the same as that in Fig. 5. The up-left one represents the allowed region of the imaginary part of $\tau$ in terms of the real part of $\tau$. We have found only allowed regions in the $2\sigma-5\sigma$ range. The up-right one demonstrates the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest active neutrino mass $m_{3}$. There seems to be a linear correlation between them, with 0.02 eV$\lesssim\langle m_{ee}\rangle\lesssim$ 0.06 eV up to $5\sigma$, but the allowed regions are localized near small masses up to $2\sigma$. The down-left one shows the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$.
We find the allowed regions $100^{\circ}\lesssim\alpha_{21}\lesssim 280^{\circ}$ and $50^{\circ}\lesssim\alpha_{31}\lesssim 340^{\circ}$. The down-right one depicts the allowed region of the Dirac phase $\delta_{\text{CP}}$ in terms of the sum of neutrino masses $\sum m_{i}$. The allowed region at the yellow points, $\sum m_{i}\simeq 0.11$ eV, lies entirely within the cosmological constraint. Combined with the up-right panel, this implies that $m_{3}$ is almost zero. Figure 13: $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. In Fig. 13, we show several figures in terms of the deviation from $\tau=\omega$, where the color legend is the same as that in Fig. 8. The up-left panel corresponds to the up-right panel of Fig. 12. The up-right panel corresponds to the down-left panel of Fig. 12. The down-left panel corresponds to the down-right panel of Fig. 12. These figures also show that a larger deviation, $10^{-3}\leq|\delta\tau|<10^{-1}$, is required to satisfy the neutrino oscillation data. This is not favored from the theoretical point of view, as we already discussed in Sec. 2. In conclusion, in the case of $\tau=\omega$, neither NH nor IH would be favored from the theoretical viewpoint. #### 3.2.3 Nearby $\tau=2i$ Figure 14: Each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. In Fig. 14, we show several allowed regions of $\tau$ near $\tau=2i$ in the case of NH, where the color legend is the same as that in Fig. 5. The up-left one represents the allowed region of the imaginary part of $\tau$ in terms of the real part of $\tau$.
The smaller $\chi^{2}$ region, denoted in blue, is closest to the fixed point $\tau=2i$, which is an encouraging tendency. The up-right one demonstrates the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest active neutrino mass $m_{1}$. There is a main linear correlation between them. We find the allowed regions 0 eV$\leq m_{1}\leq$ 0.014 eV and 0 eV$\leq\langle m_{ee}\rangle\leq$ 0.013 eV. The down-left one shows the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. Both phases are allowed to be $0^{\circ}$ or $180^{\circ}$. The down-right one depicts the allowed region of the Dirac phase $\delta_{\text{CP}}$ in terms of the sum of neutrino masses $\sum m_{i}$. The whole allowed region of $\sum m_{i}$ lies entirely within the cosmological bound, 0.058 eV$\leq\sum m_{i}\leq$ 0.082 eV, whereas the allowed region of $\delta_{\rm CP}$ is the same as that of the Majorana phases: $0^{\circ}$ or $180^{\circ}$. Note that finding no nontrivial phases is expected, since the situation is similar to the case of $\tau=i$. Figure 15: $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. In Fig. 15, we show several figures in terms of the deviation from $\tau=2i$ for the same case as Fig. 14, where the color legend is the same as that in Fig. 6. The up-left panel shows the same quantities as the up-right panel of Fig. 14. The up-right panel shows the same quantities as the down-left panel of Fig. 14. The down-left panel shows the same quantities as the down-right panel of Fig. 14. These figures suggest that the size of the deviation runs over almost the whole range allowed by the neutrino oscillation data.
Figure 16: Each color represents ${\rm blue}\leq 1\sigma$, $1\sigma<{\rm green}\leq 2\sigma$, $2\sigma<{\rm yellow}\leq 3\sigma$, $3\sigma<{\rm red}\leq 5\sigma$. In Fig. 16, we show several allowed regions of $\tau$ near $\tau=2i$ in the case of IH, where the color legend is the same as that of Fig. 5. In this case, we have found only allowed regions in the $2\sigma-5\sigma$ range. The up-left one represents the allowed region of the imaginary part of $\tau$ in terms of the real part of $\tau$. The up-right one demonstrates the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest active neutrino mass $m_{3}$. We find the allowed regions as follows: 0 eV$\leq m_{3}\leq$ 0.03 eV and 0.014 eV$\leq\langle m_{ee}\rangle\leq$ 0.04 eV up to $5\sigma$, but the allowed regions are localized near small masses at the yellow points. The down-left one shows the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. $\alpha_{21}$ is allowed from $100^{\circ}$ to $200^{\circ}$, while $\alpha_{31}$ spans a wider region than $\alpha_{21}$. However, the allowed regions are localized near $\alpha_{21}=180^{\circ}$ and $\alpha_{31}=0^{\circ}$ at the yellow points. The down-right one depicts the allowed region of the Dirac phase $\delta_{\text{CP}}$ in terms of the sum of neutrino masses $\sum m_{i}$. The vertical line is the upper bound from the cosmological constraint. $\delta_{\rm CP}$ is allowed at the points $0^{\circ}$ and $180^{\circ}$. On the other hand, almost half the points of $\sum m_{i}$ would be ruled out by the cosmological bound. Therefore, we would predict a narrow range of $0.1~{}{\rm eV}\leq\sum m_{i}\leq 0.12~{}{\rm eV}$ that is almost the same as the one in the case of $\tau=i$.
Figure 17: $|\delta\tau|<10^{-15}$ for black, $10^{-15}\leq|\delta\tau|<10^{-12}$ for gray, $10^{-12}\leq|\delta\tau|<10^{-10}$ for purple, $10^{-10}\leq|\delta\tau|<10^{-7}$ for brown, $10^{-7}\leq|\delta\tau|<10^{-5}$ for blue green, $10^{-5}\leq|\delta\tau|<10^{-3}$ for orange, and $10^{-3}\leq|\delta\tau|<10^{-1}$ for magenta. In Fig. 17, we show several figures in terms of the deviation from $\tau=2i$ for the same case as Fig. 16, where the color legend is the same as that in Fig. 8. The up-left panel shows the same quantities as the up-right panel of Fig. 16. The up-right panel shows the same quantities as the down-left panel of Fig. 16. The down-left panel shows the same quantities as the down-right panel of Fig. 16. These figures suggest that the size of the deviation runs over almost the whole range allowed by the neutrino oscillation data. Figure 18: Ratios of the string landscape in Fig. 4 and the $A_{4}$ model within $5\sigma$, where the ratios are defined as those of the number of solutions in a corresponding range of $-{\rm Log}_{10}|\delta\tau|$ to the total number of solutions. We present the NH and IH cases in the left and right panels, respectively. Similarly to the case of $\tau\simeq i$, we plot the ratios of the number of solutions in a corresponding range of $-{\rm Log}_{10}|\delta\tau|$ to the total number of solutions, for both the string landscape in Fig. 4 and the $A_{4}$ model within $5\sigma$, in Fig. 18. It indicates that both the distribution of the $A_{4}$ model with NH and that of the moduli fields in the string landscape peak around $|\delta\tau|={\cal O}(10^{-5})$, whereas such a signal is not found in the IH case. Here, we summarize our results. Since $\tau=\omega$ is not favored from the theoretical point of view of the string landscape, we focus on $\tau=i$ and $\tau=2i$ only. In the case of $\tau=i$ with NH, there is an intriguing tendency that the allowed region of smaller $\chi^{2}$ is localized at smaller $\sum m_{i}$, within the cosmological bound.
Another feature is that the best-fit value of the Dirac CP phase, $\sim 195^{\circ}$, is reproduced when we allow up to the $5\sigma$ interval. This implies that smaller deviations $|\delta\tau|$ tend to be localized near smaller masses. In the case of $\tau=i$ with IH, there are two branches in the correlation between $m_{3}$ and $\langle m_{ee}\rangle$: one is a straight line and the other is slightly curved. The solutions tend to be localized near smaller $m_{3}$, with $\langle m_{ee}\rangle=0.015,\ 0.05$ eV. A large part of the $\sum m_{i}$ points would be ruled out by the cosmological bound; we therefore predict the narrow range $0.1~{\rm eV}\leq\sum m_{i}\leq 0.12~{\rm eV}$. Smaller deviations $|\delta\tau|$ tend to be localized near smaller masses $m_{3}$, so smaller deviations would also be favored from the point of view of the cosmological bound. Both the distribution of the $A_{4}$ model with NH and that of the moduli fields in the string landscape peak around $|\delta\tau|={\cal O}(10^{-5})$, but no such signal is found in the IH case. In the case of $\tau=2i$ with NH, the smaller-$\chi^{2}$ points, denoted in blue, are closest to the fixed point $\tau=2i$, which is a welcome tendency. We find the allowed regions 0 eV$\leq m_{1}\leq$0.014 eV and 0 eV$\leq\langle m_{ee}\rangle\leq$0.013 eV. The whole allowed region of $\sum m_{i}$ lies within the cosmological bound: 0.058 eV$\leq\sum m_{i}\leq$ 0.082 eV. The size of the deviation from $\tau=2i$ runs over almost the whole range allowed by the neutrino oscillation data. In the case of $\tau=2i$ with IH, we find the allowed regions 0 eV$\leq m_{3}\leq$0.03 eV and 0.014 eV$\leq\langle m_{ee}\rangle\leq$0.04 eV up to $5\sigma$, with the yellow points localized near small masses. $\alpha_{21}$ is allowed between $100^{\circ}$ and $200^{\circ}$, while $\alpha_{31}$ spans a wider region than $\alpha_{21}$. 
Almost half of the points of $\sum m_{i}$ would be ruled out by the cosmological bound. We therefore predict the narrow range $0.1~{\rm eV}\leq\sum m_{i}\leq 0.12~{\rm eV}$, almost the same as in the case of $\tau=i$. The size of the deviation from $\tau=2i$ runs over almost the whole range allowed by the neutrino oscillation data. Both the distribution of the $A_{4}$ model with NH and that of the moduli fields in the string landscape peak around $|\delta\tau|={\cal O}(10^{-5})$, but no such signal is found in the IH. ## 4 Conclusions The residual flavor symmetries appearing at fixed points of the $PSL(2,\mathbb{Z})$ moduli space are employed in a wide variety of modular flavor models, but a small departure of the modulus from the fixed points is required to realize the observed masses and mixing angles of quarks and leptons and CP-breaking effects in bottom-up modular invariant theories. In this paper, we have explicitly demonstrated the breaking of residual flavor symmetry from the top-down approach. Following Ref. Ishiguro:2020nuf , we have studied moduli stabilization in the context of Type IIB string theory on the $T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})$ orientifold. In Type IIB flux compactifications, it is known that the $\mathbb{Z}_{2}$ and $\mathbb{Z}_{3}$ fixed points on the fundamental domain of the complex structure moduli space are statistically favored among the finite number of vacua. However, the volume moduli are not stabilized at that stage, and the present accelerated expansion of the Universe should still be realized. In this respect, we have incorporated non-perturbative corrections to the superpotential as well as uplifting sources to stabilize the volume moduli at dS vacua. These sources naturally shift the value of $\tau$ away from the fixed points by a small amount. 
We find that the deviations of $\tau$ from the fixed points $\langle\tau\rangle=i,\omega,2i$ are statistically favored at $|\delta\tau|\simeq 10^{-5}$, and that the CP symmetry $\tau\rightarrow-\bar{\tau}$ is broken for a generic choice of background fluxes. Since SUSY is broken by the existence of the uplifting source, the typical SUSY-breaking scale, i.e., the gravitino mass, is estimated to be of ${\cal O}(10^{13})$ GeV at the small departure $|\delta\tau|\simeq 10^{-5}$. In this way, the top-down approach restricts us to specific values of the modulus $\tau$ as well as of the SUSY-breaking scale. To illustrate the phenomenological implications, we analyzed a concrete $A_{4}$ modular flavor model with an emphasis on the lepton sector. Under the charge assignments for the lepton and Higgs sectors in Tab. 1, we have presented several predictions in the vicinity of the three fixed points through a global $\chi^{2}$ analysis in both the normal and inverted hierarchies of neutrinos. It turns out that there exist many phenomenologically promising models around $\langle\tau\rangle=i$ with the normal hierarchy, whose number is compared with that of the string landscape in Fig. 9; this implies similar distributions of $\delta\tau$ for the string landscape and the $A_{4}$ model. Furthermore, there is an intriguing tendency that the allowed region of smaller $\chi^{2}$ is localized at smaller $\sum m_{i}$, within the cosmological bound. Before closing our paper, it is worthwhile mentioning the quasi-stable DM candidate arising from the tiny deviation from the fixed points $\tau=i,\ 2i$. In Ref. Kobayashi:2021ajl , in particular, DM and the neutrino oscillation data can simultaneously be explained at $\tau=i$, where DM is a heavy Majorana fermion with modular weight $-2$. In this setup, DM decays into leptons and Higgses via a Dirac term.444In order to identify DM, we would need to assign it a singlet under the $A_{4}$ symmetry, to avoid mixings among Majorana fermions that would spoil the stability of DM. We might also need to construct a model with all fields in $A_{4}$ singlets to reproduce the neutrino oscillation data. In this sense, our model would have to be modified in the presence of DM. Assuming an order-one free parameter and a DM mass ($m_{X}$) much heavier than those of the leptons and Higgses, we estimate its lifetime ($\tau_{X}$) as follows: $\displaystyle\tau_{X}\simeq 1.32\times 10^{-25}\times\left|Y^{(6)}_{1}\right|^{-2}\left(\frac{1\ {\rm TeV}}{m_{X}}\right){\rm sec}.$ (4.1) When $m_{X}=1$ TeV, the upper limit on $\left|Y^{(6)}_{1}\right|$ must be of order $10^{-21}$ for $X$ to be a quasi-stable DM candidate, imposing $10^{17}~{\rm sec}\lesssim\tau_{X}$, where $10^{17}$ sec is the age of the Universe; indeed, Eq. (4.1) gives $|Y^{(6)}_{1}|\lesssim\sqrt{1.32\times 10^{-25}/10^{17}}\simeq 1.1\times 10^{-21}$. This constraint is equivalent to $|\delta\tau|\lesssim 5.57\times 10^{-9}$, which lies within our valid parameter space. ###### Acknowledgements. This work was supported by JSPS KAKENHI Grant Numbers JP20K14477 (Hajime O.) and JP22J12877 (K.I.). The work of Hiroshi O. is supported by the Junior Research Group (JRG) Program at the Asia-Pacific Center for Theoretical Physics (APCTP) through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government and was supported by the Korean Local Governments-Gyeongsangbuk-do Province and Pohang City. Hiroshi O. is sincerely grateful to all the KIAS members. 
## Appendix A $A_{4}$ modular forms The modulus-dependent modular forms are constructed from the weight-2 modular form, $\displaystyle Y_{\bf 3}^{(2)}=\begin{pmatrix}Y_{1}\\\ Y_{2}\\\ Y_{3}\\\ \end{pmatrix},$ (A.1) with $\displaystyle Y_{1}(\tau)$ $\displaystyle=\frac{i}{2\pi}\left(\frac{\eta^{\prime}(\tau/3)}{\eta(\tau/3)}+\frac{\eta^{\prime}((\tau+1)/3)}{\eta((\tau+1)/3)}+\frac{\eta^{\prime}((\tau+2)/3)}{\eta((\tau+2)/3)}-\frac{27\eta^{\prime}(3\tau)}{\eta(3\tau)}\right),$ (A.2) $\displaystyle Y_{2}(\tau)$ $\displaystyle=\frac{-i}{\pi}\left(\frac{\eta^{\prime}(\tau/3)}{\eta(\tau/3)}+\omega^{2}\frac{\eta^{\prime}((\tau+1)/3)}{\eta((\tau+1)/3)}+\omega\frac{\eta^{\prime}((\tau+2)/3)}{\eta((\tau+2)/3)}\right),$ (A.3) $\displaystyle Y_{3}(\tau)$ $\displaystyle=\frac{-i}{\pi}\left(\frac{\eta^{\prime}(\tau/3)}{\eta(\tau/3)}+\omega\frac{\eta^{\prime}((\tau+1)/3)}{\eta((\tau+1)/3)}+\omega^{2}\frac{\eta^{\prime}((\tau+2)/3)}{\eta((\tau+2)/3)}\right)\,,$ (A.4) where $\eta(\tau)$ denotes the Dedekind eta function and $\omega=e^{2\pi i/3}$. Since the other modular forms are constructed from tensor products of $Y_{\bf 3}^{(2)}$, we list those used in our analysis: $\displaystyle{Y^{\rm(4)}_{\bf 3}}(\tau)=\begin{pmatrix}Y_{1}^{2}-Y_{2}Y_{3}\\\ Y_{3}^{2}-Y_{1}Y_{2}\\\ Y_{2}^{2}-Y_{1}Y_{3}\end{pmatrix}\,,$ $\displaystyle Y_{\bf 1}^{(4)}=Y_{1}^{2}+2Y_{2}Y_{3}\,,\qquad\qquad Y_{\bf 1^{\prime}}^{(4)}=Y_{3}^{2}+2Y_{1}Y_{2}\,.$ $\displaystyle{Y^{\rm(6)}_{\bf 3}}(\tau)=Y_{\bf 1}^{(4)}{Y^{\rm(2)}_{\bf 3}}(\tau)=(Y_{1}^{2}+2Y_{2}Y_{3})\begin{pmatrix}Y_{1}\\\ Y_{2}\\\ Y_{3}\end{pmatrix}\,,$ $\displaystyle{Y^{\rm(6)}_{\bf 3^{\prime}}}(\tau)=Y_{\bf 1^{\prime}}^{(4)}{Y^{\rm(2)}_{\bf 3}}(\tau)=(Y_{3}^{2}+2Y_{1}Y_{2})\begin{pmatrix}Y_{3}\\\ Y_{1}\\\ Y_{2}\end{pmatrix}\,.$ (A.5) ## References * (1) R. de Adelhart Toorop, F. Feruglio and C. Hagedorn, _Finite Modular Groups and Lepton Mixing_, _Nucl. Phys. B_ 858 (2012) 437 [1112.1340]. * (2) S. Ferrara, D. 
Lust and S. Theisen, _Target Space Modular Invariance and Low-Energy Couplings in Orbifold Compactifications_ , _Phys. Lett. B_ 233 (1989) 147. * (3) W. Lerche, D. Lust and N.P. Warner, _Duality Symmetries in $N=2$ Landau-ginzburg Models_, _Phys. Lett. B_ 231 (1989) 417. * (4) J. Lauer, J. Mas and H.P. Nilles, _Twisted sector representations of discrete background symmetries for two-dimensional orbifolds_ , _Nucl. Phys. B_ 351 (1991) 353. * (5) A. Baur, H.P. Nilles, A. Trautner and P.K.S. Vaudrevange, _Unification of Flavor, CP, and Modular Symmetries_ , _Phys. Lett. B_ 795 (2019) 7 [1901.03251]. * (6) A. Baur, H.P. Nilles, A. Trautner and P.K.S. Vaudrevange, _A String Theory of Flavor and $\mathscr{CP}$_, _Nucl. Phys. B_ 947 (2019) 114737 [1908.00805]. * (7) T. Kobayashi, S. Nagamoto, S. Takada, S. Tamba and T.H. Tatsuishi, _Modular symmetry and non-Abelian discrete flavor symmetries in string compactification_ , _Phys. Rev. D_ 97 (2018) 116002 [1804.06644]. * (8) T. Kobayashi and S. Tamba, _Modular forms of finite modular subgroups from magnetized D-brane models_ , _Phys. Rev. D_ 99 (2019) 046001 [1811.11384]. * (9) H. Ohki, S. Uemura and R. Watanabe, _Modular flavor symmetry on a magnetized torus_ , _Phys. Rev. D_ 102 (2020) 085008 [2003.04174]. * (10) S. Kikuchi, T. Kobayashi, S. Takada, T.H. Tatsuishi and H. Uchida, _Revisiting modular symmetry in magnetized torus and orbifold compactifications_ , _Phys. Rev. D_ 102 (2020) 105010 [2005.12642]. * (11) S. Kikuchi, T. Kobayashi, H. Otsuka, S. Takada and H. Uchida, _Modular symmetry by orbifolding magnetized $T^{2}\times T^{2}$: realization of double cover of $\Gamma_{N}$_, _JHEP_ 11 (2020) 101 [2007.06188]. * (12) S. Kikuchi, T. Kobayashi and H. Uchida, _Modular flavor symmetries of three-generation modes on magnetized toroidal orbifolds_ , _Phys. Rev. D_ 104 (2021) 065008 [2101.00826]. * (13) Y. Almumin, M.-C. Chen, V. Knapp-Pérez, S. Ramos-Sánchez, M. Ratz and S. 
Shukla, _Metaplectic Flavor Symmetries from Magnetized Tori_ , _JHEP_ 05 (2021) 078 [2102.11286]. * (14) A. Baur, M. Kade, H.P. Nilles, S. Ramos-Sanchez and P.K.S. Vaudrevange, _Siegel modular flavor group and CP from string theory_ , _Phys. Lett. B_ 816 (2021) 136176 [2012.09586]. * (15) A. Strominger, _SPECIAL GEOMETRY_ , _Commun. Math. Phys._ 133 (1990) 163. * (16) P. Candelas and X. de la Ossa, _Moduli Space of Calabi-Yau Manifolds_ , _Nucl. Phys. B_ 355 (1991) 455. * (17) K. Ishiguro, T. Kobayashi and H. Otsuka, _Spontaneous CP violation and symplectic modular symmetry in Calabi-Yau compactifications_ , _Nucl. Phys. B_ 973 (2021) 115598 [2010.10782]. * (18) K. Ishiguro, T. Kobayashi and H. Otsuka, _Symplectic modular symmetry in heterotic string vacua: flavor, CP, and R-symmetries_ , _JHEP_ 01 (2022) 020 [2107.00487]. * (19) F. Feruglio, _Are neutrino masses modular forms?_ , in _From My Vast Repertoire …: Guido Altarelli’s Legacy_ , A. Levy, S. Forte and G. Ridolfi, eds., pp. 227–266 (2019), DOI [1706.08749]. * (20) T. Kobayashi, K. Tanaka and T.H. Tatsuishi, _Neutrino mixing from finite modular groups_ , _Phys. Rev. D_ 98 (2018) 016004 [1803.10391]. * (21) J.T. Penedo and S.T. Petcov, _Lepton Masses and Mixing from Modular $S_{4}$ Symmetry_, _Nucl. Phys. B_ 939 (2019) 292 [1806.11040]. * (22) P.P. Novichkov, J.T. Penedo, S.T. Petcov and A.V. Titov, _Modular A 5 symmetry for flavour model building_, _JHEP_ 04 (2019) 174 [1812.02158]. * (23) G.-J. Ding, S.F. King and X.-G. Liu, _Neutrino mass and mixing with $A_{5}$ modular symmetry_, _Phys. Rev. D_ 100 (2019) 115005 [1903.12588]. * (24) X.-G. Liu and G.-J. Ding, _Neutrino Masses and Mixing from Double Covering of Finite Modular Groups_ , _JHEP_ 08 (2019) 134 [1907.01488]. * (25) P. Chen, G.-J. Ding, J.-N. Lu and J.W.F. Valle, _Predictions from warped flavor dynamics based on the $T′$ family group_, _Phys. Rev. D_ 102 (2020) 095014 [2003.02734]. * (26) P.P. Novichkov, J.T. Penedo and S.T. 
Petcov, _Double cover of modular $S_{4}$ for flavour model building_, _Nucl. Phys. B_ 963 (2021) 115301 [2006.03058]. * (27) X.-G. Liu, C.-Y. Yao and G.-J. Ding, _Modular invariant quark and lepton models in double covering of $S_{4}$ modular group_, _Phys. Rev. D_ 103 (2021) 056013 [2006.10722]. * (28) X. Wang, B. Yu and S. Zhou, _Double covering of the modular $A_{5}$ group and lepton flavor mixing in the minimal seesaw model_, _Phys. Rev. D_ 103 (2021) 076005 [2010.10159]. * (29) C.-Y. Yao, X.-G. Liu and G.-J. Ding, _Fermion masses and mixing from the double cover and metaplectic cover of the $A_{5}$ modular group_, _Phys. Rev. D_ 103 (2021) 095013 [2011.03501]. * (30) G.-J. Ding, S.F. King, C.-C. Li and Y.-L. Zhou, _Modular Invariant Models of Leptons at Level 7_ , _JHEP_ 08 (2020) 164 [2004.12662]. * (31) T. Kobayashi, H. Otsuka, M. Tanimoto and K. Yamamoto, _Modular symmetry in the SMEFT_ , _Phys. Rev. D_ 105 (2022) 055022 [2112.00493]. * (32) T. Kobayashi, H. Otsuka, M. Tanimoto and K. Yamamoto, _Lepton flavor violation, lepton $(g-2)_{\mu,\,e}$ and electron EDM in the modular symmetry_, 2204.12325. * (33) T. Kobayashi and H. Otsuka, _On stringy origin of minimal flavor violation_ , _Eur. Phys. J. C_ 82 (2022) 25 [2108.02700]. * (34) P.P. Novichkov, J.T. Penedo, S.T. Petcov and A.V. Titov, _Generalised CP Symmetry in Modular-Invariant Models of Flavour_ , _JHEP_ 07 (2019) 165 [1905.11970]. * (35) P.P. Novichkov, J.T. Penedo, S.T. Petcov and A.V. Titov, _Modular S 4 models of lepton masses and mixing_, _JHEP_ 04 (2019) 005 [1811.04933]. * (36) P.P. Novichkov, S.T. Petcov and M. Tanimoto, _Trimaximal Neutrino Mixing from Modular A4 Invariance with Residual Symmetries_ , _Phys. Lett. B_ 793 (2019) 247 [1812.11289]. * (37) G.-J. Ding, S.F. King, X.-G. Liu and J.-N. Lu, _Modular S 4 and A4 symmetries and their fixed points: new predictive examples of lepton mixing_, _JHEP_ 12 (2019) 030 [1910.03460]. * (38) H. Okada and M. 
Tanimoto, _Towards unification of quark and lepton flavors in $A_{4}$ modular invariance_, _Eur. Phys. J. C_ 81 (2021) 52 [1905.13421]. * (39) S.F. King and Y.-L. Zhou, _Trimaximal TM 1 mixing with two modular $S_{4}$ groups_, _Phys. Rev. D_ 101 (2020) 015001 [1908.02770]. * (40) H. Okada and M. Tanimoto, _Quark and lepton flavors with common modulus $\tau$ in $A_{4}$ modular symmetry_, 2005.00775. * (41) H. Okada and M. Tanimoto, _Modular invariant flavor model of $A_{4}$ and hierarchical structures at nearby fixed points_, _Phys. Rev. D_ 103 (2021) 015005 [2009.14242]. * (42) H. Okada and M. Tanimoto, _Spontaneous CP violation by modulus $\tau$ in $A_{4}$ model of lepton flavors_, _JHEP_ 03 (2021) 010 [2012.01688]. * (43) F. Feruglio, V. Gherardi, A. Romanino and A. Titov, _Modular invariant dynamics and fermion mass hierarchies around $\tau=i$_, _JHEP_ 05 (2021) 242 [2101.08718]. * (44) T. Kobayashi, H. Okada and Y. Orikasa, _Dark matter stability at fixed points in a modular $A_{4}$ symmetry_, 2111.05674. * (45) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto and T.H. Tatsuishi, _$A_{4}$ lepton flavor model and modulus stabilization from $S_{4}$ modular symmetry_, _Phys. Rev. D_ 100 (2019) 115045 [1909.05139]. * (46) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, T.H. Tatsuishi and H. Uchida, _$CP$ violation in modular invariant flavor models_, _Phys. Rev. D_ 101 (2020) 055046 [1910.11553]. * (47) T. Kobayashi and H. Otsuka, _Challenge for spontaneous $CP$ violation in Type IIB orientifolds with fluxes_, _Phys. Rev. D_ 102 (2020) 026004 [2004.04518]. * (48) K. Ishiguro, T. Kobayashi and H. Otsuka, _Landscape of Modular Symmetric Flavor Models_ , _JHEP_ 03 (2021) 161 [2011.09154]. * (49) P.P. Novichkov, J.T. Penedo and S.T. Petcov, _Modular Flavour Symmetries and Modulus Stabilisation_ , 2201.02020. * (50) T. Kobayashi and H. Otsuka, _Classification of discrete modular symmetries in Type IIB flux vacua_ , _Phys. Rev. D_ 101 (2020) 106017 [2001.07972]. 
* (51) O. DeWolfe, A. Giryavets, S. Kachru and W. Taylor, _Enumerating flux vacua with enhanced symmetries_ , _JHEP_ 02 (2005) 037 [hep-th/0411061]. * (52) S. Kachru, R. Kallosh, A.D. Linde and S.P. Trivedi, _De Sitter vacua in string theory_ , _Phys. Rev. D_ 68 (2003) 046005 [hep-th/0301240]. * (53) S. Kikuchi, T. Kobayashi, K. Nasu, H. Otsuka, S. Takada and H. Uchida, _Modular symmetry of soft supersymmetry breaking terms_ , 2203.14667. * (54) X. Du and F. Wang, _SUSY breaking constraints on modular flavor $S_{3}$ invariant SU(5) GUT model_, _JHEP_ 02 (2021) 221 [2012.01397]. * (55) T. Kobayashi, T. Shimomura and M. Tanimoto, _Soft supersymmetry breaking terms and lepton flavor violations in modular flavor models_ , _Phys. Lett. B_ 819 (2021) 136452 [2102.10425]. * (56) H. Otsuka and H. Okada, _Radiative neutrino masses from modular $A_{4}$ symmetry and supersymmetry breaking_, 2202.10089. * (57) S. Gukov, C. Vafa and E. Witten, _CFT’s from Calabi-Yau four folds_ , _Nucl. Phys. B_ 584 (2000) 69 [hep-th/9906070]. * (58) P. Betzler and E. Plauschinn, _Type IIB flux vacua and tadpole cancellation_ , _Fortsch. Phys._ 67 (2019) 1900065 [1905.08823]. * (59) P. Candelas, E. Perevalov and G. Rajesh, _Toric geometry and enhanced gauge symmetry of F theory / heterotic vacua_ , _Nucl. Phys. B_ 507 (1997) 445 [hep-th/9704097]. * (60) W. Taylor and Y.-N. Wang, _The F-theory geometry with most flux vacua_ , _JHEP_ 12 (2015) 164 [1511.03209]. * (61) M. Demirtas, M. Kim, L. Mcallister and J. Moritz, _Vacua with Small Flux Superpotential_ , _Phys. Rev. Lett._ 124 (2020) 211603 [1912.10047]. * (62) Y. Honma and H. Otsuka, _Small flux superpotential in F-theory compactifications_ , _Phys. Rev. D_ 103 (2021) 126022 [2103.03003]. * (63) H. Abe, T. Higaki and T. Kobayashi, _Remark on integrating out heavy moduli in flux compactification_ , _Phys. Rev. D_ 74 (2006) 045012 [hep-th/0606095]. 
* (64) KamLAND-Zen collaboration, _Search for Majorana Neutrinos near the Inverted Mass Hierarchy Region with KamLAND-Zen_ , _Phys. Rev. Lett._ 117 (2016) 082503 [1605.02889]. * (65) I. Esteban, M.C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou, _The fate of hints: updated global analysis of three-flavor neutrino oscillations_ , _JHEP_ 09 (2020) 178 [2007.14792]. * (66) F. Björkeroth, F.J. de Anda, I. de Medeiros Varzielas and S.F. King, _Towards a complete A ${}_{4}\times$ SU(5) SUSY GUT_, _JHEP_ 06 (2015) 141 [1503.03306]. * (67) I. Esteban, M.C. Gonzalez-Garcia, A. Hernandez-Cabezudo, M. Maltoni and T. Schwetz, _Global analysis of three-flavour neutrino oscillations: synergies and tensions in the determination of $\theta_{23}$, $\delta_{CP}$, and the mass ordering_, _JHEP_ 01 (2019) 106 [1811.05487]. * (68) Planck collaboration, _Planck 2018 results. VI. Cosmological parameters_ , _Astron. Astrophys._ 641 (2020) A6 [1807.06209].
# mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus Matthieu Futeral 1,2 Armel Zebaze1,4 Pedro Ortiz Suarez5 Julien Abadji1 Rémi Lacroix3, 6 Cordelia Schmid1,2 Rachel Bawden1 Benoît Sagot1 1Inria 2Département d’informatique de l’ENS, CNRS, PSL Research University 3Institut du développement et des ressources en informatique scientifique, CNRS 4Sorbonne Université, Paris, France 5Common Crawl Foundation 6Université Paris-Saclay Correspondence to <EMAIL_ADDRESS> ###### Abstract Multimodal Large Language Models (mLLMs) are trained on a large amount of text-image data. While most mLLMs are trained on caption-like data only, Alayrac et al. (2022) showed that additionally training them on interleaved sequences of text and images can lead to the emergence of in-context learning capabilities. However, the dataset they used, M3W, is not public and is only in English. There have been attempts to reproduce their results but the released datasets are English-only. In contrast, current multilingual and multimodal datasets are either composed of caption-like data only, or medium-scale, or fully private. This limits mLLM research for the 7,000 other languages spoken in the world. We therefore introduce mOSCAR, to the best of our knowledge the first large-scale multilingual and multimodal document corpus crawled from the web. It covers 163 languages, 315M documents, 214B tokens and 1.2B images. We carefully conduct a set of filtering and evaluation steps to make sure mOSCAR is sufficiently safe, diverse and of good quality. We additionally train two types of multilingual model to prove the benefits of mOSCAR: (1) a model trained on a subset of mOSCAR and captioning data and (2) a model trained on captioning data only. The model additionally trained on mOSCAR shows a strong boost in few-shot learning performance across various multilingual image-text tasks and benchmarks, confirming previous findings for English-only mLLMs. 
The dataset can be accessed here.111https://oscar- project.github.io/documentation/versions/mOSCAR/ ## 1 Introduction Multimodal large language models (mLLMs) are trained on large amounts of text- image data (Radford et al., 2021; Yu et al., 2022; Li et al., 2023; Wang et al., 2023; OpenAI, 2023; Gemini Team et al., 2023; Chameleon Team, 2024). The main paradigm until recently was to train a model from a large collection of web-crawled images and their captions (Li et al., 2021; Wang et al., 2022; Chen et al., 2023b). Models such as Flamingo (Alayrac et al., 2022) challenged this paradigm by being additionally trained on interleaved sequences of text and images from web documents, showing state-of-the-art results on various tasks and in-context learning capabilities that are not present in models trained on caption-like data only. Additionally, McKinzie et al. (2024) recently proved that including interleaved text-image data during training was necessary to get good few-shot learning performance. However, the datasets used to train mLLMs are either private (Alayrac et al., 2022), monolingual or multilingual but only medium-scale (Srinivasan et al., 2021). Some attempts have been made to reproduce these datasets (Zhu et al., 2023; Laurençon et al., 2023) but the resulting datasets are only available in English. Few image-text datasets are multilingual and most of them are obtained by translating English caption-like datasets, such as multilingual Conceptual Captions (Sharma et al., 2018), into multiple languages using neural machine translation (NMT) systems (Surís et al., 2022; Maaz et al., 2024). This presents some drawbacks such as some languages still being poorly translated by current state-of-the-art NMT models (Liu et al., 2020; Costa-jussà et al., 2022) and some cultural subtleties inherent in each language not being fully conveyed. 
Some efforts have been conducted to collect large-scale multilingual image captioning datasets, such as LAION-5B (Schuhmann et al., 2022), but they are limited to caption data too, are relatively noisy and more importantly contain a non-negligible share of “not safe for work” (NSFW) content such as pædopornographic images (Schuhmann et al., 2022). This motivated us to collect and release the first large-scale multilingual and multimodal document dataset derived from Common Crawl.222https://commoncrawl.org/. The Common Crawl Foundation is a non-profit organization that crawls the web on a monthly basis. Our dataset, multimodal OSCAR (mOSCAR), follows the OSCAR initiative (Ortiz Suárez et al., 2019; Abadji et al., 2021, 2022) and covers 315M documents in 163 languages, 214B tokens and 1.2B images. Figure 1 shows an example of a document, more can be found in Appendix A.3. We carry out extensive filtering to increase its safety and quality. To prove mOSCAR’s utility, we train a multilingual OpenFlamingo (Awadalla et al., 2023) from a Gemma-2B language model (Gemma Team et al., 2024) on a subset of mOSCAR and captioning data from LAION-400M (Schuhmann et al., 2021), recaptioned with BLIP (Li et al., 2022), filtered with CLIP (Radford et al., 2021) and translated with NLLB (Costa-jussà et al., 2022). We compare against a similar model trained on captioning data only and show we obtain a strong boost in few-shot learning, confirming previous findings for English (Alayrac et al., 2022; McKinzie et al., 2024; Laurençon et al., 2024). The dataset and models will be made publicly available. Figure 1: Example of a French document from mOSCAR. ## 2 Related Work ##### Large-scale web-based datasets Numerous datasets have been created by filtering web-crawled data. 
These include large-scale text-only datasets (Ortiz Suárez et al., 2019; Raffel et al., 2020; Wenzek et al., 2020; Gao et al., 2020; Abadji et al., 2021; Xue et al., 2021; Laurençon et al., 2022; Abadji et al., 2022; Penedo et al., 2023) and multimodal ones (Sharma et al., 2018; Changpinyo et al., 2021; Jia et al., 2021; Schuhmann et al., 2021, 2022; Byeon et al., 2022; Laurençon et al., 2023; Zhu et al., 2023; Gadre et al., 2024). Even if these datasets are not as high quality as smaller and/or hand-crafted ones, they are now the standard to pretrain foundation models, as it has been shown that training bigger models on more data leads to better downstream performances (Brown et al., 2020; Hoffmann et al., 2022; Touvron et al., 2023a, b). ##### English image-text datasets The first open-source image-text datasets were manually created, small-scale and English-only (Ordonez et al., 2011; Lin et al., 2014; Plummer et al., 2015; Krishna et al., 2017). Scaling up these datasets was an appealing solution to overcome limitations of previous image-text models; a few works (Sharma et al., 2018; Changpinyo et al., 2021) proposed to collect millions of image-text pairs from the web before filtering them with well-designed steps. Relaxing the filtering steps enabled the collection of more data and led to large-scale datasets to train image-text foundation models (Radford et al., 2021; Li et al., 2021; Schuhmann et al., 2021, 2022; Byeon et al., 2022). However, these datasets generally contain caption-like image-text pairs only, and it is therefore difficult to observe in-context learning abilities similarly to text-only language models trained on raw documents (Raffel et al., 2020). Alayrac et al. (2022) overcome this issue by training their model directly on documents with interleaved image-text data. While their results are promising, their M3W dataset is English-only and private. 
Recently, open-source efforts (Zhu et al., 2023; Laurençon et al., 2023) have been made to release a similar dataset but they are still monolingual. ##### Multilingual image-text datasets Only a few image-text datasets are available in multiple languages. One of the first focused on collecting Google images from short queries based on word frequencies from Wikipedia pages in 98 languages (Hewitt et al., 2018). Later, Srinivasan et al. (2021) proposed the WIT dataset, an image-text dataset composed of Wikipedia pages. Although of high quality, it is only medium-scale even for high-resource languages and there are fewer than 50k unique images for most languages. Another approach lies in bootstrapping multilingual and multimodal data from a model trained with English-only data (Mohammed et al., 2023). While effective for captioning, it is computationally expensive to implement in practice. Other multilingual image-text datasets exist but focus on captions only and are highly domain-specific (Kosar et al., 2022; Leong et al., 2022). ## 3 Dataset Creation Pipeline ### 3.1 Data collection We collect mOSCAR from the Web ARchive Content (WARC) files of three 2023 Common Crawl dumps, processing them using the FastWARC library (Bevendorff et al., 2021). We remove documents smaller than 500 bytes (50% of the documents), as we find they are usually too small to be considered documents and tend to contain noisy text. We then navigate through the entire Document Object Model (DOM) tree with a depth-first search algorithm and the ChatNoir library (Bevendorff et al., 2018) to extract nodes of interest corresponding to specific HTML tags. Following previous work, we extract text from the tags that usually contain the main content of web pages (we refer to them as DOM text nodes), i.e. <p>, <h*>, <title>, <description>, <ul>, <ol>, <aside>, <dl>, <dd>, <dt>. 
Similarly to (Laurençon et al., 2023), we choose to remove <table> content as most often it is irrelevant and difficult to render. We extract all <img> tags (we refer to them as DOM image nodes). We then remove documents with fewer than 3 text nodes (as they do not contain enough text) and more than 30 image nodes (as we found them to be too noisy). ### 3.2 Language identification We identify the language of each document using the state-of-the-art open-LID language detector (Burchell et al., 2023), covering 201 languages. We apply open-LID to each DOM text node and keep the three most probable languages with their respective probabilities. The language of the document is then determined by summing over the probabilities of each language detected for each text segment, weighted by the number of characters in the segment333This is to avoid mis-assigning the language due to the presence of many short, non- informative DOM text nodes in the same language (e.g. “Cookies”, “Subscribe”, “Newsletter” etc.) and because language identification is generally less reliable for short segments. and taking the language with the highest score. ### 3.3 Text-only filtering We apply a series of filtering steps to the text content of each document independently of the images, with the aim of discarding poor quality documents and cleaning text as best as possible. We first filter at the text-node level and then at the whole document level, before running near-deduplication to keep unique text nodes within a document and unique documents in the dataset. ##### Text node filtering We use a set of heuristics (see Appendix A.2) to extract as much human- generated content as possible while discarding noisy text related to ads and website functions (e.g. “Instagram”, “Facebook”). We then keep DOM text nodes with content over 10 bytes. This step, designed to improve the quality of extracted text, removes on average 55% of text nodes. 
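The weighted voting scheme used for document-level language identification (Section 3.2) can be sketched as follows. Here `fake_detect` is a toy stand-in for the open-LID classifier, which we assume returns the three most probable (language, probability) pairs per text node; it is purely illustrative, not the actual model.

```python
from collections import defaultdict

def document_language(text_nodes, detect_top3):
    """Assign a document-level language from per-node predictions.

    `text_nodes` holds the DOM text node contents of one document, and
    `detect_top3(text)` returns the three most probable (language,
    probability) pairs for a segment, as open-LID would.
    """
    scores = defaultdict(float)
    for node in text_nodes:
        # Weight each node's probabilities by its length in characters,
        # so short boilerplate ("Cookies", "Subscribe") cannot outvote
        # the main content, and because language identification is less
        # reliable on short segments.
        weight = len(node)
        for lang, prob in detect_top3(node):
            scores[lang] += weight * prob
    # The document language is the one with the highest weighted score.
    return max(scores, key=scores.get)

# Toy stand-in for open-LID: French main content plus English chrome.
def fake_detect(text):
    if text in {"Cookies", "Subscribe"}:
        return [("eng", 0.9), ("fra", 0.05), ("deu", 0.05)]
    return [("fra", 0.8), ("eng", 0.15), ("spa", 0.05)]

nodes = ["Cookies", "Subscribe",
         "Ceci est le contenu principal de la page, bien plus long."]
print(document_language(nodes, fake_detect))  # fra
```

Even though two of the three nodes vote English, the long French node dominates the character-weighted score, which is exactly the behavior the weighting is designed to produce.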
##### Document filtering We mostly filter “not safe for work” (NSFW) content at the document level. We use an English regular expression to detect adult content, similar to the one used by the Université Toulouse 1 Capitole444https://dsi.ut- capitole.fr/blacklists/index_en.php and remove the entire document if there is a match with any of the DOM text nodes’ contents, removing on average 0.5% of documents (mostly English ones). We acknowledge that there is a high probability that this also discards safe content, e.g. we could remove content from certain communities who use some explicit words in a non-sexual way (Sap et al., 2019). However, we explicitly favour recall over precision to minimise the risk of unsafe content. We additionally remove documents containing fewer than five DOM text nodes and fewer than 300 characters after the previous filtering steps, removing 70.6% of documents. ##### Deduplication We conduct several types of per-language deduplication at different levels, as this has been shown to improve training efficiency (Abbas et al., 2023). First, we keep unique documents only by removing exact duplicates at the document level. We also remove exact duplicates of text nodes within the same document (4% of text nodes) and near-duplicate text nodes (1% of text nodes) by computing the Levenshtein ratio (Levenshtein, 1966) between all text nodes within the same document and applying a threshold of 0.95. If near-duplicates are found, we keep the first one in the document. Finally, we conduct per language near-deduplication at the document level with MinHashLSH (Broder, 1997; Gionis et al., 1999) following Smith et al. (2022), removing on average 19% of documents:555With some disparity among languages as we found more duplicates for low- than high-resource languages. 
we turn documents into hashing vectors, compute min hashes from these vectors and perform Locality Sensitive Hashing to remove duplicates, using the datasketch Python library (see Appendix A.5 for more details).

### 3.4 Image-only filtering

We downloaded images from the URLs in DOM image nodes using a modified version of the img2dataset toolkit (Beaumont, 2021) that includes an antivirus scan and follows robots.txt instructions to respect the Robots Exclusion Protocol. We then apply a series of filtering steps, first removing images based on heuristics, then applying multiple NSFW detection models to remove undesirable content. Finally, we conduct a set of deduplication steps.

##### Rule-based filters

Similarly to previous work (Schuhmann et al., 2021), and to avoid extracting low-resolution images and favicons, we keep images with a minimum height and width of 150 pixels. We restrict the aspect ratio to be between 3 and 1/3 (to remove banners), and we remove images whose URLs contain the words “logo”, “banner”, “button”, “widget”, “icon” or “plugin”, or whose image name from the URL matches “twitter”, “facebook” or “rss” (to remove logos). This step removes 13.6% of the URLs. At this stage, we had downloaded 2.5B images with an average success rate of 55%.

##### NSFW detection

We use multiple automatic NSFW models to remove as much unsafe content as possible. We first combine two NSFW detectors: nsfw-detector (Laborde), a 5-class classifier with a MobileNet (Howard et al., 2017) backbone fine-tuned on 60GB of annotated data, and NudeNet (https://github.com/vladmandic/nudenet), an object detector trained to detect different types of nudity in images. We combined the two models because we found the first to be gender-biased, while the second gives a large number of false positives for non-human images.
Concretely, we consider an image an NSFW candidate if the sum of the probabilities for the classes ‘porn’ and ‘hentai’ is greater than 0.8 according to nsfw-detector. We then tag the image as NSFW if one of the sensitive ‘exposed’ classes of NudeNet gets a probability greater than 0.5. We additionally use Safer by Thorn (https://safer.io/), a private pornography detector, and tag the image as NSFW if the probability of the class ‘pornography’ is greater than 0.8. If a document contains an image with an NSFW tag, we remove the entire document from the dataset, which removes 0.5% of images. We manually inspected 1,000 images of the remaining data and found no NSFW content. We also manually inspected 1,000 images of the removed content and found that 63.4% were NSFW images.

##### CSAM content

Child Sexual Abuse Material (CSAM) is widespread on the internet and is therefore likely to be found in such a large-scale dataset crawled from the web. Removing CSAM is challenging, as no training data or open-source detection models are available, since these could be used in a harmful way. We again rely on Safer, a proprietary 3-class classifier trained to detect CSAM and pornography in images. We tag an image as CSAM if the probability of the class CSAM is greater than 0.4, favouring recall over precision. As mentioned above, if a document contains an image with a CSAM tag, we remove it from the dataset. This step removes 0.07% of the images.

##### Deduplication

To avoid the memorisation issues often seen in models trained on datasets with many duplicated images (Somepalli et al., 2023; Carlini et al., 2023; Webster et al., 2023; Somepalli et al., 2024), we perform deduplication at the image level. We first remove duplicate images within the same document by URL matching (removing 8.7% of URLs).
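Stepping back to the NSFW detection step, the three detectors combine into a small decision rule; a minimal sketch (the exact class-label strings and probability-dictionary formats are our assumptions):

```python
def is_nsfw(nsfw_det_probs, nudenet_probs, safer_probs):
    """Combine the three detectors as described in the NSFW detection step.

    An image is an NSFW *candidate* if nsfw-detector's 'porn' + 'hentai'
    probability mass exceeds 0.8; the candidate is confirmed if any
    NudeNet 'exposed' class exceeds 0.5. Independently, Safer tags the
    image if its 'pornography' probability exceeds 0.8.
    """
    candidate = nsfw_det_probs.get("porn", 0.0) + nsfw_det_probs.get("hentai", 0.0) > 0.8
    confirmed = candidate and any(
        prob > 0.5 for cls, prob in nudenet_probs.items() if "exposed" in cls.lower()
    )
    return confirmed or safer_probs.get("pornography", 0.0) > 0.8

# A candidate (0.7 + 0.2 > 0.8) confirmed by an 'exposed' class:
print(is_nsfw({"porn": 0.7, "hentai": 0.2},
              {"EXPOSED_BREAST": 0.6}, {"pornography": 0.1}))  # True
```

Requiring NudeNet confirmation of nsfw-detector candidates mitigates the individual weaknesses of each model noted above, while Safer acts as an independent third vote.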
We then compute a perceptual hash (pHash) for each image using the imagehash library (https://github.com/JohannesBuchner/imagehash) and remove images with the same pHash within the same document, keeping only the first occurrence. We also limit the number of times an image can appear in the dataset per language to 10, using both URL matching and perceptual hashing (this removes 2.5% of images). We do this per language and not across languages, as having the same images in documents from different languages could encourage cross-lingual transfer.

### 3.5 Data decontamination

LLMs and mLLMs are trained on web-crawled data that can contain the benchmarks they are tested on (Dodge et al., 2021). As they are good at memorising training data (Carlini et al., 2023), this data contamination is problematic. We therefore discard all images with the same perceptual hash as any of the images from the evaluation benchmarks (and their training sets) we use (see Section 5.1). This step removes on average 126,016 images for high-resource languages (up to 300K images for English), 6,862 images for mid-resource languages and 45 images for low-resource languages.

### 3.6 Text-image joint filtering

Our aim is to obtain truly multimodal documents where every image is related to at least one of the text nodes in some way, and vice versa (we do not limit ourselves to caption-like relations and instead allow all types of text-image relation). We apply joint text-image filtering to discard images and/or text nodes that are irrelevant to the rest of the document (e.g. ads and website functionalities). To do this, we use NLLB-SIGLIP (with siglip-base-patch16-224 as vision encoder and nllb-distilled-600M as text encoder)
(Visheratin, 2023), a multilingual version of SIGLIP (Zhai et al., 2023) trained with the encoder of NLLB (Costa-jussà et al., 2022), which covers all mOSCAR languages; we use the open-clip (Ilharco et al., 2021) model version and the transformers (Wolf et al., 2020) library. We compute cosine similarity scores between all images and all paragraphs (we refer to the text content of a DOM text node as a paragraph) within the same document. To remove irrelevant text nodes or images from a document, we mimic a text-image retrieval task, which means we avoid using arbitrary per-language cosine similarity thresholds and can reduce length biases and biases in favour of caption-like paragraphs. For each candidate pair, we randomly sample 63 negative images and 63 negative similar-length paragraphs from the same language but from other documents. We tag the text node (resp. image) as valid if the cosine similarity of the pair is among the top 8 of the text-to-image (resp. image-to-text) similarity scores computed between the candidate text node (resp. image) and all the negative images (resp. text nodes). In other words, a text node (resp. image) is valid if it has a significantly higher score than a score computed with a random image (resp. text) for at least one of the images (resp. text nodes) in the document. We then discard text nodes and images not tagged as valid (on average 35% of the DOM text nodes and 10% of the images within a document). After this filtering step, we apply additional text-only filters, keeping only documents larger than 100 bytes.

## 4 Multimodal Open Super-large Crawled Aggregated coRpus (mOSCAR)

mOSCAR is extracted from three Common Crawl dumps from 2023. Due to computational constraints, and in order to extract a maximum number of documents for low-resource languages, we extracted all languages from the first dump only.
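Returning to the joint filtering of Section 3.6, the retrieval-style validity test can be sketched with plain NumPy (array shapes and the helper name are ours; the real pipeline first computes NLLB-SIGLIP embeddings to obtain these cosine similarities):

```python
import numpy as np

def tag_valid_text_nodes(sim, neg_sims, top_k=8):
    """sim: (n_texts, n_images) cosine similarities within one document.
    neg_sims: (n_texts, 63) similarities of each text node with randomly
    sampled negative images from other documents.

    A text node is valid if, for at least one in-document image, the pair's
    similarity ranks in the top-k among that score plus the 63 negative
    scores: a retrieval test instead of a fixed similarity threshold.
    The symmetric image-to-text test works the same way with roles swapped.
    """
    valid = np.zeros(sim.shape[0], dtype=bool)
    for t in range(sim.shape[0]):
        for i in range(sim.shape[1]):
            scores = np.concatenate(([sim[t, i]], neg_sims[t]))
            rank = (scores > sim[t, i]).sum()  # 0 = highest of the 64 scores
            if rank < top_k:
                valid[t] = True
                break
    return valid

sim = np.array([[0.9], [0.05]])       # 2 text nodes, 1 in-document image
neg_sims = np.full((2, 63), 0.2)      # all negative scores at 0.2
print(tag_valid_text_nodes(sim, neg_sims))  # text 0 is valid, text 1 is not
```

Ranking against sampled negatives calibrates the decision per node, which is why no per-language threshold is needed.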
We removed the 6 most high-resource languages from the second dump and only extracted the languages with fewer than 1M documents for the last dump. Table 1 shows the distribution of the number of languages by number of documents. To avoid data poisoning (Carlini et al., 2024), we release a hash (sha512) with each mOSCAR image.

| #documents | 10M | 5M | 1M | 500K | 200K | 50K | 10K | 5K | 1K |
|---|---|---|---|---|---|---|---|---|---|
| #languages | 10 | 15 | 38 | 49 | 58 | 82 | 129 | 142 | 163 |

Table 1: Number of languages with at least $N$ documents.

Figure 2: Distributions of the numbers of (a) images, (b) tokens, and (c) both jointly, per document.

mOSCAR is composed of 315M documents (214B tokens, 1.2B images) from 163 languages. Figure 2 shows the distribution of images and tokens per document and their joint distribution. As shown in Figure 2(a), the median and mean number of images per document are 2 and 3.80, respectively.

### 4.1 Quality vs Diversity

While improving overall data quality, the filtering steps we applied (see Section 3) necessarily have a negative impact on diversity. We therefore study the trade-off between quality and diversity and compare against previously published, widely used datasets.

#### 4.1.1 Text content

##### Diversity

By construction, mOSCAR is diverse in terms of number of languages, so we focus on the diversity of mOSCAR’s English documents and compare against mmc4 (Zhu et al., 2023), OBELICS (Laurençon et al., 2023) and the English subset of WIT (Srinivasan et al., 2021). We compute the Vendi score (Friedman and Dieng, 2023) on a set of SimCSE embeddings (Gao et al., 2021) with a RoBERTa encoder (Liu et al., 2019) to evaluate content diversity. Since embedding-based diversity metrics target content diversity well but are less relevant for lexical diversity (Tevet and Berant, 2021), we measure lexical diversity via the distinct $n$-gram ratio (Li et al., 2016).
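Both diversity metrics have compact definitions: the Vendi score is the exponential of the Shannon entropy of the eigenvalues of the similarity matrix scaled by $1/n$, and the distinct $n$-gram ratio is the number of unique $n$-grams over the total. A NumPy sketch (the real computations run on SimCSE embeddings of sampled documents):

```python
import numpy as np

def vendi_score(embeddings):
    """Vendi score (Friedman and Dieng, 2023) of a set of embeddings:
    exp of the Shannon entropy of the eigenvalues of K/n, where K is the
    cosine similarity matrix. It acts as an 'effective number' of distinct
    items: n identical vectors score 1, n orthogonal vectors score n.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    K = X @ X.T / len(X)
    lam = np.linalg.eigvalsh(K)
    lam = lam[lam > 1e-12]           # drop numerically-zero eigenvalues
    return float(np.exp(-np.sum(lam * np.log(lam))))

def distinct_ngram_ratio(texts, n=1):
    """Lexical diversity (Li et al., 2016): unique n-grams / total n-grams."""
    grams = [tuple(t.split()[i:i + n]) for t in texts
             for i in range(len(t.split()) - n + 1)]
    return len(set(grams)) / len(grams)

print(vendi_score(np.eye(3)))            # 3 orthogonal vectors -> ~3.0
print(distinct_ngram_ratio(["a b a b"]))  # 2 unique unigrams of 4 -> 0.5
```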
##### Comparison with other datasets

| | Vendi score | Dist. $n$-gram ratio |
|---|---|---|
| mOSCAR | 69.05 ($\pm$ 0.14) | 0.472 ($\pm$ 0.002) |
| mmc4 | 67.93 ($\pm$ 0.12) | 0.494 ($\pm$ 0.002) |
| OBELICS | 58.49 ($\pm$ 0.09) | 0.488 ($\pm$ 0.001) |
| WIT | 73.30 ($\pm$ 0.09) | 0.530 ($\pm$ 0.001) |

Table 2: Average text diversity scores ($\pm$ standard error) of text documents.

For content diversity, we randomly sample 30M documents for mOSCAR, mmc4 and OBELICS and 3M documents for WIT, and represent each document by its SimCSE embedding. We compute the Vendi score with cosine similarity on a randomly sampled subset of 65,536 documents. Table 2 shows that mOSCAR's English content is more diverse than mmc4 and OBELICS but less diverse than WIT. For lexical diversity, we randomly sample 3M documents each for mOSCAR, mmc4, OBELICS and WIT and compute the distinct $n$-gram ratio on a subset of 8,192 documents for $n$ from 1 to 4. Table 2 shows that mOSCAR is slightly less lexically diverse than OBELICS and mmc4, while WIT is by far the most diverse.

##### Quality

To evaluate document quality, we focus on English documents and compute their perplexity using Gemma-2B (Gemma Team et al., 2024). Figure 3 shows the kernel density estimate of the distribution of the perplexity of 100K randomly sampled documents from different datasets: mOSCAR is comparable to mmc4 and WIT, while OBELICS appears to be of the highest quality.

Figure 3: Perplexity of 100K random documents from different datasets.

mOSCAR is therefore comparable to other interleaved image-text datasets in terms of the quality and diversity of its English subset. It is, however, more diverse than English-only datasets through its multilingual construction, and more than 10 times larger than existing multilingual interleaved image-text datasets such as WIT.
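The perplexity used for the quality comparison above is the exponential of the average negative log-likelihood a causal language model (here Gemma-2B) assigns to a document's tokens; given per-token log-probabilities from any model, it reduces to:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-(1/N) * sum_i log p(token_i | context)).

    token_logprobs are natural-log probabilities from a causal LM.
    Lower perplexity means the model finds the document more predictable,
    which is used here as a proxy for text quality.
    """
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A document whose tokens each get probability 0.25 has perplexity 4.
print(perplexity([math.log(0.25)] * 10))
```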
#### 4.1.2 Image diversity

##### Comparison with other datasets

| mOSCAR | LAION-400M | WIT |
|---|---|---|
| 55.74 ($\pm$ 0.16) | 67.59 ($\pm$ 0.16) | 36.14 ($\pm$ 0.08) |

(a) Comparison of different datasets.

| English | All |
|---|---|
| 52.36 ($\pm$ 0.18) | 54.78 ($\pm$ 2.29) |

(b) mOSCAR (English vs. any language).

Table 3: Average Vendi score ($\pm$ standard error) of images.

We compute the Vendi score on random samples of images from different datasets, comparing the images from English mOSCAR documents with those from Conceptual Captions (Changpinyo et al., 2021), LAION-400M (Schuhmann et al., 2021) and WIT (Srinivasan et al., 2021). We represent each image by its SigLIP embedding (Zhai et al., 2023; we use siglip-base-patch16-224) and compute the Vendi score on batches of size 65,536 and a total of 1M images for each dataset. In Table 3(a), we see that the set of images in mOSCAR documents is more diverse than the images from WIT documents but less diverse than LAION-400M.

##### Multilingual diversity

We also compare the diversity of images from English documents with that of images sampled from documents of any language (English included). We use multilingual SigLIP (Chen et al., 2023a), trained on WebLI (Chen et al., 2023b), to compute the image embeddings used for the Vendi score. We again use a batch size of 65,536 and a total of 3M images, and we do not sample multiple images from the same document. For the multilingual setting, we randomly sample 50 languages and an equal number of images for each language to build the batch. As we did not perform any image deduplication across languages, we might expect less diversity in the multilingual setting. However, Table 3(b) shows that the set of images is on average more diverse when sampled from all documents than from English-only documents. This means that the distribution of images is not exactly the same across languages, potentially due to cultural differences.
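The equal-per-language batch construction used for the multilingual Vendi score can be sketched as follows (the function name and the skip rule for languages with too few images are our simplifications):

```python
import random

def multilingual_batch(images_by_lang, n_langs=50, batch_size=65536):
    """Build an evaluation batch with an equal number of images sampled
    from each of n_langs randomly chosen languages, so that no single
    high-resource language dominates the diversity estimate.

    images_by_lang maps a language code to its list of image identifiers.
    """
    per_lang = batch_size // n_langs
    langs = random.sample(sorted(images_by_lang), n_langs)
    batch = []
    for lang in langs:
        pool = images_by_lang[lang]
        # In this sketch, languages with fewer images contribute what they have.
        batch.extend(random.sample(pool, min(per_lang, len(pool))))
    return batch

toy = {f"lang{i}": list(range(100)) for i in range(60)}
print(len(multilingual_batch(toy, n_langs=50, batch_size=500)))  # 500
```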
## 5 Training a multilingual multimodal language model

We train a multilingual Flamingo-like model on mOSCAR. As adding captioning data to the training data has been shown to improve zero-shot performance (McKinzie et al., 2024), we additionally train on LAION-400M, which we re-captioned using BLIP (Li et al., 2022), filtered with the CLIP score (Radford et al., 2021) and translated using distilled NLLB-600M (Visheratin, 2023) following the proportion of languages found in mOSCAR. We use Gemma-2B (Gemma Team et al., 2024) as the underlying language model and train on 35M mOSCAR documents and 70M randomly sampled image-text pairs. As a comparison baseline, we also train a model on 300M image-text pairs only. We additionally compare with OpenFlamingo-3B-MPT (Awadalla et al., 2023) as the translate-test baseline. The full list of training languages and the implementation details can be found in Appendix A.5.

### 5.1 Evaluation setup

We evaluate the models on a broad set of multilingual image-text tasks and benchmarks. We use the IGLUE benchmark (Bugliarello et al., 2022), composed of XVNLI and MaRVL (Liu et al., 2021) to test reasoning, xGQA (Pfeiffer et al., 2022) to test visual question answering capabilities, and xFlickr&CO (Young et al., 2014; Karpathy and Fei-Fei, 2015; Yoshikawa et al., 2017) for captioning. We also include Crossmodal-3600 (XM3600) (Thapliyal et al., 2022) and MaXM (Changpinyo et al., 2022), as they cover a broader range of languages. To test to what extent models trained on mOSCAR can perform zero-shot multimodal machine translation (MMT), we also evaluate on Multi30K (Elliott et al., 2016, 2017; Barrault et al., 2018) and CoMMuTE (Futeral et al., 2023). For captioning, we compute the CIDEr (Vedantam et al., 2015) score, tokenizing references and model outputs with the Stanford CoreNLP tokenizer for English and Stanza (Qi et al., 2020) tokenizers for other languages.
To evaluate on Multi30K, we compute the BLEU (Papineni et al., 2002) score from SacreBLEU (Post, 2018) with 13a tokenization and default parameters. We use accuracy for CoMMuTE. More details can be found in Appendix A.5.3.

### 5.2 Results

Tables 4 and 5 show the average results across all languages. Full results are available in Appendix A.6. The multilingual OpenFlamingo trained additionally on mOSCAR obtains better results than the model trained on captioning data only, while having seen fewer image-text pairs during training. More importantly, when increasing the number of few-shot examples from 0 to 16, it gains on average +8.19 points on the VQA benchmarks and +16.07 CIDEr points on the captioning benchmarks. In contrast, the model trained on text-image pairs only gains +2.82 and +9.08 points respectively. In cross-modal machine translation, the model additionally trained on interleaved data is again far better than the one trained on just captioning data, which is not able to translate the Multi30K benchmark at all (most of the time, it fails to follow the prompt and only outputs the end-of-sequence token). Moreover, mOSCAR helps the model learn to disambiguate translations zero-shot, as shown by the improved average score on CoMMuTE (63.75) compared to the model trained on captions only (61.36). The multilingual OpenFlamingo trained on mOSCAR and text-image pairs is also better than OpenFlamingo-3B-MPT evaluated on translate-test benchmarks. However, we obtain the best results (except for MaXM) by evaluating our multilingual OpenFlamingo on the translate-test benchmarks, since the underlying language model (Gemma-2B) is far better in English than in other languages. We also notice that all models struggle with the reasoning classification tasks (MaRVL, XVNLI), where they obtain scores close to random guessing.

| | #shots | xFlickR&CO | XM3600 | xGQA | MaXM | MaRVL | XVNLI | Multi30K | CoMMuTE |
|---|---|---|---|---|---|---|---|---|---|
| Multi. OF full | 0 | 19.07 | 8.73 | 25.08 | 19.64 | 49.77 | 33.01 | 22.70 | 63.75 |
| | 4 | 34.32 | 20.59 | 31.90 | 23.90 | 49.67 | 36.07 | 22.79 | 63.65 |
| | 8 | 36.77 | 22.15 | 33.90 | 24.41 | 49.72 | 37.16 | 23.21 | 63.00 |
| | 16 | 37.63 | 22.24 | 35.71 | 25.38 | 49.73 | 35.36 | 23.48 | 62.77 |
| Multi. OF cap. only | 0 | 9.57 | 4.21 | 8.62 | 4.01 | 49.88 | 33.76 | 0.00 | 61.36 |
| | 4 | 13.20 | 9.26 | 13.45 | 4.15 | 49.54 | 32.04 | 0.00 | 61.13 |
| | 8 | 18.00 | 10.35 | 12.82 | 4.88 | 49.65 | 33.71 | 0.01 | 60.90 |
| | 16 | 19.87 | 12.07 | 13.37 | 4.89 | 49.79 | 32.70 | 0.74 | 60.25 |

Table 4: Results averaged over all languages.

| | #shots | xGQA | MaXM | MaRVL | XVNLI |
|---|---|---|---|---|---|
| OF-3B MPT | 0 | 18.34 | 7.68 | 49.75 | 32.73 |
| | 4 | 22.97 | 7.82 | 49.70 | 35.82 |
| | 8 | 28.57 | 8.32 | 49.71 | 31.29 |
| | 16 | 31.82 | 9.04 | 49.72 | 33.29 |
| Multi. OF full | 0 | 30.16 | 10.06 | 49.93 | 34.66 |
| | 4 | 35.55 | 9.89 | 48.99 | 36.10 |
| | 8 | 36.78 | 10.12 | 50.54 | 39.69 |
| | 16 | 37.75 | 11.49 | 49.57 | 37.97 |

Table 5: Translate-test results averaged over languages.

Figure 4: Score differences, averaged over benchmarks and languages, between the model trained on mOSCAR + text-image pairs and the model trained only on text-image pairs.

We additionally compare results at different training steps, defined by the number of images seen during training. Figure 4 shows the difference in averaged scores between the model trained on all data and the model trained only on text-image pairs. The gap first narrows until 20M images seen, and then keeps widening over the rest of training. In particular, the gap is wider for few-shot learning.

## 6 Conclusion, Limitations and Societal Impacts

We introduce mOSCAR, a large-scale multilingual and multimodal dataset covering 163 languages and composed of 315M documents, 214B tokens and 1.2B images. We show that mOSCAR is of good quality, diverse, and can be used to train a multilingual and multimodal LLM.
We ensure that mOSCAR is as safe as possible by applying a series of filtering steps to remove NSFW content. However, we did not conduct any toxicity analysis or evaluate its biases, as this is challenging in a multilingual setting. As it is crawled from the internet, it is possible that mOSCAR reflects biases widespread on the web. Nevertheless, by its multilingual nature, mOSCAR is a step towards the inclusion of more languages, cultures and people in accessing mLLMs.

## Acknowledgements

This work was granted access to the HPC resources of IDRIS under the allocations 2024-AD011014232R1, 2023-AD011014232 and 2023-AD011012254 made by GENCI. It was also partly funded by the last three authors’ chairs in the PRAIRIE institute, funded by the French national agency ANR as part of the “Investissements d’avenir” programme under the reference ANR-19-P3IA-0001. We deeply thank the Jean-Zay support team. We also thank Filip Šedivý for insightful discussions regarding the removal of CSAM, Thorn for providing access to their CSAM detector, Zeeshan Khan for discussions regarding the training of the models, and Victoria Le Fourner for manually checking subsamples of NSFW images.

## References

* Abadji et al. [2021] Julien Abadji, Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît Sagot. Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus. In Harald Lüngen, Marc Kupietz, Piotr Bański, Adrien Barbaresi, Simon Clematide, and Ines Pisetta, editors, _Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9)_ , pages 1–9, Limerick, 2021. Leibniz-Institut für Deutsche Sprache. doi: 10.14618/ids-pub-10468. URL https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688.
* Abadji et al. [2022] Julien Abadji, Pedro Ortiz Suarez, Laurent Romary, and Benoît Sagot. Towards a cleaner document-oriented multilingual crawled corpus.
In Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, and Stelios Piperidis, editors, _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 4344–4355, Marseille, France, June 2022. European Language Resources Association. URL https://aclanthology.org/2022.lrec-1.463. * Abbas et al. [2023] Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S Morcos. Semdedup: Data-efficient learning at web-scale through semantic deduplication. _arXiv preprint arXiv:2303.09540_ , 2023. * Alayrac et al. [2022] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. _Advances in Neural Information Processing Systems_ , 35:23716–23736, 2022. * Awadalla et al. [2023] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. _arXiv preprint arXiv:2308.01390_ , 2023. * Barrault et al. [2018] Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. Findings of the third shared task on multimodal machine translation. In _Proceedings of the Third Conference on Machine Translation: Shared Task Papers_ , pages 304–323, 2018. * Beaumont [2021] Romain Beaumont. img2dataset: Easily turn large sets of image urls to an image dataset. https://github.com/rom1504/img2dataset, 2021. * Bevendorff et al. [2018] Janek Bevendorff, Benno Stein, Matthias Hagen, and Martin Potthast. Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. 
In Leif Azzopardi, Allan Hanbury, Gabriella Pasi, and Benjamin Piwowarski, editors, _Advances in Information Retrieval. 40th European Conference on IR Research (ECIR 2018)_ , Lecture Notes in Computer Science, Berlin Heidelberg New York, 2018. Springer. * Bevendorff et al. [2021] Janek Bevendorff, Martin Potthast, and Benno Stein. Fastwarc: optimizing large-scale web archive analytics. _arXiv preprint arXiv:2112.03103_ , 2021. * Broder [1997] A.Z. Broder. On the resemblance and containment of documents. In _Proceedings of the Compression and Complexity of Sequences 1997_ , pages 21–29. IEEE Computer Society, 1997. doi: 10.1109/SEQUEN.1997.666900. * Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020. * Bugliarello et al. [2022] Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, and Ivan Vulić. Iglue: A benchmark for transfer learning across modalities, tasks, and languages. In _International Conference on Machine Learning_ , pages 2370–2392. PMLR, 2022. * Burchell et al. [2023] Laurie Burchell, Alexandra Birch, Nikolay Bogoychev, and Kenneth Heafield. An open dataset and model for language identification. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 865–879, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-short.75. URL https://aclanthology.org/2023.acl-short.75. * Byeon et al. [2022] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022. * Carlini et al. 
[2023] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. In _Proceedings of the 32nd USENIX Conference on Security Symposium_ , SEC ’23, USA, 2023. USENIX Association. ISBN 978-1-939133-37-3. * Carlini et al. [2024] Nicolas Carlini, Matthew Jagielski, Christopher Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. Poisoning web-scale training datasets is practical. In _Proceedings of the 2024 IEEE Symposium on Security and Privacy (SP)_ , pages 179–179, Los Alamitos, CA, USA, 2024. IEEE Computer Society. doi: 10.1109/SP54263.2024.00179. URL https://doi.ieeecomputersociety.org/10.1109/SP54263.2024.00179. * Changpinyo et al. [2021] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 3558–3568, Nashville, TN, USA, 2021. * Changpinyo et al. [2022] Soravit Changpinyo, Linting Xue, Idan Szpektor, Ashish V Thapliyal, Julien Amelot, Michal Yarom, Xi Chen, and Radu Soricut. Maxm: Towards multilingual visual question answering. _arXiv preprint arXiv:2209.05401_ , 2022. * Chen et al. [2023a] Xi Chen, Xiao Wang, Lucas Beyer, Alexander Kolesnikov, Jialin Wu, Paul Voigtlaender, Basil Mustafa, Sebastian Goodman, Ibrahim Alabdulmohsin, Piotr Padlewski, et al. Pali-3 vision language models: Smaller, faster, stronger. _arXiv preprint arXiv:2310.09199_ , 2023a. * Chen et al. [2023b] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. In _Proceedings of the International Conference on Learning Representations_ , Kigali, Rwanda, 2023b. 
* Costa-jussà et al. [2022] Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. _arXiv preprint arXiv:2207.04672_ , 2022. * Dodge et al. [2021] Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih, editors, _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1286–1305, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.98. URL https://aclanthology.org/2021.emnlp-main.98. * Elliott et al. [2016] Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. Multi30k: Multilingual english-german image descriptions. In _Proceedings of the 5th Workshop on Vision and Language_ , pages 70–74. Association for Computational Linguistics, 2016. doi: 10.18653/v1/W16-3210. URL http://www.aclweb.org/anthology/W16-3210. * Elliott et al. [2017] Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, and Lucia Specia. Findings of the second shared task on multimodal machine translation and multilingual image description. In _Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers_ , pages 215–233, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W17-4718. * Friedman and Dieng [2023] Dan Friedman and Adji Bousso Dieng. The vendi score: A diversity evaluation metric for machine learning. _Transactions on Machine Learning Research_ , 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=g97OHbQyk1. * Futeral et al. 
## Appendix A Appendix

### A.1 mOSCAR languages & statistics

Lang. name | Code | Family | Script | #documents | #images | #tokens
---|---|---|---|---|---|---
Acehnese | ace_Latn | Austronesian | Latin | 7,803 | 32,461 | 2,889,134
Mesopotamian Arabic | acm_Arab | Afro-Asiatic | Arabic | 2,274 | 10,620 | 1,047,748
Tunisian Arabic | aeb_Arab | Afro-Asiatic | Arabic | 7,640 | 41,570 | 2,715,187
Afrikaans | afr_Latn | Indo-European | Latin | 54,895 | 247,774 | 39,956,585
South Levantine Arabic | ajp_Arab | Afro-Asiatic | Arabic | 12,098 | 87,837 | 5,167,813
Tosk Albanian | als_Latn | Indo-European | Latin | 861,678 | 2,569,164 | 452,737,251
Amharic | amh_Ethi | Afro-Asiatic | Ge‘ez | 39,588 | 152,646 | 35,089,019
North Levantine Arabic | apc_Arab | Afro-Asiatic | Arabic | 19,904 | 128,966 | 9,560,701
Modern Standard Arabic | arb_Arab | Afro-Asiatic | Arabic | 3,936,851 | 15,126,931 | 3,401,919,964
Najdi Arabic | ars_Arab | Afro-Asiatic | Arabic | 60,229 | 296,741 | 43,610,873
Moroccan Arabic | ary_Arab | Afro-Asiatic | Arabic | 142,386 | 698,051 | 204,723,454
Egyptian Arabic | arz_Arab | Afro-Asiatic | Arabic | 835,529 | 4,054,632 | 653,626,387
Assamese | asm_Beng | Indo-European | Bengali | 3,948 | 9,210 | 640,390
Asturian | ast_Latn | Indo-European | Latin | 165,745 | 962,723 | 37,547,944
Awadhi | awa_Deva | Indo-European | Devanagari | 29,324 | 107,483 | 4,961,635
Central Aymara | ayr_Latn | Aymaran | Latin | 27,384 | 151,889 | 5,148,970
South Azerbaijani | azb_Arab | Turkic | Arabic | 8,274 | 38,233 | 5,256,693
North Azerbaijani | azj_Latn | Turkic | Latin | 516,021 | 1,808,060 | 257,825,849
Bashkir | bak_Cyrl | Turkic | Cyrillic | 4,532 | 17,174 | 3,038,766
Bambara | bam_Latn | Manding | Latin | 7,674 | 39,190 | 1,243,332
Balinese | ban_Latn | Austronesian | Latin | 1,886 | 11,266 | 542,015
Belarusian | bel_Cyrl | Indo-European | Cyrillic | 63,309 | 287,539 | 72,976,520
Bemba | bem_Latn | Atlantic-Congo | Latin | 1,096 | 7,479 | 1,340,471
Bengali | ben_Beng | Indo-European | Bengali | 270,406 | 947,035 | 35,858,814
Bhojpuri | bho_Deva | Indo-European | Devanagari | 6,366 | 28,131 | 875,463
Banjar | bjn_Latn | Austronesian | Latin | 5,427 | 27,803 | 1,898,526
Bosnian | bos_Latn | Indo-European | Latin | 1,960,599 | 7,633,049 | 1,255,000,505
Buginese | bug_Latn | Austronesian | Latin | 3,312 | 18,648 | 588,678
Bulgarian | bul_Cyrl | Indo-European | Cyrillic | 2,591,998 | 11,670,028 | 1,760,971,620
Catalan | cat_Latn | Indo-European | Latin | 1,153,864 | 4,736,634 | 606,447,390
Cebuano | ceb_Latn | Austronesian | Latin | 16,990 | 91,234 | 10,748,818
Czech | ces_Latn | Indo-European | Latin | 3,918,837 | 13,291,309 | 2,823,172,996
Central Kurdish | ckb_Arab | Indo-European | Arabic | 36,725 | 136,566 | 22,322,689
Crimean Tatar | crh_Latn | Turkic | Latin | 6,376 | 24,124 | 1,742,727
Welsh | cym_Latn | Indo-European | Latin | 40,408 | 165,897 | 27,748,345
Danish | dan_Latn | Indo-European | Latin | 2,076,298 | 9,559,600 | 1,238,277,499
German | deu_Latn | Indo-European | Latin | 20,662,696 | 87,976,200 | 8,544,986,218
Southwestern Dinka | dik_Latn | Nilo-Saharan | Latin | 1,712 | 6,635 | 1,319,943
Greek | ell_Grek | Indo-European | Greek | 4,916,081 | 15,209,058 | 2,923,201,041
English | eng_Latn | Indo-European | Latin | 52,215,013 | 207,904,315 | 33,570,108,782
Esperanto | epo_Latn | Artificial | Latin | 25,157 | 124,996 | 28,586,195
Estonian | est_Latn | Uralic | Latin | 1,040,368 | 5,217,366 | 619,215,048
Basque | eus_Latn | Isolate | Latin | 849,043 | 3,445,539 | 277,145,498
Faroese | fao_Latn | Indo-European | Latin | 15,411 | 60,340 | 6,691,327
Fijian | fij_Latn | Austronesian | Latin | 1,528 | 8,776 | 487,388
Finnish | fin_Latn | Uralic | Latin | 2,396,033 | 10,365,333 | 1,781,044,864
French | fra_Latn | Indo-European | Latin | 20,305,739 | 78,179,601 | 14,362,579,829
Friulian | fur_Latn | Indo-European | Latin | 37,290 | 256,456 | 5,949,600
Nigerian Fulfulde | fuv_Latn | Atlantic-Congo | Latin | 1,568 | 7,124 | 401,852
West Central Oromo | gaz_Latn | Afro-Asiatic | Latin | 4,058 | 11,763 | 1,786,093
Scottish Gaelic | gla_Latn | Indo-European | Latin | 29,710 | 153,249 | 14,605,090
Irish | gle_Latn | Indo-European | Latin | 68,858 | 315,132 | 47,438,400
Galician | glg_Latn | Indo-European | Latin | 518,973 | 2,381,475 | 217,063,180
Guarani | grn_Latn | Tupian | Latin | 490,945 | 2,416,633 | 89,921,114
Gujarati | guj_Gujr | Indo-European | Gujarati | 23,062 | 91,320 | 3,324,866
Haitian Creole | hat_Latn | Indo-European | Latin | 257,745 | 1,570,699 | 62,847,106
Hausa | hau_Latn | Afro-Asiatic | Latin | 25,364 | 104,934 | 13,089,932
Hebrew | heb_Hebr | Afro-Asiatic | Hebrew | 1,109,591 | 4,766,483 | 893,327,320
Hindi | hin_Deva | Indo-European | Devanagari | 579,430 | 1,830,667 | 122,558,353
Chhattisgarhi | hne_Deva | Indo-European | Devanagari | 1,581 | 7,263 | 273,174
Croatian | hrv_Latn | Indo-European | Latin | 1,719,617 | 8,425,510 | 1,010,674,096
Hungarian | hun_Latn | Uralic | Latin | 3,534,506 | 15,390,083 | 2,831,715,050
Armenian | hye_Armn | Indo-European | Armenian | 339,962 | 1,141,885 | 205,635,952
Igbo | ibo_Latn | Atlantic-Congo | Latin | 11,529 | 68,049 | 8,701,070
Ilocano | ilo_Latn | Austronesian | Latin | 78,872 | 523,195 | 8,116,113
Indonesian | ind_Latn | Austronesian | Latin | 7,016,291 | 17,324,777 | 3,981,843,468
Icelandic | isl_Latn | Indo-European | Latin | 244,676 | 1,027,465 | 137,015,973
Italian | ita_Latn | Indo-European | Latin | 12,937,153 | 47,476,971 | 8,311,790,842
Javanese | jav_Latn | Austronesian | Latin | 24,785 | 135,583 | 16,908,805
Japanese | jpn_Jpan | Japonic | Kanji | 14,415,292 | 23,893,768 | 8,923,348,944
Kabyle | kab_Latn | Afro-Asiatic | Latin | 18,508 | 106,730 | 4,079,553
Kannada | kan_Knda | Dravidian | Kannada | 12,978 | 42,621 | 1,442,776
Kashmiri | kas_Arab | Indo-European | Arabic | 3,109 | 11,408 | 5,731,910
Georgian | kat_Geor | Kartvelian | Georgian | 354,436 | 1,304,281 | 275,223,026
Kazakh | kaz_Cyrl | Turkic | Cyrillic | 252,242 | 732,648 | 140,049,214
Halh Mongolian | khk_Cyrl | Mongolic | Cyrillic | 124,412 | 508,217 | 84,535,241
Khmer | khm_Khmr | Austroasiatic | Khmer | 24,495 | 122,243 | 3,043,925
Kinyarwanda | kin_Latn | Atlantic-Congo | Latin | 30,401 | 172,201 | 12,049,616
Kyrgyz | kir_Cyrl | Uralic | Cyrillic | 53,010 | 199,713 | 34,404,281
Northern Kurdish | kmr_Latn | Indo-European | Latin | 39,262 | 164,666 | 23,834,960
Korean | kor_Hang | Koreanic | Hanja | 2,614,089 | 13,563,283 | 2,006,080,705
Lao | lao_Laoo | Kra-Dai | Lao | 50,611 | 208,768 | 31,029,380
Ligurian | lij_Latn | Indo-European | Latin | 8,751 | 56,266 | 2,958,179
Limburgish | lim_Latn | Indo-European | Latin | 189,547 | 1,076,047 | 42,534,327
Lingala | lin_Latn | Atlantic-Congo | Latin | 24,614 | 152,132 | 4,053,459
Lithuanian | lit_Latn | Indo-European | Latin | 1,688,811 | 8,869,443 | 1,161,476,040
Lombard | lmo_Latn | Indo-European | Latin | 30,506 | 151,855 | 9,058,614
Latgalian | ltg_Latn | Indo-European | Latin | 11,948 | 61,624 | 4,148,492
Luxembourgish | ltz_Latn | Indo-European | Latin | 44,987 | 246,346 | 16,676,872
Ganda | lug_Latn | Afro-Asiatic | Latin | 1,878 | 7,215 | 789,917
Mizo | lus_Latn | Sino-Tibetan | Latin | 7,880 | 26,817 | 4,978,472
Standard Latvian | lvs_Latn | Indo-European | Latin | 896,243 | 4,141,648 | 587,653,855
Magahi | mag_Deva | Indo-European | Devanagari | 1,097 | 3,847 | 205,763
Malayalam | mal_Mlym | Dravidian | Malayalam | 14,140 | 52,679 | 1,689,010
Marathi | mar_Deva | Indo-European | Devanagari | 50,391 | 163,868 | 6,689,250
Minangkabau | min_Latn | Austronesian | Latin | 9,341 | 35,309 | 1,256,931
Macedonian | mkd_Cyrl | Indo-European | Cyrillic | 542,250 | 1,853,070 | 307,232,151
Maltese | mlt_Latn | Afro-Asiatic | Latin | 120,888 | 709,242 | 36,097,957
Maori | mri_Latn | Austronesian | Latin | 24,322 | 130,137 | 24,957,914
Burmese | mya_Mymr | Sino-Tibetan | Mon | 8,144 | 44,188 | 539,527
Dutch | nld_Latn | Indo-European | Latin | 17,096,727 | 65,606,013 | 9,670,041,731
Norwegian Nynorsk | nno_Latn | Indo-European | Latin | 199,355 | 1,012,313 | 67,799,774
Norwegian Bokmål | nob_Latn | Indo-European | Latin | 2,229,702 | 9,698,128 | 1,294,178,095
Nepali | npi_Deva | Indo-European | Devanagari | 31,239 | 127,193 | 3,138,539
Nyanja | nya_Latn | Atlantic-Congo | Latin | 12,047 | 67,192 | 8,596,769
Occitan | oci_Latn | Indo-European | Latin | 164,852 | 671,881 | 59,309,549
Odia | ory_Orya | Indo-European | Odia | 4,319 | 15,574 | 378,635
Pangasinan | pag_Latn | Austronesian | Latin | 4,214 | 32,287 | 546,071
Eastern Panjabi | pan_Guru | Indo-European | Gurmukhi | 11,497 | 46,168 | 1,887,991
Papiamento | pap_Latn | Indo-European | Latin | 55,224 | 363,015 | 10,002,655
Southern Pashto | pbt_Arab | Indo-European | Arabic | 32,604 | 110,807 | 29,170,322
Western Persian | pes_Arab | Indo-European | Arabic | 7,048,946 | 25,200,571 | 6,210,479,015
Plateau Malagasy | plt_Latn | Austronesian | Latin | 32,521 | 120,673 | 29,263,848
Polish | pol_Latn | Indo-European | Latin | 14,549,605 | 60,639,244 | 11,104,144,109
Portuguese | por_Latn | Indo-European | Latin | 8,145,664 | 26,530,423 | 4,760,063,083
Dari | prs_Arab | Indo-European | Arabic | 515,041 | 2,589,859 | 517,053,967
Ayacucho Quechua | quy_Latn | Quechuan | Latin | 1,578 | 11,817 | 362,690
Romanian | ron_Latn | Indo-European | Latin | 5,180,171 | 17,964,048 | 3,548,291,261
Rundi | run_Latn | Atlantic-Congo | Latin | 20,001 | 67,096 | 8,686,054
Russian | rus_Cyrl | Indo-European | Cyrillic | 15,913,845 | 69,542,828 | 18,909,213,208
Sango | sag_Latn | Atlantic-Congo | Latin | 2,124 | 13,556 | 454,455
Sicilian | scn_Latn | Indo-European | Latin | 73,199 | 424,362 | 27,110,743
Sinhala | sin_Sinh | Indo-European | Sinhalese | 58,767 | 221,183 | 14,270,972
Slovak | slk_Latn | Indo-European | Latin | 3,008,599 | 15,067,234 | 1,963,804,563
Slovenian | slv_Latn | Indo-European | Latin | 1,472,025 | 7,210,285 | 935,834,754
Samoan | smo_Latn | Austronesian | Latin | 12,346 | 71,359 | 14,954,824
Shona | sna_Latn | Atlantic-Congo | Latin | 12,698 | 68,782 | 6,112,600
Sindhi | snd_Arab | Indo-European | Arabic | 21,095 | 74,289 | 17,647,825
Somali | som_Latn | Afro-Asiatic | Latin | 77,343 | 301,429 | 34,554,975
Southern Sotho | sot_Latn | Atlantic-Congo | Latin | 7,718 | 43,146 | 6,156,450
Spanish | spa_Latn | Indo-European | Latin | 22,713,366 | 78,361,087 | 14,616,773,475
Sardinian | srd_Latn | Indo-European | Latin | 675,539 | 4,059,493 | 106,159,957
Serbian | srp_Cyrl | Indo-European | Cyrillic | 604,557 | 2,286,171 | 401,223,741
Sundanese | sun_Latn | Austronesian | Latin | 44,310 | 236,025 | 13,627,832
Swedish | swe_Latn | Indo-European | Latin | 3,302,730 | 10,860,518 | 1,779,284,152
Swahili | swh_Latn | Atlantic-Congo | Latin | 137,134 | 593,418 | 59,454,896
Silesian | szl_Latn | Indo-European | Latin | 23,535 | 132,459 | 5,996,972
Tamil | tam_Taml | Dravidian | Tamil | 36,196 | 167,669 | 4,834,946
Tatar | tat_Cyrl | Turkic | Cyrillic | 37,188 | 143,842 | 22,831,350
Telugu | tel_Telu | Dravidian | Telugu | 22,974 | 81,033 | 2,273,772
Tajik | tgk_Cyrl | Turkic | Cyrillic | 125,236 | 417,591 | 90,503,778
Tagalog | tgl_Latn | Austronesian | Latin | 151,437 | 673,814 | 97,708,639
Thai | tha_Thai | Kra-Dai | Thai | 2,983,837 | 11,621,786 | 2,839,211,104
Tigrinya | tir_Ethi | Afro-Asiatic | Ge‘ez | 2,657 | 8,707 | 1,725,422
Tok Pisin | tpi_Latn | Indo-European | Latin | 5,063 | 35,169 | 460,853
Turkmen | tuk_Latn | Turkic | Latin | 13,024 | 57,354 | 9,766,999
Turkish | tur_Latn | Turkic | Latin | 4,478,700 | 12,401,091 | 2,394,669,068
Twi | twi_Latn | Atlantic-Congo | Latin | 3,305 | 13,634 | 495,220
Uyghur | uig_Arab | Turkic | Arabic | 10,713 | 41,709 | 6,785,318
Ukrainian | ukr_Cyrl | Indo-European | Cyrillic | 2,721,424 | |
10,929,796 | 1,928,351,595 Urdu | urd_Arab | Indo-European | Arabic | | 407,098 | 1,239,125 | 242,007,283 Northern Uzbek | uzn_Latn | Turkic | Latin | | 156,632 | 798,155 | 89,022,562 Venetian | vec_Latn | Indo-European | Latin | | 330,611 | 1,830,777 | 71,077,531 Vietnamese | vie_Latn | Viet-Muong | Latin | | 12,621,521 | 47,411,488 | 11,616,191,199 Wolof | wol_Latn | Atlantic-Congo | Latin | | 4,658 | 20,380 | 1,596,432 Xhosa | xho_Latn | Atlantic-Congo | Latin | | 25,950 | 142,387 | 15,809,823 Eastern Yiddish | ydd_Hebr | Indo-European | Hebrew | | 12,486 | 57,510 | 17,369,727 Yoruba | yor_Latn | Atlantic-Congo | Latin | | 56,700 | 286,933 | 32,614,558 Yue Chinese | yue_Hant | Sino-Tibetan | Hant | | 33,671 | 203,513 | 24,172,441 Chinese (Simplified) | zho_Hans | Sino-Tibetan | Hanzi | | 9,861,262 | 36,152,754 | 8,078,842,701 Chinese (Traditional) | zho_Hant | Sino-Tibetan | Hant | | 3,967,966 | 16,307,258 | 2,962,854,441 Standard Malay | zsm_Latn | Austronesian | Latin | | 1,179,744 | 5,488,632 | 432,667,199 Zulu | zul_Latn | Atlantic-Congo | Latin | | 30,717 | 156,639 | 11,345,288 Table 6: Languages & Statistics ### A.2 Heuristics to increase the quality of documents We use a set of heuristics to improve the quality of the documents by discarding some text nodes. We first consider text nodes to be written in Latin scripts if more than 50% of the characters are Latin. In detail, we discard the text node if: 1. 1. It is empty. 2. 2. It contains fewer than 5 bytes for Latin scripts and fewer than 15 bytes for non-Latin scripts. 3. 3. More than 30% of the characters are digits. 4. 4. It contains more than one date. 5. 5. It contains the sequence “lorem ipsum”. 6. 6. The ratio of non-alphabetic characters is superior to 0.33. 7. 7. The symbols ‘{’ or ’‘}’ are in the text. 8. 8. The symbols ‘$\geq$’, ‘$\leq$’, ‘>’ or ‘<’ are more than 2 times in the text. 9. 9. “Follow us”, “javascript”, “copyright” or “©” are in the text. 10. 10. 
The ratio of capitalized letters is superior to 0.2. 11. 11. The text exactly matches with “comment”, “facebook”, “instagram”, “twitter”, “rss”, “newsletter”, “share” or “follow us”. 12. 12. A character is more than 33% of the total number of characters in the string. We then also apply some filters to clean the text as much as possible: 1. 1. Remove URLs from all documents. 2. 2. Normalize consecutive special characters (‘\t’, ‘\n’, ‘#’, ‘/’, ‘$’, ‘)’, ‘(’, ‘[’, ‘]’, ‘!’, ‘?’, ‘%’, ‘<’, ‘>’) to keep only one. Following previous steps, we keep the text node if it is superior to 5 bytes and we keep the final document if it is superior to 100 bytes. ### A.3 Examples of documents Figure 5: Example of a French document. Figure 6: Example of a Japanese document. Figure 7: Example of a Russian document. Figure 8: Example of an Italian document. Figure 9: Example of a Khmer document. Figure 10: Example of an Urdu document. ### A.4 Text-Image similarity and DOM Tree As we rely on the DOM Tree to build the documents and the order of appearance of the nodes could differ from HTML rendering, we attempt to assess to what extent it is a relevant way of constructing a multimodal document. To do so, we rely on the results of the text-image joint filtering step where we compute the ranks of relevant text nodes (resp images) for each image. We plot the distribution of the closest most relevant node for each modality in Figures 11(a) and 11(b). We notice that the most relevant node to either a text node or an image is their closest node in the DOM tree. The cumulative distribution function of the distribution of the closest node reaches 25% for nodes positioned between -5 and 5, which confirms the relevance of using the DOM tree to represent a document. (a) Relative position in the document of relevant text nodes with respect to images. (b) Relative position in the document of relevant images with respect to text nodes. 
Figure 11: Relative positions of the most relevant images and text nodes with respect to the other modality.

### A.5 Implementation details

#### A.5.1 Text deduplication parameters

Following previous work, we near-deduplicate documents using MinHashLSH. We first vectorize the documents using HashingVectorizer from scikit-learn with 2,097,152 features computed on 4-grams and 5-grams within word boundaries. We then compute MinHashes from those vectors with 256 permutations, and finally run Locality Sensitive Hashing with a Jaccard similarity threshold of 0.8 to find near-duplicates.

#### A.5.2 Training implementation details

We train multilingual OpenFlamingo on mOSCAR and multilingual text-image pairs. We use a batch size of 64 for mOSCAR and 128 for captioning data, limiting the number of tokens to 256 for mOSCAR and 32 for captioning data. As in Flamingo and OpenFlamingo, text tokens can only attend to the previous image in the sequence. To increase diversity within a training batch, we randomly reject 2/3 of the documents that contain only one image. We limit the maximum number of images in a sequence to 8. We randomly sample 8 languages per batch and upsample low-resource languages. We train multilingual OpenFlamingo on 43 languages, covering all the languages of the benchmarks we evaluate the models on (see Section A.5.3). We use Gemma-2B as the underlying language model behind multilingual OpenFlamingo and CLIP ViT-L-14 as the image encoder. We add a cross-attention layer after each decoder layer. Following OpenFlamingo, we add the two special tokens <image> and <|endofchunk|>, whose embeddings are trained. Only the Perceiver Resampler, the cross-attention layers and these two embeddings are trained; everything else remains frozen. During training, we apply a factor of 0.2 to the captioning-data loss. We train the model using the Adam optimizer and a maximum learning rate of 1e-4.
We use a constant learning rate scheduler with 1,875 warm-up steps. We use 4 gradient accumulation steps to reach an effective batch size of 256 for mOSCAR and 512 for captioning data. We train the model on 35M documents and 70M image-text pairs on 8 Nvidia A100 GPUs for 120 hours.

#### A.5.3 Evaluation details

| Benchmark | Metric | #examples | Languages |
|---|---|---|---|
| xFlickr&CO | CIDEr | 2,000 | Chinese, English, German, Indonesian, Japanese, Russian, Spanish, Turkish |
| XM3600 | CIDEr | 3,600 | Arabic, Czech, Danish, German, Greek, English, Spanish, Farsi, Finnish, French, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Dutch, Norwegian, Polish, Portuguese, Romanian, Russian, Swedish, Telugu, Thai, Turkish, Ukrainian, Vietnamese, Chinese |
| xGQA | Accuracy | 9,666 | Bengali, German, English, Indonesian, Korean, Portuguese, Russian, Chinese |
| MaXM | Accuracy | $\sim$170 | English, French, Hindi, Hebrew, Romanian, Thai, Chinese |
| MaRVL | Accuracy | $\sim$1,150 | Indonesian, Swahili, Tamil, Turkish, Chinese |
| XVNLI | Accuracy | 1,164 | English, Arabic, Spanish, French, Russian |
| Multi30k | BLEU | 1,000 | French, German, Czech |
| CoMMuTE | Accuracy | 310 | Czech, French, German |

Table 7: Overview of the benchmarks used to evaluate our multilingual OpenFlamingo.

We evaluate on a set of eight benchmarks: xFlickr&CO, XM3600, xGQA, MaXM, MaRVL, XVNLI, Multi30k (Test2016 subset) and CoMMuTE, covering 5 different tasks and 43 languages. Details about the languages, the number of examples and the metric used can be found in Table 7. We used the translate-test samples (benchmarks automatically translated into English) provided by the authors of the benchmarks when available. No translate-test samples were provided for MaXM, so we translated its test set using the NLLB-600M distilled model. As no training set was available for MaXM, we use the few-shot examples from xGQA.
Since we use Stanza tokenizers, we could not evaluate on all languages from XM3600, as 3 of them were not available. Filipino was also not in the list of mOSCAR languages, so we skip this language during evaluation. The CoMMuTE evaluation set involves choosing between two different translations of the same source text (one correct and one incorrect, depending on an image provided to disambiguate the text). We take the translation with the lower perplexity as the model's prediction. We also use the Multi30k training set as few-shot examples.

##### Prompting

Following previous works, the zero-shot setting is composed of two few-shot examples given without their images. The prompts we use for the different tasks are as follows (shown with one context example):

- For captioning tasks, we use the prompt “<image>Output:[Caption]<|endofchunk|><image>Output:”, where [Caption] is replaced by the caption.
- For visual question answering tasks, we use the prompt “<image>Question: [Question] Short Answer: [Answer] <|endofchunk|><image>Question: [Question] Short Answer:”, where [Question] and [Answer] are replaced by the question and the answer respectively.
- For multimodal machine translation tasks, we use the prompt “<image>Sentence:‘[Caption]’. Translation: [Translation] <|endofchunk|><image>Output:”, where [Caption] is replaced by the sentence to translate and [Translation] is replaced by its translation.
- For MaRVL, we use the prompt “<image> ‘[Statement]’. True or False? [Answer]<|endofchunk|><image> ‘[Statement]’. True or False?”, where [Statement] is replaced by the statement and [Answer] by the answer. We also concatenate the left and right images into a single image.
- For XVNLI, we use the prompt “<image> ‘[Statement1]’ - ‘[Statement2]’. entailment, neutral or contradiction? Output: [Answer]<|endofchunk|><image> ‘[Statement1]’ - ‘[Statement2]’. entailment, neutral or contradiction?
Output:”, where [Statement1], [Statement2] and [Answer] are replaced by XVNLI test data.

### A.6 Detailed results

| Model | #shots | De | En | Es | Id | Ja | Ru | Tr | Zh |
|---|---|---|---|---|---|---|---|---|---|
| Multilingual OF (mOSCAR + caps.) | 0 | 28.87 | 33.42 | 16.35 | 33.26 | 04.33 | 13.99 | 03.68 | 18.65 |
| | 4 | 53.99 | 46.53 | 35.87 | 48.85 | 12.44 | 31.88 | 11.35 | 33.64 |
| | 8 | 56.05 | 52.99 | 37.28 | 52.51 | 15.13 | 32.76 | 11.69 | 35.78 |
| | 16 | 55.44 | 56.78 | 37.72 | 49.06 | 17.10 | 35.43 | 12.72 | 36.82 |
| Multilingual OF (captions only) | 0 | 16.72 | 24.57 | 03.80 | 10.82 | 02.82 | 08.20 | 02.79 | 06.82 |
| | 4 | 21.10 | 31.05 | 07.52 | 09.63 | 03.84 | 13.21 | 07.01 | 12.20 |
| | 8 | 32.56 | 35.73 | 13.35 | 15.85 | 05.96 | 18.13 | 06.97 | 15.47 |
| | 16 | 29.86 | 40.57 | 13.75 | 23.83 | 06.92 | 20.40 | 07.90 | 15.73 |

Table 8: Captioning results (CIDEr scores) on xFlickr&CO. Bold is best result.

| Model | #shots | Ar | Cs | Da | De | El | En | Es | Fa | Fi | Fr | He |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multi. OF (full) | 0 | 06.18 | 02.02 | 11.06 | 09.12 | 00.90 | 43.49 | 18.01 | 10.41 | 01.30 | 20.61 | 03.21 |
| | 4 | 21.55 | 05.93 | 28.21 | 21.97 | 03.38 | 74.71 | 36.93 | 26.27 | 06.48 | 39.41 | 10.34 |
| | 8 | 23.15 | 06.23 | 31.33 | 24.53 | 03.47 | 75.29 | 37.58 | 28.84 | 07.65 | 43.40 | 10.83 |
| | 16 | 23.80 | 06.86 | 31.92 | 24.92 | 03.59 | 76.24 | 38.39 | 26.78 | 07.61 | 43.26 | 11.78 |
| Multi. OF (caps only) | 0 | 02.24 | 00.97 | 06.42 | 06.46 | 03.68 | 10.02 | 09.32 | 04.95 | 01.14 | 16.15 | 00.78 |
| | 4 | 05.36 | 01.36 | 13.11 | 11.82 | 07.78 | 35.52 | 19.96 | 09.62 | 01.86 | 22.48 | 02.29 |
| | 8 | 06.76 | 01.40 | 15.29 | 14.39 | 07.21 | 37.28 | 21.90 | 12.19 | 02.08 | 23.27 | 01.71 |
| | 16 | 06.25 | 02.29 | 17.96 | 15.11 | 07.64 | 48.03 | 25.39 | 09.21 | 02.10 | 30.16 | 02.72 |

| Model | #shots | Hi | Hr | Hu | Id | It | Ja | Ko | Nl | No | Pl | Pt |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multi. OF (full) | 0 | 02.80 | 01.47 | 01.85 | 09.98 | 11.15 | 02.07 | 01.67 | 18.97 | 09.63 | 04.32 | 15.49 |
| | 4 | 10.62 | 07.48 | 05.51 | 22.63 | 27.88 | 16.87 | 09.24 | 42.30 | 22.35 | 14.20 | 29.35 |
| | 8 | 11.12 | 07.54 | 05.91 | 25.39 | 29.34 | 19.35 | 09.99 | 46.79 | 23.54 | 15.43 | 32.69 |
| | 16 | 12.18 | 07.71 | 06.03 | 23.89 | 29.17 | 18.84 | 09.75 | 46.95 | 23.79 | 15.93 | 31.98 |
| Multi. OF (caps only) | 0 | 02.29 | 00.97 | 03.51 | 02.98 | 07.96 | 01.85 | 01.05 | 04.88 | 05.78 | 00.92 | 09.79 |
| | 4 | 04.57 | 01.72 | 07.57 | 06.39 | 16.23 | 03.47 | 04.33 | 11.26 | 11.99 | 01.16 | 15.93 |
| | 8 | 05.94 | 02.17 | 07.83 | 09.93 | 15.40 | 07.93 | 05.34 | 11.87 | 13.79 | 01.38 | 17.50 |
| | 16 | 06.36 | 02.42 | 09.55 | 11.77 | 17.43 | 10.44 | 06.03 | 12.98 | 14.65 | 01.28 | 20.32 |

| Model | #shots | Ro | Ru | Sv | Te | Th | Tr | Uk | Vi | Zh |
|---|---|---|---|---|---|---|---|---|---|---|
| Multi. OF (full) | 0 | 02.19 | 06.80 | 11.45 | 00.87 | 08.36 | 03.11 | 03.04 | 21.99 | 07.19 |
| | 4 | 05.63 | 20.50 | 25.59 | 02.29 | 21.66 | 10.87 | 10.43 | 38.43 | 19.34 |
| | 8 | 06.04 | 23.11 | 26.44 | 03.06 | 25.19 | 12.95 | 10.56 | 39.75 | 20.18 |
| | 16 | 06.43 | 21.15 | 26.51 | 03.42 | 25.31 | 13.57 | 10.45 | 40.85 | 20.46 |
| Multi. OF (caps only) | 0 | 02.24 | 01.93 | 04.55 | 00.67 | 02.34 | 02.68 | 00.80 | 08.55 | 02.70 |
| | 4 | 05.35 | 06.29 | 15.66 | 00.77 | 07.21 | 05.94 | 01.76 | 20.69 | 07.80 |
| | 8 | 05.18 | 07.58 | 14.01 | 01.00 | 06.81 | 08.90 | 02.73 | 23.05 | 08.99 |
| | 16 | 05.06 | 09.06 | 20.60 | 01.18 | 08.35 | 10.25 | 03.47 | 25.16 | 11.05 |

Table 9: Captioning results (CIDEr scores) on XM3600. Bold is best result.

| Model | #shots | Bn | De | En | Id | Ko | Pt | Ru | Zh |
|---|---|---|---|---|---|---|---|---|---|
| Multilingual OF (mOSCAR + caps.) | 0 | 20.99 | 22.57 | 32.60 | 22.45 | 26.44 | 23.54 | 25.31 | 26.77 |
| | 4 | 25.72 | 32.71 | 38.02 | 30.74 | 31.49 | 31.53 | 31.75 | 33.26 |
| | 8 | 26.87 | 35.60 | 39.16 | 34.25 | 32.25 | 34.93 | 33.59 | 34.55 |
| | 16 | 29.11 | 37.01 | 40.64 | 36.37 | 34.20 | 37.15 | 34.87 | 36.29 |
| Multilingual OF (captions only) | 0 | 10.54 | 06.51 | 10.43 | 07.74 | 07.50 | 07.79 | 08.62 | 09.84 |
| | 4 | 12.54 | 11.90 | 15.78 | 13.95 | 13.70 | 12.01 | 12.73 | 15.03 |
| | 8 | 11.62 | 11.70 | 17.29 | 13.86 | 12.85 | 11.60 | 12.65 | 15.35 |
| | 16 | 09.77 | 11.86 | 18.37 | 13.24 | 12.48 | 11.25 | 11.24 | 14.33 |
| OF-3B MPT (translate test) | 0 | 18.64 | 18.67 | - | 18.36 | 17.54 | 19.21 | 18.88 | 17.11 |
| | 4 | 23.23 | 23.40 | - | 22.95 | 22.46 | 23.52 | 22.41 | 22.85 |
| | 8 | 28.22 | 29.44 | - | 28.21 | 27.67 | 29.58 | 28.21 | 28.63 |
| | 16 | 31.31 | 32.58 | - | 31.82 | 31.42 | 32.74 | 31.62 | 31.22 |
| Multilingual OF (mOSCAR + caps., translate test) | 0 | 30.41 | 32.1 | - | 29.35 | 29.99 | 31.39 | 29.06 | 28.81 |
| | 4 | 34.89 | 36.32 | - | 35.50 | 35.64 | 36.84 | 35.05 | 34.60 |
| | 8 | 35.95 | 37.65 | - | 36.78 | 37.14 | 37.81 | 36.17 | 35.98 |
| | 16 | 36.78 | 38.78 | - | 37.52 | 37.73 | 38.68 | 37.91 | 36.84 |

Table 10: VQA results on xGQA. Bold is best result.

| Model | #shots | En | Fr | Hi | He | Ro | Th | Zh |
|---|---|---|---|---|---|---|---|---|
| Multi. OF (mOSCAR + caps) | 0 | 34.24 | 15.91 | 18.08 | 16.43 | 13.73 | 25.75 | 13.36 |
| | 4 | 36.19 | 21.21 | 20.77 | 19.29 | 19.01 | 31.72 | 19.13 |
| | 8 | 36.96 | 17.80 | 20.00 | 20.36 | 17.96 | 33.96 | 23.83 |
| | 16 | 36.19 | 18.18 | 21.92 | 20.71 | 16.90 | 41.04 | 22.74 |
| Multi. OF (captions only) | 0 | 09.73 | 00.38 | 07.69 | 01.43 | 00.00 | 05.22 | 03.61 |
| | 4 | 09.34 | 02.65 | 05.00 | 02.50 | 00.00 | 05.60 | 03.97 |
| | 8 | 09.34 | 01.89 | 08.08 | 05.00 | 01.06 | 03.36 | 05.42 |
| | 16 | 08.56 | 01.14 | 05.00 | 08.21 | 00.35 | 03.36 | 07.58 |
| OF-3B MPT (translate test) | 0 | - | 12.50 | 22.31 | 00.36 | 10.92 | 00.00 | 00.00 |
| | 4 | - | 10.98 | 25.38 | 00.36 | 10.21 | 00.00 | 00.00 |
| | 8 | - | 10.98 | 27.31 | 00.36 | 11.27 | 00.00 | 00.00 |
| | 16 | - | 13.26 | 26.54 | 01.07 | 13.38 | 00.00 | 00.00 |
| Multi. OF (mOSCAR + caps, translate test) | 0 | - | 18.18 | 28.08 | 00.00 | 13.73 | 00.00 | 00.36 |
| | 4 | - | 15.91 | 30.38 | 00.36 | 12.68 | 00.00 | 00.00 |
| | 8 | - | 15.15 | 30.77 | 00.00 | 14.79 | 00.00 | 00.00 |
| | 16 | - | 15.91 | 35.77 | 00.36 | 16.90 | 00.00 | 00.00 |

Table 11: VQA results on MaXM. Bold is best result.

| Model | #shots | Id | Sw | Ta | Tr | Zh |
|---|---|---|---|---|---|---|
| Random chance | | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 |
| Multilingual OF (mOSCAR + caps) | 0 | 50.00 | 49.46 | 49.76 | 49.83 | 49.80 |
| | 4 | 49.91 | 49.55 | 49.28 | 49.83 | 49.80 |
| | 8 | 50.62 | 48.65 | 49.68 | 49.83 | 49.80 |
| | 16 | 50.18 | 49.01 | 49.76 | 49.83 | 49.90 |
| Multilingual OF (captions only) | 0 | 51.33 | 49.01 | 49.52 | 49.83 | 49.70 |
| | 4 | 49.73 | 49.64 | 49.19 | 49.41 | 49.70 |
| | 8 | 49.91 | 49.10 | 49.60 | 49.75 | 49.90 |
| | 16 | 50.09 | 49.73 | 49.60 | 49.75 | 49.80 |
| OF-3B MPT (translate test) | 0 | 50.00 | 49.37 | 49.76 | 49.83 | 49.80 |
| | 4 | 50.00 | 49.64 | 49.52 | 49.75 | 49.60 |
| | 8 | 49.82 | 49.46 | 49.28 | 50.08 | 49.90 |
| | 16 | 50.00 | 49.37 | 49.44 | 50.00 | 49.80 |
| Multilingual OF (mOSCAR + caps, translate test) | 0 | 49.07 | 49.79 | 49.52 | 50.34 | 49.60 |
| | 4 | 49.99 | 49.79 | 48.23 | 49.75 | 49.76 |
| | 8 | 50.00 | 48.92 | 50.64 | 50.42 | 48.90 |
| | 16 | 49.84 | 50.00 | 50.24 | 48.90 | 49.75 |

Table 12: Classification results on MaRVL. Bold is best result.

| Model | #shots | Ar | En | Es | Fr | Ru |
|---|---|---|---|---|---|---|
| Random chance | | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 |
| Multilingual OF (mOSCAR + caps.) | 0 | 32.90 | 34.02 | 31.44 | 33.85 | 32.82 |
| | 4 | 36.94 | 36.17 | 34.19 | 34.54 | 38.49 |
| | 8 | 37.80 | 39.86 | 34.97 | 35.74 | 37.46 |
| | 16 | 34.97 | 38.92 | 32.30 | 34.97 | 35.65 |
| Multilingual OF (captions only) | 0 | 35.48 | 34.02 | 33.51 | 34.45 | 31.36 |
| | 4 | 32.04 | 31.79 | 32.73 | 32.22 | 31.44 |
| | 8 | 34.02 | 33.76 | 32.04 | 35.57 | 33.16 |
| | 16 | 32.04 | 32.99 | 33.76 | 33.17 | 31.53 |
| OF-3B MPT (translate test) | 0 | 32.65 | - | 31.01 | 31.44 | 35.82 |
| | 4 | 36.25 | - | 35.82 | 35.57 | 35.65 |
| | 8 | 31.27 | - | 31.10 | 31.10 | 31.70 |
| | 16 | 33.68 | - | 33.25 | 32.99 | 33.25 |
| Multilingual OF (mOSCAR + caps., translate test) | 0 | 34.88 | - | 34.88 | 34.54 | 34.36 |
| | 4 | 36.25 | - | 36.17 | 35.91 | 36.08 |
| | 8 | 39.60 | - | 39.52 | 40.29 | 39.35 |
| | 16 | 37.54 | - | 37.89 | 37.46 | 39.00 |

Table 13: Classification results on XVNLI. Bold is best result.

| Model | #shots | Cs | De | Fr |
|---|---|---|---|---|
| Multi. OF (full) | 0 | 3.22 | 28.86 | 36.01 |
| | 4 | 3.16 | 28.99 | 36.22 |
| | 8 | 3.44 | 28.76 | 37.41 |
| | 16 | 3.73 | 29.19 | 37.53 |
| Multi. OF (caps. only) | 0 | 0.00 | 00.00 | 00.00 |
| | 4 | 0.00 | 00.00 | 00.00 |
| | 8 | 0.00 | 00.00 | 00.03 |
| | 16 | 0.00 | 00.40 | 01.82 |

Table 14: En$\rightarrow$X translation results on Multi30k. Bold is best result.

| Model | #shots | Cs | De | Fr |
|---|---|---|---|---|
| Multi. OF (full) | 0 | 59.09 | 63.67 | 68.51 |
| | 4 | 58.77 | 63.67 | 68.51 |
| | 8 | 56.82 | 63.67 | 68.51 |
| | 16 | 57.79 | 62.67 | 67.86 |
| Multi. OF (caps. only) | 0 | 58.12 | 61.67 | 64.29 |
| | 4 | 59.09 | 61.00 | 63.31 |
| | 8 | 59.09 | 59.34 | 64.29 |
| | 16 | 58.12 | 58.67 | 63.96 |

Table 15: En$\rightarrow$X CoMMuTE results. Bold is best result.
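As an illustration of the near-deduplication step described in Section A.5.1, the sketch below implements MinHash signatures over within-word character 4-/5-grams plus LSH banding from scratch. This is a minimal illustration, not the paper's implementation (which relies on scikit-learn's HashingVectorizer and an off-the-shelf MinHashLSH): the blake2b-seeded hash family, the 32x8 band/row split, and all function names are our own illustrative choices.

```python
import hashlib
import re
from collections import defaultdict

NUM_PERM = 256               # MinHash permutations, as in the text
BANDS, ROWS = 32, 8          # 32 bands x 8 rows = 256 signature slots;
                             # implied LSH threshold ~ (1/BANDS)**(1/ROWS) ≈ 0.65

def char_ngrams(text, sizes=(4, 5)):
    """Character n-grams taken within word boundaries."""
    grams = set()
    for word in re.findall(r"\w+", text.lower()):
        for n in sizes:
            for i in range(len(word) - n + 1):
                grams.add(word[i:i + n])
    return grams

def _hash(gram, seed):
    """Seeded 64-bit hash standing in for one MinHash permutation."""
    salt = seed.to_bytes(2, "big") + b"\x00" * 14   # blake2b salt is 16 bytes
    h = hashlib.blake2b(gram.encode(), digest_size=8, salt=salt)
    return int.from_bytes(h.digest(), "big")

def minhash(grams):
    """256-slot MinHash signature of a set of n-grams."""
    return [min(_hash(g, seed) for g in grams) for seed in range(NUM_PERM)]

def near_duplicate_pairs(docs):
    """Candidate near-duplicate pairs found by LSH banding of signatures."""
    buckets = defaultdict(list)
    for key, text in docs.items():
        sig = minhash(char_ngrams(text))
        for b in range(BANDS):
            buckets[(b, tuple(sig[b * ROWS:(b + 1) * ROWS]))].append(key)
    pairs = set()
    for members in buckets.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add(tuple(sorted((members[i], members[j]))))
    return pairs
```

Documents whose n-gram Jaccard similarity is near the 0.8 threshold used in the paper end up sharing at least one band with very high probability, while dissimilar documents rarely collide.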
# Evolution of the single-mode squeezed vacuum state in an amplitude dissipative channel Hong-Yi Fan1, Shuai Wang1 and Li-Yun Hu2∗ E-mail<EMAIL_ADDRESS>1Department of Physics, Shanghai Jiao Tong University, Shanghai 200240, China 2College of Physics & Communication Electronics, Jiangxi Normal University, Nanchang 330022, China $\ast$Corresponding author. E-mail<EMAIL_ADDRESS>###### Abstract Using the approach of deriving the infinite operator-sum representation of the density operator as a solution to the master equation describing the amplitude dissipative channel, by virtue of the entangled state representation, we show manifestly how the initial density operator of a single-mode squeezed vacuum state evolves into a definite mixed state, which turns out to be a squeezed chaotic state with decreased squeezing. We investigate the average photon number and the photon statistics distribution of this state. ## I Introduction Squeezed states are those for which the noise in one of a chosen pair of observables is reduced below the vacuum or ground-state noise level, at the expense of increased noise in the other observable. The squeezing effect indeed improves interferometric and spectroscopic measurements, so in the context of the interferometric detection of gravitational waves the squeezed state is very useful 01 ; 02 . In a recently published paper, Agarwal r1 revealed that a vortex state of a two-mode system can be generated from a squeezed vacuum by subtracting a photon; such a subtraction mechanism may occur in a quantum channel with amplitude damping. In nature, no system is truly isolated: dissipation or dephasing usually happens when a system is immersed in a thermal environment, or when a signal (a quantum state) passes through a quantum channel, which is described by a master equation 03 . For example, when a pure state propagates in a medium, it inevitably interacts with it and evolves into a mixed state 04 .
Dissipation or dephasing deteriorates the degree of nonclassicality of photon fields, so physicists pay much attention to it 05 ; 06 ; 07 . In the present work we investigate how an initial single-mode squeezed vacuum state evolves in an amplitude dissipative channel (ADC). When a system is described by its interaction with a channel possessing a large number of degrees of freedom, master equations are set up to better understand how quantum decoherence affects the unitary character of the system during dissipation or gain. In most cases one is interested only in the evolution of the variables associated with the system. This requires us to obtain the equations of motion for the system of interest alone, after tracing over the reservoir variables. A quantitative measure of the nonclassicality of quantum fields is necessary for further investigating the system’s dynamical behavior. For this channel, the associated loss mechanism in physical processes is governed by the following master equation 03 $\frac{d\rho\left(t\right)}{dt}=\kappa\left(2a\rho a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a\right),$ (1) where $\rho$ is the density operator of the system and $\kappa$ is the rate of decay. We have solved this problem with use of the thermo entangled state representation 08 . Our questions are: What kind of mixed state does the initial squeezed state turn into? How do the photon statistics distributions vary in the ADC? Solving master equations is thus one of the fundamental tasks in quantum optics. Usually people use various quasi-probability representations, such as the P-representation, the Q-representation, the complex P-representation, and the Wigner function, to convert the master equations of density operators into their corresponding c-number equations.
Recently, a new approach 08 ; 09 , which uses the thermal entangled state representation 10 ; 11 to convert operator master equations into their c-number equations, was presented; in many cases it directly leads to the corresponding Kraus operators (the infinite operator-sum representation of the evolved density operator). The work is arranged as follows. In Sec. 2, by virtue of the entangled state representation, we briefly review our way of deriving the infinite sum representation of the density operator as a solution of the master equation. In Sec. 3 we show that a pure squeezed vacuum state (with squeezing parameter $\lambda$) evolves into a mixed state (the output state), whose exact form is derived; it turns out to be a squeezed chaotic state. We investigate the average photon number and the photon statistics distribution of this state. The probability of finding $n$ photons in this mixed state is obtained, which turns out to be a Legendre polynomial function of the squeezing parameter $\lambda$ and the decay rate $\kappa$. In Sec. 4 we discuss the photon statistics distribution of the output state. In Secs. 5 and 6 we discuss the Wigner function and the tomogram of the output state, respectively. ## II Brief review of deducing the infinite sum representation of $\rho\left(t\right)$ For solving the above master equation, in a recent review paper 12 we introduced a convenient approach in which the two-mode entangled state 10 ; 11 $|\eta\rangle=\exp(-\frac{1}{2}|\eta|^{2}+\eta a^{{\dagger}}-\eta^{\ast}\tilde{a}^{{\dagger}}+a^{{\dagger}}\tilde{a}^{{\dagger}})|0\tilde{0}\rangle,$ (2) is employed, where $\tilde{a}^{{\dagger}}$ is a fictitious mode independent of the real mode $a^{\dagger},$ $[\tilde{a},a^{\dagger}]=0$.
$|\eta=0\rangle$ possesses the properties $a|\eta=0\rangle=\tilde{a}^{{\dagger}}|\eta=0\rangle,\quad a^{{\dagger}}|\eta=0\rangle=\tilde{a}|\eta=0\rangle,\quad(a^{{\dagger}}a)^{n}|\eta=0\rangle=(\tilde{a}^{{\dagger}}\tilde{a})^{n}|\eta=0\rangle.$ (3) Acting with both sides of Eq.(1) on the state $|\eta=0\rangle\equiv\left|I\right\rangle$, and denoting $\left|\rho\right\rangle=\rho\left|I\right\rangle$, we have $\frac{d}{dt}\left|\rho\right\rangle=\kappa\left(2a\rho a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a\right)\left|I\right\rangle=\kappa\left(2a\tilde{a}-a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a}\right)\left|\rho\right\rangle,$ (4) so its formal solution is $\left|\rho\right\rangle=\exp\left[\kappa t\left(2a\tilde{a}-a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a}\right)\right]\left|\rho_{0}\right\rangle,$ (5) where $\left|\rho_{0}\right\rangle\equiv\rho_{0}\left|I\right\rangle$ and $\rho_{0}$ is the initial density operator.
Noticing that the operators in Eq.(5) obey the commutation relations $\left[a\tilde{a},a^{\dagger}a\right]=\left[a\tilde{a},\tilde{a}^{\dagger}\tilde{a}\right]=\tilde{a}a$ (6) and $\left[\frac{a^{\dagger}a+\tilde{a}^{\dagger}\tilde{a}}{2},a\tilde{a}\right]=-\tilde{a}a,$ (7) as well as using the operator identity 13 $e^{\lambda\left(A+\sigma B\right)}=e^{\lambda A}e^{\sigma\left(1-e^{-\lambda\tau}\right)B/\tau},$ (8) (which is valid for $\left[A,B\right]=\tau B$), we have $e^{-2\kappa t\left(\frac{a^{\dagger}a+\tilde{a}^{\dagger}\tilde{a}}{2}-a\tilde{a}\right)}=e^{-\kappa t\left(a^{\dagger}a+\tilde{a}^{\dagger}\tilde{a}\right)}e^{T^{\prime}a\tilde{a}},$ (9) where $T^{\prime}=1-e^{-2\kappa t}.$ Then substituting Eq.(9) into Eq.(5) yields 12 $\displaystyle\left|\rho\right\rangle$ $\displaystyle=e^{-\kappa t\left(a^{\dagger}a+\tilde{a}^{\dagger}\tilde{a}\right)}\sum_{n=0}^{\infty}\frac{T^{\prime n}}{n!}a^{n}\tilde{a}^{n}\left|\rho_{0}\right\rangle$ $\displaystyle=e^{-\kappa ta^{\dagger}a}\sum_{n=0}^{\infty}\frac{T^{\prime n}}{n!}a^{n}\rho_{0}a^{{\dagger}n}e^{-\kappa t\tilde{a}^{\dagger}\tilde{a}}\left|I\right\rangle$ $\displaystyle=\sum_{n=0}^{\infty}\frac{T^{\prime n}}{n!}e^{-\kappa ta^{\dagger}a}a^{n}\rho_{0}a^{{\dagger}n}e^{-\kappa ta^{\dagger}a}\left|I\right\rangle,$ (10) which leads to the infinite operator-sum representation of$\ \rho$, $\rho=\sum_{n=0}^{\infty}M_{n}\rho_{0}M_{n}^{\dagger},$ (11) where $M_{n}\equiv\sqrt{\frac{T^{\prime n}}{n!}}e^{-\kappa ta^{\dagger}a}a^{n}.$ (12) We can prove $\displaystyle\sum_{n}M_{n}^{\dagger}M_{n}$ $\displaystyle=\sum_{n}\frac{T^{\prime n}}{n!}a^{{\dagger}n}e^{-2\kappa ta^{\dagger}a}a^{n}$ $\displaystyle=\sum_{n}\frac{T^{\prime n}}{n!}e^{2n\kappa t}\colon a^{{\dagger}n}a^{n}\colon e^{-2\kappa ta^{\dagger}a}$ $\displaystyle=\left.:e^{T^{\prime}e^{2\kappa t}a^{\dagger}a}:\right.e^{-2\kappa ta^{\dagger}a}$ $\displaystyle=\left.:e^{\left(e^{2\kappa t}-1\right)a^{\dagger}a}:\right.e^{-2\kappa ta^{\dagger}a}=1,$ (13)
where $\colon\colon$ stands for normal ordering. Thus $M_{n}$ is a kind of Kraus operator, and $\rho$ in Eq.(11) is qualified to be a density operator, i.e., $Tr\left[\rho\left(t\right)\right]=Tr\left[\sum_{n=0}^{\infty}M_{n}\rho_{0}M_{n}^{\dagger}\right]=Tr\rho_{0}.$ (14) Therefore, for any given initial state $\rho_{0}$, the density operator $\rho\left(t\right)$ can be directly calculated from Eq.(11). The entangled state representation provides us with an elegant way of deriving the infinite sum representation of the density operator as a solution of the master equation. ## III Evolution of an initial single-mode squeezed vacuum state in the ADC It is seen from Eq.(11) that for any given initial state $\rho_{0}$ the density operator $\rho\left(t\right)$ can be directly calculated. When $\rho_{0}$ is a single-mode squeezed vacuum state, $\rho_{0}=\text{sech}\lambda\exp\left(\frac{\tanh\lambda}{2}a^{{\dagger}2}\right)\left|0\right\rangle\left\langle 0\right|\exp\left(\frac{\tanh\lambda}{2}a^{2}\right),$ (15) we see $\displaystyle\rho\left(t\right)$ $\displaystyle=$ $\displaystyle\text{sech}\lambda\sum_{n=0}^{\infty}\frac{T^{\prime n}}{n!}e^{-\kappa ta^{\dagger}a}a^{n}\exp\left(\frac{\tanh\lambda}{2}a^{{\dagger}2}\right)\left|0\right\rangle$ (16) $\displaystyle\times\left\langle 0\right|\exp\left(\frac{\tanh\lambda}{2}a^{2}\right)a^{{\dagger}n}e^{-\kappa ta^{\dagger}a}.$ Using the Baker-Hausdorff lemma 14 , $e^{\lambda\hat{A}}\hat{B}e^{-\lambda\hat{A}}=\hat{B}+\lambda\left[\hat{A},\hat{B}\right]+\frac{\lambda^{2}}{2!}\left[\hat{A},\left[\hat{A},\hat{B}\right]\right]+\cdots,$ (17) we have $\displaystyle a^{n}\exp\left(\frac{\tanh\lambda}{2}a^{{\dagger}2}\right)\left|0\right\rangle$ $\displaystyle=$ $\displaystyle e^{\frac{\tanh\lambda}{2}a^{{\dagger}2}}e^{-\frac{\tanh\lambda}{2}a^{{\dagger}2}}a^{n}e^{\frac{\tanh\lambda}{2}a^{{\dagger}2}}\left|0\right\rangle$ (18) $\displaystyle=$ $\displaystyle
e^{\frac{\tanh\lambda}{2}a^{{\dagger}2}}\left(a+a^{\dagger}\tanh\lambda\right)^{n}\left|0\right\rangle.$ Further employing the operator identity 15 $\left(\mu a+\nu a^{\dagger}\right)^{m}=\left(-i\sqrt{\frac{\mu\nu}{2}}\right)^{m}\colon H_{m}\left(i\sqrt{\frac{\mu}{2\nu}}a+i\sqrt{\frac{\nu}{2\mu}}a^{\dagger}\right)\colon,$ (19) where $H_{m}(x)$ is the Hermite polynomial, we know $\displaystyle\left(a+a^{\dagger}\tanh\lambda\right)^{n}$ (20) $\displaystyle=$ $\displaystyle\left(-i\sqrt{\frac{\tanh\lambda}{2}}\right)^{n}\colon H_{n}\left(i\sqrt{\frac{1}{2\tanh\lambda}}a+i\sqrt{\frac{\tanh\lambda}{2}}a^{\dagger}\right)\colon.$ From Eq.(18), it follows that $\displaystyle a^{n}e^{\frac{\tanh\lambda}{2}a^{{\dagger}2}}\left|0\right\rangle$ $\displaystyle=$ $\displaystyle\left(-i\sqrt{\frac{\tanh\lambda}{2}}\right)^{n}e^{\frac{\tanh\lambda}{2}a^{{\dagger}2}}$ (21) $\displaystyle\times H_{n}\left(i\sqrt{\frac{\tanh\lambda}{2}}a^{\dagger}\right)\left|0\right\rangle.$ On the other hand, noting $e^{-\kappa ta^{\dagger}a}a^{\dagger}e^{\kappa ta^{\dagger}a}=a^{\dagger}e^{-\kappa t},e^{\kappa ta^{\dagger}a}ae^{-\kappa ta^{\dagger}a}=ae^{-\kappa t}$ and the normally ordered form of the vacuum projector $\left|0\right\rangle\left\langle 0\right|=\colon e^{-a^{\dagger}a}\colon,$ we have $\displaystyle\rho\left(t\right)$ $\displaystyle=\text{sech}\lambda\sum_{n=0}^{\infty}\frac{T^{\prime n}}{n!}e^{-\kappa ta^{\dagger}a}a^{n}e^{\frac{\tanh\lambda}{2}a^{{\dagger}2}}\left|0\right\rangle$ $\displaystyle\times\left\langle 0\right|e^{\frac{\tanh\lambda}{2}a^{2}}a^{{\dagger}n}e^{-\kappa ta^{\dagger}a}$ $\displaystyle=\text{sech}\lambda\sum_{n=0}^{\infty}\frac{\left(T^{\prime}\tanh\lambda\right)^{n}}{2^{n}n!}e^{\frac{e^{-2\kappa t}a^{{\dagger}2}\tanh\lambda}{2}}$ $\displaystyle\times H_{n}\left(i\sqrt{\frac{\tanh\lambda}{2}}a^{\dagger}e^{-\kappa t}\right)\left|0\right\rangle\left\langle 0\right|$ $\displaystyle\times H_{n}\left(-i\sqrt{\frac{\tanh\lambda}{2}}ae^{-\kappa 
t}\right)e^{\frac{e^{-2\kappa t}a^{2}\tanh\lambda}{2}}$ $\displaystyle=\text{sech}\lambda\sum_{n=0}^{\infty}\frac{\left(T^{\prime}\tanh\lambda\right)^{n}}{2^{n}n!}\colon e^{\frac{e^{-2\kappa t}\left(a^{2}+a^{{\dagger}2}\right)\tanh\lambda}{2}-a^{\dagger}a}$ $\displaystyle\times H_{n}\left(i\sqrt{\frac{\tanh\lambda}{2}}a^{\dagger}e^{-\kappa t}\right)H_{n}\left(-i\sqrt{\frac{\tanh\lambda}{2}}ae^{-\kappa t}\right)\colon$ (22) Then, using the identity [16] $\displaystyle\sum_{n=0}^{\infty}\frac{t^{n}}{2^{n}n!}H_{n}\left(x\right)H_{n}\left(y\right)$ (23) $\displaystyle=$ $\displaystyle\left(1-t^{2}\right)^{-1/2}\exp\left[\frac{t^{2}\left(x^{2}+y^{2}\right)-2txy}{t^{2}-1}\right],$ and $e^{\lambda a^{{\dagger}}a}=\colon e^{\left(e^{\lambda}-1\right)a^{{\dagger}}a}\colon,$ we finally obtain the expression of the output state $\rho\left(t\right)=We^{\frac{\text{\ss}}{2}a^{{\dagger}2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}},$ (24) with $T^{\prime}=1-e^{-2\kappa t}$ and $W\equiv\frac{\text{sech}\lambda}{\sqrt{1-T^{\prime 2}\tanh^{2}\lambda}},\text{\ss}\equiv\frac{e^{-2\kappa t}\tanh\lambda}{1-T^{\prime 2}\tanh^{2}\lambda}.$ (25) Comparing Eq.(24) with Eq.(15), one can see that after going through the channel the initial squeezing parameter $\tanh\lambda$ in Eq.(15) becomes ß$\equiv\frac{e^{-2\kappa t}\tanh\lambda}{1-T^{\prime 2}\tanh^{2}\lambda},$ and $\left|0\right\rangle\left\langle 0\right|\rightarrow\frac{1}{\sqrt{1-T^{\prime 2}\tanh^{2}\lambda}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)},$ a chaotic (mixed) state. Since $T^{\prime}>0,$ one can prove $\frac{e^{-2\kappa t}}{1-T^{\prime 2}\tanh^{2}\lambda}<1,$ so the squeezing decreases as the state passes through the channel. When $\kappa t=0$, then $T^{\prime}=0$ and ß $=\tanh\lambda$, so Eq.(24) reduces to the initial squeezed vacuum state, as expected. It is important to check whether Tr$\rho(t)=1$.
Using Eq.(24) and the completeness relation of coherent states $\int\frac{d^{2}z}{\pi}\left|z\right\rangle\left\langle z\right|=1$ as well as the formula [17] $\int\frac{d^{2}z}{\pi}e^{\zeta\left|z\right|^{2}+\xi z+\eta z^{\ast}+fz^{2}+gz^{\ast 2}}=\frac{1}{\sqrt{\zeta^{2}-4fg}}e^{\frac{-\zeta\xi\eta+f\eta^{2}+g\xi^{2}}{\zeta^{2}-4fg}},$ (26) whose convergence condition is Re$\left(\zeta\pm f\pm g\right)<0$ and $\mathtt{Re}\left(\frac{\zeta^{2}-4fg}{\zeta\pm f\pm g}\right)<0$, we indeed find $\displaystyle\text{Tr}\rho\left(t\right)$ $\displaystyle=$ $\displaystyle W\int\frac{d^{2}z}{\pi}\left\langle z\right|e^{\frac{\text{\ss}}{2}a^{{\dagger}2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}}\left|z\right\rangle$ (27) $\displaystyle=$ $\displaystyle\frac{W}{\sqrt{\left(\text{\ss}T^{\prime}\tanh\lambda-1\right)^{2}-\text{\ss}^{2}}}=1,$ so $\rho\left(t\right)$ is a legitimate (mixed) quantum state. Thus an initially pure squeezed vacuum state evolves into a squeezed chaotic state with reduced squeezing after passing through the amplitude dissipative channel. ## IV Average photon number Using the completeness relation of coherent states and the normally ordered form of $\rho\left(t\right)$ in Eq.
(22), and using $e^{\frac{\text{\ss}}{2}a^{2}}a^{\dagger}e^{-\frac{\text{\ss}}{2}a^{2}}=a^{\dagger}+$ß$a$, as well as $e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}a^{\dagger}e^{-a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}$=$a^{\dagger}$ß$T^{\prime}\tanh\lambda,$ we have $\displaystyle\mathtt{Tr}\left(\rho\left(t\right)a^{\dagger}a\right)$ $\displaystyle=W\int\frac{d^{2}z}{\pi}\left\langle z\right|e^{\frac{\text{\ss}}{2}a^{{\dagger}2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}}a^{\dagger}a\left|z\right\rangle$ $\displaystyle=W\int\frac{d^{2}z}{\pi}\left\langle z\right|e^{\frac{\text{\ss}}{2}a^{{\dagger}2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}}za^{\dagger}\left|z\right\rangle$ $\displaystyle=W\int\frac{d^{2}z}{\pi}z\left\langle z\right|e^{\frac{\text{\ss}}{2}a^{{\dagger}2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}\left(a^{\dagger}+\text{\ss}a\right)e^{\frac{\text{\ss}}{2}a^{2}}\left|z\right\rangle$ $\displaystyle=W\text{\ss}\int\frac{d^{2}z}{\pi}ze^{\frac{\text{\ss}}{2}\left(z^{\ast 2}+z^{2}\right)}\left\langle z\right|\left(a^{\dagger}T^{\prime}\tanh\lambda+z\right)e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}\left|z\right\rangle$ $\displaystyle=W\text{\ss}\int\frac{d^{2}z}{\pi}\left(|z|^{2}T^{\prime}\tanh\lambda+z^{2}\right)$ $\displaystyle\times\exp\left[\left(\text{\ss}T^{\prime}\tanh\lambda-1\right)|z|^{2}+\frac{\text{\ss}}{2}\left(z^{\ast 2}+z^{2}\right)\right].$ (28) In order to perform the integration, we reform Eq.(28) as $\displaystyle\mathtt{Tr}\left(\rho\left(t\right)a^{\dagger}a\right)$ $\displaystyle=$ $\displaystyle W\text{\ss}\left\\{T^{\prime}\tanh\lambda\frac{\partial}{\partial f}+\frac{2}{\text{ \ss}}\frac{\partial}{\partial s}\right\\}$ (29) $\displaystyle\times\int\frac{d^{2}z}{\pi}\exp\left[\left(\text{\ss}T^{\prime}\tanh\lambda-1+f\right)|z|^{2}\right.$ 
$\displaystyle+\left.\frac{\text{\ss}}{2}\left(z^{\ast 2}+\left(1+s\right)z^{2}\right)\right]_{f=s=0}$ $\displaystyle=$ $\displaystyle\frac{1-\text{\ss}T^{\prime}\tanh\lambda}{\left(\text{\ss}T^{\prime}\tanh\lambda-1\right)^{2}-\text{\ss}^{2}}-1,$ where in the last step we have used Eq.(27). Using Eq.(29), we present the time evolution of the average photon number in Fig. 1, from which we find that the average photon number of the single-mode squeezed vacuum state in the amplitude damping channel decreases gradually to zero as the decay time increases. Figure 1: (Color online) The average photon number $\bar{n}\left(\kappa t\right)$ as a function of $\kappa t$ for different values of the squeezing parameter $\lambda$ (from bottom to top $\lambda=0,0.1,0.3,0.5,1$). ## V Photon statistics distribution Next, we shall derive the photon statistics distribution of $\rho\left(t\right)$. The photon number distribution is given by $p\left(n,t\right)=\left\langle n\right|\rho\left(t\right)\left|n\right\rangle$. Noticing $a^{{\dagger}m}\left|n\right\rangle=\sqrt{(m+n)!/n!}\left|m+n\right\rangle$ and using the un-normalized coherent state $\left|\alpha\right\rangle=\exp[\alpha a^{{\dagger}}]\left|0\right\rangle$ [18, 19], which leads to $\left|n\right\rangle=\frac{1}{\sqrt{n!}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha^{n}}\left|\alpha\right\rangle\left|{}_{\alpha=0}\right.,$ $\left(\left\langle\beta\right.\left|\alpha\right\rangle=e^{\alpha\beta^{\ast}}\right)$, as well as the normally ordered form of $\rho\left(t\right)$ in Eq.
(22), the probability of finding $n$ photons in the field is given by $\displaystyle p\left(n,t\right)$ (30) $\displaystyle=$ $\displaystyle\left\langle n\right|\rho\left(t\right)\left|n\right\rangle$ $\displaystyle=$ $\displaystyle\frac{W}{n!}\frac{\mathtt{d}^{n}}{\mathtt{d}\beta^{\ast n}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha^{n}}\left.\left\langle\beta\right|e^{\frac{\text{\ss}}{2}\beta^{\ast 2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}\alpha^{2}}\left|\alpha\right\rangle\right|_{\alpha,\beta^{\ast}=0}$ $\displaystyle=$ $\displaystyle\frac{W}{n!}\frac{\mathtt{d}^{n}}{\mathtt{d}\beta^{\ast n}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha^{n}}\left.\exp\left[\beta^{\ast}\alpha\text{ \ss}T^{\prime}\tanh\lambda+\frac{\text{\ss}}{2}\beta^{\ast 2}+\frac{\text{\ss}}{2}\alpha^{2}\right]\right|_{\alpha,\beta^{\ast}=0}.$ Note that $\left[e^{\frac{\text{\ss}}{2}a^{\dagger 2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}}\right]^{\dagger}=e^{\frac{\text{\ss}}{2}a^{\dagger 2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}},$ so $\left\langle n\right|\rho\left(t\right)\left|n\right\rangle^{\ast}=\left\langle n\right|\rho\left(t\right)^{\dagger}\left|n\right\rangle=\left\langle n\right|\rho\left(t\right)\left|n\right\rangle$, i.e., $p\left(n,t\right)$ is real. Then, using the identity $\displaystyle\frac{\partial^{2n}}{\partial t^{n}\partial t^{\prime n}}\exp\left[2xtt^{\prime}-t^{2}-t^{\prime 2}\right]_{t=t^{\prime}=0}$ (31) $\displaystyle=$ $\displaystyle 2^{n}n!\sum_{m=0}^{[n/2]}\frac{n!}{2^{2m}\left(m!\right)^{2}(n-2m)!}x^{n-2m},$ we derive the compact form for $p\left(n,t\right)$, i.e., $\displaystyle p\left(n,t\right)$ (32) $\displaystyle=$ $\displaystyle\frac{W}{n!}\left(-\frac{\text{\ss}}{2}\right)^{n}\frac{\mathtt{d}^{n}}{\mathtt{d}\beta^{\ast n}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha^{n}}\left.e^{-2T^{\prime}\tanh\lambda\beta^{\ast}\alpha-\beta^{\ast
2}-\alpha^{2}}\right|_{\alpha,\beta^{\ast}=0}$ $\displaystyle=$ $\displaystyle W\left(\text{\ss}T^{\prime}\tanh\lambda\right)^{n}\sum_{m=0}^{[n/2]}\frac{n!\left(T^{\prime}\tanh\lambda\right)^{-2m}}{2^{2m}\left(m!\right)^{2}(n-2m)!}.$ Using the new expression of the Legendre polynomials found in Ref. [20], $x^{n}\sum_{m=0}^{[n/2]}\frac{n!}{2^{2m}\left(m!\right)^{2}(n-2m)!}\left(1-\frac{1}{x^{2}}\right)^{m}=P_{n}\left(x\right),$ (33) we can formally recast Eq.(32) into the compact form $p\left(n,t\right)=W\left(e^{-\kappa t}\sqrt{-\text{\ss}\tanh\lambda}\right)^{n}P_{n}\left(e^{\kappa t}T^{\prime}\sqrt{-\text{\ss}\tanh\lambda}\right).$ Note that since $\sqrt{-\text{\ss}\tanh\lambda}$ is purely imaginary while $p\left(n,t\right)$ is real, we must still use the power-series expansion on the right-hand side of Eq.(32) when plotting the variation of $p\left(n,t\right)$. In particular, when $t=0$, Eq.(32) reduces to $\displaystyle p\left(n,0\right)$ $\displaystyle=$ $\displaystyle\text{sech}\lambda\left(\tanh\lambda\right)^{n}\lim_{T^{\prime}\rightarrow 0}\sum_{m=0}^{[n/2]}\frac{n!\left(T^{\prime}\tanh\lambda\right)^{n-2m}}{2^{2m}\left(m!\right)^{2}(n-2m)!}$ (36) $\displaystyle=$ $\displaystyle\left\\{\begin{array}[]{cc}\frac{\left(2k\right)!}{2^{2k}k!k!}\text{sech}\lambda\tanh^{2k}\lambda,&n=2k\\\ 0&n=2k+1\end{array}\right.,$ which just corresponds to the number distribution of the squeezed vacuum state [21, 22]. From Eq.(36) it is not difficult to see that the photocount distribution decreases as the squeezing parameter $\lambda$ increases. For $\kappa t\rightarrow\infty,$ we see that $p\left(n,\infty\right)=\delta_{n,0}$, i.e., no photons remain when the system interacts with the amplitude dissipative channel for a sufficiently long time, as expected. In Fig. 2, the photon number distribution is shown for different $\kappa t$.
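Eq.(32) lends itself to a direct numerical sanity check: the distribution should be normalized, consistent with Eq.(27); its mean should reproduce the decay seen in Fig. 1 (after simplification, Eq.(29) reduces to $\bar{n}=e^{-2\kappa t}\sinh^{2}\lambda$); and the odd-$n$ probabilities should vanish at $t=0$ as in Eq.(36). A minimal sketch in Python (the variable names are ours; `ss` stands for ß):

```python
import math

def p_n(n, lam, kt):
    """Photon number distribution p(n,t) of Eq.(32); lam is the squeezing
    parameter lambda and kt is the scaled decay time kappa*t."""
    tau = math.tanh(lam)
    Tp = 1.0 - math.exp(-2.0 * kt)              # T' = 1 - e^{-2 kappa t}
    D = 1.0 - Tp**2 * tau**2
    W = (1.0 / math.cosh(lam)) / math.sqrt(D)   # W of Eq.(25)
    ss = math.exp(-2.0 * kt) * tau / D          # ss (i.e. ss = "ß") of Eq.(25)
    total = 0.0
    for m in range(n // 2 + 1):
        # summand of Eq.(32), with the powers of T'*tanh(lam) combined so that
        # the t = 0 limit (Tp = 0) is handled by 0**0 = 1
        total += (math.factorial(n) * ss**n * (Tp * tau) ** (n - 2 * m)
                  / (4**m * math.factorial(m) ** 2 * math.factorial(n - 2 * m)))
    return W * total

lam, kt = 1.0, 0.5
probs = [p_n(n, lam, kt) for n in range(100)]
# normalization, consistent with Tr rho(t) = 1 of Eq.(27)
assert abs(sum(probs) - 1.0) < 1e-9
# mean photon number matches exp(-2 kappa t) sinh^2(lambda), cf. Eq.(29), Fig. 1
mean = sum(n * p for n, p in enumerate(probs))
assert abs(mean - math.exp(-2.0 * kt) * math.sinh(lam) ** 2) < 1e-9
# at t = 0 the odd probabilities vanish, cf. Eq.(36)
assert p_n(1, lam, 0.0) == 0.0
```

For $\lambda=1$ and $\kappa t=0.5$ the mean evaluates to $e^{-1}\sinh^{2}1\approx 0.508$, consistent with the corresponding curve in Fig. 1.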
Figure 2: (Color online) Photon number distribution of the squeezed vacuum state in the amplitude damping channel for $\lambda=1$ and different $\kappa t$: ($a)$ $\kappa t=0$, ($b$) $\kappa t=0.5,$ ($c$) $\kappa t=1$ and ($d$) $\kappa t=2$. ## VI Wigner functions In this section, we shall use the normally ordered form of the density operator to calculate the analytical expression of the Wigner function (WF). For a single-mode system, the WF is given by [23] $W\left(\alpha,\alpha^{\ast},t\right)=e^{2\left|\alpha\right|^{2}}\int\frac{d^{2}\beta}{\pi^{2}}\left\langle-\beta\right|\rho\left(t\right)\left|\beta\right\rangle e^{-2\left(\beta\alpha^{\ast}-\beta^{\ast}\alpha\right)},$ (37) where $\left|\beta\right\rangle$ is the coherent state [18, 19]. From Eq.(22) it is easy to see that once the normally ordered form of $\rho\left(t\right)$ is known, we can conveniently obtain the Wigner function of $\rho\left(t\right)$. On substituting Eq.(24) into Eq.(37) we obtain the WF of the single-mode squeezed state in the ADC, $\displaystyle W\left(\alpha,\alpha^{\ast},t\right)$ (38) $\displaystyle=$ $\displaystyle We^{2\left|\alpha\right|^{2}}\int\frac{d^{2}\beta}{\pi^{2}}\exp\left[-\left(1+\text{\ss}T^{\prime}\tanh\lambda\right)\left|\beta\right|^{2}\right.$ $\displaystyle\left.-2\left(\beta\alpha^{\ast}-\beta^{\ast}\alpha\right)+\frac{\text{\ss}}{2}\beta^{\ast 2}+\frac{\text{\ss}}{2}\beta^{2}\right]$ $\displaystyle=$ $\displaystyle\frac{W}{\pi\sqrt{\left(1+\text{\ss}T^{\prime}\tanh\lambda\right)^{2}-\text{\ss}^{2}}}\exp\left[2\left|\alpha\right|^{2}\right]$ $\displaystyle\times\exp\left[2\frac{-2\left(1+\text{\ss}T^{\prime}\tanh\lambda\right)\left|\alpha\right|^{2}+\text{\ss}\left(\alpha^{\ast 2}+\alpha^{2}\right)}{\left(1+\text{\ss}T^{\prime}\tanh\lambda\right)^{2}-\text{\ss}^{2}}\right].$ In particular, when $t=0$ and $t\rightarrow\infty$, Eq.(38) reduces to $W\left(\alpha,\alpha^{\ast},0\right)=\frac{1}{\pi}\exp[-2\left|\alpha\right|^{2}\cosh 2\lambda+\left(\alpha^{\ast
2}+\alpha^{2}\right)\sinh 2\lambda]$, and $W\left(\alpha,\alpha^{\ast},\infty\right)=\frac{1}{\pi}\exp\left[-2\left|\alpha\right|^{2}\right]$, which are just the WFs of the single-mode squeezed vacuum state and the vacuum state, respectively. In Fig. 3, the WF of the single-mode squeezed vacuum state in the amplitude damping channel is shown for different decay times $\kappa t$. Figure 3: (Color online) Wigner function of the squeezed vacuum state in the amplitude damping channel for $\lambda=1.0$ and different $\kappa t$: ($a)$ $\kappa t=0.0$, ($b$) $\kappa t=0.5$, ($c$) $\kappa t=1$, and ($d$) $\kappa t=2$. ## VII Tomogram As we know, once the probability distributions $P_{\theta}\left(\hat{x}_{\theta}\right)$ of the quadrature amplitude are obtained, one can use the inverse Radon transformation familiar in tomographic imaging to obtain the WF and density matrix [24]. Thus the Radon transform of the WF corresponds to the probability distributions $P_{\theta}\left(\hat{x}_{\theta}\right)$. In this section we derive the tomogram of $\rho\left(t\right)$. For a single-mode system, the Radon transform of the WF, denoted as $\mathcal{R}$, is defined by [25] $\displaystyle\mathcal{R}\left(q\right)_{f,g}$ $\displaystyle=$ $\displaystyle\int\delta\left(q-fq^{\prime}-gp^{\prime}\right)Tr\left[\Delta\left(\beta\right)\rho\left(t\right)\right]dq^{\prime}dp^{\prime}$ (39) $\displaystyle=$ $\displaystyle Tr\left[\left|q\right\rangle_{f,g\text{ }f,g}\left\langle q\right|\rho\left(t\right)\right]=_{f,g}\left\langle q\right|\rho\left(t\right)\left|q\right\rangle_{f,g},$ where the operator $\left|q\right\rangle_{f,g\text{ }f,g}\left\langle q\right|$ is just the Radon transform of the single-mode Wigner operator $\Delta\left(\beta\right)$, and $\left|q\right\rangle_{f,g}=A\exp\left[\frac{\sqrt{2}qa^{{\dagger}}}{B}-\frac{B^{\ast}}{2B}a^{{\dagger}2}\right]\left|0\right\rangle,$ (40) as well as $B=f-ig$ and $A=\left[\pi\left(f^{2}+g^{2}\right)\right]^{-1/4}\exp[-q^{2}/2\left(f^{2}+g^{2}\right)]$.
Thus the tomogram of a quantum state $\rho\left(t\right)$ is just the quantum average of $\rho\left(t\right)$ in the $\left|q\right\rangle_{f,g}$ representation (a kind of intermediate coordinate-momentum representation) [26]. Substituting Eqs.(24) and (40) into Eq.(39), and using the completeness relation of coherent states, we see that the Radon transform of the WF of $\rho\left(t\right)$ is given by $\displaystyle\mathcal{R}\left(q\right)_{f,g}$ (41) $\displaystyle=$ $\displaystyle W_{f,g}\left\langle q\right|e^{\frac{\text{\ss}}{2}a^{{\dagger}2}}e^{a^{\dagger}a\ln\left(\text{\ss}T^{\prime}\tanh\lambda\right)}e^{\frac{\text{\ss}}{2}a^{2}}\left|q\right\rangle_{f,g}$ $\displaystyle=$ $\displaystyle\frac{WA^{2}}{\sqrt{E}}\exp\left\\{\frac{q^{2}\text{\ss}}{E\left|B\right|^{4}}\left(B^{2}+B^{\ast}{}^{2}\right)\right.$ $\displaystyle+\left.\frac{2q^{2}\text{\ss}}{E\left|B\right|^{2}}\left(T^{\prime}\tanh\lambda+\text{\ss}-\text{\ss}T^{\prime 2}\tanh^{2}\lambda\right)\right\\},$ where we have used the formula (26) and $\left\langle\alpha\right|\left.\gamma\right\rangle=\exp[-\left|\alpha\right|^{2}/2-\left|\gamma\right|^{2}/2+\alpha^{\ast}\gamma]$, as well as $\displaystyle E$ $\displaystyle=$ $\displaystyle\left(1+\text{\ss}\frac{B}{B^{\ast}}\right)\left(1+\frac{B^{\ast}}{B}\text{\ss}-B^{\ast}\frac{\left(\text{\ss}T^{\prime}\tanh\lambda\right)^{2}}{B^{\ast}+\text{\ss}B}\right)$ (42) $\displaystyle=$ $\displaystyle\left|1+\frac{\text{\ss}B}{B^{\ast}}\right|^{2}-\left(\text{ \ss}T^{\prime}\tanh\lambda\right)^{2}.$ In particular, when $t=0$ ($T^{\prime}=0$), Eq.(41) reduces to ($\frac{B}{B^{\ast}}=e^{2i\phi}$) $\displaystyle\mathcal{R}\left(q\right)_{f,g}$ $\displaystyle=$ $\displaystyle\frac{A^{2}\text{sech}\lambda}{\left|1+e^{2i\phi}\tanh\lambda\right|}$ (43) $\displaystyle\times\exp\left\\{\frac{q^{2}\left(B^{2}+B^{\ast}{}^{2}+2\left|B\right|^{2}\tanh\lambda\right)\tanh\lambda}{\left|1+e^{2i\phi}\left|B\right|^{4}\tanh\lambda\right|^{2}}\right\\},$ which is a tomogram of the single-mode squeezed
vacuum state; while for $\kappa t\rightarrow\infty$ ($T^{\prime}=1$), we have $\mathcal{R}\left(q\right)_{f,g}=A^{2},$ which is a Gaussian distribution corresponding to the vacuum state. In summary, by deriving the infinite sum representation of the density operator by virtue of the entangled state representation, we have shown that in the amplitude dissipative channel an initial single-mode squeezed vacuum state evolves into a squeezed chaotic state with reduced squeezing. We have also investigated the average photon number, the photon statistics distribution, the Wigner function, and the tomogram of the output state. ## Acknowledgments This work is supported by the National Natural Science Foundation of China (Grant Nos. 11175113 and 11047133), the Shandong Provincial Natural Science Foundation of China (Grant No. ZR2010AQ024), and a grant from the Key Programs Foundation of Ministry of Education of China (Grant No. 210115), as well as the Jiangxi Provincial Natural Science Foundation of China (No. 2010GQW0027). ## References * (1) Caves C M 1981 Phys. Rev. D 23 1693 * (2) Loudon R 1981 Phys. Rev. Lett. 47 815 * (3) Agarwal G S 2011 New J. Phys. 13 073008 * (4) Gardner C W and Zoller P 2000 Quantum Noise (Berlin: Springer) * (5) Louisell W H 1973 Quantum Statistical Properties of Radiation (New York: Wiley) * (6) Biswas A and Agarwal G S 2007 Phys. Rev. A 75 032104 * (7) Hu L Y and Fan H Y 2008 J. Opt. Soc. Am. B 25 1955 * (8) Hu L Y, Xu X X, Wang Z S and Xu X F 2010 Phys. Rev. A 82 043842 * (9) Fan H Y and Hu L Y 2008 Opt. Commun. 281 5571 * (10) Hu L Y and Fan H Y 2009 Opt. Commun. 282 4379 * (11) Fan H Y and Fan Y 1998 Phys. Lett. A 246 242 * (12) Fan H Y and Fan Y 2001 Phys. Lett. A 282 269 * (13) Fan H Y and Hu L Y 2008 Mod. Phys. Lett.
B 22 2435 * (14) Fan H Y 1997 Representation and Transformation Theory in Quantum Mechanics (Shanghai: Shanghai Scientific and Technical) (in Chinese) * (15) Klauder J R and Skagerstam B S 1985 Coherent States (Singapore: World Scientific) * (16) Xu X X, Yuan H C, Hu L Y and Fan H Y 2011 J. Phys. A: Math. Theor. 44 445306 * (17) Rainville E D 1960 Special Functions (New York: MacMillan Company) * (18) Puri R R 2001 Mathematical Methods of Quantum Optics (Berlin/Heidelberg/New York: Springer-Verlag) * (19) Glauber R J 1963 Phys. Rev. 130 2529 * (20) Glauber R J 1963 Phys. Rev. 131 2766 * (21) Fan H Y, Hu L Y and Xu X X 2009 Mod. Phys. Lett. A 24 1597 * (22) Kim M S, de Oliveira F A M and Knight P L 1989 Phys. Rev. A 40 2494 * (23) Marian P 1992 Phys. Rev. A 45 2044 * (24) Fan H Y and Zaidi H R 1987 Phys. Lett. A 124 303 * (25) Vogel K and Risken H 1989 Phys. Rev. A 40 2847 * (26) Fan H Y and Niu J B 2010 Opt. Commun. 283 3296 * (27) Fan H Y and Hu L Y 2009 Opt. Commun. 282 3734
$\mathfrak{g}$ has positive roots. In particular, a $\mathfrak{g}=\mathfrak{sl}_{n}$ isospin variable has $\frac{n(n-1)}{2}$ components, and a $\mathfrak{g}=\mathfrak{sl}_{2}$ isospin variable has one component. We already know differential operators $D^{j}_{x}(t^{a})$ (2.3.8) which obey the $\mathfrak{sl}_{2}$ commutation relations (2.3.9) for any choice of the spin $j$. The eigenvalue of the quadratic Casimir operator in the corresponding representation of $\mathfrak{sl}_{2}$ is $\displaystyle C_{2}(j)=K_{ab}D_{x}^{j}(t^{a})D_{x}^{j}(t^{b})=2j(j+1)\ ,$ (4.2.23) and the conformal dimension of the corresponding field is $\displaystyle\Delta_{j}=\frac{j(j+1)}{k+2}\ .$ (4.2.24) Another triplet of $\mathfrak{sl}_{2}$ differential operators is given by $\displaystyle\left\\{\begin{array}[]{l}D_{\mu}^{j}(t^{-})=-\mu\ ,\\\ D_{\mu}^{j}(t^{0})=-\mu{\frac{\partial}{\partial\mu}}\ ,\\\ D_{\mu}^{j}(t^{+})=\mu\frac{\partial^{2}}{\partial\mu^{2}}-\frac{j(j+1)}{\mu}\ .\end{array}\right.$ (4.2.28) The $x$-basis field $\Phi^{j}_{x}(z_{0})$ and $\mu$-basis field $\Phi^{j}_{\mu}(z_{0})$ are related by the formal Fourier transform $\displaystyle\Phi_{x}^{j}(z_{0})=\int d\mu\ \mu^{-j-1}e^{\mu x}\Phi_{\mu}^{j}(z_{0})\ .$ (4.2.29) The $\widehat{\mathfrak{sl}}_{2}$ currents and primary fields can be represented in terms of free fields, in the same way as the Virasoro algebra and primary fields were represented in terms of the $\hat{\mathfrak{u}}_{1}$ current in Section 4.1.1. This Wakimoto free-field representation of $\widehat{\mathfrak{sl}}_{2}$ is described in Exercise 4.6. Let us now consider the states $|v^{R}\rangle$ which correspond to the $R$-valued field $\Phi^{R}(z_{0})$ by the state-field correspondence.
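As an aside, the triplet (4.2.28) can easily be checked with a computer algebra system to close into $\mathfrak{sl}_{2}$. A minimal sketch in Python/SymPy, assuming the commutation relations (2.3.9) take the standard form $[t^{0},t^{\pm}]=\pm t^{\pm}$, $[t^{+},t^{-}]=2t^{0}$ (eq. (2.3.9) itself is not reproduced in this excerpt):

```python
import sympy as sp

mu, j = sp.symbols('mu j')
f = sp.Function('f')(mu)

# the mu-basis triplet of eq. (4.2.28), acting on a test function g(mu)
def t_minus(g): return -mu * g
def t_zero(g):  return -mu * sp.diff(g, mu)
def t_plus(g):  return mu * sp.diff(g, mu, 2) - j * (j + 1) / mu * g

def comm(A, B, g):
    """Commutator [A, B] applied to g."""
    return A(B(g)) - B(A(g))

# sl2 relations: [t0, t+] = t+, [t0, t-] = -t-, [t+, t-] = 2 t0
assert sp.simplify(comm(t_zero, t_plus, f) - t_plus(f)) == 0
assert sp.simplify(comm(t_zero, t_minus, f) + t_minus(f)) == 0
assert sp.simplify(comm(t_plus, t_minus, f) - 2 * t_zero(f)) == 0
```

Note that the spin $j$ enters only through $D_{\mu}^{j}(t^{+})$, so the relations hold for any $j$, as in the $x$-basis case.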
The definition (4.2.18) of $\Phi^{R}(z_{0})$ is equivalent to $\displaystyle\left\\{\begin{array}[]{l}J^{a}_{n>0}|v^{R}\rangle=0\ ,\\\ J^{a}_{0}|v^{R}\rangle=-t^{a}|v^{R}\rangle\ .\end{array}\right.$ (4.2.32) The states $|v^{R}\rangle$ are killed by the annihilation operators, and are called affine primary states. They transform in the representation $R$ of the horizontal subalgebra $\mathfrak{g}\subset\hat{\mathfrak{g}}$ generated by $\\{J^{a}_{0}\\}$. Acting on the affine primary states with creation operators $J^{a}_{n<0}$ generates affine descendent states, which form an affine highest-weight representation $\hat{R}$ of the affine Lie algebra $\hat{\mathfrak{g}}$. Let us study the fusion rules of affine highest-weight representations, equivalently the OPEs of affine primary fields. To write such OPEs, we will omit the dependences on the positions $z_{i}$ of the fields, which are dictated by conformal symmetry since affine primary fields are also primary fields. We will also omit the affine descendent fields because, as in the cases of the Virasoro and $\hat{\mathfrak{u}}_{1}$ algebras, the contributions of affine descendent fields are determined by the contributions of the affine primary fields. (This would however not be true in the case of a $W$ algebra.) 
So we write a generic OPE as $\displaystyle\Phi^{R_{1}}_{X_{1}}\Phi^{R_{2}}_{X_{2}}\sim\sum_{R_{3}}\int dX_{3}\ C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})\ \Phi^{R_{3}}_{X_{3}}\ .$ (4.2.33) Inserting $\oint dz\ J^{a}(z)$ on both sides, and using the linear independence of the operators $\Phi^{R_{3}}_{X_{3}}$, we obtain an equation for the structure function $C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})$, $\displaystyle\left(D_{X_{1}}^{R_{1}}(t^{a})+D_{X_{2}}^{R_{2}}(t^{a})-\left(D_{X_{3}}^{R_{3}}(t^{a})\right)^{\dagger}\right)C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})=0\ ,$ (4.2.34) where the dagger denotes the hermitian conjugate for $X_{3}$-differential operators, such that for any functions $f,g$ we have $\int fDg=\int gD^{\dagger}f$. This equation characterizes $C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})$ as an intertwiner between the representations $R_{1}\otimes R_{2}$ and $R_{3}$ of the Lie algebra $\mathfrak{g}$. This shows that the fusion multiplicities of affine highest-weight representations of $\hat{\mathfrak{g}}$ are bounded by the tensor product multiplicities of the underlying representations of $\mathfrak{g}$, $\displaystyle m_{\hat{R}_{1},\hat{R}_{2}}^{\hat{R}_{3}}\leq m_{R_{1},R_{2}}^{R_{3}}\ .$ (4.2.35) The presence of null vectors in the representations $\hat{R}_{i}$ can lead to extra conditions on the structure function $C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})$, in which case $m_{\hat{R}_{1},\hat{R}_{2}}^{\hat{R}_{3}}<m_{R_{1},R_{2}}^{R_{3}}$. And nothing guarantees that only highest-weight representations appear in the fusion product of two highest-weight representations. In the case of the $\widetilde{SL}_{2}(\mathbb{R})$ WZW model, other types of representations do appear. (See Section 4.4.3.) Finally, let us point out that the maximum multiplicity $m_{R_{1},R_{2}}^{R_{3}}$ in a fusion rule – that is, the number of linearly independent solutions of eq.
(4.2.34) in the absence of further constraints, is $\displaystyle m_{\mathrm{max}}=\left\\{\begin{array}[]{l}2\quad\text{if}\ \mathfrak{g}=\mathfrak{sl}_{2}\ ,\\\ \infty\quad\text{if}\ \mathfrak{g}=\mathfrak{sl}_{n\geq 3}\ .\end{array}\right.$ (4.2.38) We indeed have a set of $\dim\mathfrak{sl}_{n}=n^{2}-1$ equations, for a function $C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})$ of $3\frac{n(n-1)}{2}$ variables – the components of $X_{1},X_{2}$ and $X_{3}$. If $n=2$ there are three equations and three variables, and this can actually be reduced to one second-order differential equation for a function of one variable. (See eq. (4.2.46).) If $n\geq 3$ there are more variables than equations, so that $C^{R_{1},R_{2}}_{R_{3}}(X_{1},X_{2}|X_{3})$ has an arbitrary dependence on a number of variables. In the case of finite-dimensional representations of $\mathfrak{sl}_{n\geq 3}$, the multiplicities are of course finite, but they can take arbitrarily high values, depending on the involved representations. #### 4.2.3 Ward identities and Knizhnik-Zamolodchikov equations Let us study the $\hat{\mathfrak{g}}$ Ward identities for correlation functions. To do this, we need to know how the $\hat{\mathfrak{g}}$ currents $J^{a}(y)$ behave as $y\to\infty$. 
The relation (4.2.11) with the energy-momentum tensor $T(y)$, and the behaviour (2.2.14) of $T(y)$, suggest $\displaystyle\boxed{J^{a}(y)\underset{y\to\infty}{=}O\left(\frac{1}{y^{2}}\right)}\ .$ (4.2.39) For any meromorphic function $\epsilon(z)$, with no poles outside $\\{z_{1},\cdots z_{N}\\}$, we have $\displaystyle\oint_{\infty}dy\ \epsilon(y)\left\langle J^{a}(y)\prod_{i=1}^{N}\Phi^{\sigma_{i}}(z_{i})\right\rangle=0\quad\text{provided}\quad\epsilon(y)\underset{y\to\infty}{=}O(1)\ .$ (4.2.40) In the case $\epsilon(y)=1$, we obtain the $\hat{\mathfrak{g}}$ global Ward identities, $\displaystyle\left\langle\sum_{i=1}^{N}(J_{0}^{a})^{(z_{i})}\prod_{i=1}^{N}\Phi^{\sigma_{i}}(z_{i})\right\rangle=0\ .$ (4.2.41) In the case $\epsilon(y)=\frac{1}{(y-z_{i})^{n}}$ with $n\geq 1$, we obtain the $\hat{\mathfrak{g}}$ local Ward identities, which are formally identical to the $\hat{\mathfrak{u}}_{1}$ local Ward identities (4.1.8). Let us specialize to correlation functions involving affine primary fields.
Knowing the poles (4.2.18) of $J^{a}(y)$ and its behaviour near $y=\infty$, we have $\displaystyle\left\langle J^{a}(y)\prod_{i=1}^{N}\Phi^{R_{i}}_{X_{i}}(z_{i})\right\rangle=-\sum_{i=1}^{N}\frac{D^{R_{i}}_{X_{i}}(t^{a})}{y-z_{i}}\left\langle\prod_{i=1}^{N}\Phi^{R_{i}}_{X_{i}}(z_{i})\right\rangle\ .$ (4.2.42) In the case of an $N$-point function where the fields with indices $j\neq i$ are affine primaries, the local Ward identities become $\displaystyle\left\langle\left(J^{a}_{-n}\right)^{(z_{i})}\Phi^{\sigma_{i}}(z_{i})\prod_{j\neq i}\Phi^{R_{j}}_{X_{j}}(z_{j})\right\rangle$ $\displaystyle=\sum_{j\neq i}\frac{D_{X_{j}}^{R_{j}}(t^{a})}{(z_{j}-z_{i})^{n}}\left\langle\Phi^{\sigma_{i}}(z_{i})\prod_{j\neq i}\Phi^{R_{j}}_{X_{j}}(z_{j})\right\rangle\ .$ (4.2.43) In the case of an $N$-point function of affine primary fields, the global Ward identities become $\displaystyle\sum_{i=1}^{N}D_{X_{i}}^{R_{i}}(t^{a})\ \left\langle\prod_{i=1}^{N}\Phi^{R_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.44) For example, let us solve the $\mathfrak{sl}_{2}$ global Ward identities for a three-point function. In the $x$-basis, these identities are formally identical to the global Ward identities (2.3.4), whose solution is eq. (2.3.25). We thus find $\displaystyle\left\langle\prod_{i=1}^{3}\Phi^{j_{i}}_{x_{i}}\right\rangle\propto\ x_{12}^{j_{1}+j_{2}-j_{3}}x_{23}^{j_{2}+j_{3}-j_{1}}x_{31}^{j_{3}+j_{1}-j_{2}}\ ,$ (4.2.45) where we omitted the dependence on $z_{i}$. In the $\mu$-basis, we find $\displaystyle\left\langle\prod_{i=1}^{3}\Phi^{j_{i}}_{\mu_{i}}\right\rangle\propto\ \mu_{2}\delta(\mu_{1}+\mu_{2}+\mu_{3})\ \mathcal{H}\left(-\frac{\mu_{1}}{\mu_{2}}\right)\ ,$ (4.2.46) where the function $\mathcal{H}(x)$, which parametrizes the general solution of the $t^{-}$ and $t^{0}$ equations, is constrained by the $t^{+}$ equation to obey the twisted hypergeometric differential equation (2.3.73).
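The $x$-basis solution (4.2.45) can be verified symbolically. The sketch below assumes a standard realization of the operators $D^{j}_{x}(t^{a})$ of eq. (2.3.8), namely $D^{j}_{x}(t^{-})=\partial_{x}$, $D^{j}_{x}(t^{0})=x\partial_{x}-j$, $D^{j}_{x}(t^{+})=x^{2}\partial_{x}-2jx$ (these operators are not reproduced in this excerpt, so the convention is our assumption):

```python
import sympy as sp

x1, x2, x3, j1, j2, j3 = sp.symbols('x1 x2 x3 j1 j2 j3')
X = [x1, x2, x3]
J = [j1, j2, j3]

# candidate three-point function, eq. (4.2.45)
F = ((x1 - x2) ** (j1 + j2 - j3)
     * (x2 - x3) ** (j2 + j3 - j1)
     * (x3 - x1) ** (j3 + j1 - j2))

def D(a, i, g):
    """Assumed x-basis sl2 generators acting at point i."""
    x, j = X[i], J[i]
    if a == '-':
        return sp.diff(g, x)
    if a == '0':
        return x * sp.diff(g, x) - j * g
    return x**2 * sp.diff(g, x) - 2 * j * x * g  # a == '+'

# the global Ward identities (4.2.44): sum_i D_i(t^a) F = 0 for each a
for a in ('-', '0', '+'):
    total = sum(D(a, i, F) for i in range(3))
    assert sp.simplify(total / F) == 0
```

The $t^{-}$ identity holds because $F$ depends only on differences $x_{ij}$, and the $t^{0}$ identity because the total homogeneity degree of $F$ equals $j_{1}+j_{2}+j_{3}$; the $t^{+}$ identity is the nontrivial one.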
It may seem strange that in the $\mu$-basis we obtain a second-order differential equation, whereas in the $x$-basis the solution appeared to be unique. The number of solutions of the global Ward identities has the algebraic interpretation of a tensor product multiplicity for $\mathfrak{sl}_{2}$ representations, and this should not depend on our choice of isospin variable. Actually it is the $x$-basis calculation which is misleading: analyticity conditions on the $x_{i}$-dependence of $\left\langle\prod_{i=1}^{3}\Phi^{j_{i}}_{x_{i}}\right\rangle$ in general allow the existence of two solutions which differ globally, although they are locally identical [18]. The tensor product multiplicity for generic $\mathfrak{sl}_{2}$ representations is two, as correctly suggested by the $\mu$-basis calculation. Let us now assume that the Virasoro field, which we defined by the Sugawara construction (4.2.11), is actually the energy-momentum tensor, so that eq. (2.3.1) is obeyed. Inserting the equation (4.2.20) in an $N$-point function of affine primary fields, and applying the local Ward identity (4.2.43) to $J^{b}_{-1}\Phi^{R}_{X}=(J^{b}\Phi^{R}_{X})$, we obtain the Knizhnik- Zamolodchikov equations or KZ equations, $\displaystyle\boxed{\left\\{(k+g){\frac{\partial}{\partial z_{i}}}+\sum_{j\neq i}\frac{K_{ab}D_{X_{i}}^{R_{i}}(t^{a})D_{X_{j}}^{R_{j}}(t^{b})}{z_{i}-z_{j}}\right\\}\left\langle\prod_{i=1}^{N}\Phi^{R_{i}}_{X_{i}}(z_{i})\right\rangle=0}\ .$ (4.2.47) These are first-order differential equations in $z_{i}$, and like the analogous equations (4.1.13) for free boson correlation functions, they determine the dependence on $z_{i}$ of correlation functions of primary fields. However, unlike the free boson equations, the KZ equations do not have simple solutions in general. It can be checked that the KZ equations imply the conformal global Ward identities. (See Exercise 4.7.) 
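For $\mathfrak{g}=\mathfrak{sl}_{2}$ one can also check numerically that the operators $H_{i}=\sum_{j\neq i}\frac{K_{ab}D_{X_{i}}^{R_{i}}(t^{a})D_{X_{j}}^{R_{j}}(t^{b})}{z_{i}-z_{j}}$ appearing in the KZ equations (4.2.47) commute with one another. A sketch for three spin-$\frac{1}{2}$ representations, using the pairing $2t^{0}t^{0}+t^{+}t^{-}+t^{-}t^{+}$ that also appears in the Sugawara combination below (the matrices and test values are ours):

```python
import numpy as np

# spin-1/2 sl2 generators
t0 = np.diag([0.5, -0.5])
tp = np.array([[0.0, 1.0], [0.0, 0.0]])
tm = np.array([[0.0, 0.0], [1.0, 0.0]])
I2 = np.eye(2)

def at_site(t, i, N=3):
    """Embed a single-site operator t at tensor factor i of N spin-1/2 sites."""
    ops = [I2] * N
    ops[i] = t
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def omega(i, j):
    """K_ab t_i^a t_j^b = 2 t0_i t0_j + t+_i t-_j + t-_i t+_j."""
    return (2 * at_site(t0, i) @ at_site(t0, j)
            + at_site(tp, i) @ at_site(tm, j)
            + at_site(tm, i) @ at_site(tp, j))

z = [0.0, 1.3, 2.9]  # generic insertion points
H = [sum(omega(i, j) / (z[i] - z[j]) for j in range(3) if j != i)
     for i in range(3)]

# the operators H_i mutually commute
for i in range(3):
    for j in range(3):
        comm = H[i] @ H[j] - H[j] @ H[i]
        assert np.max(np.abs(comm)) < 1e-12
```

The commutativity holds for any generic choice of the points $z_{i}$, not just the values used here.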
The KZ equations can be rewritten as $\displaystyle\left\\{(k+g){\frac{\partial}{\partial z_{i}}}+H_{i}\right\\}\left\langle\prod_{i=1}^{N}\Phi^{R_{i}}_{X_{i}}(z_{i})\right\rangle=0\ ,$ (4.2.48) where $H_{i}$ are the mutually commuting Gaudin Hamiltonians, the Hamiltonians of the Gaudin model – an integrable model associated to the Lie algebra $\mathfrak{g}$. Techniques developed for studying the $\mathfrak{g}$ Gaudin model can be useful for solving the $\mathfrak{g}$ KZ equations, as we will now see in the case $\mathfrak{g}=\mathfrak{sl}_{2}$. #### 4.2.4 $\mathfrak{sl}_{2}$ case: the KZ-BPZ relation We will now show that the $\mathfrak{sl}_{2}$ KZ equations are equivalent to certain BPZ equations via Sklyanin’s separation of variables for the $\mathfrak{sl}_{2}$ Gaudin model. Our derivation of the KZ equations amounted to inserting the Sugawara construction (4.2.11) at the points $z_{i}$ in the $N$-point function $\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle$ of affine primary fields. Inserting the Sugawara construction at an arbitrary point $y$ instead, we obtain the identity $\displaystyle\left\langle\left(T(y)-\frac{1}{2(k+2)}\left[2(J^{0}J^{0})(y)+(J^{+}J^{-})(y)+(J^{-}J^{+})(y)\right]\right)\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.49) According to eqs. (2.3.3) and (4.2.42), inserting the fields $T(y)$ and $J^{a}(y)$ amounts to acting with the differential operators $\displaystyle\hat{T}(y)$ $\displaystyle=\sum_{i=1}^{N}\left(\frac{\Delta_{j_{i}}}{(y-z_{i})^{2}}+\frac{1}{y-z_{i}}{\frac{\partial}{\partial z_{i}}}\right)\ ,$ (4.2.50) $\displaystyle\hat{J}^{a}(y)$ $\displaystyle=-\sum_{i=1}^{N}\frac{D^{R_{i}}_{X_{i}}(t^{a})}{y-z_{i}}\ ,$ (4.2.51) where $\Delta_{j}$ is defined in eq. (4.2.24).
The differential operators $\hat{T}(y)$ and $\hat{J}^{a}(y)$ obey the commutation relations $\displaystyle\left[\hat{T}(y),\hat{J}^{a}(z)\right]$ $\displaystyle={\frac{\partial}{\partial z}}\frac{\hat{J}^{a}(y)-\hat{J}^{a}(z)}{y-z}\ ,$ (4.2.52) $\displaystyle\left[\hat{J}^{a}(y),\hat{J}^{b}(z)\right]$ $\displaystyle=f^{ab}_{c}\frac{\hat{J}^{c}(y)-\hat{J}^{c}(z)}{y-z}\ ,$ (4.2.53) where $f^{ab}_{c}$ are the structure constants of the Lie algebra $\mathfrak{sl}_{2}$, as encoded in the commutation relations (2.3.9). And one can easily show that inserting a normal-ordered product, for instance $(J^{-}J^{+})(y)$, amounts to acting with the product of the corresponding differential operators. Therefore, the identity (4.2.49) amounts to the differential equation $\displaystyle\left(\hat{T}(y)-\frac{1}{2(k+2)}\left[2\hat{J}^{0}(y)\hat{J}^{0}(y)+\hat{J}^{+}(y)\hat{J}^{-}(y)+\hat{J}^{-}(y)\hat{J}^{+}(y)\right]\right)\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.54) The idea is to simplify this identity by taking $y$ to be one of the zeros $\hat{y}_{j}$ of $\hat{J}^{-}(y)$. These zeros are differential operators, and the value of a $y$-dependent differential operator $\hat{O}(y)$ at $y=\hat{y}_{j}$ is defined by inserting $\hat{y}_{j}$ from the left, that is $\hat{O}(\hat{y}_{j})=\frac{1}{2\pi i}\oint_{\hat{y}_{j}}\frac{dy}{y-\hat{y}_{j}}\hat{O}(y)$. Using eq. (4.2.53) for bringing the $\hat{J}^{-}(y)$ factors to the left, we obtain $\displaystyle\left(\hat{T}(\hat{y}_{j})-\frac{1}{k+2}\left[(\hat{J}^{0})^{2}(\hat{y}_{j})+\partial\hat{J}^{0}(\hat{y}_{j})\right]\right)\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.55) Let us further study the differential operators $\hat{y}_{j}$ and $\hat{J}^{0}(\hat{y}_{j})$. According to eq. (4.2.53), we have $[\hat{J}^{-}(y),\hat{J}^{-}(z)]=0$. 
Therefore $[\hat{y}_{j},\hat{y}_{k}]=0$, and $\displaystyle\boxed{\hat{J}^{-}(y)=\hat{Y}_{2}\frac{\prod_{j}(y-\hat{y}_{j})}{\prod_{i}(y-z_{i})}}\ ,$ (4.2.56) where $\hat{Y}_{2}$ is a differential operator such that $[\hat{Y}_{2},\hat{y}_{j}]=0$. Using eq. (4.2.53) we find $[\hat{J}^{0}(\hat{y}_{j}),\hat{J}^{-}(z)]=\frac{\hat{J}^{-}(z)}{\hat{y}_{j}-z}$, and deduce $\displaystyle[\hat{p}_{j},\hat{Y}_{2}]=0\quad\text{and}\quad[\hat{p}_{j},\hat{y}_{k}]=\delta_{j,k}\quad\text{where}\quad\hat{p}_{j}=\hat{J}^{0}(\hat{y}_{j})\ .$ (4.2.57) We can now simplify the second term of eq. (4.2.55), $(\hat{J}^{0})^{2}(\hat{y}_{j})+\partial\hat{J}^{0}(\hat{y}_{j})=\frac{1}{2\pi i}\oint_{\hat{y}_{j}}\frac{dy}{y-\hat{y}_{j}}\left(\hat{J}^{0}(y)+{\frac{\partial}{\partial y}}\right)\hat{J}^{0}(y)\\\ =\frac{1}{2\pi i}\oint_{\hat{y}_{j}}\frac{dy}{y-\hat{y}_{j}}\left(\hat{p}_{j}+{\frac{\partial}{\partial y}}\right)\hat{J}^{0}(y)=\hat{p}_{j}\frac{1}{2\pi i}\oint_{\hat{y}_{j}}\frac{dy}{y-\hat{y}_{j}}\hat{J}^{0}(y)=\hat{p}_{j}^{2}\ .$ (4.2.58) So eq. (4.2.55) becomes $\displaystyle\boxed{\left(\frac{1}{k+2}\hat{p}_{j}^{2}-\hat{T}(\hat{y}_{j})\right)\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0}\ .$ (4.2.59) We have therefore replaced the differential operators $D^{R_{i}}_{X_{i}}(t^{a})$, which appeared in $\hat{J}^{a}(y)$ eq. (4.2.51), with the operators $\hat{y}_{j}$ and $\hat{p}_{j}$. Therefore, it is natural to introduce the Sklyanin variables $y_{j}$ as the eigenvalues of the mutually commuting operators $\hat{y}_{j}$. But how many Sklyanin variables do we have? The $\widehat{\mathfrak{sl}}_{2}$ global Ward identity (4.2.44) associated to the current $J^{a}$ can be written as $\displaystyle\underset{y\to\infty}{\lim}y\hat{J}^{a}(y)\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.60) When combined with the definition (4.2.56) of $\hat{y}_{j}$, the identity associated to $J^{-}$ suggests that we have $N-2$ Sklyanin variables $y_{j}$. 
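This counting can be illustrated numerically in the $\mu$-basis, where $\hat{J}^{-}(y)$ acts by multiplication by $\sum_{i}\frac{\mu_{i}}{y-z_{i}}$ (these relations reappear explicitly in eq. (4.2.73) below). Imposing the Ward identity $Y_{1}=\sum_{i}\mu_{i}=0$ makes the numerator polynomial drop to degree $N-2$, with leading coefficient $Y_{2}=\sum_{i}\mu_{i}z_{i}$, and the $\mu_{i}$ are recovered as residues. A sketch, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
z = np.sort(rng.uniform(-2.0, 2.0, N))   # distinct positions z_i
mu = rng.uniform(-1.0, 1.0, N)
mu[-1] = -mu[:-1].sum()                  # Ward identity: Y1 = sum_i mu_i = 0

# numerator polynomial of J^-(y) = sum_i mu_i/(y - z_i)
P = np.zeros(N)                          # coefficients, highest degree first
for i in range(N):
    P += mu[i]*np.poly(np.delete(z, i))

assert abs(P[0]) < 1e-12                 # leading coefficient Y1 vanishes
Y2 = P[1]
assert np.isclose(Y2, (mu*z).sum())      # next coefficient is Y2 = sum_i mu_i z_i

y = np.roots(P[1:])                      # the N-2 Sklyanin variables y_j
assert len(y) == N - 2

# residues recover the mu_i: mu_i = Y2 prod_j(z_i-y_j)/prod_{i'!=i}(z_i-z_{i'})
for i in range(N):
    mu_rec = Y2*np.prod(z[i] - y)/np.prod(z[i] - np.delete(z, i))
    assert np.isclose(mu_rec.real, mu[i]) and abs(mu_rec.imag) < 1e-8
print("mu-basis separation of variables verified")
```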
Let us define two more variables as the eigenvalues $Y_{1}$ and $Y_{2}$ of the operators $\hat{Y}_{1}$ and $\hat{Y}_{2}$, where we define $\displaystyle\hat{Y}_{1}=\underset{y\to\infty}{\lim}y\hat{J}^{-}(y)\ .$ (4.2.61) We can now define Sklyanin’s separation of variables for the $\mathfrak{sl}_{2}$ Gaudin model as the linear map $\mathcal{K}$ from functions of $(X_{1},\cdots X_{N})$ to functions of $(Y_{1},Y_{2},y_{1},\cdots y_{N-2})$, which diagonalizes the operators $(\hat{Y}_{1},\hat{Y}_{2},\hat{y}_{1},\cdots\hat{y}_{N-2})$, so that in particular $\mathcal{K}\hat{y}_{j}f(X_{1},\cdots X_{N})=y_{j}\mathcal{K}f(X_{1},\cdots X_{N})$. We actually only define this map on functions which obey the global Ward identity associated with $J^{-}$ i.e. which are killed by $\hat{Y}_{1}$, whose images therefore have a $\delta(Y_{1})$ prefactor. Using in addition the global Ward identity associated to $J^{0}$, we find that the combined dependence of $\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle$ on $Y_{1}$ and $Y_{2}$ is a $\delta(Y_{1})Y_{2}$ prefactor. (This is done with the help of eq. (4.2.53), which implies $[\underset{y\to\infty}{\lim}y\hat{J}^{0}(y),\hat{J}^{-}(z)]=-\hat{J}^{-}(z)$.) Let us rewrite our equation (4.2.59) in terms of Sklyanin variables. Using eq. (4.2.57), we find $\mathcal{K}\hat{p}_{j}\mathcal{K}^{-1}={\frac{\partial}{\partial y_{j}}}$. Therefore, eq. (4.2.59) is equivalent to the following equation for $\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle$, $\displaystyle\left(\frac{1}{k+2}\frac{\partial^{2}}{\partial y_{j}^{2}}-\mathcal{K}\hat{T}(\hat{y}_{j})\mathcal{K}^{-1}\right)\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.62) This equation apparently only involves $y_{j}$ for a given index $j$, whereas each KZ equation involves $(X_{1},\cdots X_{N})$. 
So it may seem that using Sklyanin variables leads to a separation of variables not only in the Gaudin model, but also in the KZ equations. This is however not true, because the differential operator $\hat{T}(y)$ (4.2.50) involves $z_{i}$-derivatives at fixed isospin variables $X_{i}$. When writing $\mathcal{K}\hat{T}(\hat{y}_{j})\mathcal{K}^{-1}$, we have to use $z_{i}$-derivatives at fixed Sklyanin variables $y_{k}$, and we find $\displaystyle\mathcal{K}\hat{T}(\hat{y}_{j})\mathcal{K}^{-1}=\sum_{i}\left[\frac{\Delta_{j_{i}}}{(y_{j}-z_{i})^{2}}+\frac{1}{y_{j}-z_{i}}\left({\frac{\partial}{\partial z_{i}}}+{\frac{\partial}{\partial y_{j}}}\right)\right]-\sum_{k\neq j}\frac{1}{y_{jk}}\left({\frac{\partial}{\partial y_{j}}}-{\frac{\partial}{\partial y_{k}}}\right)\ .$ (4.2.63) (See Exercise 4.8.) Introducing the function $\displaystyle\Theta=\frac{\prod_{i<i^{\prime}}(z_{i}-z_{i^{\prime}})\prod_{j<j^{\prime}}(y_{j}-y_{j^{\prime}})}{\prod_{i,j}(z_{i}-y_{j})}\ ,$ (4.2.64) the equation (4.2.62) is equivalent to $\left\\{\frac{1}{k+2}\frac{\partial^{2}}{\partial y_{j}^{2}}-\sum_{i}\frac{1}{y_{j}-z_{i}}{\frac{\partial}{\partial z_{i}}}-\sum_{k\neq j}\frac{1}{y_{j}-y_{k}}{\frac{\partial}{\partial y_{k}}}\right.\\\ \left.-\sum_{i}\frac{\Delta_{j_{i}}-\frac{k}{4}}{(y_{j}-z_{i})^{2}}-\sum_{k\neq j}\frac{\frac{3k}{4}+1}{(y_{j}-y_{k})^{2}}\right\\}\Theta^{\frac{k+2}{2}}\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle=0\ .$ (4.2.65) As an equation for $\Theta^{\frac{k+2}{2}}\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle$, this coincides with the BPZ equation (2.3.54) for the correlation function $\left\langle\prod_{i=1}^{N}V_{\alpha_{i}}(z_{i})\prod_{j=1}^{N-2}V_{\langle 2,1\rangle}(y_{j})\right\rangle$, provided the parameter $b$ of the Virasoro algebra is given by $\displaystyle\boxed{b^{2}=-k-2}\ ,$ (4.2.66) and the momenta $\alpha_{i}$ are given in terms of the spins $j_{i}$ by 
$\displaystyle\boxed{\alpha=-b^{-1}j+\frac{b}{2}}\quad\Rightarrow\quad\boxed{\Delta(\alpha(j))=\Delta_{j}-\frac{k}{4}}\ ,$ (4.2.67) where $\Delta(\alpha)$ is given by eq. (2.1.30). The BPZ equations only constrain the dependence on $y_{j}$, but we already determined the dependence of $\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i}}(z_{i})\right\rangle$ on the variables $Y_{1}$ and $Y_{2}$. This leads to Feigin, Frenkel and Stoyanovsky’s KZ-BPZ relation, $\displaystyle\boxed{\mathcal{K}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{X_{i},\bar{X}_{i}}(z_{i},\bar{z}_{i})\right\rangle=\delta^{(2)}(Y_{1})|Y_{2}|^{2}|\Theta|^{b^{2}}\left\langle\prod_{i=1}^{N}V_{\alpha_{i}}(z_{i},\bar{z}_{i})\prod_{j=1}^{N-2}V_{\langle 2,1\rangle}(y_{j},\bar{y}_{j})\right\rangle}\ .$ (4.2.68) This equation means that the differential equations obeyed by both sides are related by Sklyanin’s separation of variables. Solutions of the KZ equations, which are called $\widehat{\mathfrak{sl}}_{2}$ conformal blocks, are therefore related to certain Virasoro $(2N-2)$-point conformal blocks. 
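As a consistency check (not from the text), the relation (4.2.67) together with $b^{2}=-k-2$ can be verified symbolically, taking $\Delta(\alpha)=\alpha(Q-\alpha)$ with $Q=b+b^{-1}$ and the Sugawara weight $\Delta_{j}=\frac{j(j+1)}{k+2}$; both expressions are assumptions about the conventions fixed elsewhere in the text:

```python
import sympy as sp

b, j, k = sp.symbols('b j k')
Q = b + 1/b                              # background charge
alpha = -j/b + b/2                       # eq. (4.2.67)
Delta_alpha = alpha*(Q - alpha)          # Liouville weight Delta(alpha)
Delta_j = j*(j + 1)/(k + 2)              # assumed Sugawara weight of spin j

# impose b^2 = -k-2 and check Delta(alpha(j)) = Delta_j - k/4
diff = (Delta_alpha - Delta_j + k/4).subs(k, -b**2 - 2)
assert sp.simplify(diff) == 0
print("weight relation (4.2.67) verified")
```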
In the diagrammatic representation of conformal blocks as trees, the relevant Virasoro block is obtained from an $\widehat{\mathfrak{sl}}_{2}$ conformal block by adding a degenerate field near each node, and identifying the two fusion channels of that degenerate field with $\widehat{\mathfrak{sl}}_{2}$ fusion multiplicities:

[Diagram (4.2.69): the four-point $\widehat{\mathfrak{sl}}_{2}$ block with external spins $j_{1},j_{2},j_{3},j_{4}$, internal spin $j_{s}$, and fusion multiplicity labels $\epsilon$ and $\sigma$ at the two nodes, is mapped to a Virasoro block with external momenta $\alpha(j_{1}),\alpha(j_{2}),\alpha(j_{3}),\alpha(j_{4})$, a degenerate field $\left<2,1\right>$ attached near each node, and internal momenta $\alpha(j_{s})+\epsilon\frac{b}{2}$ and $\alpha(j_{4})+\sigma\frac{b}{2}$.] (4.2.69)

The separation of variables $\mathcal{K}$ is in general an integral transformation, but in the case of the $\mu$-basis (4.2.28) of isospin variables, we have 
$\displaystyle\hat{J}^{-}(y)=\sum_{i=1}^{N}\frac{\mu_{i}}{y-z_{i}}\quad\Rightarrow\quad\left\\{\begin{array}[]{l}Y_{1}=\sum_{i=1}^{N}\mu_{i}\ ,\\\ Y_{2}=\sum_{i=1}^{N}\mu_{i}z_{i}\ ,\\\ \mu_{i}=Y_{2}\frac{\prod_{j=1}^{N-2}(z_{i}-y_{j})}{\prod_{i^{\prime}\neq i}(z_{i}-z_{i^{\prime}})}\ ,\end{array}\right.$ (4.2.73) and the variables on both sides of the relation are functions of one another. The KZ-BPZ relation plays an important role in the study of $\widehat{\mathfrak{sl}}_{2}$-symmetric theories such as the $H_{3}^{+}$ model, and one may wonder whether the $\mathfrak{g}$ KZ equations are involved in similar relations for more general choices of the Lie algebra $\mathfrak{g}$. While Sklyanin’s separation of variables for the $\mathfrak{sl}_{3}$ Gaudin model exists, writing the $\mathfrak{sl}_{3}$ KZ equations in Sklyanin variables does not, however, lead to the expected generalizations of the BPZ equations [19], and the reason for this discrepancy is not known. Another tentative generalization of the KZ-BPZ relation (4.2.68) is to replace the field $V_{\langle 2,1\rangle}$ on the right-hand side with another primary field (degenerate or not). Then the resulting expression can be interpreted as an $N$-point function in a theory whose symmetry algebra is a generalization of $\widehat{\mathfrak{sl}}_{2}$, as argued in [20]. ### 4.3 The $H_{3}^{+}$ model The $H_{3}^{+}$ model is to the $\widehat{\mathfrak{sl}}_{2}$ algebra what Liouville theory is to the Virasoro algebra: the simplest non-rational model with the given symmetry. However, in contrast to Liouville theory, there is no value of the central charge such that the $H_{3}^{+}$ model is unitary. As we did with Liouville theory, we will define the $H_{3}^{+}$ model by its symmetries, and other assumptions on the correlation functions. 
We could follow the same route as with Liouville theory, of studying the representations of $\widehat{\mathfrak{sl}}_{2}$, and deducing the three-point function from the associativity of OPEs involving degenerate fields, as was done by Teschner [21]. Rather, we will follow the shortcut of assuming that correlation functions of the $H_{3}^{+}$ model and Liouville theory are related as suggested by the KZ-BPZ relation (4.2.68) – this relation, when applied to correlation functions (instead of differential equations or conformal blocks), is then called the $H_{3}^{+}$-Liouville relation. #### 4.3.1 Spectrum and correlation functions Let us study the correlation functions in the $\mu$-basis, taking advantage of the simplicity of the relation (4.2.73) between the variables $(\mu_{i})$ and $(Y_{1},Y_{2},y_{j})$. Let us first consider the two-point function. In this case there are no Sklyanin variables $y_{j}$, and the separation of variables is simply $\displaystyle N=2\quad\Rightarrow\quad\left\\{\begin{array}[]{l}Y_{1}=\mu_{1}+\mu_{2}\ ,\\\ Y_{2}=\mu_{1}(z_{1}-z_{2})\ .\end{array}\right.$ (4.3.3) Using the two-point function (3.1.11) of Liouville theory, we obtain $\displaystyle\left\langle\Phi^{j_{1}}_{\mu_{1},\bar{\mu}_{1}}\Phi^{j_{2}}_{\mu_{2},\bar{\mu}_{2}}\right\rangle=\delta^{(2)}(\mu_{1}+\mu_{2})|\mu_{1}|^{2}b\Big{(}\delta(j_{1}+j_{2}+1)+R\left(-b^{-1}j_{1}+\tfrac{b}{2}\right)\delta(j_{2}-j_{1})\Big{)}\ ,$ (4.3.4) where we omitted the dependence on $z_{i}$, and we used the parameter $b$ (4.2.66) and reflection coefficient $R(\alpha)$ (3.1.42) of Liouville theory. Actually, the quantity $R\left(-b^{-1}j_{1}+\frac{b}{2}\right)$ is the reflection coefficient of the $H_{3}^{+}$ model in the $\mu$-basis, $\displaystyle\Phi^{j}_{\mu,\bar{\mu}}=R(-b^{-1}j+\tfrac{b}{2})\ \Phi^{-j-1}_{\mu,\bar{\mu}}\ .$ (4.3.5) Then let us consider the three-point function. 
In this case there is one Sklyanin variable $y_{1}$, and we have $\displaystyle N=3\quad\Rightarrow\quad\left\\{\begin{array}[]{l}y_{1}=-\frac{\mu_{1}z_{2}z_{3}+\mu_{2}z_{3}z_{1}+\mu_{3}z_{1}z_{2}}{Y_{2}}\ ,\\\ \frac{(y_{1}-z_{1})(z_{2}-z_{3})}{(y_{1}-z_{2})(z_{1}-z_{3})}=-\frac{\mu_{1}}{\mu_{2}}\ .\end{array}\right.$ (4.3.8) The relation (4.2.68) then reads $\displaystyle\left\langle\prod_{i=1}^{3}\Phi^{j_{i}}_{\mu_{i},\bar{\mu}_{i}}\right\rangle=\delta^{(2)}(\textstyle{\sum}_{i=1}^{3}\mu_{i})|\mu_{2}|^{2}\sum_{\pm}C_{\pm}(\alpha_{1})C_{\alpha_{1}\mp\frac{b}{2},\alpha_{2},\alpha_{3}}\left|\mathcal{H}^{(s)}_{\pm}(-\tfrac{\mu_{1}}{\mu_{2}})\right|^{2}\ ,$ (4.3.9) where we used the momenta $\alpha_{i}$ (4.2.67), the Liouville theory structure constants $C_{\pm}(\alpha)$ and $C_{\alpha_{1},\alpha_{2},\alpha_{3}}$, and the functions $\mathcal{H}^{(s)}_{\pm}(x)$, which are related by eq. (2.3.78) to the $s$-channel Virasoro conformal blocks $\mathcal{F}^{(s)}_{\pm}(x)$ [diagram: the four-point blocks with external momenta $\alpha_{1},\alpha_{2},\alpha_{3}$ and the degenerate field $\langle 2,1\rangle$, with $s$-channel momentum $\alpha_{1}\mp\frac{b}{2}$]. However, the three-point function is simpler in the $x$-basis, see eq. (4.2.45). Assuming correlation functions are single-valued as functions of the isospin variables, we must have $\displaystyle\left\langle\prod_{i=1}^{3}\Phi^{j_{i}}_{x_{i},\bar{x}_{i}}\right\rangle=C^{H_{3}^{+}}_{j_{1},j_{2},j_{3}}\ |x_{12}|^{2(j_{1}+j_{2}-j_{3})}|x_{23}|^{2(j_{2}+j_{3}-j_{1})}|x_{31}|^{2(j_{3}+j_{1}-j_{2})}\ ,$ (4.3.10) which provides a natural definition of the three-point structure constant $C^{H_{3}^{+}}_{j_{1},j_{2},j_{3}}$. Let us compute this structure constant by relating the $x$-basis and $\mu$-basis three-point functions. The relation between the two bases is a single-valued version of the holomorphic Fourier transformation of eq. (4.2.29), $\displaystyle\Phi^{j}_{x,\bar{x}}=\gamma(2j+1)\int_{{\mathbb{C}}}d^{2}\mu\ |\mu|^{-2j-2}e^{\mu x-\bar{\mu}\bar{x}}\Phi^{j}_{\mu,\bar{\mu}}\ .$ (4.3.11) Here we introduce a prefactor $\gamma(2j+1)$, which will be needed in Section 4.4.2 for ensuring that the $H_{3}^{+}$ OPE has a finite limit when the spin becomes half-integer. 
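Returning briefly to the separation of variables: the $N=3$ relations (4.3.8) can be checked numerically, verifying that $y_{1}$ is the unique zero of $\sum_{i}\frac{\mu_{i}}{y-z_{i}}$ once $\sum_{i}\mu_{i}=0$ is imposed, and that the cross-ratio identity holds. A sketch with arbitrary sample values, not from the text:

```python
import numpy as np

z1, z2, z3 = 0.0, 1.0, 3.5          # sample positions
mu1, mu2 = 0.7, 0.4
mu3 = -mu1 - mu2                    # impose Y1 = mu1 + mu2 + mu3 = 0

Y2 = mu1*z1 + mu2*z2 + mu3*z3
y1 = -(mu1*z2*z3 + mu2*z3*z1 + mu3*z1*z2)/Y2   # eq. (4.3.8)

# y1 is a zero of J^-(y) = sum_i mu_i/(y - z_i)
num = mu1*(y1-z2)*(y1-z3) + mu2*(y1-z1)*(y1-z3) + mu3*(y1-z1)*(y1-z2)
assert abs(num) < 1e-10

# cross-ratio relation of eq. (4.3.8)
lhs = (y1-z1)*(z2-z3)/((y1-z2)*(z1-z3))
assert np.isclose(lhs, -mu1/mu2)
print("N=3 Sklyanin relations hold")
```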
Moreover, in eq. (4.3.11) we flipped the sign of the second term of the exponent of $e^{\mu x-\bar{\mu}\bar{x}}$, as compared to the expected $e^{\mu x+\bar{\mu}\bar{x}}$, in order to ensure the convergence of the integral when $x$ and $\bar{x}$ are complex conjugates. This affects the relation between the differential operators $\bar{D}^{j}_{\bar{x}}(t^{a})$ and $\bar{D}^{j}_{\bar{\mu}}(t^{a})$, which we define in terms of their holomorphic counterparts $D^{j}_{x}(t^{a}),D^{j}_{\mu}(t^{a})$ as $\displaystyle\bar{D}^{j}_{\bar{x}}(t^{a})=D^{j}_{\bar{x}}(t^{a})\quad,\quad\bar{D}^{j}_{\bar{\mu}}(t^{a})=D^{j}_{-\bar{\mu}}(t^{a})\ .$ (4.3.12) Notice that the minus sign in the operators $D^{j}_{-\bar{\mu}}(t^{a})$ modifies neither their commutation relations, nor the KZ equations. The $x$-basis and $\mu$-basis three-point functions can be compared using the formula [22] $\prod_{i=1}^{3}\left(\frac{|\mu_{i}|^{2j_{i}+2}}{\pi^{2}}\int_{{\mathbb{C}}}d^{2}x_{i}\ e^{-\mu_{i}x_{i}+\bar{\mu}_{i}\bar{x}_{i}}\right)|x_{12}|^{2(j_{1}+j_{2}-j_{3})}|x_{23}|^{2(j_{2}+j_{3}-j_{1})}|x_{31}|^{2(j_{3}+j_{1}-j_{2})}\\\ =\frac{1}{\pi^{2}}\delta^{(2)}(\textstyle{\sum}_{i=1}^{3}\mu_{i})|\mu_{2}|^{2}\sum_{\pm}d_{\pm}\left|\mathcal{H}^{(s)}_{\pm}(-\tfrac{\mu_{1}}{\mu_{2}})\right|^{2}\ ,$ (4.3.13) where we define $\displaystyle d_{+}=\frac{\gamma(-j_{1}+j_{2}+j_{3}+1)}{\gamma(-2j_{1})}\quad,\quad d_{-}=\frac{\gamma(j_{1}+j_{2}-j_{3}+1)\gamma(j_{1}-j_{2}+j_{3}+1)}{\gamma(-j_{1}-j_{2}-j_{3}-1)\gamma(2j_{1}+2)}\ .$ (4.3.14) The right-hand side of eq. (4.3.13) is a combination of solutions (4.2.46) of the $\mu$-basis global Ward identities, written using the functions $\mathcal{H}^{(s)}_{\pm}(x)$ of eq. (2.3.88). Comparing with eq. 
(4.3.9), we obtain the three-point structure constant in terms of Liouville theory structure constants, $\displaystyle C^{H_{3}^{+}}_{j_{1},j_{2},j_{3}}=\frac{\pi^{2}\prod_{i=1}^{3}\gamma(2j_{i}+1)}{d_{\pm}}C_{\pm}(\alpha_{1})C_{\alpha_{1}\mp\frac{b}{2},\alpha_{2},\alpha_{3}}\ .$ (4.3.15) The conformal bootstrap equations of Liouville theory guarantee that this does not depend on the sign $\pm$, see Section 3.1.3. Using eqs. (3.1.43) and (3.1.59) for $C_{\pm}(\alpha_{1})$ and $C_{\alpha_{1}\mp\frac{b}{2},\alpha_{2},\alpha_{3}}$ respectively, we find $\displaystyle\boxed{C^{H_{3}^{+}}_{j_{1},j_{2},j_{3}}=\frac{\pi^{2}b^{-1}\left[b^{\frac{2}{b^{2}}}\mu^{\frac{1}{b}}\right]^{j_{1}+j_{2}+j_{3}+1}\Upsilon^{\prime}_{b}(0)\Upsilon_{b}(-\frac{2j_{1}}{b})\Upsilon_{b}(-\frac{2j_{2}}{b})\Upsilon_{b}(-\frac{2j_{3}}{b})}{\Upsilon_{b}(-\frac{j_{1}+j_{2}+j_{3}+1}{b})\Upsilon_{b}(-\frac{j_{1}+j_{2}-j_{3}}{b})\Upsilon_{b}(-\frac{j_{1}-j_{2}+j_{3}}{b})\Upsilon_{b}(-\frac{-j_{1}+j_{2}+j_{3}}{b})}}\ ,$ (4.3.16) where the parameter $b$ is given by eq. (4.2.66), and the cosmological constant $\mu$ is inherited from Liouville theory. (We would have a different formula in the case $b\in i\mathbb{R}$, based on the alternative Liouville structure constant (3.1.60).) 
The $x$-basis two-point function can be obtained from the $\mu$-basis two-point function with the help of the formula $\displaystyle\int_{{\mathbb{C}}}d^{2}\mu\ e^{\mu x-\bar{\mu}\bar{x}}|\mu|^{-4j-2}=|x|^{4j}\pi\gamma(-2j)\ ,$ (4.3.17) and we find $\displaystyle\left\langle\Phi^{j_{1}}_{x_{1},\bar{x}_{1}}\Phi^{j_{2}}_{x_{2},\bar{x}_{2}}\right\rangle=\frac{-\pi^{2}b}{(2j_{1}+1)^{2}}\,\delta(j_{1}+j_{2}+1)\delta^{(2)}(x_{12})+\frac{\pi}{b}\mu^{\frac{2j_{1}+1}{b}}\gamma(-\tfrac{2j_{1}+1}{b^{2}})\,\delta(j_{1}-j_{2})|x_{12}|^{4j_{1}}\,.$ (4.3.18) Using the same formula, we also obtain the $x$-basis reflection relation from the $\mu$-basis reflection relation, $\displaystyle\Phi^{j}_{x,\bar{x}}=\frac{b^{2}}{\pi}\mu^{\frac{2j+1}{b}}\gamma(1-\tfrac{2j+1}{b^{2}})\int_{{\mathbb{C}}}d^{2}x^{\prime}\ |x-x^{\prime}|^{4j}\Phi^{-j-1}_{x^{\prime},\bar{x}^{\prime}}\ .$ (4.3.19) The OPEs of affine primary fields are $\displaystyle\Phi^{j_{1}}_{x_{1},\bar{x}_{1}}\Phi^{j_{2}}_{x_{2},\bar{x}_{2}}$ $\displaystyle\sim\frac{1}{2}\int_{-\frac{1}{2}+ib\mathbb{R}}\frac{(2j+1)^{2}dj}{-\pi^{2}b}\int_{{\mathbb{C}}}d^{2}x\left\langle\Phi^{j_{1}}_{x_{1},\bar{x}_{1}}\Phi^{j_{2}}_{x_{2},\bar{x}_{2}}\Phi^{-j-1}_{x,\bar{x}}\right\rangle\Phi^{j}_{x,\bar{x}}\ ,$ (4.3.20) $\displaystyle\Phi^{j_{1}}_{\mu_{1},\bar{\mu}_{1}}\Phi^{j_{2}}_{\mu_{2},\bar{\mu}_{2}}$ $\displaystyle\sim\frac{1}{2}\int_{-\frac{1}{2}+ib\mathbb{R}}dj\int_{{\mathbb{C}}}\frac{d^{2}\mu}{b|\mu|^{2}}\left\langle\Phi^{j_{1}}_{\mu_{1},\bar{\mu}_{1}}\Phi^{j_{2}}_{\mu_{2},\bar{\mu}_{2}}\Phi^{-j-1}_{-\mu,-\bar{\mu}}\right\rangle\Phi^{j}_{\mu,\bar{\mu}}\ .$ (4.3.21) Notice that the measures of integration on the spin and isospin variables are dictated by the $\delta(j_{1}+j_{2}+1)$ term in the two-point function. These equations are analogous to eq. (3.1.16) for the OPE in Liouville theory.
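As a spot check of eq. (4.3.17): writing $\mu=re^{i\phi}$, the exponent $\mu x-\bar{\mu}\bar{x}=2i\,\text{Im}(\mu x)$ is a pure phase, the angular integral gives $2\pi J_{0}(2r|x|)$, and the left-hand side becomes $2\pi\int_{0}^{\infty}dr\ r^{-4j-1}J_{0}(2r|x|)$, convergent for $-\frac{3}{8}<j<0$. A numerical sketch (the sample values of $j$ and $|x|$ are arbitrary):

```python
import mpmath as mp

mp.mp.dps = 30

def gam(z):
    # gamma(z) = Gamma(z) / Gamma(1 - z), as used throughout the text
    return mp.gamma(z) / mp.gamma(1 - z)

j, absx = mp.mpf(-1) / 4, mp.mpf('1.3')       # need -3/8 < j < 0 for convergence

# radial form of the left-hand side of eq. (4.3.17), after the angular integration
lhs = 2 * mp.pi * mp.quadosc(
    lambda r: r ** (-4 * j - 1) * mp.besselj(0, 2 * r * absx),
    [0, mp.inf],
    zeros=lambda n: mp.besseljzero(0, n) / (2 * absx),
)
rhs = absx ** (4 * j) * mp.pi * gam(-2 * j)
assert abs(lhs - rhs) < mp.mpf('1e-12')
```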
The values of the spin $j$, $\displaystyle\boxed{j\in-\frac{1}{2}+ib\mathbb{R}}\ ,$ (4.3.22) are deduced from Liouville theory via the relation (4.2.67). These values are the same for the left- and right-moving $\widehat{\mathfrak{sl}}_{2}$ algebras – the spectrum is diagonal. Our treatment of the $H_{3}^{+}$ model is valid for levels $k\neq-2$. If $k=-2$, the Sugawara construction breaks down, and the limit of the model is no longer a conformal field theory. The central charges of the $H_{3}^{+}$ model (4.2.12) and of the corresponding Liouville theory (4.2.66) take the values $\displaystyle\boxed{c^{H_{3}^{+}}=\frac{3k}{k+2}\in\mathbb{C}-\\{3\\}}\ ,\quad c^{\text{Liouville}}=1-6k-\frac{6}{k+2}\in\mathbb{C}\ .$ (4.3.23) #### 4.3.2 Large level limit and geometrical interpretation Having defined the $H_{3}^{+}$ model by its spectrum and correlation functions, we still have to understand these objects and their interpretation. To do this, we will consider the large level limit $k\to\infty$ of the model, which is sometimes called the minisuperspace limit. The KZ equations (4.2.47) imply that correlation functions do not depend on the positions $z_{i}$ in this limit, and are functions of the sole isospin variables. Moreover, given a generator $J^{a}_{m}$, the commutator $[J^{a}_{m},J^{b}_{-m}]$ (4.2.14) tends to infinity for some index $b$, unless $m=0$. So the generators $J^{a}_{m\neq 0}$, and the descendent states they create, disappear from the theory, and only the horizontal subalgebra of $\widehat{\mathfrak{sl}}_{2}$ survives. 
This corresponds to an $SL_{2}({\mathbb{C}})$ symmetry group, which acts on a field of spin $j$ as $\displaystyle U_{g}\Phi^{j}_{x,\bar{x}}=|cx+d|^{4j}\Phi^{j}_{\frac{ax+b}{cx+d},\frac{\bar{a}\bar{x}+\bar{b}}{\bar{c}\bar{x}+\bar{d}}}\quad\text{with}\quad g=\left(\begin{array}[]{cc}a&b\\\ c&d\end{array}\right)\in SL_{2}({\mathbb{C}})\ .$ (4.3.26) (This action is formally identical to the action (2.3.17) of global conformal transformations on quasi-primary fields.) Now we notice that the transformation $U_{g}$ also describes the action of $SL_{2}({\mathbb{C}})$ on functions on the space $H_{3}^{+}$ of hermitian matrices of size two and determinant one. Consider indeed the function $\displaystyle\Phi^{j}_{x,\bar{x}}(h)=\left(\begin{bmatrix}x\\\ 1\end{bmatrix}^{\dagger}h\begin{bmatrix}x\\\ 1\end{bmatrix}\right)^{2j}\ ,$ (4.3.27) and the natural action of $g\in SL_{2}({\mathbb{C}})$ on the space $\mathcal{F}(H_{3}^{+})$ of functions on $H_{3}^{+}$, $\displaystyle g\cdot f(h)=f(g^{\dagger}hg)\ ,$ (4.3.28) then we have $\displaystyle g\cdot\Phi^{j}_{x,\bar{x}}(h)=U_{g}\Phi^{j}_{x,\bar{x}}(h)\ .$ (4.3.29) This suggests that we identify the large level limit of the field $\Phi^{j}_{x,\bar{x}}$ with the function $\Phi^{j}_{x,\bar{x}}(h)$. We can then define a scalar product on the large level limit of the spectrum, using the natural, positive definite scalar product on $\mathcal{F}(H_{3}^{+})$, $\displaystyle\left\langle f\middle|f^{\prime}\right\rangle=\int_{H_{3}^{+}}dh\ \overline{f(h)}f^{\prime}(h)\ ,$ (4.3.30) where $dh$ is the $SL_{2}({\mathbb{C}})$-invariant measure on $H_{3}^{+}$.
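The identity (4.3.29) follows from the fact that $h\mapsto g^{\dagger}hg$ rescales the vector $(x,1)^{T}$ into $(cx+d)$ times $(x^{\prime},1)^{T}$ with $x^{\prime}=\frac{ax+b}{cx+d}$, and is easy to spot-check numerically. A sketch (the matrix entries and the value of the spin are arbitrary):

```python
import numpy as np

def Phi(j, x, h):
    # Phi^j_x(h) = ( [x,1]^dagger h [x,1] )^(2j); the base is real > 0 for h in H3+
    v = np.array([x, 1.0])
    return float((v.conj() @ h @ v).real) ** (2 * j)

# an arbitrary g in SL2(C), normalized to unit determinant
g = np.array([[1.2 + 0.3j, 0.5 - 0.1j], [-0.4 + 0.7j, 0.9 + 0.2j]])
g = g / np.sqrt(np.linalg.det(g))

# an arbitrary point h of H3+: positive definite, hermitian, determinant one
h = np.array([[2.0, 0.3 + 0.4j], [0.3 - 0.4j, 1.5]])
h = h / np.sqrt(np.linalg.det(h).real)

j, x = -0.5 + 0.7j, 0.3 - 1.1j
(a, b), (c, d) = g
lhs = Phi(j, x, g.conj().T @ h @ g)            # (g . Phi^j_x)(h), as in eq. (4.3.28)
rhs = float(abs(c * x + d)) ** (4 * j) * Phi(j, (a * x + b) / (c * x + d), h)
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```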
For this scalar product, the action of $g\in SL_{2}({\mathbb{C}})$ is a unitary transformation: $\displaystyle\left\langle f\middle|g\cdot f^{\prime}\right\rangle=\int_{H_{3}^{+}}dh\ \overline{f(h)}f^{\prime}(g^{\dagger}hg)=\int_{H_{3}^{+}}dh\ \overline{f((g^{-1})^{\dagger}hg^{-1})}f^{\prime}(h)=\left\langle g^{-1}\cdot f\middle|f^{\prime}\right\rangle\ .$ (4.3.31) Let us interpret this at the level of the symmetry algebra. The Lie algebra $\mathfrak{sl}_{2}({\mathbb{C}})$ of the symmetry group $SL_{2}({\mathbb{C}})$ can be viewed as a six-dimensional real space, whose generators are related to $J^{a}_{0},\bar{J}^{a}_{0}$ by the ${\mathbb{R}}$-linear map $\displaystyle\left\\{\begin{array}[]{lcl}t^{a}&\mapsto&J_{0}^{a}+\bar{J}_{0}^{a}\ ,\\\ it^{a}&\mapsto&i(J^{a}_{0}-\bar{J}^{a}_{0})\ .\end{array}\right.$ (4.3.34) (The minus sign in the image of $it^{a}$ comes from the complex conjugation of the elements of $g$ in $U_{g}\Phi^{j}_{x,\bar{x}}$ (4.3.26).) Our scalar product is such that $t^{a}$ and $it^{a}$ are antihermitian, which is equivalent to $\displaystyle(J^{a}_{0})^{\dagger}=-\bar{J}^{a}_{0}\ .$ (4.3.35) In the large level limit, the OPE of the fields $\Phi^{j}_{x,\bar{x}}$ must remain associative and commutative. The product of functions in $\mathcal{F}(H_{3}^{+})$ has these properties, which suggests that it coincides with the large level limit of the OPE – given how stringent the condition of associativity is, we do not expect alternative products to exist. Then the large level limit of the spectrum is the subspace of $\mathcal{F}(H_{3}^{+})$ which is spanned by the functions $\Phi^{j}_{x,\bar{x}}$. The OPE (4.3.20) implies that this subspace is closed under multiplication, which suggests that the subspace in question is $\mathcal{F}(H_{3}^{+})$ itself.
Calling $C^{j}$ the principal series representation of $\mathfrak{sl}_{2}({\mathbb{C}})$ of spin $j\in-\frac{1}{2}+i{\mathbb{R}}$, which is defined by the action of $\mathfrak{sl}_{2}({\mathbb{C}})$ on $\Phi^{j}_{x,\bar{x}}$, we obtain $\displaystyle\mathcal{F}(H_{3}^{+})=\bigoplus_{j\in-\frac{1}{2}+i{\mathbb{R}}_{+}}C^{j}\ .$ (4.3.36) (For a more rigorous definition of $\mathcal{F}(H_{3}^{+})$, and proof of its decomposition into representations of $\mathfrak{sl}_{2}({\mathbb{C}})$, see [23].) Furthermore, the identification of the limit of the OPE with the product of functions suggests $\displaystyle\underset{k\to\infty}{\lim}\left\langle\prod_{i=1}^{N}\Phi^{j_{i}}_{x_{i},\bar{x}_{i}}(z_{i},\bar{z}_{i})\right\rangle\propto\int_{H_{3}^{+}}dh\ \prod_{i=1}^{N}\Phi^{j_{i}}_{x_{i},\bar{x}_{i}}(h)\ ,$ (4.3.37) where the unknown proportionality factor is an $x,\bar{x}$-independent field normalization. Both sides of this equation can be computed explicitly in the case $N=3$, and are found to agree [23]. Notice that the large level limit of the $H_{3}^{+}$ $N$-point function is formally identical to the light asymptotic limit of the Liouville $N$-point function (4.1.54), whose interpretation is however quite different as it depends on positions $z,\bar{z}$ instead of isospin variables $x,\bar{x}$. So the large level limit of our two-dimensional conformal field theory is the quantum mechanics of a point particle on the space $H_{3}^{+}$, which justifies naming the theory the $H_{3}^{+}$ model. Other names for the same theory include the $H_{3}^{+}$ WZW model and the $SL_{2}({\mathbb{C}})/SU_{2}$ WZW model; the latter name comes from the realization of the space $H_{3}^{+}$ as a coset. 
We can now investigate the unitarity of the $H_{3}^{+}$ model, using the conjugation rule (4.3.35) for the horizontal subalgebra, which we now extend to the entire affine Lie algebra as $\displaystyle\boxed{(J^{a}_{n})^{\dagger}=-\bar{J}^{a}_{-n}}\ .$ (4.3.38) This is compatible with the structure (4.2.14) of the affine Lie algebra $\hat{\mathfrak{sl}}_{2}$ provided $\displaystyle k\in\mathbb{R}\ .$ (4.3.39) Via the Sugawara construction (4.2.16), the conjugation rule for $J^{a}_{n}$ implies $L_{n}^{\dagger}=\bar{L}_{-n}$, which differs from the conjugation rule (1.4.3) which we previously assumed. We refrain from discussing how this affects the geometrical interpretation of conformal transformations. Let us rather investigate the unitarity of the theory. Given a primary state $|v\rangle$, the norm square of some of the level one descendents of $|v\rangle$ is $\displaystyle\left\langle(J^{0}_{-1}+\lambda\bar{J}^{0}_{-1})v\middle|(J^{0}_{-1}+\lambda\bar{J}^{0}_{-1})v\right\rangle=-\left\langle v\middle|(\bar{\lambda}J^{0}_{1}+\bar{J}^{0}_{1})(J^{0}_{-1}+\lambda\bar{J}^{0}_{-1})v\right\rangle=-\frac{k}{2}(\lambda+\bar{\lambda})\langle v|v\rangle\ .$ (4.3.40) This cannot be positive for all values of $\lambda$, so the theory cannot be unitary – except of course in the large level limit, where descendent modes disappear. ### 4.4 WZW models #### 4.4.1 Definition and general properties Given a simple Lie group $G$, the $G$ Wess–Zumino–Witten model or $G$ WZW model is usually defined by a Lagrangian, which depends on a parameter $k$. This model can then be shown to be a conformal field theory with a $\hat{\mathfrak{g}}$ symmetry algebra, where $\mathfrak{g}$ is the Lie algebra of $G$. The parameter $k$ coincides with the level of $\hat{\mathfrak{g}}$, and may have to be quantized for the model to be consistent, depending on the group $G$. In any case, the allowed values of $k$ accumulate near $k=\infty$. 
We will of course not use the Lagrangian definition of WZW models, which raises the question of characterizing these models in the conformal bootstrap approach. The fundamental axiom is the presence of the symmetry algebra $\hat{\mathfrak{g}}$, and some authors call all models with this symmetry algebra WZW models. Here we will insist that, among models with this symmetry, a WZW model can be unambiguously associated to a Lie group $G$, such that its spectrum $S$ obeys $\displaystyle\boxed{\underset{k\to\infty}{\lim}S=\mathcal{F}(G)}\ ,$ (4.4.1) where $\mathcal{F}(G)$ is the space of functions on $G$. This property brings the group $G$, and not just the Lie algebra $\mathfrak{g}$, into the definition of a WZW model. This property still does not fully characterize WZW models: in particular, nothing forces the level $k$ to be quantized whenever the Lagrangian definition dictates it. It is plausible that the spectrum of WZW models can be characterized in terms of functions on a manifold related to the loop group of $G$, but such a characterization is not known. Instead of a proper definition of WZW models, we will limit ourselves to giving a few known properties of the spectrum of the $G$ WZW model: 1. If $G$ is compact, then the $G$ WZW model is rational. 2. If $G$ is compact, then the level $k$ takes positive integer values. 3. The $G$ WZW model is diagonal if and only if $G$ is simply connected. The relation between the simple connectedness of $G$ and the diagonality of the associated model is already manifest in the case of abelian groups. The free bosonic theory whose spectrum was given in eq. (4.1.17) (with $Q=0$) may be called the ${\mathbb{R}}$ WZW model, and it is diagonal. The compactified free boson, whose spectrum was given in eq. (4.1.21), may be called the $U(1)$ WZW model, and it is not diagonal. Let us study the consequences of the axiom (4.4.1).
The space $\mathcal{F}(G)$ of functions on $G$ has a natural action of $G\times\bar{G}$, where the bar is here for distinguishing the two copies of $G$, such that for $f\in\mathcal{F}(G)$ we have $\displaystyle\left((g,\bar{g})\cdot f\right)(h)=f(g^{-1}h\bar{g})\ .$ (4.4.2) We identify the corresponding infinitesimal symmetry algebra $\mathfrak{g}\times\bar{\mathfrak{g}}$ with the large level limit of the symmetry algebra $\hat{\mathfrak{g}}\times\bar{\hat{\mathfrak{g}}}$ of the WZW model. That is, $\mathfrak{g}$ and its generators $t^{a}$ are identified with the horizontal subalgebra of $\hat{\mathfrak{g}}$ and its generators $J^{a}_{0}$. The natural scalar product on $\mathcal{F}(G)$ is $\displaystyle\langle f|f^{\prime}\rangle=\int_{G}dh\ \overline{f(h)}f^{\prime}(h)\ ,$ (4.4.3) where $dh$ is the Haar measure, which is invariant under the left and right actions of $G$ on itself. As in the case of the $H_{3}^{+}$ model, the generators of the symmetry algebra are antihermitian for this scalar product, which now implies $\displaystyle(J^{a}_{0})^{\dagger}=-J^{a}_{0}\ .$ (4.4.4) This conjugation rule is naturally extended to the following conjugation rule on the affine Lie algebra, $\displaystyle\boxed{(J^{a}_{n})^{\dagger}=-J^{a}_{-n}}\ .$ (4.4.5) This is compatible with the commutation relations (4.2.14) of the affine Lie algebra $\hat{\mathfrak{g}}$, provided the structure constants $f^{ab}_{c}$ and level $k$ are real. This is also compatible with the conjugation rule (1.4.3) for the generators of the Virasoro algebra, via the Sugawara construction (4.2.16). 
If $G$ is compact, then the space $\mathcal{F}(G)$ of functions on $G$ can be decomposed into irreducible representations of the symmetry group $G\times\bar{G}$ using the Peter-Weyl theorem, $\displaystyle\mathcal{F}(G)=\bigoplus_{R\in\mathcal{R}}R\otimes\bar{R}\ ,$ (4.4.6) where $\mathcal{R}$ is the set of irreducible unitary representations of $G$, which coincides with the set of irreducible finite-dimensional representations of $G$. Since the scalar product (4.4.3) is positive definite, it is obvious that only unitary representations can appear. The nontrivial statement is that all unitary irreducible representations do appear, and that they couple diagonally. If $G$ is simply connected, then the spectrum of the WZW model is still diagonal for all positive integer values of $k$, [4] $\displaystyle S=\bigoplus_{R\in\mathcal{R}_{k}}\hat{R}\otimes\bar{\hat{R}}\ ,$ (4.4.7) where $\hat{R}$ is the affine highest-weight representation of $\hat{\mathfrak{g}}$ which is built from $R$ by acting with the creation modes and removing the null vectors, and the finite subset $\mathcal{R}_{k}$ of $\mathcal{R}$ is defined by certain $k$-dependent conditions. The resulting representations $\hat{R}$ are called the integrable highest-weight representations of $\hat{\mathfrak{g}}$. We have $\underset{k\to\infty}{\lim}\mathcal{R}_{k}=\mathcal{R}$ and $\underset{k\to\infty}{\lim}\hat{R}=R$, so that the spectrum $S$ has the desired large level limit (4.4.1). These results on the spectrum of the $G$ WZW model when $G$ is compact can be derived by looking for rational models, whose spectrums are made of multiply degenerate representations – the logic we followed with Virasoro minimal models. We will now do this in the case $G=SU_{2}$. #### 4.4.2 The $SU_{2}$ WZW model We want to build theories with the $\widehat{\mathfrak{sl}}_{2}$ symmetry algebra, whose spectrums are discrete or even rational. These spectrums must therefore be made of degenerate representations.
Rather than studying the structure and null vectors of the representations of $\widehat{\mathfrak{sl}}_{2}$, as we did in Section 2.1.2 in the case of the Virasoro algebra, we will use a shortcut. Remembering that we recovered the Virasoro degenerate fields and their fusion rules from the Liouville OPE in Section 3.1.4, we will use the $H_{3}^{+}$ OPE for studying $\widehat{\mathfrak{sl}}_{2}$ degenerate fields. According to our axiom (4.4.1), and to the Peter-Weyl theorem, the fields which are relevant for the $SU_{2}$ WZW model correspond to the finite-dimensional representations of $SU_{2}$. We will admit the well-known result that such representations have spins $j\in\\{0,\frac{1}{2},1,\frac{3}{2}\cdots\\}$ and dimensions $2j+1$. We will soon see that the corresponding fields are degenerate, in the sense that their OPEs with other fields involve finitely many primary fields. Experience with Virasoro minimal models however suggests that we should consider multiply degenerate fields, and it will turn out that we also need fields with spins $j\in\frac{1}{2}{\mathbb{N}}+\frac{1}{2}b^{2}=\frac{1}{2}{\mathbb{N}}-1-\frac{k}{2}$, which will also turn out to be degenerate. Let us introduce notations for the limits of $H_{3}^{+}$ fields when the spin takes such particular values, $\displaystyle J\in\frac{1}{2}{\mathbb{N}}\quad\Rightarrow\quad\left\\{\begin{array}[]{ccl}\Phi^{J}&=&\underset{j\to J}{\lim}\ \Phi^{j}\ ,\\\ \Phi^{(J,1)}&=&\underset{j\to J-1-\frac{k}{2}}{\lim}\Phi^{j}\ .\end{array}\right.$ (4.4.10) Let us now consider the OPE $\Phi^{j_{1}}_{x_{1},\bar{x}_{1}}\Phi^{j_{2}}_{x_{2},\bar{x}_{2}}$ (4.3.20) where we initially assume $j_{1},j_{2}\in-\frac{1}{2}+ib{\mathbb{R}}$.
The positions of the line of integration $j\in-\frac{1}{2}+ib{\mathbb{R}}$ and the eight cones of poles of the OPE coefficient are shown in the diagram (4.4.11) (figure not reproduced here). Taking the limit $j\to J$ or $j\to J-1-\frac{k}{2}$, the OPE coefficient vanishes, and the only surviving contributions are from poles that cross the line of integration. (See Section 3.1.5 for more explanations.) The resulting OPEs are schematically written as $\displaystyle\Phi^{J}\Phi^{j_{2}}$ $\displaystyle\sim\sum_{j=j_{2}-J}^{j_{2}+J}\Phi^{j}\ ,$ (4.4.12) $\displaystyle\Phi^{(J,1)}\Phi^{j_{2}}$ $\displaystyle\sim\sum_{j=j_{2}-1-\frac{k}{2}-J}^{j_{2}-1-\frac{k}{2}+J}\Phi^{j}+\sum_{j=j_{2}+1+\frac{k}{2}-J}^{j_{2}+1+\frac{k}{2}+J}\Phi^{j}\ ,$ (4.4.13) where the sums run by increments of $1$, and we have taken the reflection relation $\Phi^{j}\propto\Phi^{-j-1}$ into account. For generic values of the level $k$, the OPE of two fields with half-integer spins is therefore $\displaystyle\boxed{\Phi^{J_{1}}\Phi^{J_{2}}\sim\sum_{J=|J_{1}-J_{2}|}^{J_{1}+J_{2}}\Phi^{J}}\ .$ (4.4.14) This suggests that there exists a generalized $SU_{2}$ WZW model, whose spectrum is $\displaystyle\boxed{S=\bigoplus_{J=0,\frac{1}{2},1,\frac{3}{2},\cdots}\hat{R}_{J}\otimes\bar{\hat{R}}_{J}}\ ,$ (4.4.15) where $\hat{R}_{J}$ is the affine highest-weight representation of $\widehat{\mathfrak{sl}}_{2}$ built from the finite-dimensional representation of $\mathfrak{sl}_{2}$ of spin $J$.
The two-point function of the generalized $SU_{2}$ WZW model is obtained from the corresponding $H_{3}^{+}$ two-point function, $\displaystyle\left\langle\Phi^{J_{1}}_{x_{1},\bar{x}_{1}}\Phi^{J_{2}}_{x_{2},\bar{x}_{2}}\right\rangle=\frac{\pi}{\sqrt{-k-2}}\mu^{\frac{2J_{1}+1}{\sqrt{-k-2}}}\gamma(\tfrac{2J_{1}+1}{k+2})\,\delta_{J_{1},J_{2}}|x_{12}|^{4J_{1}}\,.$ (4.4.16) In complete analogy with the case of the generalized minimal models, the three-point function of a generalized $SU_{2}$ WZW model is obtained from the three-point function (4.3.16) of the $H_{3}^{+}$ model by sending the spins to their half-integer values, and replacing the (vanishing) $\Upsilon_{b}$ functions with their derivatives. Let us now look for rational WZW models, by considering doubly degenerate fields. We restrict our attention to the case when $\Phi^{J}$ and $\Phi^{(J^{\prime},1)}$ are related by reflection, so that $J+J^{\prime}=\frac{k}{2}$. This restricts the level $k$ to integer values, $\displaystyle\boxed{k\in{\mathbb{N}}}\ ,$ (4.4.17) and the spin $J$ to the finite set $\displaystyle J\in\left\\{0,\frac{1}{2},\cdots\frac{k}{2}\right\\}\ .$ (4.4.18) The OPE of the doubly degenerate field $\Phi^{J}\propto\Phi^{(\frac{k}{2}-J,1)}$ with $\Phi^{j_{2}}$ is constrained by both equations (4.4.12) and (4.4.13). Taking the reflection relation $\Phi^{j}\propto\Phi^{-j-1}$ into account, we find that the OPE is nonvanishing only if $j_{2}$ itself belongs to the set (4.4.18). In this case, we have $\displaystyle\boxed{\Phi^{J_{1}}\Phi^{J_{2}}=\sum_{J=|J_{1}-J_{2}|}^{\operatorname{min}(J_{1}+J_{2},k-J_{1}-J_{2})}\Phi^{J}}\ .$ (4.4.19) This suggests that the set of our doubly degenerate fields is closed under fusion, and that there exists a rational model whose spectrum is $\displaystyle\boxed{S=\bigoplus_{J=0,\frac{1}{2},\cdots\frac{k}{2}}\hat{R}_{J}\otimes\bar{\hat{R}}_{J}}\ .$ (4.4.20) This model, which exists for any positive integer value of $k$, is the $SU_{2}$ WZW model. 
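The truncated fusion rule (4.4.19) is easy to implement, and one can check that the set of spins (4.4.18) is indeed closed under it, with permutation-symmetric fusion multiplicities. A sketch (the value of the level is arbitrary):

```python
from fractions import Fraction

def su2k_fusion(k, J1, J2):
    # fusion set of eq. (4.4.19): |J1 - J2| <= J <= min(J1 + J2, k - J1 - J2), steps of 1
    J, top, out = abs(J1 - J2), min(J1 + J2, k - J1 - J2), []
    while J <= top:
        out.append(J)
        J += 1
    return out

k = 4
spins = [Fraction(n, 2) for n in range(k + 1)]     # the set (4.4.18): {0, 1/2, ..., k/2}
for J1 in spins:
    for J2 in spins:
        for J3 in su2k_fusion(k, J1, J2):
            assert J3 in spins                     # closure under fusion
            assert J2 in su2k_fusion(k, J1, J3)    # symmetry of fusion multiplicities
```

For example, at $k=4$ the fusion of two spins $\frac{3}{2}$ is truncated from $\\{0,1,2,3\\}$ down to $\\{0,1\\}$.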
The two- and three-point functions of this model are special cases of corresponding correlation functions of the generalized $SU_{2}$ WZW model. The space of models with the $\widehat{\mathfrak{sl}}_{2}$ symmetry algebra is therefore similar to the space of models with Virasoro symmetry algebra, according to the following table: $\displaystyle\begin{array}[]{r|l}\text{Virasoro algebra}&\widehat{\mathfrak{sl}}_{2}\text{ algebra}\\\ \hline\text{Liouville theory}&H_{3}^{+}\text{ model}\\\ \text{generalized minimal models}&\text{generalized }SU_{2}\text{ WZW model}\\\ \text{minimal models}&SU_{2}\text{ WZW models}\end{array}$ (4.4.25) In addition, for both symmetry algebras, there exist rational, non-diagonal models, which in both cases fit in an A-D-E classification [4]. Non-diagonal models with $\widehat{\mathfrak{sl}}_{2}$ symmetry include the $SO_{3}$ WZW models, where $SO_{3}=\frac{SU_{2}}{\mathbb{Z}_{2}}$ is not simply connected. #### 4.4.3 The $\widetilde{SL}_{2}(\mathbb{R})$ WZW model After the $H_{3}^{+}$ model, the $\widetilde{SL}_{2}(\mathbb{R})$ WZW model is our second example of a non-rational model with an $\widehat{\mathfrak{sl}}_{2}$ symmetry algebra. Work on the $\widetilde{SL}_{2}(\mathbb{R})$ WZW model has been motivated by its relevance to string theory in $AdS_{3}$. The model has not been fully solved: the three-point function is known only partially, and crossing symmetry has not been proved. We will limit ourselves to working out the fusion rules, and deriving Maldacena and Ooguri’s widely believed and well-tested conjecture for the spectrum of the model. This spectrum is more complicated than the spectrum of the $H_{3}^{+}$ model, which is why solving the $\widetilde{SL}_{2}(\mathbb{R})$ WZW model is more difficult. The Lie group $\widetilde{SL}_{2}(\mathbb{R})$ is defined as the universal covering group of the group $SL_{2}({\mathbb{R}})$ of matrices of size two with real coefficients and determinant one. The group $SL_{2}({\mathbb{R}})$ is not simply-connected, as the matrices of the type $\left(\begin{smallmatrix}\cos\tau&\sin\tau\\\ -\sin\tau&\cos\tau\end{smallmatrix}\right)$ form a non-contractible loop, and the first homotopy group of $SL_{2}({\mathbb{R}})$ is ${\mathbb{Z}}$.
So $\widetilde{SL}_{2}(\mathbb{R})$ is obtained from $SL_{2}({\mathbb{R}})$ by decompactifying the $\tau$ direction, and we have $SL_{2}({\mathbb{R}})=\widetilde{SL}_{2}(\mathbb{R})/{\mathbb{Z}}$. Let us study the space $\mathcal{F}(\widetilde{SL}_{2}(\mathbb{R}))$ of functions on $\widetilde{SL}_{2}(\mathbb{R})$, which is assumed to be the large level limit of the spectrum of our model. In this limit, a basis of symmetry generators is $(J^{a}_{0})$, and the elements of this basis are antihermitian for the natural scalar product (4.4.3). We will however work with a basis that includes a hermitian generator, whose eigenvalues are therefore real in unitary representations. We still call this basis $(J^{a}_{0})$, although it is obtained from the original basis by a complex change of bases, such that the conjugation rule becomes $\displaystyle\left\\{\begin{array}[]{l}(J_{0}^{0})^{\dagger}=J_{0}^{0}\ ,\\\ (J_{0}^{\pm})^{\dagger}=-J_{0}^{\mp}\ .\end{array}\right.$ (4.4.28) In the matrix representation (2.3.13) of $\mathfrak{sl}_{2}$, this corresponds to the $\mathfrak{su}_{1,1}$ conjugation rule $M^{\dagger}=\left(\begin{smallmatrix}1&0\\\ 0&-1\end{smallmatrix}\right)\bar{M}^{T}\left(\begin{smallmatrix}1&0\\\ 0&-1\end{smallmatrix}\right)$. (The matrices which satisfy $M^{\dagger}=-M$ for the $\mathfrak{su}_{1,1}$ conjugation rule are by definition the elements of $\mathfrak{su}_{1,1}$ itself). The decomposition of $\mathcal{F}(\widetilde{SL}_{2}(\mathbb{R}))$ into irreducible unitary representations of the global symmetry algebra $\mathfrak{sl}_{2}\times\overline{\mathfrak{sl}}_{2}$ involves two types of representations: the principal series representation of $\mathfrak{sl}_{2}$ $C^{j}_{\alpha}$ and the discrete series representation of $\mathfrak{sl}_{2}$ $D^{j,\pm}$, where $j$ is the spin. 
These representations can be characterized by the eigenvalues of the hermitian generator $J_{0}^{0}$, $\displaystyle\text{Spec}_{C^{j}_{\alpha}}(J^{0}_{0})=\alpha+{\mathbb{Z}}\quad,\quad\text{Spec}_{D^{j,\pm}}(J^{0}_{0})=\pm(j+1+{\mathbb{N}})\ .$ (4.4.33) Given a representation $R$, let $R^{*}$ be the dual representation, that is the representation with opposite $J_{0}^{0}$ eigenvalues: $\displaystyle(C^{j}_{\alpha})^{*}$ $\displaystyle=C^{j}_{-\alpha}\ ,$ (4.4.34) $\displaystyle(D^{j,\pm})^{*}$ $\displaystyle=D^{j,\mp}\ .$ (4.4.35) So, according to [18] and references therein, we have $\displaystyle\mathcal{F}(\widetilde{SL}_{2}(\mathbb{R}))$ $\displaystyle=\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\int^{\oplus}_{]0,1[}d\alpha\ C^{j}_{\alpha}\otimes\bar{C}^{j}_{\alpha}\oplus\bigoplus_{\pm}\int^{\oplus}_{]-\frac{1}{2},\infty[}dj\ D^{j,\pm}\otimes\bar{D}^{j,\pm}\ ,$ (4.4.36) $\displaystyle\mathcal{F}(SL_{2}(\mathbb{R}))$ $\displaystyle=\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\bigoplus_{\alpha\in\\{0,\frac{1}{2}\\}}C^{j}_{\alpha}\otimes\bar{C}^{j}_{\alpha}\oplus\bigoplus_{\pm}\bigoplus_{j=-\frac{1}{2},0,\frac{1}{2}\cdots}D^{j,\pm}\otimes\bar{D}^{j,\pm}\ ,$ (4.4.37) where the identification of $\mathcal{F}(SL_{2}(\mathbb{R}))$ with the space of $\tau$-periodic functions on $\widetilde{SL}_{2}(\mathbb{R})$ amounts to restricting the eigenvalues of our hermitian generator to be half-integers. Tensor products of principal and discrete series representations can be written as $\displaystyle R_{1}\otimes R_{2}=\bigoplus_{R_{3}}N_{R_{1},R_{2},R_{3}^{*}}R_{3}\ ,$ (4.4.38) where $N_{R_{1},R_{2},R_{3}}$ is an integer-valued function, which obeys $N_{R_{1}^{*},R_{2}^{*},R_{3}^{*}}=N_{R_{1},R_{2},R_{3}}$ and is symmetric under permutations.
Modulo these symmetries, the only nonzero values of $N_{R_{1},R_{2},R_{3}}$ are $\displaystyle N_{D^{j_{1},+},D^{j_{2},+},D^{j_{3},-}}$ $\displaystyle=1\quad\text{if}\ j_{3}\in j_{1}+j_{2}+1+{\mathbb{N}}\ ,$ (4.4.39) $\displaystyle N_{C^{j_{1}}_{\alpha_{1}},D^{j_{2},+},D^{j_{3},-}}$ $\displaystyle=1\quad\text{if}\ \alpha_{1}+j_{2}-j_{3}\in{\mathbb{Z}}\ ,$ (4.4.40) $\displaystyle N_{C^{j_{1}}_{\alpha_{1}},C^{j_{2}}_{\alpha_{2}},D^{j_{3},+}}$ $\displaystyle=1\quad\text{if}\ \alpha_{1}+\alpha_{2}+j_{3}\in{\mathbb{Z}}\ ,$ (4.4.41) $\displaystyle N_{C^{j_{1}}_{\alpha_{1}},C^{j_{2}}_{\alpha_{2}},C^{j_{3}}_{\alpha_{3}}}$ $\displaystyle=2\quad\text{if}\ \alpha_{1}+\alpha_{2}+\alpha_{3}\in{\mathbb{Z}}\ .$ (4.4.42) For example, the tensor product of two principal series representations is $\displaystyle C^{j_{1}}_{\alpha_{1}}\otimes C^{j_{2}}_{\alpha_{2}}$ $\displaystyle=2\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\ C^{j}_{\alpha_{1}+\alpha_{2}}\oplus\bigoplus_{\pm}\bigoplus_{\begin{subarray}{c}j\in\pm\alpha_{1}\pm\alpha_{2}+{\mathbb{Z}}\\\ j\in]-\frac{1}{2},\infty[\end{subarray}}D^{j,\pm}\ ,$ (4.4.43) where the factor of $2$ is the tensor product multiplicity which we already encountered in Section 4.2.3. Actually, all the values of $N_{R_{1},R_{2},R_{3}}$ can be deduced from the values of $N_{D^{j_{1},+},D^{j_{2},+},D^{j_{3},-}}$, using the remark that as far as the eigenvalues (4.4.33) of $J^{0}_{0}$ are concerned, we have the identification $\displaystyle C^{j}_{\alpha}\sim D^{\alpha-1,+}\oplus D^{-\alpha,-}\ .$ (4.4.44) This identification apparently predicts a nonzero value for $N_{C^{j}_{\alpha},D^{j_{2},+},D^{j_{3},+}}$. That value should be discarded, because it depends on $\alpha$ whereas $C^{j}_{\alpha}$ only depends on $\alpha\ \text{mod}\ {\mathbb{Z}}$. After these reminders on $\widetilde{SL}_{2}(\mathbb{R})$ and its representations, we are ready to consider the associated WZW model. 
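Before doing so, note that the identification (4.4.44) can be verified at the level of the $J^{0}_{0}$ eigenvalues, assuming the characterization $\text{Spec}(C^{j}_{\alpha})=\alpha+{\mathbb{Z}}$ and $\text{Spec}(D^{j,\pm})=\pm(j+1+{\mathbb{N}})$: the set $\alpha+{\mathbb{Z}}$ indeed splits into the two disjoint half-infinite sets $\alpha+{\mathbb{N}}$ and $\alpha-1-{\mathbb{N}}$. A sketch of this bookkeeping on truncated eigenvalue sets (the cutoff is arbitrary):

```python
from fractions import Fraction

N = 50                          # truncation of the infinite eigenvalue sets

def spec_C(alpha):
    # principal series C^j_alpha: J^0_0 eigenvalues alpha + Z
    return {alpha + n for n in range(-N, N + 1)}

def spec_D(j, sign):
    # discrete series D^{j,+} / D^{j,-}: J^0_0 eigenvalues +/-(j + 1 + N)
    return {sign * (j + 1 + n) for n in range(N + 1)}

alpha = Fraction(1, 3)
d_plus, d_minus = spec_D(alpha - 1, +1), spec_D(-alpha, -1)
assert d_plus & d_minus == set()              # the two halves are disjoint
# away from the cutoff, C^j_alpha ~ D^{alpha-1,+} (+) D^{-alpha,-} as in eq. (4.4.44)
window = {e for e in spec_C(alpha) if -N + 1 <= e <= N}
assert window <= d_plus | d_minus
```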
The natural extension of the conjugation rule (4.4.28) to the symmetry algebra $\widehat{\mathfrak{sl}}_{2}$ is $\displaystyle\left\\{\begin{array}[]{l}(J_{n}^{0})^{\dagger}=J_{-n}^{0}\ ,\\\ (J_{n}^{\pm})^{\dagger}=-J_{-n}^{\mp}\ ,\end{array}\right.$ (4.4.47) which is compatible with the commutation relations (4.2.14) provided the level $k$ is real. The principal and discrete series representations of $\mathfrak{sl}_{2}$ are naturally extended to affine highest-weight representations of $\widehat{\mathfrak{sl}}_{2}$: the principal series representations $\hat{C}^{j}_{\alpha}$ and discrete series representations $\hat{D}^{j,\pm}$. By computing the norms of level one states, it is easy to see that such representations are not unitary. But the $\widetilde{SL}_{2}(\mathbb{R})$ WZW model is not expected to be unitary, as the metric on the underlying group has a mixed signature. (What matters for applications to string theory is not unitarity, but another property called the no-ghost theorem.) So, can we build the spectrum of the model from such affine highest-weight representations? The answer turns out to be no, as this would be incompatible with an algebraic feature of $\widehat{\mathfrak{sl}}_{2}$ called the spectral flow [24]. The spectral flow is a family $(\rho_{w})_{w\in{\mathbb{Z}}}$ of automorphisms of $\widehat{\mathfrak{sl}}_{2}$, which obey $\rho_{w}\rho_{w^{\prime}}=\rho_{w+w^{\prime}}$ and act as $\displaystyle\rho_{w}(J^{0}_{n})$ $\displaystyle=J^{0}_{n}+\frac{1}{2}kw\delta_{n,0}\ ,$ (4.4.48) $\displaystyle\rho_{w}(J^{\pm}_{n})$ $\displaystyle=J^{\pm}_{n\pm w}\ .$ (4.4.49) According to eqs. (4.2.16) and (4.2.17), this implies $\displaystyle\rho_{w}(L_{n})=L_{n}+wJ^{0}_{n}+\frac{1}{4}kw^{2}\delta_{n,0}\ .$ (4.4.50) Given a representation $R$ of $\widehat{\mathfrak{sl}}_{2}$, that is an action of the generators $J^{a}_{n}$ on some vectors $|v\rangle$, we define the spectrally flowed representation $\rho_{w}(R)$ by the action $\rho_{-w}(J^{a}_{n})|v\rangle$. 
It follows from this definition that $\displaystyle\rho_{w}(R)^{*}=\rho_{-w}(R^{*})\ .$ (4.4.51) Moreover, it is believed that the action of spectral flow commutes with fusion, in the sense that [25] $\displaystyle\rho_{w}(R)\times\rho_{w^{\prime}}(R^{\prime})=\rho_{w+w^{\prime}}(R\times R^{\prime})\ .$ (4.4.52) We assume that fusion products of representations of $\widehat{\mathfrak{sl}}_{2}$ have the form $\displaystyle R_{1}\times R_{2}=\bigoplus_{R_{3}}N_{R_{1},R_{2},R_{3}^{*}}R_{3}\ ,$ (4.4.53) where the $N_{R_{1},R_{2},R_{3}}$ are permutation-symmetric, integer-valued fusion multiplicities such that $N_{R_{1}^{*},R_{2}^{*},R_{3}^{*}}=N_{R_{1},R_{2},R_{3}}$. From eq. (4.4.52), we must then have $\displaystyle w_{1}+w_{2}+w_{3}=0\quad\Rightarrow\quad N_{\rho_{w_{1}}(R_{1}),\rho_{w_{2}}(R_{2}),\rho_{w_{3}}(R_{3})}=N_{R_{1},R_{2},R_{3}}\ .$ (4.4.54) Let us consider the action of spectral flow on our affine highest-weight representations. We introduce the notations $\displaystyle\hat{C}^{j,w}_{\alpha}$ $\displaystyle=\rho_{w}(\hat{C}^{j}_{\alpha})\quad,\quad(w\in{\mathbb{Z}})\ ,$ (4.4.55) $\displaystyle\hat{D}^{j,w}$ $\displaystyle=\rho_{w-\frac{1}{2}}(\hat{D}^{j,+})\quad,\quad(w\in\tfrac{1}{2}+{\mathbb{Z}})\ .$ (4.4.56) If $w\neq 0$, then $\hat{C}^{j,w}_{\alpha}$ cannot be an affine highest-weight representation, because the eigenvalues of $\rho_{-w}(L_{0})=L_{0}-wJ^{0}_{0}+\frac{1}{4}kw^{2}$ in $\hat{C}^{j}_{\alpha}$ are not bounded from below – and actually, the representations $(\hat{C}^{j,w}_{\alpha})_{w\in{\mathbb{Z}}}$ all differ from one another. Let us now focus on discrete series representations.
The representation $\hat{D}^{j,\pm}$ can be characterized by the existence of a state $|v^{j,\pm}\rangle$ such that $\displaystyle J^{\mp}_{n\geq 0}|v^{j,\pm}\rangle=J^{0}_{n>0}|v^{j,\pm}\rangle=J^{\pm}_{n>0}|v^{j,\pm}\rangle=(J^{0}_{0}\mp(j+1))|v^{j,\pm}\rangle=0\ .$ (4.4.57) So we can characterize $\rho_{w}(\hat{D}^{j,\pm})$ by the action of $\rho_{-w}(J^{a}_{n})$ on $|v^{j,\pm}\rangle$. In particular, we notice that $\rho_{1}(J^{a}_{n})|v^{j,+}\rangle=0\ \Leftrightarrow\ J^{a}_{n}|v^{-j-2-\frac{k}{2},-}\rangle=0$, which leads to $\displaystyle\rho_{-1}(\hat{D}^{j,+})=\hat{D}^{-j-2-\frac{k}{2},-}\ .$ (4.4.58) So the spectral flow orbit of $\hat{D}^{j,+}$ contains two affine highest-weight representations, namely $\hat{D}^{j,\frac{1}{2}}=\hat{D}^{j,+}$ itself and $\hat{D}^{j,-\frac{1}{2}}=\hat{D}^{-j-2-\frac{k}{2},-}$. The rest of the orbit is made of representations where the eigenvalues of $L_{0}$ are not bounded from below. And the dual representations of our spectrally flowed representations are $\displaystyle(\hat{C}^{j,w}_{\alpha})^{*}$ $\displaystyle=\hat{C}^{j,-w}_{-\alpha}\ ,$ (4.4.59) $\displaystyle(\hat{D}^{j,w})^{*}$ $\displaystyle=\hat{D}^{-j-2-\frac{k}{2},-w}\ ,$ (4.4.60) This concludes our discussion of spectral flow. We will now be able to argue that spectrally flowed representations must appear in the spectrum. By our definition of WZW models, the large level limit of the spectrum $\tilde{S}$ of the $\widetilde{SL}_{2}(\mathbb{R})$ WZW model is $\mathcal{F}(\widetilde{SL}_{2}(\mathbb{R}))$ (4.4.36), and we therefore expect $\tilde{S}$ to contain affine discrete representations of both series $\hat{D}^{j,\pm}$. Let us show that $\tilde{S}$ cannot contain only affine highest-weight representations. Consider spins $j_{1},j_{2},j_{3}$ such that $N_{D^{j_{1},+},D^{j_{2},-},D^{j_{3},+}}=1$. If the level $k$ is large enough, we must then have $N_{\hat{D}^{j_{1},\frac{1}{2}},\hat{D}^{j_{2},-\frac{1}{2}},\hat{D}^{j_{3},\frac{1}{2}}}=1$.
Using the behaviour (4.4.54) of fusion multiplicities under spectral flow, this implies $N_{\hat{D}^{j_{1},-\frac{1}{2}},\hat{D}^{j_{2},-\frac{1}{2}},\hat{D}^{j_{3},\frac{3}{2}}}=1$. So the fusion product $\hat{D}^{j_{1},-\frac{1}{2}}\times\hat{D}^{j_{2},-\frac{1}{2}}$ contains the representation $(\hat{D}^{j_{3},\frac{3}{2}})^{*}$, which is not an affine highest-weight representation. Generalizing this argument, the spectrum must in fact contain representations of the type $\hat{D}^{j,w}$ for all values of $w\in\frac{1}{2}+{\mathbb{Z}}$. Then it is natural to assume that the spectral flow actually leaves the spectrum invariant. But what are the allowed values of the spin $j$ of $\hat{D}^{j,w}$? We still impose the constraint $j>-\frac{1}{2}$ which comes from the representation $D^{j,\pm}$ of $\mathfrak{sl}_{2}$. If this applies to both spins in the relation (4.4.58), we must then have $\displaystyle-\frac{1}{2}<j<-\frac{k+3}{2}\ .$ (4.4.61) This defines a non-empty interval provided $\displaystyle\boxed{k\in]-\infty,-2[}$ (4.4.62) (Nevertheless, the model surely exists for $k\in\mathbb{C}-\\{-2\\}$.) 
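Spelling out how the interval (4.4.61) arises: imposing $j>-\frac{1}{2}$ on both spins related by eq. (4.4.58), i.e. on $\hat{D}^{j,\frac{1}{2}}=\hat{D}^{j,+}$ and $\hat{D}^{j,-\frac{1}{2}}=\hat{D}^{-j-2-\frac{k}{2},-}$, gives

```latex
j > -\tfrac{1}{2}
\quad\text{and}\quad
-j-2-\tfrac{k}{2} > -\tfrac{1}{2}
\;\iff\;
-\tfrac{1}{2} < j < -\tfrac{k+3}{2}\ ,
```

and the interval is non-empty if and only if $-\tfrac{1}{2}<-\tfrac{k+3}{2}$, that is $k<-2$, which is the condition (4.4.62).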
The natural conjectures for the spectrums of the $\widetilde{SL}_{2}(\mathbb{R})$ and $SL_{2}(\mathbb{R})$ WZW models are then $\displaystyle\tilde{S}$ $\displaystyle=\bigoplus_{w\in{\mathbb{Z}}}\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\int^{\oplus}_{]0,1[}d\alpha\ \hat{C}^{j,w}_{\alpha}\otimes\bar{\hat{C}}^{j,w}_{\alpha}\oplus\bigoplus_{w\in\frac{1}{2}+{\mathbb{Z}}}\int^{\oplus}_{]-\frac{1}{2},-\frac{k+3}{2}[}dj\ \hat{D}^{j,w}\otimes\bar{\hat{D}}^{j,w}\ ,$ (4.4.63) $\displaystyle S$ $\displaystyle=\bigoplus_{\begin{subarray}{c}w_{L},w_{R}\in{\mathbb{Z}}\\\ w_{L}-w_{R}\in 2{\mathbb{Z}}\end{subarray}}\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\bigoplus_{\alpha\in\\{0,\frac{1}{2}\\}}\hat{C}^{j,w_{L}}_{\alpha}\otimes\bar{\hat{C}}^{j,w_{R}}_{\alpha}\oplus\bigoplus_{\begin{subarray}{c}w_{L},w_{R}\in\frac{1}{2}+{\mathbb{Z}}\\\ w_{L}-w_{R}\in 2{\mathbb{Z}}\end{subarray}}\ \bigoplus_{\begin{subarray}{c}j=-\frac{1}{2},0,\frac{1}{2},\cdots\\\ j<-\frac{k+3}{2}\end{subarray}}\hat{D}^{j,w_{L}}\otimes\bar{\hat{D}}^{j,w_{R}}\ .$ (4.4.64) Notice that the left and right spectral flow numbers $w_{L}$ and $w_{R}$ are independent in the case of $SL_{2}(\mathbb{R})$, and equal in the case of $\widetilde{SL}_{2}(\mathbb{R})$, so that the spectrum $\tilde{S}$ is diagonal. The rule, which can only be heuristic in the absence of a definition of WZW models based on the corresponding loop groups, is: > In the $G$ WZW model the spectral flow takes values in the first homotopy > group of the global symmetry group, that is > $\pi_{1}(\frac{G\times\bar{G}}{Z(G)})$ where $Z(G)$ is the center of $G$. For our WZW models, the relevant homotopy groups are $\pi_{1}(\frac{\widetilde{SL}_{2}(\mathbb{R})\times\overline{\widetilde{SL}_{2}(\mathbb{R})}}{{\mathbb{Z}}})={\mathbb{Z}}$ and $\pi_{1}(\frac{SL_{2}({\mathbb{R}})\times\overline{SL_{2}({\mathbb{R}})}}{{\mathbb{Z}}_{2}})=\frac{{\mathbb{Z}}\times\overline{{\mathbb{Z}}}}{{\mathbb{Z}}_{2}}$. 
The rule also applies to the case $G=U(1)$ of the compactified free boson, if we consider the winding number as a spectral flow number. Finally, let us check that we can find fusion rules for the representations of $\widehat{\mathfrak{sl}}_{2}$, such that 1. the rule (4.4.54) is obeyed, 2. in the large level limit $k\to-\infty$, the fusion rules reduce to the tensor product rules for representations of $\mathfrak{sl}_{2}$, 3. the conjectured spectrums of the $\widetilde{SL}_{2}(\mathbb{R})$ and $SL_{2}(\mathbb{R})$ WZW models are closed under fusion. We first obtain the fusion multiplicities $N_{\hat{D}^{j_{1},\pm},\hat{D}^{j_{2},\pm},\hat{D}^{j_{3},\pm}}$ for affine highest-weight representations of the discrete series by assuming that they can be nonzero only when the corresponding tensor product multiplicity (4.4.39) is nonzero, and when all spins obey the condition (4.4.61). The rest of the nonzero fusion multiplicities of the type $N_{\hat{D}^{j_{1},w_{1}},\hat{D}^{j_{2},w_{2}},\hat{D}^{j_{3},w_{3}}}$ are obtained by the rule (4.4.54). Then we generalize the relation (4.4.44) between $\mathfrak{sl}_{2}$ representations of the principal and discrete series, and obtain $\displaystyle\hat{C}^{j,w}_{\alpha}\sim\hat{D}^{\alpha-1,w+\frac{1}{2}}\oplus\hat{D}^{\alpha-2-\frac{k}{2},w-\frac{1}{2}}\ .$ (4.4.65) This enables us to compute fusion multiplicities involving representations of the type $\hat{C}^{j,w}_{\alpha}$. The only subtlety is that we obtain aberrant terms in $N_{\hat{C}^{j_{1},w_{1}}_{\alpha_{1}},\hat{D}^{j_{2},w_{2}},\hat{D}^{j_{3},w_{3}}}$, which depend on $\alpha_{1}$ instead of $\alpha_{1}\ \text{mod}\ {\mathbb{Z}}$, and must be discarded. 
Keeping the condition (4.4.61) on spins of discrete series representations implicit, the results are $\displaystyle N_{\hat{D}^{j_{1},w_{1}},\hat{D}^{j_{2},w_{2}},\hat{D}^{j_{3},w_{3}}}$ $\displaystyle=\delta_{\sum w_{i},\frac{1}{2}}\delta_{\sum j_{i}+3+\frac{k}{2},-{\mathbb{N}}}+\delta_{\sum w_{i},-\frac{1}{2}}\delta_{\sum j_{i}+3+k,{\mathbb{N}}}\ ,$ (4.4.66) $\displaystyle N_{\hat{C}^{j_{1},w_{1}}_{\alpha_{1}},\hat{D}^{j_{2},w_{2}},\hat{D}^{j_{3},w_{3}}}$ $\displaystyle=\delta_{\sum w_{i},0}\delta_{\alpha_{1}+j_{2}+j_{3}+\frac{k}{2},{\mathbb{Z}}}\ ,$ (4.4.67) $\displaystyle N_{\hat{C}^{j_{1},w_{1}}_{\alpha_{1}},\hat{C}^{j_{2},w_{2}}_{\alpha_{2}},\hat{D}^{j_{3},w_{3}}}$ $\displaystyle=\delta_{\sum w_{i},\frac{1}{2}}\delta_{\alpha_{1}+\alpha_{2}+j_{3},{\mathbb{Z}}}+\delta_{\sum w_{i},-\frac{1}{2}}\delta_{\alpha_{1}+\alpha_{2}+j_{3}+\frac{k}{2},{\mathbb{Z}}}\ ,$ (4.4.68) $\displaystyle N_{\hat{C}^{j_{1},w_{1}}_{\alpha_{1}},\hat{C}^{j_{2},w_{2}}_{\alpha_{2}},\hat{C}^{j_{3},w_{3}}_{\alpha_{3}}}$ $\displaystyle=\delta_{\sum w_{i},1}\delta_{\sum\alpha_{i}-\frac{k}{2},{\mathbb{Z}}}+2\,\delta_{\sum w_{i},0}\delta_{\sum\alpha_{i},{\mathbb{Z}}}+\delta_{\sum w_{i},-1}\delta_{\sum\alpha_{i}+\frac{k}{2},{\mathbb{Z}}}\ .$ (4.4.69) This leads to the following fusion rules: $\hat{D}^{j_{1},w_{1}}\times\hat{D}^{j_{2},w_{2}}=\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\ \hat{C}^{j,w_{1}+w_{2}}_{j_{1}+j_{2}+\frac{k}{2}}\\\ \oplus\bigoplus_{\begin{subarray}{c}j\in j_{1}+j_{2}+1+{\mathbb{N}}\\\ j\in]-\frac{1}{2},-\frac{k+3}{2}[\end{subarray}}\hat{D}^{j,w_{1}+w_{2}-\frac{1}{2}}\oplus\bigoplus_{\begin{subarray}{c}j\in j_{1}+j_{2}+1+\frac{k}{2}-{\mathbb{N}}\\\ j\in]-\frac{1}{2},-\frac{k+3}{2}[\end{subarray}}\hat{D}^{j,w_{1}+w_{2}+\frac{1}{2}}\ ,$ (4.4.70) $\displaystyle\hat{C}^{j_{1},w_{1}}_{\alpha_{1}}\times\hat{D}^{j_{2},w_{2}}$ 
$\displaystyle=\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\left(\hat{C}^{j,w_{1}+w_{2}-\frac{1}{2}}_{\alpha_{1}+j_{2}}\oplus\hat{C}^{j,w_{1}+w_{2}+\frac{1}{2}}_{\alpha_{1}+j_{2}+\frac{k}{2}}\right)\oplus\bigoplus_{\begin{subarray}{c}j\in\alpha_{1}+j_{2}+{\mathbb{Z}}\\\ j\in]-\frac{1}{2},-\frac{k+3}{2}[\end{subarray}}\hat{D}^{j,w_{1}+w_{2}}\ ,$ (4.4.71) $\hat{C}^{j_{1},w_{1}}_{\alpha_{1}}\times\hat{C}^{j_{2},w_{2}}_{\alpha_{2}}=\int^{\oplus}_{-\frac{1}{2}+i{\mathbb{R}}_{+}}dj\left(\hat{C}^{j,w_{1}+w_{2}-1}_{\alpha_{1}+\alpha_{2}-\frac{k}{2}}\oplus 2\,\hat{C}^{j,w_{1}+w_{2}}_{\alpha_{1}+\alpha_{2}}\oplus\hat{C}^{j,w_{1}+w_{2}+1}_{\alpha_{1}+\alpha_{2}+\frac{k}{2}}\right)\\\ \oplus\bigoplus_{\begin{subarray}{c}j\in\alpha_{1}+\alpha_{2}-\frac{k}{2}+{\mathbb{Z}}\\\ j\in]-\frac{1}{2},-\frac{k+3}{2}[\end{subarray}}\hat{D}^{j,w_{1}+w_{2}-\frac{1}{2}}\oplus\bigoplus_{\begin{subarray}{c}j\in\alpha_{1}+\alpha_{2}+{\mathbb{Z}}\\\ j\in]-\frac{1}{2},-\frac{k+3}{2}[\end{subarray}}\hat{D}^{j,w_{1}+w_{2}+\frac{1}{2}}\ .$ (4.4.72) So, while the spectral flow number $w$ is not conserved, fusion violates it by at most one unit. This can alternatively be shown at the level of correlation functions by studying how spectral flow affects the Ward identities and Knizhnik-Zamolodchikov equations [26]. ### 4.5 Exercises ###### Exercise 4.1 (Normalization of OPE coefficients in free bosonic theories) The aim of this exercise is to show that the OPE coefficient in the OPE of $\hat{\mathfrak{u}}_{1}$-primary fields (4.1.1) can be set to one by renormalizing the fields. 
In other words, calling this coefficient $C_{\alpha_{1},\alpha_{2}}$, we want to show that there exists a function $\lambda(\alpha)$ such that $\displaystyle C_{\alpha_{1},\alpha_{2}}=\frac{\lambda(\alpha_{1}+\alpha_{2})}{\lambda(\alpha_{1})\lambda(\alpha_{2})}\ .$ (4.5.1) To begin with, use commutativity (1.2.5) and associativity (1.2.6) of the OPE, and show that $\displaystyle C_{\alpha_{1},\alpha_{2}}$ $\displaystyle=C_{\alpha_{2},\alpha_{1}}\ ,$ (4.5.2) $\displaystyle C_{\alpha_{1},\alpha_{2}}C_{\alpha_{1}+\alpha_{2},\alpha_{3}}$ $\displaystyle=C_{\alpha_{1},\alpha_{2}+\alpha_{3}}C_{\alpha_{2},\alpha_{3}}\ .$ (4.5.3) Then consider the ansatz $\displaystyle\lambda(\alpha)=\frac{1}{C_{\alpha,0}}\exp\int_{0}^{\alpha}\varphi\ ,\quad\text{where}\quad\varphi(\alpha)$ $\displaystyle=\left.{\frac{\partial}{\partial\alpha_{2}}}\log C_{\alpha,\alpha_{2}}\right|_{\alpha_{2}=0}\ .$ (4.5.4) Show that $C_{\alpha,0}$ is actually an $\alpha$-independent constant, and that the function $\varphi(\alpha)$ is such that $\displaystyle{\frac{\partial}{\partial\alpha_{2}}}\log C_{\alpha_{1},\alpha_{2}}=\varphi(\alpha_{1}+\alpha_{2})-\varphi(\alpha_{2})\ .$ (4.5.5) Prove eq. (4.5.1) by showing that both sides have the same value at $\alpha_{2}=0$, and the same logarithmic derivative wrt $\alpha_{2}$. (If the spectrum is discrete, a similar proof can be done using finite differences instead of derivatives.) 
###### Exercise 4.2 (Sugawara construction of a Virasoro field) Compute the $TJ^{a}$ OPE (4.2.13), by applying Wick’s theorem to the contraction of $J^{a}(z)$ with $K_{bc}(J^{b}J^{c})(y)$, going through the following intermediate steps: $\displaystyle\begin{aligned}\frac{1}{2\pi i}\oint_{y}\frac{dx}{x-y}\left(\frac{kJ^{a}(y)}{(x-z)^{2}}+\frac{f^{ab}_{c}J^{c}(x)J_{b}(y)}{z-x}+\frac{kJ^{a}(x)}{(y-z)^{2}}-\frac{f^{ab}_{c}J^{c}(x)J_{b}(y)}{z-y}\right)&=\frac{2kJ^{a}(y)}{(y-z)^{2}}+\frac{f^{ab}_{c}f^{dc}_{b}J_{d}(y)}{(y-z)^{2}}\\&=2(k+g){\frac{\partial}{\partial z}}\frac{J^{a}(z)}{y-z}+O(1)\ .\end{aligned}$ Then apply Wick’s theorem to the contraction of $T(y)$ with $K_{ab}(J^{a}J^{b})(z)$, and check the following identities: $\displaystyle\begin{aligned}&\frac{1}{2\pi i}\oint_{z}\frac{dx}{x-z}\left({\frac{\partial}{\partial x}}\frac{J^{a}(x)J_{a}(z)}{y-x}+{\frac{\partial}{\partial z}}\frac{J^{a}(x)J_{a}(z)}{y-z}\right)\\&\quad=\frac{1}{2\pi i}\oint_{z}dx\,\frac{J^{a}(x)J_{a}(z)}{(x-z)(y-x)(y-z)}+{\frac{\partial}{\partial z}}\frac{1}{2\pi i}\oint_{z}dx\,\frac{J^{a}(x)J_{a}(z)}{(x-z)(y-z)}\ ,\\&\quad=\frac{k\dim\mathfrak{g}}{(y-z)^{4}}+\frac{(J^{a}J_{a})(z)}{(y-z)^{2}}+{\frac{\partial}{\partial z}}\frac{(J^{a}J_{a})(z)}{y-z}\ .\end{aligned}$ Conclude that the field $T$ satisfies the Virasoro field OPE, with the Sugawara central charge $c=\frac{k\dim\mathfrak{g}}{k+g}$. ###### Exercise 4.3 (Modified Sugawara 
construction) Compute the $\hat{T}\hat{T}$ OPE for the modified Sugawara field $\hat{T}=T+Q_{a}\partial J^{a}$. Show that $\hat{T}$ is not a Virasoro field. Then, relax the assumption that $\mathfrak{g}$ be semi-simple. The Killing form $K^{ab}$ being degenerate, replace it with a non-degenerate, invariant symmetric tensor $C^{ab}$ in the $J^{a}J^{b}$ OPE (4.2.7) and in the definition of $T$ (4.2.11). What are now the conditions for $\hat{T}$ to be a Virasoro field? ###### Exercise 4.4 (Associativity of the $J^{a}J^{b}\Phi^{R}$ OPE) Check the associativity of the $J^{a}J^{b}\Phi^{R}$ OPE, by performing two different computations of the behaviour near $y_{2}=z_{0}$ of $\displaystyle\mathcal{O}=\frac{1}{2\pi i}\oint_{y_{2}}dy_{1}\ J^{a}(y_{1})J^{b}(y_{2})\Phi^{R}(z_{0})\ .$ (4.5.6) Firstly, use the $J^{a}J^{b}$ OPE (4.2.7), and check that $\displaystyle\mathcal{O}=f_{c}^{ab}J^{c}(y_{2})\Phi^{R}(z_{0})=\frac{-f_{c}^{ab}t^{c}\Phi^{R}(z_{0})}{y_{2}-z_{0}}+O(1)\ .$ (4.5.7) Secondly, split the integration contour into two terms, $\oint_{y_{2}}=\oint_{y_{2},z_{0}}-\oint_{z_{0}}$. In the first term, use the $J^{b}(y_{2})\Phi^{R}(z_{0})$ OPE, as no integration contour runs between these two operators. In the second term, use the $J^{a}(y_{1})\Phi^{R}(z_{0})$ OPE. Check that you obtain the following intermediate steps: $\displaystyle\mathcal{O}$ $\displaystyle=\frac{1}{2\pi i}\oint_{y_{2},z_{0}}dy_{1}J^{a}(y_{1})\frac{-t^{b}\Phi^{R}(z_{0})}{y_{2}-z_{0}}-\frac{1}{2\pi i}\oint_{z_{0}}dy_{1}J^{b}(y_{2})\frac{-t^{a}\Phi^{R}(z_{0})}{y_{1}-z_{0}}+O(1)\ ,$ (4.5.8) $\displaystyle=\frac{-t^{b}}{y_{2}-z_{0}}\frac{1}{2\pi i}\oint_{y_{2},z_{0}}dy_{1}J^{a}(y_{1})\Phi^{R}(z_{0})+t^{a}J^{b}(y_{2})\Phi^{R}(z_{0})+O(1)\ ,$ (4.5.9) $\displaystyle=\frac{(t^{b}t^{a}-t^{a}t^{b})\Phi^{R}(z_{0})}{y_{2}-z_{0}}+O(1)=\frac{-f_{c}^{ab}t^{c}\Phi^{R}(z_{0})}{y_{2}-z_{0}}+O(1)\ .$ (4.5.10) Notice that this explains the minus sign in the $J^{a}\Phi^{R}$ OPE eq. (4.2.18). 
###### Exercise 4.5 (Affine primary fields are primary fields) Let $\Phi^{R}$ be an affine primary field. Compute the OPE $T(y)\Phi^{R}(z_{0})$, and check that this OPE is of the form (2.2.13), with the conformal dimension and action of $L_{-1}$ given by eqs. (4.2.19) and (4.2.20) respectively. To do this, apply Wick’s theorem to the contraction of $\Phi^{R}(z_{0})$ with $(J^{a}J_{a})(y)$, using eq. (4.2.18) in the form of the contraction $\displaystyle\Phi^{R}(z_{0})J^{a}(x)=\frac{-t^{a}\Phi^{R}(x)}{x-z_{0}}\ .$ ###### Exercise 4.6 (Wakimoto free-field representation of the algebra $\widehat{\mathfrak{sl}}_{2}$) Consider fields $(J,\beta,\gamma)$ such that $\displaystyle J(y)J(z)=\frac{-\frac{1}{2}}{(y-z)^{2}}+O(1)\quad,\quad\beta(y)\gamma(z)=\frac{-1}{y-z}+O(1)\ ,$ (4.5.11) and the OPEs $J\beta,J\gamma,\beta\beta,\beta\gamma$ have no singular terms. Consider the fields $\displaystyle J^{-}=-\beta\quad,\quad J^{0}=-(\beta\gamma)-bJ\quad,\quad J^{+}=(\beta\gamma^{2})+2b(\gamma J)+k\partial\gamma\ ,$ (4.5.12) where the brackets are normal-ordered products, and the parameters $b$ and $k$ obey the relation (4.2.66). Show that the fields $(J^{-},J^{0},J^{+})$ obey the OPEs (4.2.10), and are therefore $\widehat{\mathfrak{sl}}_{2}$ currents. Show that the Sugawara construction yields the Virasoro field $\displaystyle T=-\beta\partial\gamma-J^{2}+b^{-1}\partial J\ ,$ (4.5.13) and rederive the central charge from this formula. Then define the field $\Phi^{j}_{\mu}(z)$ by $\displaystyle\beta(y)\Phi^{j}_{\mu}(z)=\frac{-\mu}{y-z}\Phi^{j}_{\mu}(z)+O(1)\quad,\quad\gamma(y)\Phi^{j}_{\mu}(z)=O(1)\ ,$ (4.5.14) $\displaystyle J(y)\Phi^{j}_{\mu}(z)=\frac{-b^{-1}(j+1)}{y-z}\Phi^{j}_{\mu}(z)+O(1)\ .$ (4.5.15) Show that the field $\left({\frac{\partial}{\partial\mu}}-\gamma(z)\right)\Phi^{j}_{\mu}(z)$ satisfies the same relations, and is therefore proportional to $\Phi^{j}_{\mu}(z)$. 
Choosing the coefficient of proportionality such that $\displaystyle{\frac{\partial}{\partial\mu}}\Phi^{j}_{\mu}(z)=\left(\frac{j+1}{\mu}+\gamma(z)\right)\Phi^{j}_{\mu}(z)\ ,$ (4.5.16) write the OPEs $J^{a}(y)\Phi^{j}_{\mu}(z)$ in terms of differential operators as in eq. (4.2.21), and conclude that $\Phi^{j}_{\mu}(z)$ is a $\mu$-basis affine primary field. ###### Exercise 4.7 (Conformal global Ward identities from KZ equations) Prove that the KZ equations imply the global Ward identities of conformal symmetry (2.3.4). Use the global Ward identities of the affine symmetry (4.2.44), the Casimir relation for the isospin differential operators (4.2.22), and the conformal dimensions of affine primary fields (4.2.19). ###### Exercise 4.8 (Proof of the identity (4.2.63)) Check that the identity holds when applied to positions $z_{i}$, due to $\mathcal{K}^{-1}z_{i}=z_{i}$. To check that the identity also holds when applied to Sklyanin variables $y_{j}$, it is enough to check that both sides have the same commutator with $\mathcal{K}\hat{J}^{-}(y)\mathcal{K}^{-1}$. Compute the commutator of the left-hand side using $[\hat{T}(\hat{y}_{j}),\hat{J}^{-}(y)]={\frac{\partial}{\partial y}}\frac{1}{y-\hat{y}_{j}}\hat{J}^{-}(y)$, which is a consequence of eq. (4.2.52). Compute the commutator of the right-hand side using the formula (4.2.56) for $\hat{J}^{-}(y)$, and conclude. 
## Index * A-series minimal model 2nd item, §3.2.1 * accessory parameter §4.1.4 * affine highest-weight representation §4.2.2 * affine Lie algebra §4.2.1 * affine Lie algebra $\hat{\mathfrak{u}}_{1}$ §4.1.1 * affine primary field §4.2.2 * associativity item 2 * background charge §2.1.2, §4.1.1 * bootstrap approach §1.2.1 * bosonic field §2.3.2 * BPZ equation §2.3.4 * central charge §1.3.3 * characteristic exponent §2.3.4 * closure under fusion §1.4.1 * commutativity item 1 * compactified free boson §4.1.3 * conformal block §2.3.3, §2.3.4 * conformal bootstrap §1.4.2 * conformal dimension §1.4.1 * conformal field theory §1.3.3 * conformal perturbation theory §4.1.4 * conformal transformation §1.3.1 * correlation function §1.1.2 * cosmological constant §3.1.3 * Coulomb gas integral §4.1.4 * creation operator §2.1.1 * cross-ratio 4th item * crossing symmetry §2.3.3 * degenerate field §2.2.1 * degenerate representation §2.1.2 * descendent field §2.2.1 * descendent state §2.1.1 * diagonal model 2nd item * dilatation §1.3.1 * discrete series representation of $\mathfrak{sl}_{2}$ §4.4.3 * doubly degenerate representation §3.2.1 * DOZZ formula §3.1.4 * energy-momentum tensor §2.3 * field §1.1.1 * free boson §4.1.3 * free bosonic theory §4.1.3 * free energy §4.1.4 * fusing matrix §2.3.3, §2.3.5 * fusion multiplicity §1.2.3 * fusion product §1.2.3 * fusion rule §1.2.3 * $\hat{\mathfrak{g}}$ current §4.2.1 * Gaudin model §4.2.3 * generalized minimal model §3.2.1 * generalized $SU_{2}$ WZW model §4.4.2 * global conformal field theory §1.3.2 * global conformal transformation §1.3.1 * global Ward identity 1st item * $H_{3}^{+}$ §4.1.4, §4.3.2 * $H_{3}^{+}$ model §4.3, §4.3.2 * $H_{3}^{+}$-Liouville relation §4.3 * heavy asymptotic limit 4.1.45 * hermitian conjugate §1.1.2 * highest-weight representation §2.1.1 * holomorphic factorization §1.4.2 * horizontal subalgebra §4.2.2 * hypergeometric equation §2.3.5 * hypergeometric function §2.3.5 * identity field §2.2.2 * 
indecomposable representation §1.2.2 * irreducible representation §1.2.2 * Ising model 2nd item, §3.2.4 * isospin variable §4.2.2 * Kac formula §2.1.3 * Kac table §3.2.3 * Killing form §4.2.1 * KZ equations §4.2.3 * KZ-BPZ relation §4.2.4 * Lagrangian 2nd item * large level limit §4.3.2 * left-moving 1st item * level (affine Lie algebra) §4.2.1 * level (descendent state) §2.1.1 * light asymptotic limit 4.1.50 * linear dilaton theory 1st item, §4.1.3 * Liouville coupling constant §2.1.2 * Liouville equation §4.1.4 * Liouville theory 1st item * local conformal transformation §1.3.2 * local Ward identity 2nd item * locally holomorphic §1.4.2 * logarithmic conformal field theory §2.1.1 * maximally degenerate representation §2.1.2 * metric §1.3 * minimal model 2nd item * model §1.1.1 * model-dependent data 2nd item * modular bootstrap §1.4.2 * momentum §2.1.2, §4.1.1 * $\mu$-basis field §4.2.2 * multiplicity §1.2.2 * $N$-point function §1.1.2 * normal-ordered product §4.1.1 * null vector §2.1.2 * null vector equation §2.2.5 * observable §1.1.2 * OPE item 3 * OPE coefficient item 3 * Peter-Weyl theorem §4.4.1 * physical system §1.1.1 * primary field §2.2.1 * primary state §2.1.1 * principal series representation of $\mathfrak{sl}_{2}$ §4.4.3 * principal series representation of $\mathfrak{sl}_{2}({\mathbb{C}})$ §4.3.2 * quadratic Casimir operator §4.2.1 * quantum §1.1.1 * quantum field theory §1.1 * quasi-primary field §2.2.3 * radial quantization §1.4.1 * rational model 1st item * reflection §2.1.2 * reflection coefficient §3.1.2 * reflection relation §3.1.2 * regular singular point §2.3.4 * Riemann sphere §1.3.1 * Riemann surface §1.3.2 * right-moving 2nd item * rotation §1.3.1 * $s$-channel §2.3.3, §2.3.4 * scalar product §1.1.2 * Seiberg-Witten equation §4.1.2 * separation of variables §4.2.4 * simple current §1.2.3 * singular vector §2.1.2 * Sklyanin variable §4.2.4 * $\mathfrak{sl}_{2}$ §2.3.2 * $\widehat{\mathfrak{sl}}_{2}$ conformal block §4.2.4 * 
$\widetilde{SL}_{2}(\mathbb{R})$ §4.4.3 * $\widetilde{SL}_{2}(\mathbb{R})$ WZW model §4.4.3 * solve (model) §1.1.2 * spectral flow §4.4.3 * spectrally flowed representation §4.4.3 * spectrum §1.1.2 * spin (conformal) §2.3.2 * spin ($\mathfrak{sl}_{2}$) §4.2.2 * state-field correspondence §1.1.2 * structure function §1.2.3 * $SU_{2}$ WZW model §4.4.2 * Sugawara construction §4.2.1 * symmetry field §2.2.2 * $t$-channel §2.3.3, §2.3.5 * T-duality §4.1.3 * theory §1.1.1 * three-point structure constant 3rd item * translation §1.3.1 * $\hat{\mathfrak{u}}_{1}$ current §4.1.1 * $\hat{\mathfrak{u}}_{1}$-primary field §4.1.1 * unitary §1.4.1 * universal data 1st item * Verma module §2.1.2 * Virasoro algebra §1.3.3 * Virasoro field §2.2.2 * Wakimoto free-field representation of $\widehat{\mathfrak{sl}}_{2}$ §4.2.2 * Ward identity §2.2.3 * Wick’s theorem §4.1.1 * Witt algebra §1.3.2 * WZW model §4.4.1 * $x$-basis field §4.2.2 ## Bibliography * [1] S. Ribault (2014 blog post) Modular invariance in non-rational CFT * [2] A. Zamolodchikov, A. Zamolodchikov (1990 book) Conformal field theory and 2-D critical phenomena * [3] V. Schomerus (2006 review) [hep-th/0509155] Non-compact string backgrounds and non-rational CFT * [4] P. Di Francesco, P. Mathieu, D. Sénéchal (1997 book) Conformal field theory * [5] Y. Nakayama (2004 review) [hep-th/0402009] Liouville field theory: A decade after the revolution * [6] M. R. Gaberdiel (2000 review) [hep-th/9910156] An Introduction to conformal field theory * [7] J. Cardy (2008 review) [0807.3472] Conformal Field Theory and Statistical Mechanics * [8] P. Bouwknegt, K. Schoutens (1993 review) [hep-th/9210010] W symmetry in conformal field theory * [9] J. Teschner, G. Vartanov (2012) [1202.4698] 6j symbols for the modular double, quantum hyperbolic geometry, and supersymmetric gauge theories * [10] V. A. Alba, V. A. Fateev, A. V. Litvinov, G. M. 
Tarnopolsky (2011) [1012.1312] On combinatorial expansion of the conformal blocks arising from AGT conjecture * [11] V. A. Fateev, A. V. Litvinov, A. Neveu, E. Onofri (2009) [0902.1331] Differential equation for four-point correlation function in Liouville field theory and elliptic four-point conformal blocks * [12] A. B. Zamolodchikov (2005) [hep-th/0505063] On the three-point function in minimal Liouville gravity * [13] S. Ribault, R. Santachiara (2015) [1503.02067] Liouville theory with a central charge less than one * [14] J. Teschner (2004) [hep-th/0303150] A lecture on the Liouville vertex operators * [15] L. Hadasz, Z. Jaskolski, P. Suchanek (2010) [0911.4296] Modular bootstrap in Liouville field theory * [16] L. Chekhov, B. Eynard, S. Ribault (2013) [1209.3984] Seiberg-Witten equations and non-commutative spectral curves in Liouville theory * [17] A. B. Zamolodchikov, A. B. Zamolodchikov (1996) [hep-th/9506136] Structure constants and conformal bootstrap in Liouville field theory * [18] S. Ribault (2010) [0912.4481] Minisuperspace limit of the AdS3 WZNW model * [19] S. Ribault (2009) [0811.4587] On sl3 Knizhnik-Zamolodchikov equations and W3 null-vector equations * [20] S. Ribault (2008) [0803.2099] A family of solvable non-rational conformal field theories * [21] J. Teschner (1999) [hep-th/9712256] On structure constants and fusion rules in the $SL(2,\mathbb{C})/SU(2)$ WZNW model * [22] S. Ribault, J. Teschner (2005) [hep-th/0502048] $H_{3}^{+}$ correlators from Liouville theory * [23] J. Teschner (1999) [hep-th/9712258] The mini-superspace limit of the SL(2,C)/SU(2) WZNW model * [24] J. M. Maldacena, H. Ooguri (2001) [hep-th/0001053] Strings in $AdS_{3}$ and $SL(2,\mathbb{R})$ WZW model. I * [25] M. R. Gaberdiel (2001) [hep-th/0105046] Fusion rules and logarithmic representations of a WZW model at fractional level * [26] S. Ribault (2005) [hep-th/0507114] Knizhnik-Zamolodchikov equations and spectral flow in $AdS_{3}$ string theory
# Bridging Declarative, Procedural, and Conditional Metacognitive Knowledge Gap Using Deep Reinforcement Learning Mark Abdelshiheed, John Wesley Hostetter, Tiffany Barnes, and Min Chi Department of Computer Science North Carolina State University Raleigh, NC 27695 {mnabdels, jwhostet, tmbarnes<EMAIL_ADDRESS>###### Abstract In deductive domains, three metacognitive knowledge types in ascending order are declarative, procedural, and conditional learning. This work leverages Deep Reinforcement Learning (DRL) in providing adaptive metacognitive interventions to bridge the gap between the three knowledge types and prepare students for future learning across Intelligent Tutoring Systems (ITSs). Students received these interventions that taught how and when to use a backward-chaining (BC) strategy on a logic tutor that supports a default forward-chaining strategy. Six weeks later, we trained students on a probability tutor that only supports BC without interventions. Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap between students and significantly improved their learning performance over their control peers. Furthermore, the DRL policy adapted to the metacognitive development on the logic tutor across declarative, procedural, and conditional students, causing their strategic decisions to be more autonomous. Keywords: Deep Reinforcement Learning; Preparation for Future Learning; Intelligent Tutoring Systems; Declarative Knowledge; Procedural Knowledge; Conditional Knowledge ## Introduction A demanding yet essential feature of learning is being continuously prepared for future learning (?, ?). Our incremental knowledge is the evidence that preparation for future learning exists yet is hard to predict and measure (?, ?). Considerable research has found that one factor that facilitates preparing students for future learning is their metacognitive knowledge (?, ?, ?, ?). 
Three types of metacognitive knowledge in deductive domains are _declarative_ , _procedural_ , and _conditional_ learning (?, ?, ?, ?). Although substantial research has shown that the three types could be acquired sequentially or simultaneously (?, ?, ?, ?), it was also shown that each learner possesses a _dominant_ type of knowledge based on the educational context and learning environment (?, ?, ?). Moreover, prior work has shown that students’ metacognitive knowledge can evolve during learning (?, ?, ?). Thus, _adaptive_ interventions considering such development are needed (?, ?, ?, ?). Reinforcement Learning (RL) (?, ?) is one of the most effective approaches for _adaptive_ support and scaffolding across Intelligent Tutoring Systems (ITSs) (?, ?, ?). The deep learning extension of RL, known as deep RL (DRL), has been commonly utilized in pedagogical policy induction across ITSs (?, ?, ?, ?, ?, ?, ?) due to its support for more sophisticated models. As far as we know, no prior work has leveraged DRL in providing adaptive interventions to bridge the metacognitive knowledge gap and prepare students for future learning across ITSs. This work builds on our prior work, where students were categorized into declarative, procedural, or conditional learners based on their _dominant_ metacognitive knowledge on an ITS. We found that only conditional students were prepared for future learning, as they significantly outperformed their declarative and procedural peers across different deductive domains (?, ?, ?). Inspired by such findings, this work empirically evaluates DRL’s effectiveness through a classroom study; we leverage DRL to provide adaptive metacognitive interventions to bridge the knowledge gap for declarative and procedural students. DRL provided metacognitive interventions that teach students how and when to use a backward-chaining (BC) strategy on a logic tutor that supports a default forward-chaining (FC) strategy. 
After six weeks, we trained students on a probability tutor that only supports BC without receiving interventions. Our results showed that DRL indeed bridged the gap, prepared students for future learning, adapted to their metacognitive development, and raised their decision-making autonomy. ## Background & Related Work ### Declarative, Procedural and Conditional Knowledge Metacognition denotes cognition about cognition and the ability to control, conceive and regulate knowledge (?, ?, ?, ?). Three types of metacognitive knowledge are declarative, procedural, and conditional (?, ?, ?, ?, ?, ?). Declarative knowledge —also described as surface or rote learning (?, ?)— is considered the simplest and lowest level of knowledge, as it involves memorization of facts and data offered in the default settings of a learning environment (?, ?, ?, ?, ?). Procedural knowledge is a higher form of knowledge that is distinguished by the automated understanding of how to use different problem-solving strategies and cognitive skills without conscious attention or reasoning about their rationale (?, ?, ?, ?, ?, ?). Conditional knowledge is the highest level of knowledge, as it requires understanding how, when and why to use each strategy and cognitive skill (?, ?, ?, ?). Considerable research has investigated the significance of acquiring and nurturing each knowledge type (?, ?, ?, ?) and bridged the gap between them (?, ?, ?, ?). ? (?) showed that students’ self-efficacy was significantly correlated to declarative rather than procedural knowledge when solving physics problems. ? (?) found that students with high procedural knowledge significantly outperformed their peers on a tutoring system that teaches linked lists. In ? (?), students who had high conditional knowledge significantly surpassed their peers with low conditional knowledge in learning predictive parsing algorithms on a tutoring system. 
### Metacognitive Development Metacognitive development is defined as the shifts in the learning approach and the metacognitive knowledge and skills used by a student (?, ?, ?, ?, ?, ?, ?). We focus on the development across declarative, procedural, and conditional knowledge. ? (?) proposed a framework for developing and acquiring metacognitive skills. He presented two major stages in skill acquisition: the declarative stage, where facts about the domain are interpreted, and the procedural stage, where domain knowledge is incorporated into procedures for performing the skill. ? (?) stated that knowledge compilation occurs when the learner transitions from the declarative to the procedural stage. Later, in ? (?), he argued that procedural knowledge depends upon conditional knowledge, which in turn depends on declarative knowledge. Specifically, ? (?) articulated that the learner’s metacognitive knowledge develops from memorizing facts about strategies, then knowing the proper situations to use them, and finally, mastering each strategy. ? (?) claimed that declarative and procedural knowledge develop in an iterative fashion through improved problem representations. They conducted two experiments on fifth- and sixth-graders learning decimal fractions and found that initial declarative knowledge predicted gains in procedural knowledge and vice versa. They showed that correct problem representation mediated the relation between declarative and procedural knowledge. ### Reinforcement Learning in ITSs Reinforcement Learning (RL) is a popular machine learning branch ideal in environments where actions result in numeric rewards without knowing a ground truth (?, ?). Due to its aim of maximizing the cumulative reward, RL has been widely used in educational domains due to the flexible implementation of reward functions (?, ?, ?, ?, ?). 
Deep RL (DRL) is a category of algorithms that combine RL with neural networks; for instance, the Deep Q-Network (DQN) algorithm is the neural-network extension of the Q-learning algorithm (?, ?). Substantial work has used RL and DRL to induce pedagogical policies across ITSs (?, ?, ?, ?). ? (?) utilized hierarchical RL to improve the learning gain on an ITS and showed that their policy significantly outperformed an expert and a random condition. ? (?) presented a DRL framework that identifies the critical decisions to induce a critical policy on an ITS. They evaluated their critical-DRL framework based on two success criteria: necessity and sufficiency. The former required offering help in all critical states, and the latter required offering help only in critical states. Their results showed that the framework fulfilled both criteria. ? (?) conducted two consecutive classroom studies where DRL was applied to decide whether the student or the tutor should solve the following problem. They found that the DRL policy with simple explanations significantly improved students’ learning performance more than an expert policy. Despite the wide use of RL and DRL on ITSs, attempts to combine either with metacognitive knowledge have been minimal (?, ?). ? (?) used RL to teach the metacognitive skill of knowing how much to plan ahead (Deciding How to Decide). Their metacognitive RL framework builds on the semi-gradient SARSA algorithm developed to approximate Markov decision processes. They defined a meta Q-function that takes the meta state of the environment and the planning-horizon action. They evaluated their framework on two planning tasks, where constrained reward functions were defined such that the rewards could be predicted many steps ahead to facilitate forming a plan.
To sum up, despite much prior work on declarative, procedural, and conditional knowledge, prior research has yet to investigate the impact of closing the gap between them on preparation for future learning across ITSs. Our work utilizes DRL to provide adaptive metacognitive interventions that bridge the gap across ITSs. We investigate the impact of our interventions on students’ metacognitive development and preparation for future learning. In brief, we induce and deploy a DRL policy of metacognitive interventions on a logic tutor and investigate its impact on a subsequent probability tutor.

## Logic and Probability Tutors

Figure 1: Logic Tutor Problem-Solving Strategies — (a) Forward Chaining (FC); (b) Backward Chaining (BC)

#### Logic Tutor

The logic tutor teaches propositional logic proofs by applying valid inference rules such as Modus Ponens through the standard sequence of pre-test, training and post-test. The three phases share the same interface, but training is the _only_ one where students can seek and get help. The pre-test has two problems, while the post-test is harder and has six problems; the first two are isomorphic to the pre-test problems. Training consists of five ordered levels with an _incremental degree of difficulty_, and each level consists of four problems. Every problem has a score in the $[0,100]$ range based on accuracy, time and solution length. The _pre-_ and _post-test_ scores are calculated by averaging their pre- and post-test problem scores. A student can solve any problem throughout the tutor by either an FC or a BC strategy (?, ?, ?, ?). Figure 1(a) shows that for FC, one must derive the conclusion at the bottom from the givens at the top, while Figure 1(b) shows that for _BC_, students need to derive a contradiction from the givens and the _negation_ of the conclusion. Problems are presented by _default_ in FC, but students can switch to BC by clicking a button in the tutor interface.
Figure 2: Training on the Modified Logic Tutor (DRL)

#### Probability Tutor

The probability tutor teaches how to solve probability problems using ten principles, such as the Complement Theorem. The tutor consists of a textbook, pre-test, training, and post-test. Like the logic tutor, training is the only section where students can receive and ask for hints, and the post-test is harder than the pre-test. The textbook introduces the domain principles, while training consists of $12$ problems, each of which can _only_ be solved by BC, as it requires deriving an answer by _writing and solving equations_ until the target is ultimately reduced to the givens. In the pre- and post-test, students solve $14$ and $20$ open-ended problems, respectively, where each pre-test problem has an isomorphic post-test one. Answers are graded double-blind by experienced graders using a partial-credit rubric, where grades are based _only_ on accuracy in the $[0,100]$ range. The _pre-_ and _post-test_ scores are the average grades in their sections.

## Methods

#### Three Metacognitive Interventions

As students can choose to switch problem-solving strategies _only_ on the logic tutor, our interventions are provided during the logic training. We previously found that Conditional students frequently switched _early_ (within the first $\mathbf{30}$ actions) to BC on the logic tutor, Procedural students switched _late_ (after the first $\mathbf{30}$ actions), and their Declarative peers made no switches and used the default strategy (?, ?). It was also shown that providing metacognitive interventions that presented problems directly in BC or recommended switching to BC —referred to as Nudges— caused Declarative and Procedural students to catch up with their Conditional peers (?, ?, ?).

Figure 3: Strategy Switch Nudge

This work leverages DRL to provide three metacognitive interventions regardless of the student’s metacognitive group: Nudge (Nud), Present in BC (Prs), or No Intervention (No).
We trained Experimental (DRL) students on the modified tutor shown in Figure 2. Two worked examples on BC were provided to teach students _how_ to use BC, where the tutor showed a step-by-step solution. Since our interventions included the no-intervention option, we intervened in as many problems as possible. Figure 3 shows an example of a nudge, which is prompted after a number of seconds sampled from a probability distribution of prior students’ switch behavior (?, ?). We did not intervene in the last training problem of each level, as it is used to evaluate the improvement on that level.

#### DRL Policy Induction

We used data from four previous studies comprising $867$ students (?, ?, ?, ?, ?) and performed an $80$-$20$ train-test split. The dataset consisted of one record (state, action, reward) per student per logic training problem. The state is a feature vector incorporating $152$ features that capture temporal, accuracy-based and hint-based behaviors. The action is Nudge, Present in BC, or No Intervention. The reward is the immediate problem score in the logic tutor. Our objective is to investigate whether DRL works with our metacognitive interventions, rather than to determine which DRL algorithm suits them best. We preferred DRL to RL due to its prevailing success in educational domains (?, ?). In selecting the algorithm, we had to avoid a relatively simple one such as Deep Q-Network (DQN), which overestimates action values (?, ?) and may result in underfitting. Furthermore, we needed to avoid sophisticated DRL algorithms, such as autoencoders and actor-critic approaches, so that DRL does not overshadow the impact of our metacognitive interventions. In other words, a sophisticated DRL algorithm yielding an optimal policy would likely be credited for its sophistication rather than for the metacognitive interventions it provided.
Thus, we exploited Double-DQN (DDQN), which solves the overestimation issue in DQN by decoupling the action selection from its evaluation across two different neural networks (?, ?). The resulting modified Bellman equation becomes:

$Q(s,a;\boldsymbol{\theta})=r+\gamma\,Q(s^{\prime},\operatorname*{argmax}_{a^{\prime}}\,Q(s^{\prime},a^{\prime};\boldsymbol{\theta});\boldsymbol{\theta^{-}})$ (1)

where $r$ is the reward; $\gamma$ is the discount factor; $s$ and $s^{\prime}$ refer to the current and next states; $a$ and $a^{\prime}$ denote the current and next actions. DDQN uses the main $(\boldsymbol{\theta})$ neural network to select the action with the highest Q-value for the next state and then evaluates its Q-value using the target $(\boldsymbol{\theta^{-}})$ neural network. After hyperparameter tuning, we picked the model with the lowest mean squared error loss. The deployed policy had two hidden layers with $16$ neurons each, a $1e$-$3$ learning rate, a $9e$-$1$ discount factor, a batch size of $32$, a synchronization frequency of $4$ steps between the two neural networks ($\boldsymbol{\theta}$ and $\boldsymbol{\theta^{-}}$), and was trained until convergence ($\approx 2000$ epochs).

## Experiment Setup

The experiment took place in an undergraduate Computer Science class in the Fall of 2022 at North Carolina State University. The participants were assigned each tutor as a class assignment and told that completion was required for full credit. We randomly split students into Experimental (DRL) and Control (Ctrl) conditions, where students were first assigned the logic tutor and then the probability tutor six weeks later. On both tutors, students received the problems in the same order and followed the standard phases described in the Logic and Probability Tutors section. The only difference between the two conditions is that DRL students received our adaptive metacognitive interventions in the logic training provided by the DRL policy (Fig. 2), while their Ctrl peers received no interventions.
On probability, all students received no interventions. The main challenge in this work was to ensure an even distribution of the metacognitive groups {Declarative (Decl), Procedural (Proc), Conditional (CDL)} across the two conditions {DRL, Ctrl}. We aimed to investigate whether DRL would help students with different incoming metacognitive knowledge. Therefore, as per our prior work, we utilized the random forest classifier (RFC) that, based on pre-test performance, predicts the incoming metacognitive group before training on logic and was previously shown to be $96\%$ accurate (?, ?). A total of $112$ students finished both tutors. We found that our DRL policy provided no interventions for CDLDRL students $94\%$ of the time. Therefore, we combined CDLDRL and CDLCtrl into CDL. After randomly splitting students into conditions and utilizing the RFC for even distribution, we had $22$ DeclDRL, $24$ ProcDRL, $22$ DeclCtrl, $22$ ProcCtrl and $22$ CDL students. The RFC was $98\%$ accurate in classifying students who received no interventions —DeclCtrl, ProcCtrl and CDL.

## Results

### Learning Performance

Table 1: Comparing Groups across Tutors

| | Experimental (DRL: $N=46$) | Control (Ctrl: $N=44$) | Conditional (CDL: $N=22$) |
|---|---|---|---|
| **Logic Tutor** | | | |
| Pre | $55.9\,(21)$ | $55.8\,(21)$ | $58.2\,(19)$ |
| Iso. Post | $\mathbf{92.1\,(5)^{*}}$ | $73.4\,(17)$ | $83.4\,(12)^{*}$ |
| Iso. NLG | $\mathbf{0.47\,(.1)^{*}}$ | $0.16\,(.28)$ | $0.35\,(.11)^{*}$ |
| Post | $\mathbf{87.7\,(6)^{*}}$ | $69.8\,(15)$ | $80.2\,(11)^{*}$ |
| NLG | $\mathbf{0.45\,(.11)^{*}}$ | $0.13\,(.33)$ | $0.31\,(.15)^{*}$ |
| **Probability Tutor** | | | |
| Pre | $75.7\,(15)$ | $76\,(14)$ | $78.6\,(14)$ |
| Iso. Post | $\mathbf{95.3\,(4)^{*}}$ | $72.7\,(13)$ | $89.1\,(7)^{*}$ |
| Iso. NLG | $\mathbf{0.4\,(.12)^{*}}$ | -$0.04\,(.18)$ | $0.24\,(.15)^{*}$ |
| Post | $\mathbf{94.9\,(4)^{*}}$ | $70.2\,(15)$ | $87.7\,(8)^{*}$ |
| NLG | $\mathbf{0.37\,(.14)^{*}}$ | -$0.08\,(.21)$ | $0.22\,(.19)^{*}$ |

In a row, bold is for the highest value, and an asterisk means significance over no asterisks.

#### Experimental vs. Control vs. Conditional

Table 1 shows the groups’ performance on both tutors. We display the mean and standard deviation of pre- and post-test scores, isomorphic scores, and the normalized learning gain (NLG), defined as $NLG=\frac{Post-Pre}{\sqrt{100-Pre}}$, where $100$ is the maximum score. We refer to pre- and post-test scores as Pre and Post, respectively, while the groups are abbreviated as DRL (the italicized version refers to the group; otherwise, to the policy), Ctrl and CDL. We performed a Shapiro-Wilk normality test for each metric for each group and found no evidence of non-normality ($\mathit{p}>.05$). A one-way ANOVA using group as factor found no significant difference on Pre: $\mathit{F}(2,109)=0.09,\,\mathit{p}=.91$ for logic and $\mathit{F}(2,109)=0.21,\,\mathit{p}=.81$ for probability. To measure the improvement on isomorphic problems, repeated measures ANOVA tests were conducted (one for each group on each tutor) using {Pre, Iso. Post} as factor. On both tutors, we found that DRL and CDL learned significantly with $\mathit{p}<.0001$, while Ctrl did not ($\mathit{p}>.05$). A one-way ANCOVA (the general effect size $\eta^{2}$ is reported for conservative results) using Pre as covariate and group as factor found a significant effect on Post on both tutors: $\mathit{F}(2,108)=38.4,\,\mathit{p}<.0001,\,\mathit{\eta}^{2}=0.71$ for logic and $\mathit{F}(2,108)=49.6,\,\mathit{p}<.0001,\,\mathit{\eta}^{2}=0.79$ for probability.
Subsequent Bonferroni-corrected $(\alpha=.05/3)$ analyses revealed that DRL significantly outperformed both groups, while CDL significantly surpassed Ctrl; for instance, DRL had significantly higher Post than CDL: $\mathit{t}(66)=3.6,\,\mathit{p}<.001$ and $\mathit{t}(66)=3.8,\,\mathit{p}<.001$ for logic and probability, respectively. Similar results were found using ANOVA on NLG. In brief, these findings on both tutors confirm that DRL $>$ CDL $>$ Ctrl.

#### Groups within Experimental and Control

We compared the performance of the metacognitive groups {Decl, Proc} across the two conditions {DRL, Ctrl} to assess the within- and between-condition impact of the DRL policy. Our results showed the same statistically significant pattern on both tutors for Post, NLG and their isomorphic versions: DeclDRL, ProcDRL $>$ DeclCtrl, ProcCtrl. We display Pre and Post scores in Figure 4 and report the statistical results for NLG in the following paragraph.

Figure 4: Conditions Pre and Post Performance across Tutors

A two-way ANOVA using metacognitive group and condition as factors found a significant interaction effect on NLG on both tutors: $\mathit{F}(1,86)=36.3,\,\mathit{p}<.0001,\,\mathit{\eta}^{2}=0.68$ for logic and $\mathit{F}(1,86)=45.8,\,\mathit{p}<.0001,\,\mathit{\eta}^{2}=0.76$ for probability. Follow-up Bonferroni-adjusted $(\alpha=.05/6)$ analyses showed that DRL significantly outperformed Ctrl. Specifically, on logic, DeclDRL $[0.45(.12)]$ had significantly higher NLG than DeclCtrl $[0.16(.33)]$: $\mathit{t}(42)=6.2,\,\mathit{p}<.0001$, and ProcDRL $[0.44(.14)]$ significantly outperformed ProcCtrl $[0.1(.31)]$: $\mathit{t}(44)=7.3,\,\mathit{p}<.0001$. On probability, we found the same patterns, as DeclDRL $[0.34(.13)]$ and ProcDRL $[0.39(.16)]$ surpassed DeclCtrl $[-0.07(.32)]$ and ProcCtrl $[-0.1(.25)]$, respectively: $\mathit{t}(42)=6.5,\,\mathit{p}<.0001$ and $\mathit{t}(44)=7.9,\,\mathit{p}<.0001$.
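For concreteness, the NLG metric used throughout these comparisons can be computed directly from a pre- and post-test score pair. The helper below is a minimal illustrative sketch (not the study's analysis code), assuming scores on the $[0,100]$ scale as stated in the formula:

```python
import math

def normalized_learning_gain(pre: float, post: float) -> float:
    """NLG = (Post - Pre) / sqrt(100 - Pre), where 100 is the maximum score."""
    if pre >= 100:
        raise ValueError("Pre-test score must be below the maximum score of 100.")
    return (post - pre) / math.sqrt(100 - pre)

# Hypothetical student improving from 84 to 92:
gain = normalized_learning_gain(pre=84, post=92)  # (92 - 84) / sqrt(16) = 2.0
```

Note that the square-root denominator rewards the same raw improvement more for students who start closer to the ceiling, since $\sqrt{100-Pre}$ shrinks as Pre grows.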
### Policy Decisions and Students’ Strategic Behavior

As the learning performance results found no significant difference within the DRL group on both tutors, we further investigated the distribution of the DRL policy decisions; DeclDRL received $94\,(33\%)$ Nudges, $65\,(23\%)$ presented in BC and $127\,(44\%)$ No Intervention, while ProcDRL received $82\,(26\%)$ Nudges, $74\,(24\%)$ presented in BC and $156\,(50\%)$ No Intervention. A chi-square test showed no significant difference in the decisions’ distribution between the DRL groups: $\chi^{2}(2,\,N=598)=3.2,\,\mathit{p}=.2$. Thus, we combined their policy decisions and analyzed the decisions’ distribution per logic training level, as shown in Table 2. A chi-square test showed a significant relationship between the policy decision type and the training level: $\chi^{2}(8,\,N=598)=81.2,\,\mathit{p}<.0001$. Post-hoc pairwise chi-square tests with Bonferroni adjustment $(\alpha=.05/10)$ showed that the last two levels had significantly more No-Intervention decisions. For instance, the fourth level had more No-Intervention decisions than the third level: $\chi^{2}(2,\,N=276)=33.4,\,\mathit{p}<.0001$.

Table 2: Distribution of DRL Policy Decisions across Levels

| | $L1$ ($\%$) | $L2$ ($\%$) | $L3$ ($\%$) | $L4$ ($\%$) | $L5$ ($\%$) |
|---|---|---|---|---|---|
| Nud | $\mathbf{40}$ | $\mathbf{42}$ | $\mathbf{37}$ | $17$ | $18$ |
| Prs | $28$ | $31$ | $31$ | $16$ | $15$ |
| No | $32$ | $27$ | $32$ | $\mathbf{67}$ | $\mathbf{67}$ |

In a column (training level), bold is for the highest value.

Figure 5: Students’ Strategic Decisions

The students’ strategic decisions on logic training are shown in Figure 5 to investigate the impact of the DRL policy on their choices. The first three columns reflect the choices DRL, Ctrl and CDL students made during the entire training, where DeclDRL and ProcDRL decisions had similar distributions and thus were combined.
Early switches to BC occurred within the first $30$ actions, while late ones happened after that, as defined in our prior work (?, ?, ?). The Ctrl and CDL students did not have problems presented in BC and hence had one less choice. Bonferroni-corrected $(\alpha=.05/3)$ chi-square tests showed that DRL $>$ CDL $>$ Ctrl in their early-switch choices (‘use presented BC’ was excluded from pairwise comparisons). For example, DRL switched early significantly more than CDL: $\chi^{2}(2,\,N=794)=54.9,\,\mathit{p}<.0001$. The last five columns in Figure 5 display the DRL students’ decisions per training level. Pairwise Bonferroni-adjusted $(\alpha=.05/10)$ chi-square tests revealed that DRL early-switch choices significantly increased in the last two levels. For example, DRL students switched early in the fourth level significantly more than the third one: $\chi^{2}(3,\,N=276)=25.1,\,\mathit{p}<.0001$. In essence, while the DRL policy intervened significantly less in the last two training levels (Table 2), Figure 5 shows that DRL students made significantly more early switches in the same two levels. This suggests that DRL students became more autonomous as training proceeded. Additionally, DRL students switched early more frequently than Ctrl and CDL.

### Policy Adaptation for Metacognitive Development

One of our objectives was to investigate whether the DRL policy adapted to the student’s metacognitive knowledge as it changes through logic training. We leveraged association rule mining to observe frequent patterns by the DRL policy.
Specifically, for a student, each two consecutive policy decisions were encoded into a transaction represented as $\{a_{t},\,c_{t},\,a_{t+1}\}$, where $a_{t},\,a_{t+1}\in\{Nud,\,Prs,\,No\}$ refer to the current and next policy decisions, respectively; $c_{t}\in\{Agree,\,Disagree\}$ indicates whether the student _complied_ (agreed/disagreed) with the current policy decision, where agreement and disagreement are defined as:

* _Nudge (Nud)_: early switches to BC represent agreement, while late switches to BC or using FC means disagreement.
* _Present in BC (Prs)_: using BC and FC denote agreement and disagreement, respectively.
* _No Intervention (No)_: agreement is to use FC, while using BC reflects disagreement.

We focused on rules in the form of $\{a_{t},\,c_{t}\}\Rightarrow a_{t+1}$ to extract meaningful association rules for the next policy decision based on the student’s compliance with the current decision. Since $c_{t}$ has two possible values and $a_{t},\,a_{t+1}$ each has three possible values, there are $18$ possible rules. The _support_ and _confidence_ of each rule were traditionally computed as:

$\displaystyle\text{Sup}(\{a_{t},c_{t}\}\Rightarrow a_{t+1})=\frac{count(\{a_{t},c_{t},a_{t+1}\})}{Total}$

$\displaystyle\text{Conf}(\{a_{t},c_{t}\}\Rightarrow a_{t+1})=\frac{count(\{a_{t},c_{t},a_{t+1}\})}{count(\{a_{t},c_{t}\})}$

where $Total=598$ ($46$ DRL students $\times$ $13$ training decisions). Table 3 lists the top six rules for the DRL policy sorted by their support in descending order. The rules learned from DeclDRL and ProcDRL students were similar and thus were combined. The remaining rules were excluded due to their significantly low support and confidence values.
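The support and confidence computations above can be sketched in a few lines; the transactions below are made-up examples for illustration, not the study data:

```python
def rule_support_confidence(transactions, antecedent, consequent):
    """Support/confidence of {a_t, c_t} => a_{t+1} over (a_t, c_t, a_{t+1}) transactions."""
    total = len(transactions)
    # Count transactions matching the antecedent {a_t, c_t} ...
    antecedent_count = sum(1 for (a, c, _) in transactions if (a, c) == antecedent)
    # ... and, among those, the ones also matching the consequent a_{t+1}.
    full_count = sum(1 for (a, c, nxt) in transactions
                     if (a, c) == antecedent and nxt == consequent)
    support = full_count / total
    confidence = full_count / antecedent_count if antecedent_count else 0.0
    return support, confidence

# Hypothetical transactions: (current decision, compliance, next decision)
transactions = [
    ("No", "Disagree", "No"),
    ("No", "Disagree", "No"),
    ("No", "Disagree", "Nud"),
    ("Nud", "Agree", "No"),
]

sup, conf = rule_support_confidence(transactions, ("No", "Disagree"), "No")
# sup = 2/4 = 0.5; conf = 2/3
```

Ranking all $18$ candidate rules by support computed this way, and dropping those with low support and confidence, yields a table of the kind shown in Table 3.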
Table 3: Top Association Rules for the DRL Policy

| Rank | Rule | Supp ($\%$) | Conf ($\%$) |
|---|---|---|---|
| 1 | $\{No,\,Disagree\}\Rightarrow No$ | $23$ | $76$ |
| 2 | $\{Nud,\,Agree\}\Rightarrow No$ | $12$ | $61$ |
| 3 | $\{Prs,\,Agree\}\Rightarrow No$ | $10$ | $58$ |
| 4 | $\{No,\,Agree\}\Rightarrow Nud$ | $10$ | $60$ |
| 5 | $\{Nud,\,Disagree\}\Rightarrow Prs$ | $7$ | $69$ |
| 6 | $\{Prs,\,Disagree\}\Rightarrow Nud$ | $5$ | $66$ |

#### Interpreting Association Rules

Table 3 reveals unique perspectives of the DRL policy. In essence, the first three rules suggest that the policy treated those who knew in advance _how_ and _when_ to use BC as Conditional students by _avoiding_ interventions in such situations. The last two rules reflect swapping the interventions once a student disagrees with their utility. This finding suggests that students’ metacognitive knowledge changes during training and confirms our prior results that metacognitive interventions have different effects on Declarative and Procedural students (?, ?). The fourth rule shows that the DRL policy preferred recommending rather than imposing BC for students who previously used FC.

## Discussions & Conclusions

#### Bridging the Gap

We showed that our DRL policy caused students with low incoming metacognitive knowledge (declarative and procedural) to outperform their conditional peers, who had the highest knowledge and received no interventions. In other words, DRL bridged the metacognitive knowledge gap between students on a logic tutor, where the interventions were provided, and on a subsequent probability tutor, where students received no interventions.

#### Preparation for Future Learning

Experimental declarative and procedural students received DRL-based interventions on logic and surpassed their no-intervention control peers on logic and probability. This suggests that DRL prepared students for future learning, as they outperformed control students on probability based on logic interventions.
#### Autonomy and Metacognitive Development

The DRL policy adapted to the back-and-forth metacognitive development between declarative, procedural, and conditional students. Specifically, the association rule mining analyses showed that the DRL policy changed its interventions to adapt to the dynamic metacognitive knowledge of students. Hence, students became more autonomous and made effective strategic decisions, even when DRL intervened significantly less.

#### Limitations and Future Work

There are at least two caveats in our work. First, splitting students into experimental and control conditions resulted in relatively small sample sizes. Second, the probability tutor supported only one strategy, which restricted our intervention ability to logic. Future work involves implementing forward chaining on the probability tutor, comparing multiple DRL algorithms for our interventions, and comparing the DRL interventions against a stronger control that receives random interventions.

## Acknowledgments

This research was supported by the NSF Grants: MetaDash: A Teacher Dashboard Informed by Real-Time Multichannel Self-Regulated Learning Data (1660878), Integrated Data-driven Technologies for Individualized Instruction in STEM Learning Environments (1726550), Generalizing Data-Driven Technologies to Improve Individualized STEM Instruction by Intelligent Tutors (2013502) and CAREER: Improving Adaptive Decision Making in Interactive Learning Environments (1651909).

## References

* Abdelshiheed, M., Hostetter, J. W., Barnes, T., Chi, M. (2023). Leveraging deep reinforcement learning for metacognitive interventions across intelligent tutoring systems. In _Proceedings of the 24th international conference on artificial intelligence in education_.
* Abdelshiheed, M., Hostetter, J. W., Shabrina, P., Barnes, T., Chi, M. (2022).
The power of nudging: Exploring three interventions for metacognitive skills instruction across intelligent tutoring systems. In _Proceedings of the 44th annual conference of the cognitive science society_ (pp. 541–548).
* Abdelshiheed, M., Hostetter, J. W., Yang, X., Barnes, T., Chi, M. (2022). Mixing backward- with forward-chaining for metacognitive skill acquisition and transfer. In _Proceedings of the 23rd international conference on artificial intelligence in education_ (pp. 546–552).
* Abdelshiheed, M., Maniktala, M., Barnes, T., Chi, M. (2022). Assessing competency using metacognition and motivation: The role of time-awareness in preparation for future learning. In _Design recommendations for intelligent tutoring systems_ (Vol. 9, pp. 121–131). US Army Combat Capabilities Development Command–Soldier Center.
* Abdelshiheed, M., Maniktala, M., Ju, S., Jain, A., Barnes, T., Chi, M. (2021). Preparing unprepared students for future learning. In _Proceedings of the 43rd annual conference of the cognitive science society_ (pp. 2547–2553).
* Abdelshiheed, M., Zhou, G., Maniktala, M., Barnes, T., Chi, M. (2020). Metacognition and motivation: The role of time-awareness in preparation for future learning. In _Proceedings of the 42nd annual conference of the cognitive science society_ (pp. 945–951).
* Alam, N., Mostafavi, B., Chi, M., Barnes, T. (2023). Exploring the effect of autoencoder based feature learning for a deep reinforcement learning policy to determine when to provide proactive help. In _Proceedings of the 24th international conference on artificial intelligence in education_.
* Anderson, J. R. (1982). Acquisition of cognitive skill. _Psychological Review_, _89_(4), 369.
* Anderson, J. R.
(2005). _Cognitive psychology and its implications_. Macmillan.
* Azevedo, R., Aleven, V. (2013). _International handbook of metacognition and learning technologies_ (Vol. 26). Springer.
* Azevedo, R., Cromley, J. G., Winters, F. I., Moos, D. C., Greene, J. A. (2005). Adaptive human scaffolding facilitates adolescents’ self-regulated learning with hypermedia. _Instructional Science_, _33_, 381–412.
* Baker, L. (1994). Fostering metacognitive development. _Advances in Child Development and Behavior_, _25_, 201–239.
* Biggs, J., Tang, C., Kennedy, G. (1999). Teaching for quality learning at universities. _Open University, Buckingham_.
* Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. _Handbook; Cognitive domain_, _1_.
* Boden, K., Kuo, E., Nokes-Malach, T., Wallace, T., Menekse, M. (2018). What is the role of motivation in procedural and conceptual physics learning? An examination of self-efficacy and achievement goals. _Physics Education Research Conference_, 60–63.
* Bransford, J. D., Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. _Review of Research in Education_, _24_(1), 61–100.
* Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. _Metacognition, Motivation, and Understanding_, 65–116.
* Butler, D. L. (1998). The strategic content learning approach to promoting self-regulated learning: A report of three studies. _Journal of Educational Psychology_, _90_(4), 682.
* Case, J., Gunstone, R. (2002). Metacognitive development as a shift in approach to learning: An in-depth study. _Studies in Higher Education_, _27_(4), 459–470.
* Castro-Schez, J.
J., Glez-Morcillo, C., Albusac, J., Vallejo, D. (2021). An intelligent tutoring system for supporting active learning: A case study on predictive parsing learning. _Information Sciences_, _544_, 446–468.
* Cutrer, W. B., Miller, B., Pusic, M. V., Mejicano, G., Mangrulkar, R. S., Gruppen, L. D., … Moore Jr, D. E. (2017). Fostering the development of master adaptive learners: A conceptual model to guide skill acquisition in medical education. _Academic Medicine_, _92_(1), 70–75.
* De Backer, L., Van Keer, H., Valcke, M. (2012). Exploring the potential impact of reciprocal peer tutoring on higher education students’ metacognitive knowledge and regulation. _Instructional Science_, _40_, 559–588.
* Detterman, D. K., Sternberg, R. J. (1993). _Transfer on trial: Intelligence, cognition, and instruction_. Ablex Publishing.
* Dochy, F. J. R. C. (1992). _Assessment of prior knowledge as a determinant for future learning_. Open Universiteit.
* Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. _American Psychologist_, _34_(10), 906.
* Fossati, D. (2009). _Automatic modeling of procedural knowledge and feedback generation in a computer science tutoring system_. University of Illinois at Chicago.
* Gao, G., Gao, Q., Yang, X., Pajic, M., Chi, M. (2022). A reinforcement learning-informed pattern mining framework for multivariate time series classification. In _Proceedings of the 31st international joint conference on artificial intelligence_.
* Gao, G., Ju, S., Ausin, M. S., Chi, M. (2023). Hope: Human-centric off-policy evaluation for e-learning and healthcare. In _Proceedings of the 22nd international conference on autonomous agents and multiagent systems_.
* Georgeff, M. P., Lansky, A. L. (1986).
Procedural knowledge. _Proceedings of the IEEE_, _74_(10), 1383–1398.
* Hershkovitz, A., Baker, R., Gowda, S. M., Corbett, A. T. (2013). Predicting future learning better using quantitative analysis of moment-by-moment learning. In _Educational Data Mining_.
* Hostetter, J. W., Abdelshiheed, M., Barnes, T., Chi, M. (2023a). Leveraging fuzzy logic towards more explainable reinforcement learning-induced pedagogical policies on intelligent tutoring systems. In _2023 IEEE International Conference on Fuzzy Systems_.
* Hostetter, J. W., Abdelshiheed, M., Barnes, T., Chi, M. (2023b). A self-organizing neuro-fuzzy q-network: Systematic design with offline hybrid learning. In _Proceedings of the 22nd international conference on autonomous agents and multiagent systems_.
* Jacobs, J. E., Paris, S. G. (1987). Children’s metacognition about reading: Issues in definition, measurement, and instruction. _Educational Psychologist_, _22_(3-4), 255–278.
* Ju, S., Zhou, G., Abdelshiheed, M., Barnes, T., Chi, M. (2021). Evaluating critical reinforcement learning framework in the field. In _Proceedings of the 22nd international conference on artificial intelligence in education_ (pp. 215–227).
* Kiesewetter, J., Ebersbach, R., Tsalas, N., Holzer, M., Schmidmaier, R., Fischer, M. R. (2016). Knowledge is not enough to solve the problems–the role of diagnostic knowledge in clinical reasoning activities. _BMC Medical Education_, _16_(1), 1–8.
* Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. _Theory into Practice_, _41_(4), 212–218.
* Krueger, P. M., Lieder, F., Griffiths, T. (2017). Enhancing metacognitive reinforcement learning using reward structures and feedback. In _CogSci_.
*
Kua, J., Lim, W.-S., Teo, W., Edwards, R. A. (2021). A scoping review of adaptive expertise in education. _Medical Teacher_ , _43_(3), 347–355. * KuhnKuhn Kuhn, D. (2000). Metacognitive development. _Current directions in psychological science_ , _9_(5), 178–181. * LarkinLarkin Larkin, S. (2009). _Metacognition in young children_. Routledge. * LivingstonLivingston Livingston, J. A. (2003). _Metacognition: An overview._ ERIC. * MbatoMbato Mbato, C. L. (2019). Indonesian efl learners’ critical thinking in reading: Bridging the gap between declarative, procedural and conditional knowledge. _Humaniora_ , _31_(1), 92. * Mnih et al.Mnih et al. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … others (2015). Human-level control through deep reinforcement learning. _nature_ , _518_(7540), 529–533. * PintrichPintrich Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. _Theory into practice_ , _41_(4), 219–225. * Rittle-Johnson et al.Rittle-Johnson et al. Rittle-Johnson, B., Siegler, R. S., Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. _Journal of educational psychology_ , _93_(2), 346. * Roberts ErdosRoberts Erdos Roberts, M. J., Erdos, G. (1993). Strategy selection and metacognition. _Educational psychology_ , _13_(3-4), 259–266. * Sanz-Ausin et al.Sanz-Ausin et al. Sanz-Ausin, M., Azizsoltani, H., Barnes, T., Chi, M. (2019). Leveraging deep reinforcement learning for pedagogical policy induction in an intelligent tutoring system. _International Educational Data Mining Society_. * Sanz-Ausin et al.Sanz-Ausin et al. Sanz-Ausin, M., Maniktala, M., Barnes, T., Chi, M. (2020). Exploring the impact of simple explanations and agency on batch deep reinforcement learning induced pedagogical policies. In _AIED_ (pp. 472–485). * SchrawSchraw Schraw, G. (1998). Promoting general metacognitive awareness. 
_Instructional science_ , _26_(1-2), 113–125. * Schraw MoshmanSchraw Moshman Schraw, G., Moshman, D. (1995). Metacognitive theories. _Educational psychology review_ , _7_ , 351–371. * Shabrina, Mostafavi, Abdelshiheed, et al.Shabrina, Mostafavi, Abdelshiheed, et al. Shabrina, P., Mostafavi, B., Abdelshiheed, M., Chi, M., Barnes, T. (2023). Investigating the impact of backward strategy learning in a logic tutor: Aiding subgoal learning towards improved problem solving. _International Journal of Artificial Intelligence in Education_. * Shabrina, Mostafavi, Chi, BarnesShabrina, Mostafavi, Chi, Barnes Shabrina, P., Mostafavi, B., Chi, M., Barnes, T. (2023). Impact of learning a subgoal-directed problem-solving strategy in an intelligent logic tutor. In _Proceedings of the 24th international conference on artificial intelligence in education._ * Shabrina, Mostafavi, Tithi, et al.Shabrina, Mostafavi, Tithi, et al. Shabrina, P., Mostafavi, B., Tithi, S. D., Chi, M., Barnes, T. (2023). Learning problem decomposition-recomposition with data-driven chunky parsons problem within an intelligent logic tutor. In _Proceedings of the 16th international conference on educational data mining._ * Sutton BartoSutton Barto Sutton, R. S., Barto, A. G. (2018). _Reinforcement learning: An introduction_. MIT press. * TengTeng Teng, F. (2020). The role of metacognitive knowledge and regulation in mediating university efl learners’ writing performance. _Innovation in Language Learning and Teaching_ , _14_(5), 436–450. * Van Hasselt et al.Van Hasselt et al. Van Hasselt, H., et al. (2016). Deep reinforcement learning with double q-learning. In _AAAI_ (Vol. 30). * Yildirim et al.Yildirim et al. Yildirim, Z., Ozden, M. Y., Aksu, M. (2001). Comparison of hypermedia learning and traditional instruction on knowledge acquisition and retention. _The Journal of educational research_ , _94_(4), 207–214. * Zhou et al.Zhou et al. Zhou, G., Azizsoltani, H., Ausin-Sanz, M., Barnes, T., Chi, M. (2019). 
Hierarchical reinforcement learning for pedagogical policy induction. In _Proceedings of the 20th international conference on artificial intelligence in education_ (pp. 544–556).
# Stability of a Poiseuille-type flow for an MHD model of an incompressible polymeric fluid A. M. Blokhin Sobolev Institute of Mathematics, 4 Acad. Koptyug Avenue, Novosibirsk, 630090, Russia Mechanics and Mathematics Department, Novosibirsk State University, 1 Pirogova Str., Novosibirsk, 630090, Russia <EMAIL_ADDRESS>D. L. Tkachev Sobolev Institute of Mathematics, 4 Acad. Koptyug Avenue, Novosibirsk, 630090, Russia Mechanics and Mathematics Department, Novosibirsk State University, 1 Pirogova Str., Novosibirsk, 630090, Russia <EMAIL_ADDRESS> ###### Abstract We study a generalization of the Pokrovski–Vinogradov model for flows of solutions and melts of an incompressible viscoelastic polymeric medium to nonisothermal flows in an infinite plane channel under the influence of a magnetic field. For the linearized problem (when the basic solution is an analogue of the classical Poiseuille flow for a viscous fluid described by the Navier–Stokes equations) we find a formal asymptotic representation for the eigenvalues as their modulus grows. We obtain a necessary condition for the asymptotic stability of a Poiseuille-type shear flow. ###### keywords: Incompressible viscoelastic polymeric fluid; rheological relation; magnetohydrodynamic flow. https://doi.org/10.1016/j.euromechflu.2019.12.006 © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ Mathematics Subject Classification 2010: 35S99, 76J99, 76T20 ## Introduction In this work we study a generalization of the structurally phenomenological Pokrovski–Vinogradov model, which describes flows of melts and solutions of incompressible viscoelastic polymeric media, to the nonisothermal case under the influence of a magnetic field. 
In the Pokrovski–Vinogradov model, the polymeric medium is considered as a suspension of polymer macromolecules which move in an anisotropic fluid produced, for example, by a solvent and other macromolecules. The influence of the environment on a real macromolecule is modeled by the action on a linear chain of Brownian particles, each of which represents a large enough part of the macromolecule. The Brownian particles, often called “beads”, are connected to each other by elastic forces called “springs”. In the case of slow motions the macromolecule is modeled as a chain of two particles called a “dumbbell”. The physical representation of linear polymer flows described above results in the formulation of the Pokrovski–Vinogradov rheological model [2, 21, 22]: $\rho(\frac{\partial}{\partial t}v_{i}+v_{k}\frac{\partial}{\partial x_{k}}v_{i})=\frac{\partial}{\partial x_{k}}\sigma_{ik},\quad\frac{\partial v_{i}}{\partial x_{i}}=0,$ (1) $\sigma_{ik}=-p\delta_{ik}+3\frac{\eta_{0}}{\tau_{0}}a_{ik},$ (2) $\frac{d}{dt}a_{ik}-v_{ij}a_{jk}-v_{kj}a_{ji}+\frac{1+(k-\beta)I}{\tau_{0}}a_{ik}=\frac{2}{3}\gamma_{ik}-\frac{3\beta}{\tau_{0}}a_{ij}a_{jk},$ (3) where $\rho$ is the polymer density, $v_{i}$ is the $i$-th velocity component, $\sigma_{ik}$ is the stress tensor, $p$ is the hydrostatic pressure, $\eta_{0}$, $\tau_{0}$ are the initial values of the shear viscosity and the relaxation time respectively for the viscoelastic component, $v_{ij}$ is the tensor of the velocity gradient, $a_{ik}$ is the symmetrical tensor of additional stresses of second rank, $I=a_{11}+a_{22}+a_{33}$ is the first invariant of the tensor of additional stresses, $\gamma_{ik}=\frac{v_{ik}+v_{ki}}{2}$ is the symmetrized tensor of the velocity gradient, $k$ and $\beta$ are the phenomenological parameters taking into account the shape and the size of the coiled molecule in the dynamics equations of the polymer macromolecule. 
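As an illustrative aside (not part of the original analysis), the tensor objects entering (2) and (3) can be computed directly for a simple shear flow; all numerical values below are assumptions chosen only for the demonstration.

```python
import numpy as np

# Illustrative parameter values (assumptions, not taken from the paper).
eta0, tau0, p = 1.0, 1.0, 0.5
gamma_dot = 2.0  # shear rate of the sample flow v = (gamma_dot * y, 0, 0)

# Velocity-gradient tensor v_ij = dv_i/dx_j for the shear flow above.
v_grad = np.zeros((3, 3))
v_grad[0, 1] = gamma_dot

# Symmetrized velocity gradient: gamma_ik = (v_ik + v_ki) / 2.
gamma = 0.5 * (v_grad + v_grad.T)

# A sample symmetric anisotropy tensor a_ik and its first invariant I.
a = np.array([[0.2, 0.05, 0.0],
              [0.05, 0.1, 0.0],
              [0.0, 0.0, 0.1]])
I_inv = np.trace(a)  # I = a_11 + a_22 + a_33

# Stress tensor, eq. (2): sigma_ik = -p delta_ik + 3 (eta0/tau0) a_ik.
sigma = -p * np.eye(3) + 3.0 * (eta0 / tau0) * a
```

Since $a_{ik}$ is symmetric, the stress $\sigma_{ik}$ computed from (2) inherits the symmetry.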
Structurally, the model consists of the incompressibility and motion equations (1) as well as the rheological relations (2), (3) connecting kinematic characteristics of the flow and its inner thermodynamic parameters. Some generalizations of model (1) – (3) provide good results in numerical simulations of viscosimetric flows [16]. For example, one such generalization is a model in which a term accounting for the so-called shear viscosity is added to equation (2) and the parameter $\beta$ additionally depends on the first invariant of the anisotropy tensor. Therefore, we may expect that modifications of the basic Pokrovski–Vinogradov model could be useful for modeling the polymer motion in complex deformation conditions, e.g., for stationary and non-stationary flows in circular channels, flows in channels with a fast change of cross-sectional area, and flows with a free boundary. An important feature of such flows is their two- and three-dimensional character. In this work, we consider one such generalization that takes into account the influence of heat and a magnetic field on the motion of the polymeric fluid (see Sect. 1 for more details). Our main interest is an analogue of the Poiseuille flow, the well-known shear flow in an infinite channel. It turns out that in our case this flow has a number of special features. For example, computations show that for some values of the parameters the velocity profile is stretched in the direction opposite to the forces of pressure (see Sect. 1). Our main results are given in Sect. 2. Firstly, we obtain an asymptotic representation of the spectrum of the problem linearized about the chosen basic solution, which is the Poiseuille-type flow. Secondly, as a consequence we obtain a condition that is necessary for the basic solution to be asymptotically stable in the sense of Lyapunov in the chosen class of perturbations periodic with respect to the variable changing along the infinite plane channel. 
The last section is devoted to the proof of the theorems formulated in Sect. 2. Overall, our work continues the study of the Lyapunov stability of shear flows for both the original Pokrovski–Vinogradov model and its various generalizations described in [6, 7, 8, 9, 10, 11]. It should be noted that for the case of a viscous fluid there is the well-known result of Krylov [17] on the linear Lyapunov instability of the Poiseuille flow for sufficiently large Reynolds numbers, which confirms Heisenberg’s hypothesis [15] (a refinement of this result was obtained in [14]). ## 1 Nonlinear model of the polymeric fluid flow in a plane channel under the presence of an external magnetic field Using the results from [1, 19, 23, 25, 26] and [4], let us formulate a mathematical model describing magnetohydrodynamic flows of an incompressible polymeric fluid for which, as in [24], we introduce some dissipative terms in the equation for the inner energy. In a dimensionless form this model reads (we keep the notations from [5]): $div\vec{\boldsymbol{u}}=u_{x}+v_{y}=0,$ (4) $div\vec{H}=L_{x}+M_{y}=0,$ (5) $\frac{d\vec{\boldsymbol{u}}}{dt}+\nabla P=div(Z\Pi)+\sigma_{m}(\vec{H},\nabla)\vec{H}+Gr(Z-1)\begin{pmatrix}0\\\ 1\end{pmatrix},$ (6) $\frac{da_{11}}{dt}-2A_{1}u_{x}-2a_{12}u_{y}+L_{11}=0,$ (7) $\frac{da_{22}}{dt}-2A_{2}v_{y}-2a_{12}v_{x}+L_{22}=0,$ (8) $\frac{da_{12}}{dt}-A_{1}v_{x}-A_{2}u_{y}+\frac{\widetilde{K}_{I}a_{12}}{\bar{\tau}_{0}(Z)}=0,$ (9) $\frac{dZ}{dt}=\frac{1}{Pr}\Delta_{x,y}Z+\frac{A_{r}}{Pr}ZD_{\Gamma}+\frac{A_{m}}{Pr}\sigma_{m}D_{m},$ (10) $\frac{d\vec{H}}{dt}-(\vec{H},\nabla)\vec{\boldsymbol{u}}-b_{m}\Delta_{x,y}\vec{H}=0.$ (11) where $t$ is the time, $u$, $v$ and $L$, $1+M$ are the components of the velocity vector $\vec{\boldsymbol{u}}$ and the magnetic field $\vec{H}$ respectively in the Cartesian coordinate system $x,y$; $P=p+\sigma_{m}\frac{L^{2}+(1+M)^{2}}{2},$ $p$ is the pressure; $a_{11}$, $a_{22}$, $a_{12}$ are the components of the symmetrical anisotropy tensor of 
second rank; $\Pi=\frac{1}{Re}(a_{ij}),\quad i,j=1,2;\quad L_{ii}=\frac{K_{I}a_{ii}+\beta(a_{ii}^{2}+a_{12}^{2})}{\bar{\tau}_{0}(Z)},\quad i=1,2;$ $K_{I}=W^{-1}+\frac{\bar{k}}{3}I,\quad\bar{k}=k-\beta;$ $I=a_{11}+a_{22}$ is the first invariant of the anisotropy tensor; $k$, $\beta$ ($0<\beta<1$) are the phenomenological parameters of the rheological model (see [21]); $A_{i}=W^{-1}+a_{ii}$, $i=1,2$; $Z=\frac{T}{T_{0}}$; $T$ is the temperature, $T_{0}$ is an average temperature (room temperature; we will further assume that $T_{0}=300$ K); $\widetilde{K}_{I}=K_{I}+\beta I$; $\bar{\tau}_{0}(Z)=\frac{1}{ZJ(Z)}$, $J(Z)=\exp\\{\bar{E}_{A}\frac{Z-1}{Z}\\}$, $\bar{E}_{A}=\frac{E_{A}}{T_{0}}$, $E_{A}$ is the activation energy; $Re=\frac{\rho u_{H}l}{\eta_{0}^{*}}$ is the Reynolds number; $W=\frac{\tau_{0}^{*}u_{H}}{l}$ is the Weissenberg number; $Gr=\frac{Ra}{Pr}$ is the Grashof number; $Pr=\frac{lu_{H}\rho c_{v}}{\varepsilon}=\frac{c_{v}\eta_{0}^{*}Re}{\varepsilon}$ is the Prandtl number; $Ra=\frac{lbgT_{0}Pr}{u_{H}^{2}}$ is the Rayleigh number; $A_{r}=\frac{\alpha u_{H}^{2}Pr}{ReT_{0}c_{v}}=\frac{\alpha u_{H}^{2}\eta_{0}^{*}}{T_{0}\varepsilon}$, $A_{m}=\frac{\alpha_{m}u_{H}^{2}Pr}{T_{0}c_{v}}$; $D_{\Gamma}=a_{11}u_{x}+(v_{x}+u_{y})a_{12}+a_{22}v_{y}$; $D_{m}=L^{2}u_{x}+L(1+M)(v_{x}+u_{y})+(1+M)^{2}v_{y}$; $\rho(=const)$ is the medium density; $\varepsilon$ is the coefficient of thermal conductivity of the polymeric fluid; $b$ is the coefficient of thermal expansion of the polymeric fluid; $g$ is the gravity constant; $\eta_{0}^{*}$, $\tau_{0}^{*}$ are the initial values of the shear viscosity and the relaxation time at the room temperature $T_{0}$ (see [21, 22]); $l$ is the characteristic length, $u_{H}$ is the characteristic velocity; $\sigma_{m}=\frac{\mu\mu_{0}H_{0}^{2}}{\rho u_{H}^{2}}$ is the magnetic pressure coefficient; $b_{m}=\frac{1}{Re_{m}}$, $Re_{m}=\sigma\mu\mu_{0}u_{H}l$ is the magnetic Reynolds number; $\mu_{0}$ is the magnetic permeability of vacuum; 
$\mu$ is the magnetic permeability of the polymeric fluid; $\sigma$ is the electrical conductivity of the medium; $\alpha$ is the thermal equivalent of work (see [9]); $\alpha_{m}$ is the magnetothermal equivalent of work; $c_{v}$ is the heat capacity; $\frac{d}{dt}=\frac{\partial}{\partial t}+(\vec{\boldsymbol{u}},\nabla)=\frac{\partial}{\partial t}+u\frac{\partial}{\partial x}+v\frac{\partial}{\partial y}$, $\Delta_{x,y}=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}$ is the Laplace operator. The variables $t$; $x$, $y$; $u$, $v$; $p$; $a_{11}$, $a_{22}$, $a_{12}$; $L$, $M$ in system (4)–(11) are referred to the following characteristic scales, respectively: $\frac{l}{u_{H}}$, $l$, $u_{H}$, $\rho u_{H}^{2}$, $\frac{W}{3}$, $H_{0}$, where $H_{0}$ is the characteristic magnitude of the magnetic field (see Fig. 1). Figure 1: Plane channel ###### Remark 1.1. The magnetohydrodynamic equations (4) – (11) are derived with the use of the Maxwell equations (see [23, 26]). The magnetic induction vector $\vec{B}$ is represented as $\vec{B}=\mu\mu_{0}\vec{H}=(1+\chi)\mu_{0}\vec{H},$ (12) where $\chi$ is the magnetic susceptibility, and (see [1, 20]) $\chi=\frac{\chi_{0}}{Z}$, where $\chi_{0}$ is the magnetic susceptibility for the room temperature $T_{0}$ (= 300 K). We will further assume that for the polymeric fluid $\mu=1$ ($\chi_{0}=0$). ###### Remark 1.2. 
Our main problem is to find solutions of the mathematical model (4)–(11) describing magnetohydrodynamic flows of an incompressible polymeric fluid in a plane channel of depth $1$ ($l$) bounded by horizontal walls which are the electrodes $C^{+}$ and $C^{-}$, along which electric currents flow with the current strengths $J^{+}$ and $J^{-}$ respectively (see Fig. 1). The areas $S_{1}^{+}$, $S_{1}^{-}$ external to the channel $S$ are under the influence of magnetic fields with components $L_{1}^{+}=0$, $M_{1}^{+}$, where $M_{1}^{+}|_{y=\frac{1}{2}+0}=-1+\frac{1+M(\frac{1}{2})}{1+\chi_{0}^{+}}$, and $L_{1}^{-}=0$, $M_{1}^{-}$, where $M_{1}^{-}|_{y=-\frac{1}{2}-0}=-1+\frac{1+M(-\frac{1}{2})}{1+\frac{\chi_{0}^{-}}{1+\bar{\theta}}}$. The values of the temperature $Z$ on the walls of the channel will be defined below by boundary conditions (13). These relations between the boundary values of $M_{1}^{+}+1$, $1+M(\frac{1}{2})$ and of $M_{1}^{-}+1$, $1+M(-\frac{1}{2})$ respectively follow from equality (12) and the continuity of the normal component of the magnetic induction vector on the walls of the channel. The domains $S_{1}^{+}$ and $S_{1}^{-}$ external to the channel are magnets with the magnetic susceptibilities $\chi_{1}^{+}$ and $\chi_{1}^{-}$. On the walls of the channel the following boundary conditions hold: $\left\\{\begin{array}[]{l}y=\pm\frac{1}{2}:\quad\vec{u}=0\quad(\mbox{no-slip condition}),\\\ y=\frac{1}{2}:\qquad Z=1\quad(T=T_{0}),\\\ y=-\frac{1}{2}:\quad Z=1+\bar{\theta}\quad(\bar{\theta}=\frac{\theta}{T_{0}},\,\theta=T-T_{0}).\end{array}\right.$ (13) We have the temperature $T=T_{0}$ in the domain $S_{1}^{+}$ and on the electrode $C^{+}$, whereas on the electrode $C^{-}$ we have: $y=-\frac{1}{2}:Z=1+\bar{\theta},\bar{\theta}=\frac{\theta}{T_{0}},\theta=T-T_{0},$ i.e., for $\bar{\theta}>0$ there is heating from below ($T$ is the temperature in the domain $S_{1}^{-}$ and on the electrode $C^{-}$), and for $\bar{\theta}<0$ there is heating from above. 
###### Remark 1.3. We will consider the electrodes $C^{+}$ and $C^{-}$ as the boundaries between two uniform isotropic magnetic media. Therefore, on the boundaries $C^{+}$ and $C^{-}$ the following known conditions hold (see [1, 18]): $\left\\{\begin{array}[]{l}y=\frac{1}{2}(C^{+}):\quad L=-J^{+},\quad M_{y}=0,\\\ y=-\frac{1}{2}(C^{-}):\quad L=-J^{-},\quad M_{y}=0.\end{array}\right.$ (14) We get the boundary condition $M_{y}=0$ at $y=\pm\frac{1}{2}$ by assuming that relation (5) holds for $y=\pm\frac{1}{2}$ and by taking into account the conditions $L=-J^{+}$ $(y=\frac{1}{2})$ and $L=-J^{-}$ $(y=-\frac{1}{2})$ (see (14)). ###### Remark 1.4. Let us show that $\left\\{\begin{array}[]{l}d=L_{x}+M_{y}=0\quad\mbox{for }y=\pm\frac{1}{2},\\\ d=0\quad\mbox{for }t=0,|y|<\frac{1}{2},x\in R^{1};\\\ d\to 0\quad\mbox{for }|x|\to\infty,t>0,|y|<\frac{1}{2},x\in R^{1},\end{array}\right.$ i.e., relation (5) follows from equations (4), (11). To prove this we apply the operator div to equation (11). Taking into account (4), we get $d_{t}+(\vec{u},\nabla)d-b_{m}\Delta_{x,y}d=0.$ Consequently, $(d^{2})_{t}+div(d^{2}\vec{u}-2b_{m}d\cdot\nabla d)+2b_{m}|\nabla d|^{2}=0.$ Integrating this relation with respect to $x$ from $-\infty$ to $+\infty$ and with respect to $y$ from $-\frac{1}{2}$ to $\frac{1}{2}$ gives $\frac{d}{dt}\left\\{\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{-\infty}^{+\infty}d^{2}(t,x,y)dxdy\right\\}+2b_{m}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{-\infty}^{+\infty}|\nabla d(t,x,y)|^{2}dxdy=0.$ Since $d=0$ at $t=0$, this implies $\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{-\infty}^{+\infty}d^{2}(t,x,y)dxdy\leq 0,$ i.e., $d=0$ for $t>0$, $|y|<\frac{1}{2}$, $x\in R^{1}$. 
If we consider $d$ as a function from a wider class of functions bounded on each set $\\{(t,x,y)\,|\,0\leq t\leq T,-\infty<x<\infty,-\frac{1}{2}\leq y\leq\frac{1}{2}\\}$ (the parameter $T$ is varied), then the fact that $d$ vanishes follows from the maximum principle for the heat equation $d_{t}-b_{m}\Delta_{x,y}d=0.$ This equation can be obtained by rewriting relation (11) in the form $\frac{d\vec{H}}{dt}-rot(\vec{u}\times\vec{H})-b_{m}\Delta_{x,y}\vec{H}=0$ and applying the operator div. Stationary solutions of the mathematical model (4) – (11) were studied in [5]. Particular solutions (analogous to the Poiseuille and Couette solutions for the Navier–Stokes system) were constructed there in the following form: $\left\\{\begin{array}[]{l}\vec{U}(t,x,y)=\hat{\vec{U}}(y),\\\ p(t,x,y)=\hat{P}(y)+\hat{p}_{0}-\hat{A}x,\end{array}\right.$ (15) where $\vec{U}=(u,v,a_{11},a_{12},a_{22},Z,L,M)^{T}$, $\hat{\vec{U}}(y)=(\hat{u}(y),\hat{v}(y),\hat{a}_{11}(y),\hat{a}_{12}(y),\hat{a}_{22}(y),\hat{Z}(y),\hat{L}(y),\hat{M}(y))^{T}$, $\hat{P}(y)$ ($\hat{P}(0)=0$) is a function that we will define below, $\hat{p}_{0}$ is the pressure on the channel axis for $y=0$, $x=0$, and $\hat{A}(>0)$ is the dimensionless constant drop of pressure on the segment $h$. 
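Before turning to the profile equations, we note that the dissipation mechanism used in Remark 1.4 (the $L^{2}$ norm of $d$ is non-increasing under $d_{t}=b_{m}\Delta_{x,y}d$ with zero boundary data) can be illustrated by a rough one-dimensional finite-difference sketch; the grid, time step, initial profile, and the value $b_{m}=1$ are all assumptions made only for this demonstration.

```python
import numpy as np

b_m = 1.0                 # assumed magnetic diffusivity (illustrative)
N = 101
y = np.linspace(-0.5, 0.5, N)
dy = y[1] - y[0]
dt = 0.4 * dy**2 / b_m    # satisfies the explicit-scheme stability bound

# Nonzero initial divergence-like field with d = 0 on the walls y = +-1/2.
d = np.cos(np.pi * y)

norms = [np.sum(d**2) * dy]
for _ in range(200):
    lap = np.zeros_like(d)
    lap[1:-1] = (d[2:] - 2 * d[1:-1] + d[:-2]) / dy**2
    d = d + dt * b_m * lap
    d[0] = d[-1] = 0.0    # zero Dirichlet boundary data
    norms.append(np.sum(d**2) * dy)

# The discrete L^2 norm never increases, mirroring the energy identity.
decaying = all(n2 <= n1 + 1e-14 for n1, n2 in zip(norms, norms[1:]))
```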
From (4) – (11), (13), (14) we have the following relations for determining the functions $\hat{u}(y)$, $\hat{a}_{11}(y)$, $\hat{a}_{12}(y)$, $\hat{a}_{22}(y)$, $\hat{Z}(y)$, $\hat{L}(y)$, $\hat{M}(y)$, $\hat{P}(y)$: $\displaystyle\frac{d}{dy}\left(\hat{Z}\hat{a}_{12}+(1+\hat{\lambda})\sigma_{m}Re\,\hat{L}\right)=\left(\hat{Z}\hat{a}_{12}+(1+\hat{\lambda})\sigma_{m}Re\,\hat{L}\right)^{{}^{\prime}}=-\hat{D},$ (16) $\displaystyle(\hat{P}-\frac{\hat{Z}\hat{a}_{22}}{Re}+\sigma_{m}\frac{\hat{L}^{2}}{2})^{{}^{\prime}}=Gr(\hat{Z}-1),\quad\hat{P}(0)=0,$ $\displaystyle\hat{u}^{{}^{\prime}}=\frac{\tilde{K}_{\hat{I}}J(\hat{Z})\hat{Z}\hat{a}_{12}}{\hat{A}_{2}},\quad\hat{u}(\pm\frac{1}{2})=0,$ $\displaystyle K_{\hat{I}}\hat{a}_{22}+\beta(\hat{g}+\hat{a}_{22}^{2})=0,$ $\displaystyle K_{\hat{I}}\hat{a}_{11}+\beta(\hat{g}+\hat{a}_{11}^{2})-2\hat{g}\frac{\tilde{K}_{\hat{I}}}{\hat{A}_{2}}=0,$ $\displaystyle\hat{Z}^{{}^{\prime\prime}}+(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}(1+\hat{\lambda})\hat{L})\hat{u}^{{}^{\prime}}=0,\,\hat{Z}(\frac{1}{2})=1,\,\hat{Z}(-\frac{1}{2})=1+\bar{\theta},$ $\displaystyle b_{m}\hat{L}^{{}^{\prime\prime}}+(1+\hat{\lambda})\hat{u}^{{}^{\prime}}=0,\quad\hat{L}(\pm\frac{1}{2})=-J^{\pm},$ $\displaystyle\hat{M}=\hat{\lambda}=\chi_{0}^{+}=\frac{\chi_{0}^{-}}{1+\bar{\theta}}=const.$ Here $\hat{D}=Re\cdot\hat{A}$, $K_{\hat{I}}=W^{-1}+\frac{\bar{k}}{3}\hat{I}$, $\hat{I}=\hat{a}_{11}+\hat{a}_{22}$, $\tilde{K}_{\hat{I}}=K_{\hat{I}}+\beta\hat{I}$, $\hat{g}=\hat{a}_{12}^{2}$, $\hat{A}_{2}=W^{-1}+\hat{a}_{22}$, $\chi_{0}^{+}$ and $\chi_{0}^{-}$ are the magnetic susceptibilities for $T=T_{0}$ in the domains $S_{1}^{+}$ and $S_{1}^{-}$ respectively. A detailed analysis of relations (16) was performed in [5]. Some solutions, or more precisely their components $\hat{u}$, $\hat{Z}$ and $\hat{L}$, are presented in Figs. 2, 3, 4, 5. 
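As an illustration of the algebraic part of relations (16), the pair of equations for $\hat{a}_{11}$ and $\hat{a}_{22}$ at a fixed value of $\hat{g}=\hat{a}_{12}^{2}$ can be solved numerically. The sketch below uses scipy root finding; the values of $W$ and $\beta$ are taken from the main case discussed in the text, while $k$ and $\hat{g}$ are hypothetical choices (the text does not fix them).

```python
import numpy as np
from scipy.optimize import fsolve

# W, beta from the "main case"; k and g_hat are hypothetical sample values.
W, beta, k = 1.0, 0.5, 1.2
k_bar = k - beta
g_hat = 0.04           # sample value of a12_hat**2

def residuals(a):
    """Residuals of the two algebraic equations in (16)."""
    a11, a22 = a
    I_hat = a11 + a22
    K = 1.0 / W + (k_bar / 3.0) * I_hat          # K_I
    K_t = K + beta * I_hat                       # tilde K_I
    A2 = 1.0 / W + a22                           # A_2
    r1 = K * a22 + beta * (g_hat + a22**2)
    r2 = K * a11 + beta * (g_hat + a11**2) - 2.0 * g_hat * K_t / A2
    return [r1, r2]

a11_hat, a22_hat = fsolve(residuals, x0=[0.1, -0.02])
res = residuals([a11_hat, a22_hat])
```

Note that the first equation forces $\hat{a}_{22}<0$ whenever $K_{\hat{I}}>0$ and $\hat{g}>0$, consistent with the sign structure of (16).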
Moreover, in the first (main) case (if we use the terminology from [5]) $\hat{A}=1$, $\hat{\lambda}=1$, $\sigma_{m}=1$, $Re=1$, $W=1$, $\beta=0.5$, $A_{r}=1$, $A_{m}=1$, $\bar{\theta}=1$, $b_{m}=1$, $\bar{E}_{A}=1$, $J^{+}=2$, $J^{-}=1$, and in the second, third and fourth cases we change one of the parameters $\hat{A}$, $J^{+}$ and $\bar{\theta}$ respectively, leaving the other parameters unchanged. Figure 2: Main case. Figure 3: Solution for $\hat{A}=3$. Figure 4: Solution for $J^{+}=-1$. Figure 5: Solution for $\bar{\theta}=-0.95$. Let us list the most important features of the behavior of stationary polymeric fluid flows. In [4] the influence of the parameter $\bar{E}_{A}=\frac{E_{A}}{T_{0}}$ (connected with the activation energy $E_{A}$) on the form of the velocity profile was studied. Unlike [4], in our case the velocity profile loses the symmetry that is characteristic of the parabolic velocity profile of the Poiseuille flow for a viscous fluid [19]. This means that the solutions of (4) – (11), (13), (14) have a wider range of interesting properties. In the main case (see Fig. 2) the velocity profile is elongated in the direction opposite to that of the pressure forces (due to the influence of the magnetic field!). In Fig. 3 the absolute value of the pressure drop is increased. This implies that the velocity profile is turned to the right. In Fig. 4 the absolute value of the pressure drop is again small. But since the current’s direction on the top electrode is now opposite to the previous one, the velocity profile is again turned to the right. Finally, strong cooling of the bottom channel boundary implies that the fluid velocity in the bottom part of the channel becomes close to zero (see Fig. 5). We will further study Lyapunov stability with respect to small perturbations of the stationary flow described above. 
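For comparison with the asymmetric profiles discussed above, it is useful to keep in mind the classical symmetric baseline: in the purely viscous (Navier–Stokes) case the Poiseuille profile in the same dimensionless channel $|y|\leq\frac{1}{2}$ is $\hat{u}(y)=\frac{\hat{D}}{2}(\frac{1}{4}-y^{2})$ with $\hat{D}=Re\cdot\hat{A}$. A quick numerical check of this standard result (shown only as a reference; it is not a solution of (16)):

```python
import numpy as np

Re, A_hat = 1.0, 1.0            # values from the "main case" in the text
D_hat = Re * A_hat              # dimensionless pressure drop

y = np.linspace(-0.5, 0.5, 201)
u_hat = 0.5 * D_hat * (0.25 - y**2)   # solves u'' = -D_hat, u(+-1/2) = 0

no_slip = (abs(u_hat[0]) < 1e-12) and (abs(u_hat[-1]) < 1e-12)
symmetric = np.allclose(u_hat, u_hat[::-1])   # u(y) = u(-y)
u_max = u_hat.max()                           # attained at mid-channel
```

The magnetothermal profiles in Figs. 2–5 deviate from this symmetric parabola, which is precisely the point of the discussion above.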
Denoting small perturbations of all values by the same symbols, after linearization we obtain the following linear problem: $\left\\{\begin{aligned} u_{t}&+\hat{u}u_{x}-\hat{Z}(\alpha_{11})_{x}+\hat{Z}(\alpha_{22})_{x}-\hat{Z}(\alpha_{12})_{y}+\hat{u}^{{}^{\prime}}v-\hat{Z}^{{}^{\prime}}\alpha_{12}+\Gamma_{1}=0,\\\ v_{t}&+\hat{u}v_{x}-\hat{Z}(\alpha_{12})_{x}+\Gamma_{2}=0,\\\ (\alpha_{11})_{t}&+\hat{u}(\alpha_{11})_{x}-2\hat{\alpha}_{1}u_{x}-2\hat{\alpha}_{12}u_{y}+\hat{\alpha}_{11}^{{}^{\prime}}v+R_{33}\alpha_{11}+\\\ &+R_{34}\alpha_{12}+R_{35}\alpha_{22}+r_{11}Z=0,\\\ (\alpha_{12})_{t}&+\hat{u}(\alpha_{12})_{x}-\hat{\alpha}_{1}v_{x}-\hat{\alpha}_{2}u_{y}+\hat{\alpha}_{12}^{{}^{\prime}}v+R_{43}\alpha_{11}+\\\ &+R_{44}\alpha_{12}+R_{45}\alpha_{22}+r_{12}Z=0,\\\ (\alpha_{22})_{t}&+\hat{u}(\alpha_{22})_{x}-2\hat{\alpha}_{12}v_{x}-2\hat{\alpha}_{2}v_{y}+\hat{\alpha}_{22}^{{}^{\prime}}v+R_{53}\alpha_{11}+\\\ &+R_{54}\alpha_{12}+R_{55}\alpha_{22}=0,\\\ Z_{t}&+\hat{u}Z_{x}+\hat{Z}^{\prime}v=\frac{1}{Pr}\Delta_{x,y}Z+\frac{Ar}{Pr}\hat{u}^{\prime}\hat{a}_{12}Z+\frac{Ar}{Pr}\hat{Z}(\hat{a}_{11}u_{x}+\\\ &+\hat{a}_{12}v_{x}+\hat{a}_{12}u_{y}+\hat{a}_{22}v_{y}+\hat{u}^{\prime}a_{12})+\frac{Am}{Pr}\sigma_{m}(\hat{L}^{2}u_{x}+\\\ &+\hat{L}(1+\hat{\lambda})v_{x}+\hat{L}(1+\hat{\lambda})u_{y}+\hat{u}^{\prime}((1+\hat{\lambda})L+\hat{L}M)+(1+\hat{\lambda})^{2}v_{y}),\\\ L_{t}&+\hat{u}L_{x}+v\hat{L}^{\prime}-\hat{L}u_{x}-(1+\hat{\lambda})u_{y}-\hat{u}^{\prime}M-b_{m}\Delta_{x,y}L=0,\\\ M_{t}&+\hat{u}M_{x}-\hat{L}v_{x}-(1+\hat{\lambda})v_{y}-b_{m}\Delta_{x,y}M=0,\\\ u_{x}&+v_{y}=0,\quad t>0,\quad|y|<\frac{1}{2},\quad x\in R^{1};\end{aligned}\right.$ (17) $u=v=Z=L=M_{y}=0\quad\mbox{for }y=\pm\frac{1}{2},\,t>0,x\in R^{1};$ (18) where $\displaystyle\alpha_{ij}$ $\displaystyle=\frac{a_{ij}}{Re},\hat{\alpha}_{ij}=\frac{\hat{a}_{ij}}{Re},i,j=1,2,$ $\displaystyle\hat{\alpha}_{i}$ $\displaystyle=\hat{\alpha}_{ii}+\kappa^{2},\quad\kappa^{2}=\frac{1}{WRe},\quad i=1,2,$ $\displaystyle\Gamma_{1}$ 
$\displaystyle=\Omega_{x}-\hat{\alpha}_{11}Z_{x}-\hat{\alpha}_{12}Z_{y}-\hat{\alpha}_{12}^{\prime}Z+\sigma_{m}(1+\hat{\lambda})\omega_{m}-\sigma_{m}\hat{L}M,$ $\displaystyle\Gamma_{2}$ $\displaystyle=\Omega_{y}-\hat{\alpha}_{12}Z_{x}-\hat{\alpha}_{2}Z_{y}-(\hat{\alpha}_{12}^{{}^{\prime}}+Gr)Z-\sigma_{m}\hat{L}\omega_{m}+\sigma_{m}\hat{L}^{\prime}L,$ $\displaystyle\Omega$ $\displaystyle=P-\hat{Z}\alpha_{22},\quad\omega_{m}=M_{x}-L_{y},$ $\displaystyle R_{33}$ $\displaystyle=\hat{\bar{\chi}}_{0}(\hat{K}_{\hat{I}}+\hat{a}_{11}(\frac{\bar{k}}{3}+2\beta)),R_{34}=-2\hat{u}^{\prime}+2\beta\hat{a}_{12}\hat{\bar{\chi}}_{0},$ $\displaystyle R_{35}$ $\displaystyle=\frac{\bar{k}}{3}\hat{a}_{11}\hat{\bar{\chi}}_{0},\quad\hat{\bar{\chi}}_{0}=\frac{1}{\bar{\tau}_{0}(\hat{Z})},$ $\displaystyle r_{11}$ $\displaystyle=(\hat{\bar{\chi}}_{0})^{\prime}\frac{2\hat{\alpha}_{12}^{2}\tilde{K}_{\hat{I}}}{\hat{\alpha}_{2}},\quad(\hat{\bar{\chi}}_{0})^{\prime}=\hat{\bar{\chi}}_{0}\frac{\bar{E}_{A}+\hat{Z}}{\hat{Z}^{2}},\quad\hat{\tilde{K}}_{\hat{I}}=\hat{K}_{\hat{I}}+\beta(\hat{a}_{11}+\hat{a}_{22}),$ $\displaystyle R_{43}$ $\displaystyle=\hat{a}_{12}\hat{\bar{\chi}}_{0}(\frac{\bar{k}}{3}+\beta),R_{45}=-\hat{u}^{\prime}+\hat{a}_{12}\hat{\bar{\chi}}_{0}(\frac{\bar{k}}{3}+\beta),R_{44}=\hat{\bar{\chi}}_{0}^{{}^{\prime}}\hat{\tilde{K}}_{\hat{I}},$ $\displaystyle r_{12}$ $\displaystyle=(\hat{\bar{\chi}}_{0})^{\prime}\hat{\alpha}_{12}\hat{\tilde{K}}_{\hat{I}},$ $R_{53}=\hat{\bar{\chi}}_{0}\hat{a}_{22}\frac{\bar{k}}{3},R_{54}=2\beta\hat{a}_{12}\hat{\bar{\chi}}_{0},R_{55}=\hat{\bar{\chi}}_{0}(\hat{K}_{\hat{I}}+\hat{a}_{22}(\frac{\bar{k}}{3}+2\beta)).$ (19) ###### Remark 1.5. The relation $L_{x}+M_{y}=0$ is not included in system (17) because, due to the last three equations in (17), it holds for $t>0$ if it was true for $t=0$. That is, relation (5) is, in fact, a constraint on the initial data for $L$ and $M$. ###### Remark 1.6. 
Unlike [5], the components $M_{1}^{+}$ and $M_{1}^{-}$ are not zero in the domains $S_{1}^{+}$ and $S_{1}^{-}$ (see Fig. 1): $\displaystyle M_{1}^{+}\big{|}_{y=\frac{1}{2}+0}$ $\displaystyle=-1+\frac{1+M(\frac{1}{2})}{1+\chi_{0}^{+}},$ (20) $\displaystyle M_{1}^{-}\big{|}_{y=-\frac{1}{2}-0}$ $\displaystyle=-1+\frac{1+M(-\frac{1}{2})}{1+\frac{\bar{\chi}_{0}^{-}}{(1+\bar{\theta})}},$ where $M(\frac{1}{2})$ and $M(-\frac{1}{2})$ are the values of $M$ on the top and bottom electrodes respectively. ###### Remark 1.7. Let us assume that the domains $S_{1}^{\pm}$ are filled with nonconducting media. Then, in view of the Maxwell equations [18], the small perturbations of $M_{1}^{\pm}$ satisfy the Laplace equation and additional conditions at infinity: $\displaystyle\Delta_{x,y}M_{1}^{+}$ $\displaystyle=0\,\mbox{in }S_{1}^{+},\,M_{1}^{+}\to 0\,\mbox{for }y\to\infty,$ (21) $\displaystyle\Delta_{x,y}M_{1}^{-}$ $\displaystyle=0\,\mbox{in }S_{1}^{-},\,M_{1}^{-}\to 0\,\mbox{for }y\to-\infty.$ If the components $M_{1}^{\pm}$ are periodic functions with respect to $x$, i.e., $M_{1}^{\pm}(x,y)=\tilde{M}_{1}^{\pm}(y)e^{i\omega x},\quad\omega\in R,$ (22) then from relations (21) we get $\displaystyle\tilde{M}_{1}^{+}(y)$ $\displaystyle=M_{1}^{+}\big{|}_{y=\frac{1}{2}+0}e^{-|\omega|(y-\frac{1}{2})},$ $\displaystyle\tilde{M}_{1}^{-}(y)$ $\displaystyle=M_{1}^{-}\big{|}_{y=-\frac{1}{2}-0}e^{|\omega|(y+\frac{1}{2})}.$ Thereby, due to formulas (20), the components $L_{1}^{\pm}$ and $1+M_{1}^{\pm}$ of the magnetic field strength vector $\vec{H}$ are defined in the domains $S_{1}^{\pm}$, and $L_{1}^{\pm}=0$. ## 2 Periodic perturbations. Linearized problem. Formulation of main results We will look for solutions of system (17) in the special form $\vec{U}(t,x,y)=\vec{\tilde{U}}(y)\exp\\{\lambda t+i\omega x\\},$ (23) where $\lambda=\eta+i\xi$, $\xi,\omega\in R^{1}$, $\vec{U}=(u,v,\alpha_{11},\alpha_{12},\alpha_{22},\Omega,Z,L,M)^{T}$. Below we will drop the tildes. 
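The exterior profiles obtained in Remark 1.7 are harmonic extensions of the boundary data that decay at infinity; for $\omega>0$ (so that $|\omega|=\omega$) this can be verified symbolically, as in the following illustrative sympy sketch (the constant $C$ stands for the boundary value of $M_{1}^{+}$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
omega = sp.symbols('omega', positive=True)  # omega > 0, so |omega| = omega
C = sp.symbols('C')                         # boundary value of M_1^+ at y = 1/2

# Ansatz (22) with the decaying profile from Remark 1.7.
M1_plus = C * sp.exp(sp.I * omega * x) * sp.exp(-omega * (y - sp.Rational(1, 2)))

# Check that M_1^+ is harmonic, as required by (21).
laplacian = sp.diff(M1_plus, x, 2) + sp.diff(M1_plus, y, 2)
is_harmonic = sp.simplify(laplacian) == 0

# Check the decay condition M_1^+ -> 0 as y -> +infinity.
decays = sp.limit(M1_plus / (C * sp.exp(sp.I * omega * x)), y, sp.oo) == 0
```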
Then, it follows from (17) that $\displaystyle u^{{}^{\prime}}$ $\displaystyle=\frac{\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1}}{\hat{\alpha}_{2}}v+\frac{\lambda+i\omega\hat{u}+R_{44}}{\hat{\alpha}_{2}}\alpha_{12}+\frac{R_{43}}{\hat{\alpha}_{2}}\alpha_{11}+\frac{R_{45}}{\hat{\alpha}_{2}}\alpha_{22}+\frac{r_{12}}{\hat{\alpha}_{2}}Z,$ $\displaystyle v^{{}^{\prime}}$ $\displaystyle=-i\omega u,$ $\displaystyle(\lambda$ $\displaystyle+i\omega\hat{u}-\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}}+R_{33})\alpha_{11}-2\hat{\alpha}_{1}i\omega u-\left(\frac{2\hat{\alpha}_{12}(\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1})}{\hat{\alpha}_{2}}-\hat{\alpha}_{11}^{{}^{\prime}}\right)v-$ $\displaystyle-\left(\frac{2\hat{\alpha}_{12}(\lambda+i\omega\hat{u}+R_{44})}{\hat{\alpha}_{2}}-R_{34}\right)\alpha_{12}-\left(\frac{2\hat{\alpha}_{12}R_{45}}{\hat{\alpha}_{2}}-R_{35}\right)\alpha_{22}-$ $\displaystyle-\left(\frac{2\hat{\alpha}_{12}r_{12}}{\hat{\alpha}_{2}}-r_{11}\right)Z=0,$ $\displaystyle\alpha_{12}^{{}^{\prime}}$ $\displaystyle=\frac{\lambda+i\omega\hat{u}}{\hat{Z}}u+\frac{\hat{u}^{{}^{\prime}}}{\hat{Z}}v-\frac{\hat{Z}^{{}^{\prime}}}{\hat{Z}}\alpha_{12}+\frac{i\omega}{\hat{Z}}\Omega-i\omega\alpha_{11}+i\omega\alpha_{22}-\frac{i\omega\hat{\alpha}_{11}+\hat{\alpha}_{12}^{{}^{\prime}}}{\hat{Z}}Z-$ $\displaystyle-\frac{\hat{\alpha}_{12}}{\hat{Z}}Z^{{}^{\prime}}+\frac{\sigma_{m}(1+\hat{\lambda})}{\hat{Z}}(i\omega M-L^{{}^{\prime}})-\frac{\sigma_{m}\hat{L}^{{}^{\prime}}}{\hat{Z}}M,$ $\displaystyle(\lambda+i\omega\hat{u}+R_{55})\alpha_{22}-(2\hat{\alpha}_{12}i\omega-\hat{\alpha}_{22}^{{}^{\prime}})v+2\hat{\alpha}_{2}i\omega u+R_{53}\alpha_{11}+R_{54}\alpha_{12}=0,$ $\displaystyle\Omega^{{}^{\prime}}$ $\displaystyle=-(\lambda+i\omega\hat{u})v+i\omega\hat{Z}\alpha_{12}+(\hat{\alpha}_{12}i\omega+\hat{\alpha}_{22}^{{}^{\prime}}+Gr)Z+\hat{\alpha}_{22}Z^{{}^{\prime}}+$ $\displaystyle+\sigma_{m}\hat{L}(i\omega M-L^{{}^{\prime}})-\sigma_{m}\hat{L}^{{}^{\prime}}L,$ 
$\displaystyle\frac{1}{Pr}Z^{{}^{\prime\prime}}$ $\displaystyle=(\frac{q_{2}}{Pr}-\frac{A_{r}}{Pr}(\hat{u}^{{}^{\prime}}\hat{a}_{12}+\hat{Z}\frac{\hat{a}_{12}}{\hat{\alpha}_{2}}r_{12})-\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\frac{r_{12}}{\hat{\alpha}_{2}})Z-$ $\displaystyle-u(\frac{A_{r}}{Pr}\hat{Z}(\hat{a}_{11}i\omega-\hat{a}_{22})i\omega+\frac{A_{m}}{Pr}\sigma_{m}(\hat{L}^{2}-(1+\hat{\lambda})^{2})i\omega)+$ $\displaystyle+v\bigg{(}\hat{Z}^{{}^{\prime}}-\frac{A_{r}}{Pr}\hat{Z}(\hat{a}_{22}i\omega+\hat{a}_{12}\frac{\hat{\alpha}_{12}-i\omega\hat{\alpha}_{1}}{\hat{\alpha}_{2}})-\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\times$ $\displaystyle\times(i\omega+\frac{\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1}}{\hat{\alpha}_{2}})\bigg{)}-\frac{A_{m}}{Pr}\sigma_{m}(1+\hat{\lambda})\hat{u}^{{}^{\prime}}L-\frac{A_{m}}{Pr}\sigma_{m}\hat{u}^{{}^{\prime}}\hat{L}M-$ $\displaystyle-\alpha_{12}\bigg{(}\frac{A_{r}}{Pr}\hat{Z}\hat{a}_{12}+\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\frac{\lambda+i\omega\hat{u}+R_{44}}{\hat{\alpha}_{2}}\bigg{)}-$ $\displaystyle-\alpha_{11}\big{(}\frac{A_{r}}{Pr}\hat{Z}\hat{a}_{12}+\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\big{)}\frac{R_{43}}{\hat{\alpha}_{2}}-\alpha_{22}\big{(}\frac{A_{r}}{Pr}\hat{Z}\hat{a}_{12}+\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\big{)}\frac{R_{45}}{\hat{\alpha}_{2}},$ $\displaystyle L^{{}^{\prime\prime}}$ $\displaystyle=q_{3}L-\frac{i\omega\hat{L}}{b_{m}}u+v\bigg{(}\frac{\hat{L}^{{}^{\prime}}}{b_{m}}-\frac{1+\lambda}{b_{m}}\frac{(\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1})}{\hat{\alpha}_{2}}\bigg{)}-\frac{1+\hat{\lambda}}{b_{m}}\frac{\lambda+i\omega\hat{u}+R_{44}}{\hat{\alpha}_{2}}\alpha_{12}-$ $\displaystyle-\frac{1+\hat{\lambda}}{b_{m}}\frac{R_{43}}{\hat{\alpha}_{2}}\alpha_{11}-\frac{1+\hat{\lambda}}{b_{m}}\frac{r_{12}}{\hat{\alpha}_{2}}Z-\frac{\hat{u}^{{}^{\prime}}}{b_{m}}M-\frac{1+\hat{\lambda}}{b_{m}}\frac{R_{45}}{\hat{\alpha}_{2}}\alpha_{22},$ (24) 
$\displaystyle M^{{}^{\prime\prime}}$ $\displaystyle=q_{3}M+\frac{1+\hat{\lambda}}{b_{m}}i\omega u-\frac{i\omega\hat{L}}{b_{m}}v,$ where $q_{2}=Pr(\lambda+i\omega\hat{u})+\omega^{2}$, $q_{3}=\frac{\lambda+i\omega\hat{u}+b_{m}\omega^{2}}{b_{m}}$. The following statements hold. ###### Theorem 2.1. If problem (17), (18) has a solution of form (23) (the parameter $\omega$ is constant!), then we have the following asymptotic representation for $\lambda$: $\lambda_{k}=\left[\int_{-\frac{1}{2}}^{\frac{1}{2}}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi\right]^{-1}\bigg{(}\int_{-\frac{1}{2}}^{\frac{1}{2}}-\frac{1}{2}\bigg{[}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\bigg{(}\frac{i\omega\hat{u}+R_{44}}{\hat{\alpha}_{2}}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}^{2}}\bigg{)}+\\\ +i\omega\hat{u}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}+\frac{\hat{\alpha}_{12}}{\hat{Z}}\times\\\ \times(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda}))\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}+\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{]}d\xi+k\pi i\bigg{)}+O(\frac{1}{k}),\quad k\to\infty,$ (25) where $O$ is the standard big-$O$ symbol. From representation (25) we obtain a necessary condition for the asymptotic stability of the Poiseuille-type flow described in Sect. 1. ###### Theorem 2.2.
For the asymptotic stability of the Poiseuille-type flow it is necessary that the following inequality holds: $\int_{-\frac{1}{2}}^{\frac{1}{2}}\bigg{[}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\hat{\bar{\chi}}_{0}\left(\frac{1}{\hat{\alpha}_{2}}\big{(}W^{-1}+\frac{k+2\beta}{3}(\hat{a}_{11}+\hat{a}_{22})\big{)}+\hat{\bar{\chi}}_{0}\frac{2\hat{\alpha}_{12}}{\hat{\alpha}_{2}^{2}}\hat{a}_{12}\frac{k+2\beta}{3}\right)+\\\ +\frac{\hat{\alpha}_{12}}{\hat{Z}}\big{(}A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda})\big{)}\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}+\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{]}d\xi>0.$ (26) ## 3 Proof of theorems 1 and 2 From the third and fifth equations of system (24) we obtain representations for the components $\alpha_{11}$, $\alpha_{22}$: $\displaystyle\alpha_{11}$ $\displaystyle=\frac{2\hat{\alpha}_{12}}{\hat{\alpha}_{2}}\alpha_{12}+\alpha_{11}^{*},$ (27) $\displaystyle\alpha_{22}$ $\displaystyle=\alpha_{22}^{*},$ where the functions $\alpha_{11}^{*}$, $\alpha_{22}^{*}$ are expressed in terms of $u,v,Z,\alpha_{12}$ with coefficients proportional to $\frac{1}{\lambda}$. We will show below that such terms do not influence the first term in the asymptotic representation for the eigenvalues $\lambda$ as $|\lambda|\to\infty$ and, hence, can be omitted.
Using relations (27) and denoting $\tilde{Y}=(\tilde{y}_{1},\tilde{y}_{2},\tilde{y}_{3},\tilde{y}_{4},\tilde{y}_{5},\tilde{y}_{6},\tilde{y}_{7},\tilde{y}_{8},\tilde{y}_{9},\tilde{y}_{10})^{T}=(u,v,\alpha_{12},\Omega,Z,Z^{{}^{\prime}},L,L^{{}^{\prime}},M,M^{{}^{\prime}})^{T},$ system (24) can be rewritten in the following form: $\displaystyle\tilde{y}_{1}^{{}^{\prime}}$ $\displaystyle=\frac{\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1}}{\hat{\alpha}_{2}}\tilde{y}_{2}+\frac{\lambda+i\omega\hat{u}+R_{44}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}}}{\hat{\alpha}_{2}}\tilde{y}_{3}+\frac{r_{12}}{\hat{\alpha}_{2}}\tilde{y}_{5},$ (28) $\displaystyle\tilde{y}_{2}^{{}^{\prime}}$ $\displaystyle=-i\omega\tilde{y}_{1},$ $\displaystyle\tilde{y}_{3}^{{}^{\prime}}$ $\displaystyle=\frac{\lambda+i\omega\hat{u}}{\hat{Z}}\tilde{y}_{1}+\frac{\hat{u}^{{}^{\prime}}}{\hat{Z}}\tilde{y}_{2}-(\frac{\hat{Z}^{{}^{\prime}}}{\hat{Z}}+\frac{i\omega 2\hat{\alpha}_{12}}{\hat{\alpha}_{2}})\tilde{y}_{3}+\frac{i\omega}{\hat{Z}}\tilde{y}_{4}-\frac{i\omega\hat{\alpha}_{11}+\hat{\alpha}_{12}^{{}^{\prime}}}{\hat{Z}}\tilde{y}_{5}-$ $\displaystyle-\frac{\hat{\alpha}_{12}}{\hat{Z}}\tilde{y}_{6}+\left(\frac{\sigma_{m}(1+\hat{\lambda})}{\hat{Z}}i\omega-\frac{\sigma_{m}\hat{L}^{{}^{\prime}}}{\hat{Z}}\right)\tilde{y}_{9}-\frac{\sigma_{m}(1+\hat{\lambda})}{\hat{Z}}\tilde{y}_{8},$ $\displaystyle\tilde{y}_{4}^{{}^{\prime}}$ $\displaystyle=-(\lambda+i\omega\hat{u})\tilde{y}_{2}+i\omega\hat{Z}\tilde{y}_{3}+(\hat{\alpha}_{12}i\omega+\hat{\alpha}_{22}^{{}^{\prime}}+Gr)\tilde{y}_{5}-\sigma_{m}\hat{L}^{{}^{\prime}}\tilde{y}_{7}-\sigma_{m}\hat{L}\tilde{y}_{8}+$ $\displaystyle+\sigma_{m}\hat{L}i\omega\tilde{y}_{9}+\hat{\alpha}_{22}\tilde{y}_{6},$ $\displaystyle\tilde{y}_{5}^{{}^{\prime}}$ $\displaystyle=\tilde{y}_{6},$ $\displaystyle\frac{1}{Pr}\tilde{y}_{6}^{{}^{\prime}}$
$\displaystyle=(\lambda+i\omega\hat{u}+\frac{\omega^{2}}{Pr}-\frac{A_{r}}{Pr}(\hat{u}^{{}^{\prime}}\hat{a}_{12}+\hat{Z}\frac{\hat{a}_{12}}{\hat{\alpha}_{2}}r_{12})-\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\frac{r_{12}}{\hat{\alpha}_{2}})\tilde{y}_{5}-$ $\displaystyle-\tilde{y}_{1}(\frac{A_{r}}{Pr}\hat{Z}(\hat{a}_{11}-\hat{a}_{22})i\omega+\frac{A_{m}}{Pr}\sigma_{m}(\hat{L}^{2}-(1+\hat{\lambda})^{2})i\omega)+$ $\displaystyle+\tilde{y}_{2}\bigg{(}\hat{Z}^{{}^{\prime}}-\frac{A_{r}}{Pr}\hat{Z}\hat{\alpha}_{12}(i\omega+\frac{\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1}}{\hat{\alpha}_{2}})-\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\times$ $\displaystyle\times(i\omega+\frac{\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1}}{\hat{\alpha}_{2}})\bigg{)}-\frac{A_{m}}{Pr}\sigma_{m}(1+\hat{\lambda})\hat{u}^{{}^{\prime}}\tilde{y}_{7}-\frac{A_{m}}{Pr}\sigma_{m}\hat{u}^{{}^{\prime}}\hat{L}\tilde{y}_{9}-$ $\displaystyle-\bigg{(}\frac{A_{r}}{Pr}\hat{Z}\hat{a}_{12}+\frac{A_{m}}{Pr}\sigma_{m}\hat{L}(1+\hat{\lambda})\frac{\lambda+i\omega\hat{u}+R_{44}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}}}{\hat{\alpha}_{2}}\bigg{)}\tilde{y}_{3},$ $\displaystyle\tilde{y}_{7}^{{}^{\prime}}$ $\displaystyle=\tilde{y}_{8},$ $\displaystyle\tilde{y}_{8}^{{}^{\prime}}$ $\displaystyle=(\frac{\lambda+i\omega\hat{u}}{b_{m}}+\omega^{2})\tilde{y}_{7}-\frac{i\omega\hat{L}}{b_{m}}\tilde{y}_{1}+\bigg{(}\frac{\hat{L}^{{}^{\prime}}}{b_{m}}-\frac{1+\lambda}{b_{m}}\frac{(\hat{\alpha}_{12}^{{}^{\prime}}-i\omega\hat{\alpha}_{1})}{\hat{\alpha}_{2}}\bigg{)}\tilde{y}_{2}-$ $\displaystyle-\frac{1+\hat{\lambda}}{b_{m}}\frac{\lambda+i\omega\hat{u}+R_{44}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}}}{\hat{\alpha}_{2}}\tilde{y}_{3}-\frac{1+\hat{\lambda}}{b_{m}}\frac{r_{12}}{\hat{\alpha}_{2}}\tilde{y}_{5}-\frac{\hat{u}^{{}^{\prime}}}{b_{m}}\tilde{y}_{9},$ $\displaystyle\tilde{y}_{9}^{{}^{\prime}}$ $\displaystyle=\tilde{y}_{10},$ $\displaystyle\tilde{y}_{10}^{{}^{\prime}}$
$\displaystyle=\left(\frac{\lambda+i\omega\hat{u}}{b_{m}}+\omega^{2}\right)\tilde{y}_{9}+\frac{1+\hat{\lambda}}{b_{m}}i\omega\tilde{y}_{1}-\frac{i\omega\hat{L}}{b_{m}}\tilde{y}_{2}.$ Let us rewrite system (28) in the more compact form $\tilde{Y}^{{}^{\prime}}=(\lambda D+P)\tilde{Y},$ (29) where the matrix $D$ is rather sparse: $\displaystyle d_{13}$ $\displaystyle=\frac{1}{\hat{\alpha}_{2}},\,d_{31}=\frac{1}{\hat{Z}},\,d_{42}=-1,\,d_{63}=-\frac{1}{\hat{\alpha}_{2}}(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda})),$ $\displaystyle d_{65}$ $\displaystyle=Pr,\,d_{83}=-\frac{1+\hat{\lambda}}{b_{m}\hat{\alpha}_{2}},\,d_{87}=\frac{1}{b_{m}},\,d_{10,9}=\frac{1}{b_{m}},$ all other elements equal zero. We can make the change $\tilde{Y}=TY,$ (30) where the elements of the matrix $T$ read: $\displaystyle t_{11}$ $\displaystyle=-t_{12}=-\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}},\,t_{71}=-t_{72}=(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda}))\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}},$ $\displaystyle t_{81}$ $\displaystyle=-t_{82}=\frac{1+\hat{\lambda}}{b_{m}}\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}},\,t_{58}=\frac{1}{Pr},\,t_{7,10}=b_{m},$ $\displaystyle t_{31}$ $\displaystyle=t_{32}=t_{45}=t_{67}=t_{89}=t_{93}=t_{10,4}=-t_{26}=1,$ and other elements are zero. This enables one to write down the matrix $D$ in the upper Jordan form: $T^{-1}DT=W=diag\bigg{\\{}-\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}},\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}},0,0,block\begin{pmatrix}0&1\\\ 0&0\end{pmatrix},block\begin{pmatrix}0&1\\\ 0&0\end{pmatrix},\\\ block\begin{pmatrix}0&1\\\ 0&0\end{pmatrix}\bigg{\\}}.$ (31) Then, system (29) is transformed in the following way: $Y^{{}^{\prime}}=(\lambda W+C)Y,$ (32) where $C=T^{-1}PT-T^{-1}T^{{}^{\prime}}$. 
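The mechanism behind the change of variables (30)-(31) can be checked on a toy 2x2 analogue of the leading block of $D$ (the numbers $a$, $b$ below are illustrative stand-ins for $\hat{\alpha}_{2}$ and $\hat{Z}$; the actual matrices of the paper are 10x10):

```python
import numpy as np

# Toy 2x2 analogue of the leading block of the matrix D from (29):
# its only nonzero entries couple the first two unknowns,
# d_12 = 1/a and d_21 = 1/b (stand-ins for d_13 = 1/alpha2_hat and
# d_31 = 1/Z_hat).
a, b = 2.0, 5.0
D = np.array([[0.0, 1.0 / a],
              [1.0 / b, 0.0]])

# Change of basis built from the eigenvectors (1, +-sqrt(a/b)),
# mimicking the role of the matrix T in (30).
s = np.sqrt(a / b)
T = np.array([[1.0, 1.0],
              [s, -s]])

W = np.linalg.inv(T) @ D @ T
mu = 1.0 / np.sqrt(a * b)

# W is diagonal with entries +-1/sqrt(a*b), matching the pattern of the
# nonzero diagonal entries -1/sqrt(Z alpha2), +1/sqrt(Z alpha2) in (31).
assert np.allclose(W, np.diag([mu, -mu]))
```

The sign convention (which eigenvalue comes first) depends only on the ordering of the eigenvector columns in $T$, exactly as in (31).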
Let us write out only the two elements of the matrix $C$ that will be used below: $\displaystyle c_{11}$ $\displaystyle=-\frac{1}{2}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\left(\frac{i\omega\hat{u}+R_{44}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}}}{\hat{\alpha}_{2}}\right)+\frac{1}{2}\bigg{(}-i\omega\hat{u}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}-\frac{\hat{Z}^{{}^{\prime}}}{\hat{Z}}-\frac{i\omega\hat{\alpha}_{12}}{\hat{\alpha}_{2}}-\frac{\hat{\alpha}_{12}}{\hat{Z}}\times$ (33) $\displaystyle\times(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda}))\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}-\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{)}-\frac{1}{2}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\left(\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}\right)^{{}^{\prime}},$ $\displaystyle c_{22}$ $\displaystyle=\frac{1}{2}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\left(\frac{i\omega\hat{u}+R_{44}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}}}{\hat{\alpha}_{2}}\right)+\frac{1}{2}\bigg{(}-i\omega\hat{u}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}-\frac{\hat{Z}^{{}^{\prime}}}{\hat{Z}}-\frac{i\omega\hat{\alpha}_{12}}{\hat{\alpha}_{2}}-\frac{\hat{\alpha}_{12}}{\hat{Z}}\times$ $\displaystyle\times(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda}))\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}-\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{)}-\frac{1}{2}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\left(\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}\right)^{{}^{\prime}}.$ Let us obtain an asymptotic representation for the fundamental matrix of system (32). For this purpose we split the matrix $C$ into blocks corresponding to representation (31) of the matrix $W$: the first diagonal block corresponds to the nonzero diagonal elements $-\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}$ and $\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}$, whereas the second diagonal block corresponds to the zero elements.
Then, system (32) can be written in the more convenient form $Y^{{}^{\prime}}=\begin{pmatrix}Y_{I}\\\ Y_{II}\end{pmatrix}^{{}^{\prime}}=(\lambda W+C)\begin{pmatrix}Y_{I}\\\ Y_{II}\end{pmatrix},$ (34) where $C=\begin{pmatrix}C_{I}^{I}&C_{II}^{I}\\\ C_{I}^{II}&C_{II}^{II}\end{pmatrix}.$ (35) Using splitting (35) of the matrix $C$, we get the following system for the vector $Y_{II}$: $(Y_{II})^{{}^{\prime}}=C_{II}^{II}Y_{II}+C_{I}^{II}Y_{I}.$ (36) Assuming that the vector $Y_{I}$ is known, we can write down a system of fundamental solutions associated with the homogeneous system: $Y_{II}=\sum_{i=3}^{10}c_{i}Y_{II}^{i},$ where $c_{i}$ are arbitrary complex constants, $i=3,\dots,10$, $Y^{i}_{II}\big{|}_{y=-\frac{1}{2}}=\begin{pmatrix}0\\\ \vdots\\\ 0\\\ 1\\\ 0\\\ \vdots\\\ 0\end{pmatrix}$ is the $i$th component, and the general solution of system (36) is $Y_{II}=\sum_{i=3}^{10}c_{i}Y_{II}^{i}+\int_{-\frac{1}{2}}^{y}Y(y)Y^{-1}(s)C_{I}^{II}Y_{I}ds,$ (37) where $Y(y)$ is the fundamental matrix composed by the vectors $Y_{II}^{i}$. 
Due to representation (37) the system for the other component $Y_{I}$ can be written as $Y_{I}^{{}^{\prime}}=\lambda\begin{pmatrix}-\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}&0\\\ 0&\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}\end{pmatrix}Y_{I}+C_{I}^{I}Y_{I}+C_{II}^{I}\left(\sum_{i=3}^{10}c_{i}Y_{II}^{i}\right)+C_{II}^{I}\times\\\ \times\int_{-\frac{1}{2}}^{y}Y(y)Y^{-1}(s)C_{I}^{II}Y_{I}(s)ds.$ (38) Considering $C_{II}^{I}(\sum\limits_{i=3}^{10}c_{i}Y_{II}^{i})$ as a free term, we can find the fundamental matrix for system (38) in the following form: $\hat{Y}=\left(P_{0}(y)+\frac{1}{\lambda}P_{1}(y)+\frac{1}{\lambda^{2}}P_{2}(y)+\dots\right)\left(\delta_{ij}e^{\lambda\Gamma_{j}(y)}\right)+\frac{M_{0}}{\lambda^{2}}+\frac{M_{1}}{\lambda^{3}}+\dots,\,i,j=1,2,$ (39) where $\delta_{ij}$ is the Kronecker symbol, $\Gamma_{1}(y)=-\int\limits_{-\frac{1}{2}}^{y}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi$, $\Gamma_{2}(y)=\int\limits_{-\frac{1}{2}}^{y}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi$ (so that the factors $e^{\lambda\Gamma_{j}(y)}$ agree with (47) below). ###### Remark 3.1. The first term in (39) is the representation obtained by G.
Birkhoff [3] for the differential equation $Y_{I}^{{}^{\prime}}=\lambda\begin{pmatrix}-\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}&0\\\ 0&\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}\end{pmatrix}Y_{I}+C_{I}^{I}Y_{I}.$ Substituting the matrix $\hat{Y}$ into equation (38), we get the following relation (the term $C_{II}^{I}(\sum\limits_{i=3}^{10}c_{i}Y_{II}^{i})$ is omitted): $\displaystyle\left(P_{0}^{{}^{\prime}}(y)+\frac{1}{\lambda}P_{1}^{{}^{\prime}}(y)+\frac{1}{\lambda^{2}}P_{2}^{{}^{\prime}}(y)+\dots\right)\left(\delta_{ij}e^{\lambda\Gamma_{j}(y)}\right)_{i,j=1,2}+$ (40) $\displaystyle+\lambda\left(P_{0}(y)+\frac{1}{\lambda}P_{1}(y)+\frac{1}{\lambda^{2}}P_{2}(y)+\dots\right)\Lambda\left(\delta_{ij}e^{\lambda\Gamma_{j}(y)}\right)_{i,j=1,2}+$ $\displaystyle+\frac{M_{0}^{{}^{\prime}}(y)}{\lambda^{2}}+\frac{M_{1}^{{}^{\prime}}(y)}{\lambda^{3}}+\dots=$ $\displaystyle=\lambda\Lambda\left(P_{0}(y)+\frac{1}{\lambda}P_{1}(y)+\frac{1}{\lambda^{2}}P_{2}(y)+\dots\right)\left(\delta_{ij}e^{\lambda\Gamma_{j}(y)}\right)_{i,j=1,2}+$ $\displaystyle+\lambda\Lambda\left(\frac{M_{0}(y)}{\lambda^{2}}+\frac{M_{1}(y)}{\lambda^{3}}+\dots\right)+$ $\displaystyle+C_{I}^{I}\left(P_{0}(y)+\frac{1}{\lambda}P_{1}(y)+\frac{1}{\lambda^{2}}P_{2}(y)+\dots\right)\left(\delta_{ij}e^{\lambda\Gamma_{j}(y)}\right)_{i,j=1,2}+$ $\displaystyle+C_{I}^{I}\left(\frac{M_{0}(y)}{\lambda^{2}}+\frac{M_{1}(y)}{\lambda^{3}}+\dots\right)+$ $\displaystyle+C_{II}^{I}\int_{-\frac{1}{2}}^{y}Y(y)Y^{-1}(s)C_{I}^{II}\bigg{[}\left(P_{0}(s)+\frac{1}{\lambda}P_{1}(s)+\frac{1}{\lambda^{2}}P_{2}(s)+\dots\right)\times$ $\displaystyle\times\left(\delta_{ij}e^{\lambda\Gamma_{j}(s)}\right)_{i,j=1,2}+\frac{M_{0}(s)}{\lambda^{2}}+\frac{M_{1}(s)}{\lambda^{3}}+\dots\bigg{]}ds,\quad\Lambda=diag\\{-\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}},\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}\\}.$ Comparing the coefficients of the same powers of $\lambda$, which may either contain the matrix $(\delta_{ij}e^{\lambda\Gamma_{j}(s)})_{i,j=1,2}$ or be independent
of it, and integrating by parts as many times as needed, we, in particular, get $P_{0}\Lambda=\Lambda P_{0},$ (41) $P_{0}^{{}^{\prime}}+P_{1}\Lambda=\Lambda P_{1}+C_{I}^{I}P_{0},$ (42) $P_{1}^{{}^{\prime}}+P_{2}\Lambda=\Lambda P_{2}+C_{I}^{I}P_{1}+C_{II}^{I}C_{I}^{II}P_{0}\Lambda^{-1},$ (43) $\Lambda M_{0}=C_{II}^{I}YC_{I}^{II}(-\frac{1}{2}).$ (44) Then, it follows from equality (41) that $P_{0}(y)$ is a diagonal matrix, $P_{0}(y)=\begin{pmatrix}p_{1}(y)&0\\\ 0&p_{2}(y)\end{pmatrix},$ and equality (42), for the diagonal elements, gives the Cauchy problems $p_{i}^{{}^{\prime}}=c_{ii}^{I}p_{i},\quad p_{i}(-\frac{1}{2})=1,\quad i=1,2,$ (45) where $c_{ii}^{I}$ are the diagonal elements of the matrix $C_{I}^{I}$ (see formulas (33)). Solving problems (45) gives us the functions $p_{i}(y)$: $p_{i}(y)=e^{\int\limits_{-\frac{1}{2}}^{y}c_{ii}^{I}(\xi)d\xi}$, $i=1,2$ (the top index $I$ will be omitted below). Then, equality (42) enables one to determine the off-diagonal elements of the matrix $P_{1}(y)$, which, in turn, allows one to find its diagonal elements from equality (43). By finite induction we can find all the matrices $P_{i}(y)$, $i=2,\dots$. Equalities analogous to (44) enable one to determine the matrices $M_{i}$, $i=0,1,\dots$ By the method of variation of constants, the presence of the free term $C_{II}^{I}(\sum\limits_{i=3}^{10}c_{i}Y_{II}^{i})$ results in additional terms in representation (39) that contain the powers $\frac{1}{\lambda^{k}}$, $k=1,2,\dots$, as multipliers. It then becomes clear that such terms do not influence the main term in the asymptotic representation of the spectrum.
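The closed-form solution of the Cauchy problems (45) is easy to sanity-check numerically; here is a small sketch with a hypothetical smooth coefficient c(y) standing in for a diagonal element $c_{ii}$ of (33):

```python
import math

# Hypothetical coefficient; the structure of (45) -- p' = c(y) p with
# p(-1/2) = 1 on the interval [-1/2, 1/2] -- is the same.
def c(y):
    return math.sin(y) + 0.5

def solve_45(y_end, n=1000):
    """Integrate p' = c(y) p from y = -1/2 with p(-1/2) = 1 (RK4)."""
    h = (y_end + 0.5) / n
    y, p = -0.5, 1.0
    for _ in range(n):
        k1 = c(y) * p
        k2 = c(y + h / 2) * (p + h * k1 / 2)
        k3 = c(y + h / 2) * (p + h * k2 / 2)
        k4 = c(y + h) * (p + h * k3)
        p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        y += h
    return p

# Closed form from (45): p(y) = exp(int_{-1/2}^{y} c(xi) dxi).
# For this c, int_{-1/2}^{1/2} c(xi) dxi = 0.5 (the cosine parts cancel).
assert abs(solve_45(0.5) - math.exp(0.5)) < 1e-9
```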
This means that we can use only the main term in representation (39): $Y=P_{0}(y)(\delta_{ij}e^{\lambda\Gamma_{j}(y)})_{i,j=1,2}.$ (46) Recalling the fundamental matrix of equation (34) and taking into account (46), we get the main term in the asymptotic representation of the fundamental matrix $W_{Y}$ of system (34): $W_{Y}=\begin{pmatrix}e^{-\lambda\int\limits_{-\frac{1}{2}}^{y}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi}p_{1}(y)&0&0&\dots&0\\\ 0&e^{\lambda\int\limits_{-\frac{1}{2}}^{y}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi}p_{2}(y)&0&\dots&0\\\ 0&0&y_{3}^{3}&\dots&y_{10}^{3}\\\ \dots&\dots&\dots&\dots&\dots\\\ 0&0&y_{3}^{10}&\dots&y_{10}^{10}\\\ \end{pmatrix},$ (47) where $y_{i}^{j}$, $i,j=3,\dots,10$, are components of the fundamental system for equation (36) composed of the columns of the matrix $Y(y)$. ###### Remark 3.2. In this work we do not give a justification of the fact that representation (39) is really an asymptotic series, nor do we justify the described representation of the fundamental matrix $W_{Y}$ (more precisely, its “full variant”). This is a subject for future research. We only note that this fact was established by G. Birkhoff [3] for equation (40) (when there are no terms with coefficients $M_{0}$, $M_{1}$, $\dots$ in the right-hand side of equality (39), i.e., no part of the series corresponding to the integral term in equation (38)), and in each half-plane $Re\lambda>0$ and $Re\lambda<0$ the asymptotic series are different from each other [12]. Incidentally, considering the integral term as a free term and using the method of variation of constants is another way of finding the matrices $M_{0}$, $M_{1}$, $\dots$ (the idea of obtaining fundamental matrices by the method of variation of constants is described in [13]).
Recalling the boundary conditions for $y=\pm\frac{1}{2}$, we can note that relations (18), after the change of variables (30), are transformed in the following way: $y_{1}=y_{2},\,y_{6}=0,\,y_{8}=0,\,y_{10}=0,\,y_{4}=0,\quad y=\pm\frac{1}{2},\,t>0.$ (48) Or, taking into account representation (47), they can be written as the equality $det\begin{pmatrix}L\\\ LW_{Y}(\frac{1}{2})\end{pmatrix}=0,$ (49) where $L=\begin{pmatrix}1&-1&0&0&0&0&0&0&0&0\\\ 0&0&0&0&0&1&0&0&0&0\\\ 0&0&0&0&0&0&0&1&0&0\\\ 0&0&0&0&0&0&0&0&0&1\\\ 0&0&0&1&0&0&0&0&0&0\end{pmatrix}.$ After elementary transformations of the determinant and an application of the Laplace expansion of the determinant in terms of its minors, we see that (49) is equivalent to the equality $\displaystyle e^{\lambda\int\limits_{-\frac{1}{2}}^{\frac{1}{2}}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi}p_{2}(\frac{1}{2})-e^{-\lambda\int\limits_{-\frac{1}{2}}^{\frac{1}{2}}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi}p_{1}(\frac{1}{2})=0,$ (50) $\displaystyle e^{\lambda\int\limits_{-\frac{1}{2}}^{\frac{1}{2}}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi}e^{\int\limits_{-\frac{1}{2}}^{\frac{1}{2}}c_{22}(\xi)d\xi}-e^{-\lambda\int\limits_{-\frac{1}{2}}^{\frac{1}{2}}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi}e^{\int\limits_{-\frac{1}{2}}^{\frac{1}{2}}c_{11}(\xi)d\xi}=0.$ Recalling formulas (33), we get the spectrum representation $\lambda_{k}=\left[\int_{-\frac{1}{2}}^{\frac{1}{2}}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}d\xi\right]^{-1}\bigg{(}\int_{-\frac{1}{2}}^{\frac{1}{2}}-\frac{1}{2}\bigg{[}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\bigg{(}\frac{i\omega\hat{u}+R_{44}}{\hat{\alpha}_{2}}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}^{2}}\bigg{)}+\\\ +i\omega\hat{u}\frac{1}{\sqrt{\hat{Z}\hat{\alpha}_{2}}}+\frac{\hat{\alpha}_{12}}{\hat{Z}}\times\\\ 
\times(A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda}))\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}+\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{]}d\xi+k\pi i\bigg{)}+O(\frac{1}{k}),\quad k\to\infty,$ (51) where $O$ is again the big-$O$ symbol. The proof of Theorem 1 is thus complete. ###### Remark 3.3. Using representation (39), we can get an asymptotic representation of $\lambda_{k}$ with an arbitrary order of accuracy in the powers of $\frac{1}{k}$ (see also [3]). Now, as a consequence of representation (51), we get the following result. If the Poiseuille-type flow described in Sect. 1 is asymptotically stable, then the following inequality necessarily holds: $\int_{-\frac{1}{2}}^{\frac{1}{2}}\bigg{[}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}(\frac{R_{44}}{\hat{\alpha}_{2}}+\frac{2\hat{\alpha}_{12}R_{43}}{\hat{\alpha}_{2}^{2}})+\\\ +\frac{\hat{\alpha}_{12}}{\hat{Z}}\big{(}A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda})\big{)}\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}+\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{]}d\xi>0,$ or, taking into account formula (19), $\int_{-\frac{1}{2}}^{\frac{1}{2}}\bigg{[}\sqrt{\frac{\hat{\alpha}_{2}}{\hat{Z}}}\hat{\bar{\chi}}_{0}\left(\frac{1}{\hat{\alpha}_{2}}\big{(}W^{-1}+\frac{k+2\beta}{3}(\hat{a}_{11}+\hat{a}_{22})\big{)}+\hat{\bar{\chi}}_{0}\frac{2\hat{\alpha}_{12}}{\hat{\alpha}_{2}^{2}}\hat{a}_{12}\frac{k+2\beta}{3}\right)+\\\ +\frac{\hat{\alpha}_{12}}{\hat{Z}}\big{(}A_{r}\hat{Z}\hat{a}_{12}+A_{m}\sigma_{m}\hat{L}(1+\hat{\lambda})\big{)}\sqrt{\frac{\hat{Z}}{\hat{\alpha}_{2}}}+\frac{\sigma_{m}(1+\hat{\lambda})^{2}}{b_{m}\sqrt{\hat{Z}\hat{\alpha}_{2}}}\bigg{]}d\xi>0.$ (52) The proof of Theorem 2 is complete. The authors are grateful to A.V. Yegitov for his help in the preparation of the manuscript of the paper.
This work is supported by the Russian Foundation for Basic Research (grant numbers 17-01-00791 and 19-01-00261). ## References * [1] A.N. Akhiezer, N.A. Akhiezer, Electromagnetism and electromagnetic waves (Higher School, Moscow, 1985) (in Russian). * [2] Yu.A. Altukhov, A.S. Gusev, G.V. Pishnograi, Introduction into mesoscopic theory of flowing polymeric systems (Alt. GPA, Barnaul, 2012) (in Russian). * [3] G.D. Birkhoff, Collected mathematical papers (AMS, New York, 1950). * [4] A.M. Blokhin, A.S. Rudometova, Stationary solutions of the equations for nonisothermal electroconvection of a weakly conducting incompressible polymeric liquid, Journal of Applied and Industrial Mathematics 9(2) (2015) 147–156. * [5] A.M. Blokhin, R.Y. Semenko, Stationary magnetohydrodynamic flows of a non-isothermal incompressible polymeric liquid in the flat channel, Bulletin of the South Ural State University, Ser. Mathematical Modelling, Programming & Computer Software 11(4) (2018) 41–54. * [6] A.M. Blokhin, D.L. Tkachev, Linear asymptotic instability of a stationary flow of a polymeric medium in a plane channel in the case of periodic perturbations, Journal of Applied and Industrial Mathematics 8(4) (2014) 467–478. * [7] A. Blokhin, D. Tkachev, Spectral asymptotics of a linearized problem about flow of an incompressible polymeric fluid. Base flow is analogue of a Poiseuille flow, AIP Conference Proceedings 2017 (2018) 030028-1–030028-7. * [8] A.M. Blokhin, D.L. Tkachev, A.V. Yegitov, Asymptotic formula for the spectrum of the linear problem describing periodic polymer flows in an infinite channel, Journal of Applied Mechanics and Technical Physics 59(9) (2018) 992–1003. * [9] A. Blokhin, D. Tkachev, A. Yegitov, Spectral asymptotics of a linearized problem for an incompressible weakly conducting polymeric fluid, ZAMM (Z. Angew. Math. Mech.) 98(4) (2018) 589–601. * [10] A.M. Blokhin, A.V. Yegitov, D.L.
Tkachev, Asymptotics of the spectrum of a linearized problem of the stability of a stationary flow of an incompressible polymer fluid with a space charge, Computational Mathematics and Mathematical Physics 56(1) (2018) 102–117. * [11] A.M. Blokhin, A.V. Yegitov, D.L. Tkachev, Linear instability of solutions in a mathematical model describing polymer flows in an infinite channel, Computational Mathematics and Mathematical Physics 55(5) (2015) 848–873. * [12] K.V. Brushlinski, On growth of mixed problem solution in case of incomplete eigen-functions, Izvestiya AN SSSR, seriya matematika 23 (1959) 893–912 (in Russian). * [13] M.V. Fedoruk, Asymptotic methods for ordinary differential equations (Nauka, Moscow, 1983) (in Russian). * [14] E. Grenier, Y. Guo, T.T. Nguyen, Spectral instability of characteristic boundary layer flows, Duke Math. J. 165(16) (2016) 3085–3146. * [15] W. Heisenberg, Über Stabilität und Turbulenz von Flüssigkeitsströmen, Ann. Phys. 74 (1924) 577–627. * [16] K.B. Koshelev, G.V. Pishnograi, A.Ye. Kuznetsov, M.Yu. Tolstikh, Dependence of hydrodynamic characteristics of the polymer melts flow in converging channel on temperature, Mechanics of composite materials and constructions 22(2) (2016) 175–191 (in Russian). * [17] A.N. Krylov, On the stability of a Poiseuille flow in a planar channel, DAN 158(5) (1964) 978–981 (in Russian). * [18] L.D. Landau, Ye.M. Lifshitz, Electrodynamics of continuous media (Pergamon Press, 1960). * [19] L.G. Loitsyanski, Mechanics of Liquids and Gases (BHB, 1995). * [20] K. Nordling, J. Osterman, Physics Handbook for Science and Engineering (Chartwell-Bratt, 1996). * [21] G.V. Pishnograi, V.N. Pokrovski, Yu.G. Yanovski, I.F. Obraztsov, Yu.N. Cornet, Defining equation for nonlinear viscoelastic (polymeric) mediums in zero approximation by parameters of molecular theory and results for shear and stretch, DAN SSSR 355(9) (1994) 612–615. * [22] V.N. Pokrovskii, The mesoscopic theory of polymer dynamics, 2nd Ed. / V.N.
Pokrovskii (Springer, Dordrecht, Heidelberg, London, New York, 2010). * [23] L.I. Sedov, A Course in Continuum Mechanics: Basic Equations and Analytical Techniques (Volume 1) (Wolters-Noordhoff Publishing, 1971). * [24] Y. Shibata, On the R-boundedness for the two phase problem with phase transition: compressible-incompressible model problem, Funkcialaj Ekvacioj 59 (2016) 243–287. * [25] Shih-i Pai, Introduction to the Theory of Compressible Flow (D. Van Nostrand Co, Princeton, 1962). * [26] A.B. Vatazhin, G.A. Lubimov, S.A. Regirer, Magnetohydrodynamic flows in channels (Nauka, Moscow, 1970) (in Russian).
# LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution Shon Otmazgin1 Arie Cattan1 Yoav Goldberg1,2 1Computer Science Department, Bar Ilan University 2Allen Institute for Artificial Intelligence <EMAIL_ADDRESS> ###### Abstract Current state-of-the-art coreference systems are based on a single pairwise scoring component, which assigns to each pair of mention spans a score reflecting their tendency to corefer to each other. We observe that different kinds of mention pairs require different information sources to assess their score. We present LingMess, a linguistically motivated categorization of mention-pairs into 6 types of coreference decisions and learn a dedicated trainable scoring function for each category. This significantly improves the accuracy of the pairwise scorer as well as of the overall coreference performance on the English Ontonotes coreference corpus and 5 additional datasets.111The codebase to train and run LingMess is available in https://github.com/shon-otmazgin/lingmess-coref. Also, our recent F-coref Python package (Otmazgin et al., 2022) includes a simple and efficient implementation of LingMess in https://github.com/shon-otmazgin/fastcoref. ## 1 Introduction Coreference resolution is the task of clustering textual mentions that refer to the same discourse entity. This fundamental task requires many decisions. In this work, we argue that different _kinds_ of decisions involve different challenges. To illustrate that, consider the following text: _“ Lionel Messi has won a record seven Ballon d’Or awards. He signed for Paris Saint-Germain in August 2021. “I would like to thank my family”, said the Argentinian footballer. Messi holds the records for most goals in La Liga”_ To correctly identify that the pronoun “He” refers to “Lionel Messi”, models need to model the discourse, while linking “my” to “I” may rely more heavily on morphological agreement. 
Likewise, linking “the Argentinian footballer” to “Lionel Messi” requires world knowledge, while linking “Messi” to “Lionel Messi” may be achieved by simple lexical heuristics. Indeed, pre-neural coreference resolution works often considered the type of a mention pair, either by incorporating this information as model features, or by tailoring specific rules or specific models for each mention pair (see related work section). However, neural-network based coreference models are all based on a single pairwise scorer that is shared for all mention pairs, regardless of the different challenges that need to be addressed by each pair type (Lee et al., 2017, 2018; Joshi et al., 2019; Kantor and Globerson, 2019; Joshi et al., 2020; Xu and Choi, 2020; Xia et al., 2020; Toshniwal et al., 2020; Thirukovalluru et al., 2021; Kirstain et al., 2021; Dobrovolskii, 2021).

Figure 1: Architecture of our multi expert model. Given two spans _“Lionel Messi”_ and _“He”_ , we sum four scores: individual mention scores (black), $f_{m}(\emph{``Lionel Messi''})$, $f_{m}(\emph{``He''})$, and pairwise scores: the shared antecedent score (white) $f_{a}(\emph{``Lionel Messi''},\emph{``He''})$ and the relevant “expert” score (blue) $f_{a}^{\textsc{Pron-Ent}}(\emph{``Lionel Messi''},\emph{``He''})$.

| Category | Co-referring example | Non co-referring example |
|---|---|---|
| Pron-Pron-C | _A couple of my law clerks were going to … and I was afraid I was going to…_ | _The Lord God said to my Lord: “Sit by me at my right side, and I will put your enemies …”_ |
| Pron-Pron-NC | _“I made a similar line and I produced it cheaper”, he says._ | _She is my Goddess …_ |
| Ent-Pron | _Spain, Argentina, Thailand and Indonesia were doing too little to prevent … across their borders._ | _Tonight, to kick off the effort, CNN will premiere its first prime-time newscast in years._ |
| Match | _… says Paul Amos, CNN executive vice president for programming. Accordingly, CNN is …_ | _Hertz and Avis can not benefit Budget’s programs,” said Bob Wilson, Budget’s vice president …_ |
| Contains | _He reportedly showed DeLay a videotape that made him weep. Tom DeLay then …_ | _Give SEC authority to halt securities trading, (also opposed by new SEC chairman) …_ |
| Other | _They also saw the two men who were standing with him. When Moses and Elijah were leaving …_ | _The company is already working on its own programming … the newspaper said._ |

Table 1: Examples of each category, taken from the Ontonotes development set. We define the categories of mention pairs as follows. Pron-Pron-C: compatible pronouns based on their attributes such as gender, number and animacy (see Appendix C for more details); Pron-Pron-NC: incompatible pronouns; Ent-Pron: a pronoun and another span; Match: non-pronoun spans with the same content words; Contains: one contains the content words of the other; Other: all other pairs. Content words exclude stop words; see Appendix C for the full list of stop words.

In this work, we suggest that modeling different mention pairs by different sub-models (in our case, different learned scoring functions) depending on their types is beneficial also for neural models. We identify a set of decisions: (a) linking compatible pronouns (Pron-Pron-C); (b) linking incompatible pronouns (Pron-Pron-NC); (c) linking pronouns to entities (Ent-Pron); (d) linking entities which share the exact lexical form (Match); (e) linking entities where the lexical form of one contains the lexical form of the other (Contains); (f) other cases (Other). Each of these classes is easy to identify deterministically, each contains both positive and negative instances, and each could benefit from a somewhat different decision process. Table 1 demonstrates the classes.222More fine-grained distinctions are of course also possible, but we leave exploration of them to future work.
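Because each class is identifiable deterministically, the categorization can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: the pronoun list, compatibility groups, and stop words below are small stand-ins for the full lists in Appendix C.

```python
# Illustrative sketch of the deterministic pair categorization T(c, q).
# The pronoun list, compatibility groups, and stop words are toy stand-ins.
PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "your", "he", "him",
            "his", "she", "her", "it", "its", "they", "them", "their"}
PRONOUN_GROUP = {  # toy gender/number/animacy groups (illustrative only)
    "i": "1sg", "me": "1sg", "my": "1sg",
    "he": "m-sg", "him": "m-sg", "his": "m-sg",
    "she": "f-sg", "her": "f-sg",
    "they": "pl", "them": "pl", "their": "pl",
}
STOP_WORDS = {"the", "a", "an", "of", "and"}

def content_words(span):
    return {w.lower() for w in span.split() if w.lower() not in STOP_WORDS}

def T(c, q):
    """Route a mention pair to one of the six categories of Table 1."""
    c_pron = c.lower() in PRONOUNS
    q_pron = q.lower() in PRONOUNS
    if c_pron and q_pron:
        gc = PRONOUN_GROUP.get(c.lower(), c.lower())
        gq = PRONOUN_GROUP.get(q.lower(), q.lower())
        return "Pron-Pron-C" if gc == gq else "Pron-Pron-NC"
    if c_pron or q_pron:
        return "Ent-Pron"
    cw_c, cw_q = content_words(c), content_words(q)
    if cw_c == cw_q:
        return "Match"
    if cw_c <= cw_q or cw_q <= cw_c:
        return "Contains"
    return "Other"
```

On the running example, `T("Lionel Messi", "He")` yields Ent-Pron and `T("Lionel Messi", "Messi")` yields Contains, matching the decisions discussed above.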
We present Linguistically Informed Multi Expert Scorers (LingMess), a coreference model which categorizes each pairwise decision into one of these classes, and learns, in addition to a shared scoring function, a separate scoring function for each pair type. At inference time, for each pair of mentions being scored, we deterministically identify the pair’s type, and use the corresponding scoring function.333For computational efficiency on the GPU, we find it beneficial to compute all the scoring functions and mask away the unneeded values. Specifically, we extend the recent _s2e_ model (Kirstain et al., 2021) by adding per-category scoring, but the method is general and may work with other coreference models as well. As illustrated in Figure 1, the final coreference score between two spans is composed—in addition to the individual mention scores—of two pairwise scores: a shared antecedent-compatibility score and an “expert” antecedent-compatibility score which depends on the linguistic type of the pair. We show that this significantly improves the coreference performance on Ontonotes (Pradhan et al., 2012) and 5 additional datasets. We also inspect the performance of the model for each category separately, showing that some classes improve more than others. This analysis provides a finer-grained understanding of the models and points out directions for future research.

## 2 Background: the s2e Model

The _s2e_ model (Kirstain et al., 2021) achieves the current best coreference scores among all practical neural models.444We define “practical models” as those that require a constant number of transformer-based document encodings per passage, as opposed to a constant number of document encodings per mention. The CorefQA model (Wu et al., 2020) achieves a substantially higher score, but requires running a separate BERT inference for each mention, making it highly impractical.
Given a sequence of tokens $x_{1},\ldots,x_{n}$, each mention pair $(c,q)$ is scored using a scoring function $F_{\textsc{s}}(c,q)$,555The subscript s stands for “shared”, described below. where $q$ is a “query span” and $c$ is a “candidate antecedent span” which appears before $q$ in the document. The span encodings are based on contextualized word embeddings obtained by a Longformer encoder; see Kirstain et al. (2021) for details. These pairwise scores are then used to form coreference chains (see “inference” below). The scoring function $F_{\textsc{s}}$ is further decomposed: $\displaystyle F_{\textsc{s}}(c,q)=\begin{cases}f_{m}(c)+f_{m}(q)+f_{a}(c,q)&c\neq\varepsilon\\\ 0&c=\varepsilon\end{cases}$ where $\varepsilon$ is the null antecedent, and $f_{m}$ and $f_{a}$ are parameterized functions, scoring each individual span ($f_{m}$) and the pairwise interaction ($f_{a}$). For each possible mention $q$, the learning objective maximizes the (log) sum of probabilities over the true antecedents $\hat{c}$ of $q$: $\displaystyle L_{\textsc{s}}(q)=\log\sum_{\hat{c}\in\mathcal{C}(q)\cap\textsc{gold}(q)}P_{\textsc{s}}(\hat{c}\mid q)$ where $\mathcal{C}(q)$ is the set of all candidate antecedents666All spans before $q$ that passed some pruning threshold. together with the null antecedent $\varepsilon$, and $\textsc{gold}(q)$ is the set of the true antecedents of $q$. $P_{\textsc{s}}(\hat{c}\mid q)$ is computed as a softmax over $F_{\textsc{s}}(c,q)$ scores for $c$ values in $\mathcal{C}(q)$: $\displaystyle P_{\textsc{s}}(\hat{c}\mid q)=\frac{\exp{F_{\textsc{s}}(\hat{c},q)}}{\sum\limits_{c^{\prime}\in\mathcal{C}(q)}\exp{F_{\textsc{s}}(c^{\prime},q)}}$

## 3 LingMess

Clustering coreferring entities typically involves many different phenomena, which we argue should each be addressed in a different manner. Therefore, our core contribution is proposing to allocate a dedicated scorer $f^{t}_{a}(c,q)$ for each phenomenon type $t$, in addition to the shared pairwise scorer $f_{a}(c,q)$.
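The objective $L_{\textsc{s}}$ defined in Section 2 is a marginal log-likelihood: a softmax over the candidate scores (with the null antecedent scored 0), followed by the log of the probability mass assigned to the gold antecedents. A minimal numeric sketch, with illustrative scores standing in for the learned $F_{\textsc{s}}$ values:

```python
import math

# Minimal numeric sketch of the marginal log-likelihood L_s(q).
# `scores` are illustrative F_s(c, q) values for the candidates in C(q),
# with the null antecedent included at score 0.
def marginal_log_likelihood(scores, gold_idx):
    """log of the summed softmax probability over the gold antecedent indices."""
    m = max(scores)                            # numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return math.log(sum(exps[i] for i in gold_idx) / z)

# two candidate antecedents (scores 2.0 and -1.0) plus the null antecedent
# (score 0.0); the first candidate is the gold antecedent
ll = marginal_log_likelihood([2.0, -1.0, 0.0], gold_idx=[0])
```

Maximizing this quantity pushes probability mass onto any of the true antecedents, without forcing the model to pick a particular one.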
The overall architecture of our model is shown in Figure 1. Concretely, we extend the _s2e_ model with six additional antecedent scorers $f_{a}^{t}$ where $t\in\\{\textsc{Pron-Pron-C},\textsc{Pron-Pron-NC},\textsc{Ent-Pron},$ $\textsc{Match},\textsc{Contains},\textsc{Other}\\}$, the six categories we list in Table 1. The pairwise scoring function now becomes: $\displaystyle F(c,q)=$ $\displaystyle\begin{cases}f_{m}(c)+f_{m}(q)+f(c,q)&c\neq\varepsilon\\\ 0&c=\varepsilon\\\ \end{cases}$ $\displaystyle f(c,q)=$ $\displaystyle f_{a}(c,q)+f^{T(c,q)}_{a}(c,q)$ where $T(c,q)$ is a deterministic, rule-based function to determine the category $t$ of the pair $(c,q)$. The pairwise scoring function $f(c,q)$, scoring $c$ as the antecedent of $q$, is now composed of a shared scorer $f_{a}(c,q)$ and an “expert” scorer $f^{t}_{a}(c,q)$ which differs based on the type of the pair $(c,q)$. Each of the seven pairwise scoring functions ($f_{a}$ and the six $f_{a}^{t}$) is parameterized separately using its own set of matrices. The transformer-based encoder and the mention scorer $f_{m}$ are shared between all the antecedent scorers. See Appendix A.2 for the full model architecture.

#### Training

For each span $q$, our model optimizes the objective function $L_{\textsc{Coref}}$ over the sum of probabilities of all true antecedents of $q$: $\displaystyle L_{\textsc{Coref}}(q)=\log\sum_{\hat{c}\in\mathcal{C}(q)\cap\textsc{gold}(q)}P(\hat{c}\mid q)$ Here, $P(\hat{c}\mid q)$ is a softmax over $F(\hat{c},q)$ scores, where $F$ is our new scoring function described in Figure 1. $\displaystyle P(\hat{c}\mid q)=\frac{\exp{F(\hat{c},q)}}{\sum\limits_{c^{\prime}\in\mathcal{C}(q)}\exp{F(c^{\prime},q)}}$ This scorer is also the one used at inference. However, this objective does not explicitly push each category (“expert”) to specialize.
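As noted in footnote 3, on the GPU it is convenient to compute all expert scores for every pair and then select the entry matching each pair's category. A schematic sketch of that selection, with plain Python lists standing in for tensors (shapes and values are illustrative, not the paper's code):

```python
# Schematic sketch of the batched strategy from footnote 3: compute expert
# scores for every pair under every category, then add the entry selected
# by the pair's category to the shared score f_a.
NUM_EXPERTS = 6

def combined_pairwise_scores(shared, expert, categories):
    """shared[i][j] = f_a(c_i, q_j); expert[t][i][j] = f_a^t(c_i, q_j);
    categories[i][j] = T(c_i, q_j). Returns f(c, q) = f_a + f_a^{T(c,q)}."""
    n = len(shared)
    return [[shared[i][j] + expert[categories[i][j]][i][j]
             for j in range(n)] for i in range(n)]

shared = [[1.0, 0.5], [0.2, -0.3]]
# toy expert scores: expert t assigns score t to every pair
expert = [[[float(t)] * 2 for _ in range(2)] for t in range(NUM_EXPERTS)]
categories = [[0, 2], [2, 5]]
f = combined_pairwise_scores(shared, expert, categories)
```

In PyTorch this selection would be a gather or masked sum over a stacked expert-score tensor, avoiding per-pair branching.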
For example, for the Pron-Pron-C cases, it would be useful to explicitly train the model to distinguish between the possible antecedents of that type only (without regarding other antecedents), as well as to explicitly distinguish between a pronoun antecedent and a null antecedent. To this end, we extend the training objective by also training each “expert” separately: $\displaystyle L_{t}(q)=\log\sum_{\hat{c}\in\mathcal{C}_{t}(q)\cap\textsc{gold}(q)}P_{t}(\hat{c}\mid q)$

| | MUC R | MUC P | MUC F1 | B3 R | B3 P | B3 F1 | CEAF$\phi_{4}$ R | CEAF$\phi_{4}$ P | CEAF$\phi_{4}$ F1 | LEA R | LEA P | LEA F1 | Avg. F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| _s2e_ | 85.2 | 86.6 | 85.9 | 77.9 | 80.3 | 79.1 | 75.4 | 76.8 | 76.1 | 75.8 | 78.3 | 77.0 | 80.3 |
| LingMess | 85.1 | 88.1 | 86.6 | 78.3 | 82.7 | 80.5 | 76.1 | 78.5 | 77.3 | 76.3 | 80.9 | 78.5 | 81.4 |

Table 2: Performance on the test set of the English OntoNotes 5.0 dataset. The averaged F1 of MUC, B3, and CEAF$\phi_{4}$ is the main evaluation metric. Our model outperforms the _s2e_ model (Kirstain et al., 2021) by 1.1 CoNLL F1. The performance gain is statistically significant according to a non-parametric permutation test ($p<0.05$).

| | _s2e_ | LingMess |
|---|---|---|
| WikiCoref (Ghaddar and Langlais, 2016) | 59.7 | 62.6 |
| GAP (Webster et al., 2018) | 88.3 | 89.6 |
| WinoGender (Rudinger et al., 2018) | 70.5 | 77.3 |
| WinoBias (Zhao et al., 2018) | 84.3 | 85.1 |
| BUG (Levy et al., 2021) | 72.2 | 74.6 |

Table 3: Performance on the test set of various coreference datasets. The reported metrics are CoNLL F1 for WikiCoref, F1 for GAP and Accuracy for WinoGender, WinoBias and BUG.
| | _s2e_ P | _s2e_ R | _s2e_ F1 | LingMess P | LingMess R | LingMess F1 |
|---|---|---|---|---|---|---|
| Pron-Pron-C | 88.8 | 71.3 | 79.1 | 88.0 | 85.1 | 86.5 |
| Pron-Pron-NC | 84.2 | 55.8 | 67.1 | 88.3 | 68.7 | 77.3 |
| Ent-Pron | 78.8 | 68.7 | 73.4 | 80.4 | 69.8 | 74.7 |
| Match | 85.6 | 90.2 | 87.8 | 85.3 | 93.7 | 89.3 |
| Contains | 72.4 | 80.9 | 76.4 | 77.4 | 78.9 | 78.1 |
| Other | 60.1 | 70.2 | 64.7 | 71.7 | 64.2 | 67.7 |

Table 4: Pairwise performance by category, on the test set of the English OntoNotes 5.0 dataset. LingMess surpasses the _s2e_ model (Kirstain et al., 2021) for most categories by a substantial margin.

$\displaystyle P_{t}(\hat{c}\mid q)=\frac{\exp{F_{t}(\hat{c},q)}}{\sum\limits_{c^{\prime}\in\mathcal{C}_{t}(q)}\exp{F_{t}(c^{\prime},q)}}$ $\displaystyle F_{t}(c,q)=\begin{cases}f_{m}(c)+f_{m}(q)+f^{t}_{a}(c,q)&c\neq\varepsilon\\\ 0&c=\varepsilon\end{cases}$ Note that for $L_{t}(q)$ we replace $\mathcal{C}(q)$ with $\mathcal{C}_{t}(q)$, considering only the potential antecedents that are compatible with the span $q$ for the given type. For example, for $L_{\textsc{Match}}$ and a span $q$, we will only consider candidates $c$ which appear before $q$ with the exact same content words as $q$. Our final objective for each mention span $q$ is thus: $\displaystyle L(q)$ $\displaystyle=L_{\textsc{Coref}}(q)+L_{\textsc{Experts}}(q)$ $\displaystyle L_{\textsc{Experts}}(q)$ $\displaystyle=\sum_{t}L_{t}(q)+L_{\textsc{s}}(q)$

#### Inference

At inference time, we compute the score of each mention pair based on the shared scorer as well as the per-type scorer matching the pair’s type. We then form the coreference chains by linking each mention $q$ to its most likely antecedent $c$ according to $F(c,q)$. We do not use higher-order inference as it has been shown to have a marginal impact (Xu and Choi, 2020).

## 4 Experiments

Our baseline is the _s2e_ model trained on the English OntoNotes 5.0 dataset by its authors (Kirstain et al., 2021).
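The greedy inference step described above (link each mention to its best-scoring antecedent, with the null antecedent starting a new chain) can be sketched as follows; the argmax over $F(c,q)$ is assumed to have been computed already, so the function only aggregates the chosen links into chains:

```python
# Illustrative sketch of aggregating greedy antecedent links into chains.
# best_antecedent[i] is the index of the argmax antecedent of mention i
# (mentions in document order), or None when the null antecedent wins.
def form_clusters(best_antecedent):
    cluster_of = {}
    clusters = []
    for i, a in enumerate(best_antecedent):
        if a is None:                 # null antecedent: start a new chain
            cluster_of[i] = len(clusters)
            clusters.append([i])
        else:                         # join the antecedent's chain
            cid = cluster_of[a]
            cluster_of[i] = cid
            clusters[cid].append(i)
    return clusters

# mentions 0..3: mention 1 links to 0, mention 2 starts a new chain,
# mention 3 links to 2
chains = form_clusters([None, 0, None, 2])
```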
We train LingMess also on OntoNotes, and evaluate both models on OntoNotes, WikiCoref, GAP, WinoGender, WinoBias and BUG. Implementation details are described in Appendix B.

#### Performance

Table 2 presents the performance of LingMess on the test set of Ontonotes. LingMess achieves 81.4 F1 on Ontonotes, while the _s2e_ baseline achieves 80.3. Our performance gain is statistically significant according to a non-parametric permutation test ($p$ < 0.05). Additionally, Table 3 shows that LingMess outperforms the _s2e_ model on WikiCoref (+2.9), GAP (+1.3), WinoGender (+6.8), WinoBias (+0.8) and BUG (+2.4), indicating better out-of-domain generalization.

#### Importance of per-category scoring.

To assess that the improvement of LingMess is due to the decomposition into our set of categories and not to the added parameters, we conduct two experiments. First, we train a random baseline, which randomly assigns a category to each pair777For each pair of mentions $(c,q)$, we take the modulo of the sum of the ASCII codes of the last character of the last token of $c$ and $q$. and obtain results similar to the baseline. Second, we train our model by optimizing only the overall loss $L_{\textsc{Coref}}$ and not $L_{\textsc{Experts}}$. This achieves lower results than the baseline, due to low mention recall. In addition to the standard coreference evaluation, we report pairwise performance for each category. Given a mention-pair $(c,q)$, if $F(c,q)$ is higher than 0, we treat it as a positive prediction, otherwise negative. We then measure precision, recall and F1 based on gold cluster labels. Table 4 shows the pairwise performance of the _s2e_ model and LingMess.
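The pairwise metric just described is straightforward to compute; a minimal sketch with illustrative scores and gold labels:

```python
# Illustrative sketch of the per-category pairwise evaluation: a pair is
# predicted positive when F(c, q) > 0, and gold-positive when c and q
# belong to the same gold cluster.
def pairwise_prf(scores, gold_same_cluster):
    """scores: list of F(c, q) values; gold_same_cluster: parallel list of
    booleans derived from the gold clustering."""
    tp = sum(1 for s, g in zip(scores, gold_same_cluster) if s > 0 and g)
    fp = sum(1 for s, g in zip(scores, gold_same_cluster) if s > 0 and not g)
    fn = sum(1 for s, g in zip(scores, gold_same_cluster) if s <= 0 and g)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

p, r, f1 = pairwise_prf([1.2, -0.5, 0.3, -2.0], [True, True, False, False])
```

In the paper's setting the lists would be restricted to the pairs of a single category, yielding the per-category rows of Table 4.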
LingMess outperforms _s2e_ by a significant margin for all categories (e.g., +7.4 F1 for Pron-Pron-C, +10.2 F1 for Pron-Pron-NC, etc.).888These gains in this pairwise metric are higher than the CoNLL metrics reported in Table 2, because the CoNLL metrics are based on the final clusters, after aggregation of individual pairwise decisions. The performance varies across the different categories, suggesting aspects of the coreference problem where future work can attempt to improve.

#### The importance of the shared scorer.

To investigate the role of the shared scorer, we trained the LingMess model with only the per-type pairwise scorers, excluding the shared pairwise scorer $F_{\textsc{s}}(c,q)$ and its accompanying loss term $L_{\textsc{s}}(q)$. This resulted in a significant decrease in performance (-0.9), specifically in the recall of the mention detection component. Adding back the shared scorer mitigated this degradation by balancing the different “expert” pairwise scorers.

## 5 Related Work

Many pre-neural works consider the various linguistic phenomena involved in coreference resolution as distinct challenges. The early coreference system by Zhou and Su (2004) divided the antecedent candidates into distinct coreference categories (e.g., Name Alias, Apposition, Definite Noun, and a few more) and defined tailored rules for each category. Later, Lee et al. (2013) proposed the multi-sieve deterministic model, where each sieve adds coreference links between mention pairs from a specific linguistic category (e.g., string match, compatible pronoun, etc.). Haghighi and Klein (2009) performed an error analysis of their coreference model according to different types of antecedent decisions, such as Proper Noun-Pronoun, Pronoun-Pronoun, etc. Based on this analysis, they focus on fixing the pronoun antecedent choices by adding syntactic features.
More recently, Lu and Ng (2020) empirically analyze the performance of neural coreference resolvers on various fine-grained resolution categories of mentions (e.g., gendered pronouns vs. 1st and 2nd person pronouns). They find that while models perform well on name and nominal mention pairs with some shared content words, they still struggle with resolving pronouns, particularly relative pronouns. Early supervised statistical models train a feature-based classifier that incorporates the type of antecedent decision (e.g., pronoun-entity, string match) as features at the mention-pair level (Soon et al., 2001; Bengtson and Roth, 2008; Clark and Manning, 2015, 2016). Subsequently, Denis and Baldridge (2008) demonstrate that training separate classifiers that specialize in particular types of mentions (e.g., third person pronouns, speech pronouns, proper names, definite descriptions, and all others) provides significant performance improvements. Lassalle and Denis (2013) took this observation a step further and proposed a more advanced method for model specialization, learning to separate types of mentions into optimal classes together with their proper feature spaces. In our work, we make progress in this direction of coreference system specialization, and show that the incorporation of linguistic information is helpful also in the context of end-to-end neural models.

## 6 Conclusion

We present LingMess, a coreference model that significantly improves accuracy by splitting the scoring function into different categories, and routing each scoring decision to its own category based on a deterministic, linguistically informed heuristic. This indicates that while end-to-end training is very effective, linguistic knowledge and symbolic computation can still be used to improve results.

## Limitations

In this paper, we consider a set of 6 linguistic categories of mention pairs, as listed in Table 1.
These categories might not be optimal for the task, and a different set of finer-grained categories may result in a higher performance gain. Another aspect that can be considered a limitation is the computation of the categories for every possible pair. Although the model considers only the top-scoring spans, this additional computation layer increases training and inference time over the baseline (see Appendix B.3 for the exact times). Our linguistic heuristics could be improved by, e.g., running a parser and considering head words. However, we chose not to do so in this work as this would further increase runtime.

## Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). Arie Cattan is partially supported by the PBC fellowship for outstanding PhD candidates in data science.

## References

* Bagga and Baldwin (1998) Amit Bagga and Breck Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In _36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1_ , pages 79–85, Montreal, Quebec, Canada. Association for Computational Linguistics. * Beltagy et al. (2020) Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. _ArXiv_ , abs/2004.05150. * Bengtson and Roth (2008) Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In _Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing_ , pages 294–303, Honolulu, Hawaii. Association for Computational Linguistics. * Clark and Manning (2015) Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking.
In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 1405–1415, Beijing, China. Association for Computational Linguistics. * Clark and Manning (2016) Kevin Clark and Christopher D. Manning. 2016. Improving coreference resolution by learning entity-level distributed representations. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 643–653, Berlin, Germany. Association for Computational Linguistics. * Denis and Baldridge (2008) Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In _Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing_ , pages 660–669, Honolulu, Hawaii. Association for Computational Linguistics. * Dobrovolskii (2021) Vladimir Dobrovolskii. 2021. Word-level coreference resolution. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7670–7675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Ghaddar and Langlais (2016) Abbas Ghaddar and Phillippe Langlais. 2016. WikiCoref: An English coreference-annotated corpus of Wikipedia articles. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 136–142, Portorož, Slovenia. European Language Resources Association (ELRA). * Haghighi and Klein (2009) Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In _Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing_ , pages 1152–1161, Singapore. Association for Computational Linguistics. * Joshi et al. (2020) Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. 
SpanBERT: Improving pre-training by representing and predicting spans. _Transactions of the Association for Computational Linguistics_ , 8:64–77. * Joshi et al. (2019) Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5803–5808, Hong Kong, China. Association for Computational Linguistics. * Kantor and Globerson (2019) Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 673–677, Florence, Italy. Association for Computational Linguistics. * Kirstain et al. (2021) Yuval Kirstain, Ori Ram, and Omer Levy. 2021. Coreference resolution without span representations. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 14–19, Online. Association for Computational Linguistics. * Lassalle and Denis (2013) Emmanuel Lassalle and Pascal Denis. 2013. Improving pairwise coreference models through feature space hierarchy learning. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 497–506, Sofia, Bulgaria. Association for Computational Linguistics. * Lee et al. (2013) Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. _Computational Linguistics_ , 39(4):885–916. * Lee et al. (2017) Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. 
In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. * Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics. * Levy et al. (2021) Shahar Levy, Koren Lazar, and Gabriel Stanovsky. 2021. Collecting a large-scale gender bias dataset for coreference resolution and machine translation. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 2470–2480, Punta Cana, Dominican Republic. Association for Computational Linguistics. * Lu and Ng (2020) Jing Lu and Vincent Ng. 2020. Conundrums in entity coreference resolution: Making sense of the state of the art. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6620–6631, Online. Association for Computational Linguistics. * Luo (2005) Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In _Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing_ , pages 25–32, Vancouver, British Columbia, Canada. Association for Computational Linguistics. * Moosavi and Strube (2016) Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 632–642, Berlin, Germany. Association for Computational Linguistics. * Otmazgin et al. (2022) Shon Otmazgin, Arie Cattan, and Yoav Goldberg. 2022. 
F-coref: Fast, accurate and easy to use coreference resolution. In _Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: System Demonstrations_ , pages 48–56, Taipei, Taiwan. Association for Computational Linguistics. * Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc. * Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In _Joint Conference on EMNLP and CoNLL - Shared Task_ , pages 1–40, Jeju Island, Korea. Association for Computational Linguistics. * Rudinger et al. (2018) Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. * Soon et al. (2001) Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. _Computational Linguistics_ , 27(4):521–544. * Thirukovalluru et al. 
(2021) Raghuveer Thirukovalluru, Nicholas Monath, Kumar Shridhar, Manzil Zaheer, Mrinmaya Sachan, and Andrew McCallum. 2021. Scaling within document coreference to long texts. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 3921–3931, Online. Association for Computational Linguistics. * Toshniwal et al. (2020) Shubham Toshniwal, Sam Wiseman, Allyson Ettinger, Karen Livescu, and Kevin Gimpel. 2020. Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8519–8526, Online. Association for Computational Linguistics. * Ulmer et al. (2022) Dennis Ulmer, Christian Hardmeier, and Jes Frellsen. 2022. deep-significance: Easy and meaningful statistical significance testing in the age of neural networks. _arXiv preprint arXiv:2204.06815_. * Vilain et al. (1995) Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In _Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995_. * Webster et al. (2018) Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. _Transactions of the Association for Computational Linguistics_ , 6:605–617. * Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online.
Association for Computational Linguistics. * Wu et al. (2020) Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. CorefQA: Coreference resolution as query-based span prediction. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 6953–6963, Online. Association for Computational Linguistics. * Xia et al. (2020) Patrick Xia, João Sedoc, and Benjamin Van Durme. 2020. Incremental neural coreference resolution in constant memory. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8617–8624, Online. Association for Computational Linguistics. * Xu and Choi (2020) Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8527–8533, Online. Association for Computational Linguistics. * Zhao et al. (2018) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. * Zhou and Su (2004) GuoDong Zhou and Jian Su. 2004. A high-performance coreference resolution system using a constraint-based multi-agent strategy. In _COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics_ , pages 522–528, Geneva, Switzerland. COLING.

## Appendix A Model Architecture

Given a sequence of tokens $x_{1},...,x_{n}$ from an input document, a transformer-based (BERT-like) encoder first forms contextualized representation vectors, $\mathbf{x_{1}},...,\mathbf{x_{n}}$, for each token in the sequence.
### A.1 The _s2e_ Model

#### Mention scorer

Given a span $q=(\mathbf{x_{i}},\mathbf{x_{j}})$, represented by its start and end tokens, the score for $q$ being a mention is defined as follows: $\displaystyle f_{m}(q)$ $\displaystyle=\mathbf{m}_{s}(\mathbf{x_{i}})\cdot\mathbf{v}_{s}+\mathbf{m}_{e}(\mathbf{x_{j}})\cdot\mathbf{v}_{e}$ $\displaystyle+\mathbf{m}_{s}(\mathbf{x_{i}})\cdot\mathbf{B}_{m}\cdot\mathbf{m}_{e}(\mathbf{x_{j}})$ where $\mathbf{m}_{s}(\mathbf{x})$ and $\mathbf{m}_{e}(\mathbf{x})$ are two non-linear functions to obtain start and end representations for each token $x$, and $f_{m}(q)$ is a biaffine product over these representations.

#### Antecedent scorer

Given two spans, $c=(\mathbf{x_{i}},\mathbf{x_{j}})$ and $q=(\mathbf{x_{k}},\mathbf{x_{l}})$, represented by their start and end tokens, the score for $c$ being an antecedent of $q$ is computed as follows: $\displaystyle f_{a}(c,q)$ $\displaystyle=\mathbf{a}_{s}(\mathbf{\mathbf{x_{i}}})\cdot\mathbf{B}_{ss}\cdot\mathbf{a}_{s}(\mathbf{\mathbf{x_{k}}})$ $\displaystyle+\mathbf{a}_{e}(\mathbf{\mathbf{x_{j}}})\cdot\mathbf{B}_{es}\cdot\mathbf{a}_{s}(\mathbf{\mathbf{x_{k}}})$ $\displaystyle+\mathbf{a}_{s}(\mathbf{\mathbf{x_{i}}})\cdot\mathbf{B}_{se}\cdot\mathbf{a}_{e}(\mathbf{\mathbf{x_{l}}})$ $\displaystyle+\mathbf{a}_{e}(\mathbf{\mathbf{x_{j}}})\cdot\mathbf{B}_{ee}\cdot\mathbf{a}_{e}(\mathbf{\mathbf{x_{l}}})$ Similar to the mention scorer, $\mathbf{a}_{s}(\mathbf{x})$ and $\mathbf{a}_{e}(\mathbf{x})$ are two non-linear functions to obtain start/end representations for each token, and $f_{a}(c,q)$ is a sum of four bilinear functions over the start and end representations of $c$ and $q$.

### A.2 LingMess

#### Mention scorer

Our mention scorer is the same as the _s2e_ mention scorer implementation.
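The biaffine mention scorer $f_{m}$ can be sketched in plain Python (standing in for the PyTorch implementation; the vectors, matrices, and dimensions below are illustrative, not learned parameters):

```python
# Illustrative sketch of the biaffine mention score
# f_m(q) = m_s(x_i)·v_s + m_e(x_j)·v_e + m_s(x_i)·B_m·m_e(x_j),
# with plain lists standing in for tensors.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(B, v):
    return [dot(row, v) for row in B]

def mention_score(ms_i, me_j, v_s, v_e, B_m):
    """ms_i = m_s(x_i), me_j = m_e(x_j); v_s, v_e, B_m are parameters."""
    return dot(ms_i, v_s) + dot(me_j, v_e) + dot(ms_i, matvec(B_m, me_j))

identity = [[1.0, 0.0], [0.0, 1.0]]  # toy B_m
score = mention_score([1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.5, 0.5], identity)
```

The antecedent scorers have the same shape, replacing the two vector terms with four bilinear terms over the start/end representations of both spans.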
#### Antecedent scorer

As mentioned in the paper (§3), in addition to the shared antecedent scorer $f_{a}(c,q)$, LingMess includes a dedicated antecedent scorer $f^{t}_{a}(c,q)$ for each category $t\in\\{\textsc{Pron-Pron-C},\textsc{Pron-Pron-NC},\textsc{Ent-Pron},$ $\textsc{Match},\textsc{Contains},\textsc{Other}\\}$. The overall score for $c=(\mathbf{x_{i}},\mathbf{x_{j}})$ being an antecedent of $q=(\mathbf{x_{k}},\mathbf{x_{l}})$ becomes the sum of the shared scorer and the relevant category “expert” scorer: $\displaystyle f(c,q)=f_{a}(c,q)+f^{T(c,q)}_{a}(c,q)$ where $T(c,q)$ is a deterministic function to determine the category $t$ of the pair $(c,q)$. For each category $t$, we define two specific non-linear functions to obtain start and end representations ($\mathbf{a}^{t}_{s}(\mathbf{x})$ and $\mathbf{a}^{t}_{e}(\mathbf{x})$) as well as an “expert” antecedent scoring function $f^{t}_{a}(c,q)$: $\displaystyle f^{t}_{a}(c,q)$ $\displaystyle=\mathbf{a}^{t}_{s}(\mathbf{\mathbf{x_{i}}})\cdot\mathbf{B}^{t}_{ss}\cdot\mathbf{a}^{t}_{s}(\mathbf{\mathbf{x_{k}}})$ $\displaystyle+\mathbf{a}^{t}_{e}(\mathbf{\mathbf{x_{j}}})\cdot\mathbf{B}^{t}_{es}\cdot\mathbf{a}^{t}_{s}(\mathbf{\mathbf{x_{k}}})$ $\displaystyle+\mathbf{a}^{t}_{s}(\mathbf{\mathbf{x_{i}}})\cdot\mathbf{B}^{t}_{se}\cdot\mathbf{a}^{t}_{e}(\mathbf{\mathbf{x_{l}}})$ $\displaystyle+\mathbf{a}^{t}_{e}(\mathbf{\mathbf{x_{j}}})\cdot\mathbf{B}^{t}_{ee}\cdot\mathbf{a}^{t}_{e}(\mathbf{\mathbf{x_{l}}})$ Overall, our model introduces 6 learnable matrices for each category ($\mathbf{W}^{t}_{\mathbf{a}_{s}}$, $\mathbf{W}^{t}_{\mathbf{a}_{e}}$, $\mathbf{B}^{t}_{ss}$, $\mathbf{B}^{t}_{es}$, $\mathbf{B}^{t}_{se}$, $\mathbf{B}^{t}_{ee}$). The transformer-based encoder and the mention scorer are shared between all the different pairwise scorers.

## Appendix B Implementation Details

### B.1 Hyperparameters

We extend the _s2e_ implementation based on PyTorch (Paszke et al., 2019) and Transformers (Wolf et al., 2020).
We used the same hyperparameters (e.g., learning rate, warmup, etc.) as the _s2e_ model, except for the hidden size of all matrices $W$ and $B$. As our method introduces a dedicated antecedent scoring function $f^{t}_{a}$ for each category $t$, we reduce the size of these matrices from 3072 to 2048 to fit training into memory on our hardware. Like the baseline, our method is built on top of Longformer-Large (Beltagy et al., 2020), resulting in a total of 590M learnable parameters (the _s2e_ model contains 494M learnable parameters). We used dynamic batching both for training and inference, specifically 5K tokens in a single batch during training and 10K tokens at inference. ### B.2 Evaluation As mentioned in the paper (§4), we conduct our experiments on the English portion of the OntoNotes corpus (Pradhan et al., 2012). This dataset contains 2802 documents for training, 343 for development, and 348 for test. We evaluate our model according to the standard coreference metrics: MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), CEAF$\phi_{4}$ (Luo, 2005), and LEA (Moosavi and Strube, 2016), using the official CoNLL coreference scorer (https://github.com/conll/reference-coreference-scorers). LingMess achieves 81.6 CoNLL F1 on the development set of OntoNotes. Table 5 presents the pairwise performance on the development set for each category. We compute statistical significance with a non-parametric permutation test using Ulmer et al. (2022)'s implementation. Table 6 shows that LingMess consistently outperforms the _s2e_ model on GAP.

 | Kirstain et al. (2021) | | | LingMess | |
---|---|---|---|---|---|---
 | P | R | F1 | P | R | F1
Pron-Pron-C | 91.7 | 77.5 | 84.0 | 91.7 | 90.2 | 91.0
Pron-Pron-NC | 88.9 | 66.2 | 75.9 | 90.2 | 81.3 | 85.5
Ent-Pron | 82.0 | 74.1 | 77.9 | 81.4 | 74.7 | 77.9
Match | 88.3 | 87.5 | 87.9 | 88.4 | 92.0 | 90.2
Contains | 69.1 | 77.2 | 72.9 | 76.1 | 73.5 | 74.8
Other | 56.8 | 67.5 | 61.7 | 70.8 | 64.4 | 67.5

Table 5: Pairwise performance by category, on the dev set of the English OntoNotes 5.0 dataset.

 | Masc | Fem | Bias | Overall
---|---|---|---|---
Kirstain et al. (2021) | 90.6 | 85.8 | 0.95 | 88.3
LingMess | 91.3 | 87.8 | 0.96 | 89.6

Table 6: Performance on the test set of the GAP coreference dataset. The reported metrics are F1 scores.

### B.3 Runtime and Memory

 | Runtime | Memory
---|---|---
Kirstain et al. (2021) | 28 | 5.4
LingMess | 43 | 5.9

Table 7: Inference time (seconds) and memory (GiB) on the 343 documents of the OntoNotes development set, using dynamic batching with 10K tokens in a single batch, on an NVIDIA Tesla V100 SXM2.

Our model was trained for 129 epochs on a single 32GB NVIDIA Tesla V100 SXM2 GPU. The training took 23 hours. As shown in Table 7, the inference time of LingMess is longer than that of the _s2e_ model because of the category selection for every possible pair of mentions. The memory consumption remains quite similar to the baseline. ## Appendix C Determining pair types Our method routes each pair of spans to its corresponding category scorer. This decision is based on the linguistic properties of the spans. Given a mention-pair $(c,q)$, we define a rule-based function $T(c,q)$ that determines the category of that pair. If $c$ and $q$ are both pronouns and they are compatible according to gender, number and animacy (see Table 8 for the full list), the mention pair is routed to Pron-Pron-C; otherwise, to Pron-Pron-NC. If $c$ is a pronoun and $q$ is a non-pronoun span (or vice versa), we route the mention-pair to Ent-Pron.
We route the remaining pairs to their corresponding categories (Match, Contains or Other) by considering only content words, excluding the following stop words: _{’s, a, all, an, and, at, for, from, in, into, more, of, on, or, some, the, these, those}_. Accordingly, the mentions “_the U.S. and Japan_” and “_Japan and the U.S._” are considered Match, “_This lake of fire_” and “_the lake of fire_” are considered Contains, and “_Bill Clinton_” and “_The President_” are considered Other.

ID | Pronouns
---|---
1 | _I, me, my, mine, myself_
2 | _you, your, yours, yourself, yourselves_
3 | _he, him, his, himself_
4 | _she, her, hers, herself_
5 | _it, its, itself_
6 | _we, us, our, ours, ourselves_
7 | _they, them, their, themselves_
8 | _that, this_

Table 8: List of groups of compatible pronouns; pronouns with the same ID are considered compatible.
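The routing above can be sketched as a small rule-based function. The pronoun groups follow Table 8 and the stop-word list follows the text, but the surface-string interface and compatibility-by-group-membership are our simplifying assumptions (the actual implementation operates on token spans and richer gender/number/animacy information):

```python
# Pronoun groups per Table 8; stop words per Appendix C.
PRONOUN_GROUPS = [
    {"i", "me", "my", "mine", "myself"},
    {"you", "your", "yours", "yourself", "yourselves"},
    {"he", "him", "his", "himself"},
    {"she", "her", "hers", "herself"},
    {"it", "its", "itself"},
    {"we", "us", "our", "ours", "ourselves"},
    {"they", "them", "their", "themselves"},
    {"that", "this"},
]
PRONOUNS = set().union(*PRONOUN_GROUPS)
STOP_WORDS = {"'s", "a", "all", "an", "and", "at", "for", "from", "in",
              "into", "more", "of", "on", "or", "some", "the", "these", "those"}

def content_words(span):
    # content words of a mention: lowercase tokens minus stop words
    return {w.lower() for w in span.split()} - STOP_WORDS

def T(c, q):
    """Route a mention pair (c, q), given as surface strings, to a category."""
    c_l, q_l = c.lower(), q.lower()
    if c_l in PRONOUNS and q_l in PRONOUNS:
        if any(c_l in g and q_l in g for g in PRONOUN_GROUPS):
            return "Pron-Pron-C"
        return "Pron-Pron-NC"
    if c_l in PRONOUNS or q_l in PRONOUNS:
        return "Ent-Pron"
    cc, qc = content_words(c), content_words(q)
    if cc == qc:
        return "Match"
    if cc <= qc or qc <= cc:   # one mention's content words contain the other's
        return "Contains"
    return "Other"
```

On the worked examples from the text, this sketch reproduces the intended categories (e.g. “the U.S. and Japan” / “Japan and the U.S.” become Match once stop words are removed).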
# A class of normal dilation matrices affirming the Marcus-de Oliveira conjecture Kijti Rodtes <EMAIL_ADDRESS> ###### Abstract. In this article, we prove that a class of normal dilation matrices affirms the Marcus-de Oliveira conjecture. Keywords: Normal dilation, Normal matrices, Marcus-de Oliveira conjecture MSC(2010): 15A15; 15A60; 15A86 Throughout, $n$ will denote a positive integer. The determinant conjecture of Marcus and de Oliveira states that the determinant of the sum of two $n$ by $n$ normal matrices $A$ and $B$ belongs to the convex hull of the $n!$ $\sigma$-points, $z_{\sigma}:=\prod_{i=1}^{n}(a_{i}+b_{\sigma(i)})$, indexed by $\sigma\in S_{n}$, where the $a_{i}$’s and $b_{j}$’s are the eigenvalues of $A$ and $B$, respectively (see [9], [3], [11]). We briefly write $(A,B)\in MOC$ if the pair of normal matrices $A,B$ affirms the Marcus-de Oliveira conjecture, i.e., $\det(A+B)\in co(\\{z_{\sigma}|\sigma\in S_{n}\\}).$ In [8], Fiedler showed that, for two hermitian matrices $A,B$, $\Delta(A,B):=\\{\det(A+UBU^{*})|U\in U_{n}(\mathbb{C})\\}$ is a line segment with $\sigma$-points as endpoints, where $U_{n}(\mathbb{C})$ denotes the set of all unitary matrices of dimension $n\times n$. This result, in fact, motivates the conjecture. As a consequence of Fiedler’s result, $(A,B)\in MOC$ for any pair of skew-hermitian matrices $A,B$. In [1], N. Bebiano, A. Kovacec, and J. da Providencia proved that if $A$ is positive definite and $B$ a non-real scalar multiple of a hermitian matrix, then $(A,B)\in MOC$. They also obtained that if the eigenvalues of $A$ are pairwise distinct complex numbers lying on a line $l$ and all eigenvalues of $B$ lie on a parallel to $l$, then $(A,B)\in MOC$. S.W. Drury showed that $(A,B)\in MOC$ for the case that $A$ is hermitian and $B$ is a non-real scalar multiple of a hermitian matrix (an essentially hermitian matrix) in [5], and for the case that $A=sU$ and $B=tV$ for $s,t\in\mathbb{C}$ and $U,V\in U_{n}(\mathbb{C})$ in [6].
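Fiedler's segment result can be illustrated numerically: for hermitian $A,B$, the quantity $\det(A+B)$ is the $U=I$ member of $\Delta(A,B)$ and must therefore lie in the real interval spanned by the $\sigma$-points. The following check (random matrices, illustrative only, not part of the original note) makes this concrete:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n = 3

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)
a = np.linalg.eigvalsh(A)   # real eigenvalues of A
b = np.linalg.eigvalsh(B)   # real eigenvalues of B

# the n! sigma-points z_sigma = prod_i (a_i + b_sigma(i)); real for hermitian A, B
sigma_points = [float(np.prod(a + b[list(p)]))
                for p in itertools.permutations(range(n))]

det_sum = np.linalg.det(A + B).real
# Fiedler: Delta(A, B) is the segment with sigma-points as endpoints,
# so det(A + B) lies in [min z_sigma, max z_sigma]
inside = min(sigma_points) - 1e-8 <= det_sum <= max(sigma_points) + 1e-8
```

For non-hermitian normal pairs the $\sigma$-points are genuinely complex and membership in the convex hull is the content of the conjecture itself.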
It is also known that, for normal matrices $A,B\in M_{n}(\mathbb{C})$ (the set of all $n\times n$ matrices over $\mathbb{C}$), $(A,B)\in MOC$: if $\det(A+B)=0$ ([7]); if the points $z_{\sigma}$ all lie on a straight line ([10]); if $n=2,3$ ([3, 2]); if $A$ or $B$ has only two distinct eigenvalues, one of them simple ([3]). However, it seems that there is no new affirmative class of normal matrices for this conjecture after the year 2007. Let $X$ be a square $n\times n$ complex matrix and $s$ be a complex number. It is a direct calculation to see that $N(X,s):=\left(\begin{array}[]{cc}X&(X-sI)^{*}\\\ (X-sI)^{*}&X\\\ \end{array}\right)$ is a normal matrix of size $2n\times 2n$ and thus it is a normal dilation of $X$. We will see (in the proof of the main result) that the eigenvalues of $N(X,s)$ lie on both the real and the imaginary axes, and thus this matrix need not be essentially hermitian or a scalar multiple of a unitary matrix. In this short note, we show that: ###### Theorem 0.1. Let $X,Y\in M_{n}(\mathbb{C})$ and $s,t\in\mathbb{C}$. Then $(N(X,s),N(Y,t))\in MOC$. Note that if $A\in M_{n}(\mathbb{C})$ is normal then $UAU^{*}$ is also normal for any $U\in U_{n}(\mathbb{C})$. Then $VN(X,s)V^{*}$ is also a normal dilation of $X$ for any $V\in U_{2n}(\mathbb{C})$. Moreover, since the conjecture is invariant under simultaneous unitary similarity, we also deduce from Theorem 0.1 that $(VN(X,s)V^{*},VN(Y,t)V^{*})\in MOC$ for any $V\in U_{2n}(\mathbb{C})$. To prove the main result, we will use the following lemmas. ###### Lemma 0.2. Let $A,B\in M_{n}(\mathbb{C})$ and $C,D\in M_{m}(\mathbb{C})$ be normal. If $(A,B)\in MOC$ and $(C,D)\in MOC$, then $(A\oplus C,B\oplus D)\in MOC$. ###### Proof. Suppose that $\\{a_{i}\,|\,1\leq i\leq n\\}$, $\\{b_{i}\,|\,1\leq i\leq n\\}$, $\\{c_{i}\,|\,1\leq i\leq m\\}$ and $\\{d_{i}\,|\,1\leq i\leq m\\}$ are ordered sets of the eigenvalues of $A,B,C$ and $D$, respectively.
Denote $e_{i}:=a_{i}$, $f_{i}:=b_{i}$ for $i=1,\dots,n$ and $e_{n+j}=c_{j}$, $f_{n+j}=d_{j}$ for $j=1,\dots,m$. Then, $\\{e_{i}\,|\,1\leq i\leq n+m\\}$ and $\\{f_{i}\,|\,1\leq i\leq n+m\\}$ are ordered sets of the eigenvalues of $A\oplus C$ and $B\oplus D$, respectively. For each $\sigma\in S_{n},\pi\in S_{m}$ and $\theta\in S_{n+m}$, denote by $z_{\sigma},v_{\pi}$ and $w_{\theta}$ the products $\prod_{i=1}^{n}(a_{i}+b_{\sigma(i)})$, $\prod_{i=1}^{m}(c_{i}+d_{\pi(i)})$ and $\prod_{i=1}^{n+m}(e_{i}+f_{\theta(i)})$, respectively. Suppose that $(A,B)\in MOC$ and $(C,D)\in MOC$; then $\det(A+B)=\sum_{\sigma\in S_{n}}t_{\sigma}z_{\sigma}\hbox{ and }\det(C+D)=\sum_{\pi\in S_{m}}s_{\pi}v_{\pi},$ where $t_{\sigma},s_{\pi}\in[0,1]$ are such that $\sum_{\sigma\in S_{n}}t_{\sigma}=1$ and $\sum_{\pi\in S_{m}}s_{\pi}=1$. Note that $\displaystyle\det(A\oplus C+B\oplus D)$ $\displaystyle=$ $\displaystyle\det((A+B)\oplus(C+D))$ $\displaystyle=$ $\displaystyle\det(A+B)\cdot\det(C+D)$ $\displaystyle=$ $\displaystyle(\sum_{\sigma\in S_{n}}t_{\sigma}z_{\sigma})(\sum_{\pi\in S_{m}}s_{\pi}v_{\pi})$ $\displaystyle=$ $\displaystyle\sum_{\sigma\in S_{n},\pi\in S_{m}}(t_{\sigma}s_{\pi})(z_{\sigma}v_{\pi}).$ For each $\sigma\in S_{n}$ and $\pi\in S_{m}$, define a permutation $\theta(\sigma,\pi)\in S_{n+m}$ by $\theta(\sigma,\pi):=\left(\begin{array}[]{cccccc}1&\cdots&n&n+1&\cdots&n+m\\\ \sigma(1)&\cdots&\sigma(n)&n+\pi(1)&\cdots&n+\pi(m)\\\ \end{array}\right)$ Then $w_{\theta(\sigma,\pi)}=z_{\sigma}v_{\pi}$. Since, for each $\sigma\in S_{n}$ and $\pi\in S_{m}$, $t_{\sigma}s_{\pi}\in[0,1]$ and $\sum_{\sigma\in S_{n},\pi\in S_{m}}(t_{\sigma}s_{\pi})=(\sum_{\sigma\in S_{n}}t_{\sigma})(\sum_{\pi\in S_{m}}s_{\pi})=(1)(1)=1,$ we conclude that $\det(A\oplus C+B\oplus D)\in co\\{w_{\theta(\sigma,\pi)}\,|\,\sigma\in S_{n},\pi\in S_{m}\\}\subseteq co\\{w_{\theta}\,|\,\theta\in S_{n+m}\\}.$ Hence $(A\oplus C,B\oplus D)\in MOC$. ∎ To keep this note self-contained, we record a result of S.W. Drury.
###### Theorem 0.3. [4] Let $A$ and $B$ be hermitian matrices with the given eigenvalues $(a_{1},\dots,a_{n})$ and $(b_{1},\dots,b_{n})$ respectively. Let $(t_{1},\dots,t_{n})$ be the eigenvalues of $A+B$. Then $\prod_{j=1}^{n}(\lambda+t_{j})\in co\\{\prod_{j=1}^{n}(\lambda+a_{j}+b_{\sigma(j)})|\sigma\in S_{n}\\},$ where $co$ denotes the convex hull in the space of polynomials and $\lambda$ is an indeterminate. As a corollary of the above theorem, we have that: ###### Lemma 0.4. Let $X,Y\in M_{n}(\mathbb{C})$ and $\alpha,\beta\in\mathbb{C}$. Then $(X-X^{*}+\alpha I_{n},Y-Y^{*}+\beta I_{n})\in MOC$ and $(X+X^{*}+\alpha I_{n},Y+Y^{*}+\beta I_{n})\in MOC$. ###### Proof. Since $X+X^{*}$ and $Y+Y^{*}$ are hermitian, by Theorem 0.3, we deduce directly that $(X+X^{*}+\alpha I_{n},Y+Y^{*}+\beta I_{n})\in MOC$. Since $X-X^{*}$ and $Y-Y^{*}$ are skew-hermitian, $i(X-X^{*})$ and $i(Y-Y^{*})$ are hermitian. Again, by Theorem 0.3, $(X-X^{*}+\alpha I_{n},Y-Y^{*}+\beta I_{n})\in MOC$. ∎ ###### Proof. (Theorem 0.1) Let $U$ be the block matrix in $M_{2n}(\mathbb{C})$ defined by $U:=\frac{1}{\sqrt{2}}\left(\begin{array}[]{cc}I_{n}&I_{n}\\\ -I_{n}&I_{n}\\\ \end{array}\right).$ It is a direct computation to see that $U$ is a unitary matrix and $U^{*}\left(\begin{array}[]{cc}M&N\\\ N&M\\\ \end{array}\right)U=(M-N)\oplus(M+N),$ for any $M,N\in M_{n}(\mathbb{C})$. Let $A:=X-X^{*}+(\overline{s})I_{n}$, $B:=Y-Y^{*}+\overline{t}I_{n}$, $C:=X+X^{*}-(\overline{s})I_{n}$, and $D:=Y+Y^{*}-\overline{t}I_{n}$. By Lemma 0.4, the pair of normal matrices $(A,B)$ and $(C,D)$ satisfy the conjecture. Hence, by Lemma 0.2, $(A\oplus C,B\oplus D)\in MOC$. Therefore, $(N(X,s),N(Y,t))=(U(A\oplus C)U^{*},U(B\oplus D)U^{*})\in MOC,$ which completes the proof. ∎ ## Acknowledgments The author would like to thank Prof Tin Yau Tam for bringing this topic to the author. He would like to thank the referee(s) for valuable comments to improve the paper. 
He also would like to thank Naresuan University for the financial support on project number R2563C006. ## References * [1] N. Bebiano, A. Kovacec, and J. da Providencia. The validity of the Marcus-de Oliveira conjecture for essentially Hermitian matrices. _Second Conference of the International Linear Algebra Society (ILAS) (Lisbon, 1992). Linear Algebra Appl._, 197/198:411-427, 1994. * [2] N. Bebiano, J. K. Merikoski, and J. da Providencia. On a conjecture of G. N. de Oliveira on determinants. _Linear and Multilinear Algebra_, 20:167-170, 1987. * [3] G. N. de Oliveira. Normal matrices (research problem). _Linear and Multilinear Algebra_, 12:153-154, 1982. * [4] S. W. Drury. On symmetric functions of the eigenvalues of the sum of two Hermitian matrices. _Linear Algebra Appl._, 176:211-222, 1992. * [5] S. W. Drury. Essentially Hermitian matrices revisited. _Electron. J. Linear Algebra_, 15:285-296, 2006. * [6] S. W. Drury. OMC for scalar multiples of unitaries. _Linear Algebra Appl._, 422(1):318-325, 2007. * [7] S. W. Drury and B. Cload. On the determinantal conjecture of Marcus and de Oliveira. _Linear Algebra Appl._, 177:105-109, 1992. * [8] M. Fiedler. Bounds for the determinant of the sum of hermitian matrices. _Proc. Amer. Math. Soc._, 30:27-31, 1971. * [9] M. Marcus. Derivations, Plücker relations, and the numerical range. _Indiana Univ. Math. J._, 22:1137-1149, 1972/73. * [10] J. K. Merikoski and A. Virtanen. Some notes on de Oliveira's determinantal conjecture. _Linear Algebra Appl._, 121:345-352, 1989. * [11] X. Zhan. Open problems in matrix theory. _ICCM_, II, 1-4, 2007. Kijti Rodtes, Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
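The block-diagonalisation identity used in the proof of Theorem 0.1, and the normality of $N(X,s)$, are easy to confirm numerically (random $X$ and $s$, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
s = 1.5 - 0.7j
I = np.eye(n)
Z = np.zeros((n, n))

Y = (X - s * I).conj().T                 # the off-diagonal block of N(X, s)
N = np.block([[X, Y], [Y, X]])           # N(X, s)

# U = (1/sqrt(2)) [[I, I], [-I, I]], as in the proof of Theorem 0.1
U = np.block([[I, I], [-I, I]]) / np.sqrt(2)

block_diag = U.conj().T @ N @ U          # should equal (X - Y) ⊕ (X + Y)
expected = np.block([[X - Y, Z], [Z, X + Y]])
```

Here $X-Y=X-X^{*}+\bar{s}I$ is skew-hermitian plus a scalar and $X+Y=X+X^{*}-\bar{s}I$ is hermitian minus a scalar, so both diagonal blocks are normal, and hence so is $N(X,s)$.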
August 28, 2024 # Combining Evolutionary Strategies and Novelty Detection to go Beyond the Alignment Limit of the $Z_{3}$ 3HDM Jorge Crispim Romão <EMAIL_ADDRESS>CFTP, Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais 1, 1049 Lisboa, Portugal Miguel Crispim Romão <EMAIL_ADDRESS>Institute for Particle Physics Phenomenology, Durham University, Durham DH1 3LE, UK LIP – Laboratório de Instrumentação e Física Experimental de Partículas, Escola de Ciências, Campus de Gualtar, Universidade do Minho, 4701-057 Braga, Portugal ###### Abstract We present a novel Artificial Intelligence approach for Beyond the Standard Model parameter space scans by augmenting an Evolutionary Strategy with Novelty Detection. Our approach leverages the power of Evolutionary Strategies, previously shown to quickly converge to the valid regions of the parameter space, with a _novelty reward_ to continue exploration once converged. Taking the $Z_{3}$ 3HDM as our Physics case, we show how our methodology allows us to quickly explore highly constrained multidimensional parameter spaces, providing up to eight orders of magnitude higher sampling efficiency when compared with pure random sampling and up to four orders of magnitude when compared to random sampling around the alignment limit. In turn, this enables us to explore regions of the parameter space that have been hitherto overlooked, leading to the possibility of novel phenomenological realisations of the $Z_{3}$ 3HDM that had not been considered before. Preprints: IPPP/24/04, CFTP/24-002 ## I Introduction The Standard Model (SM) of particle physics has demonstrated remarkable success in accurately describing the electroweak and strong interactions. However, experimental challenges such as neutrino mass, dark matter, and the baryonic asymmetry of the Universe prompt the exploration of physics Beyond the SM (BSM).
In many cases, BSM theories involve expanding the minimal scalar sector of the SM, characterised by a single Higgs doublet. Multi-Higgs doublet models are particularly prominent among these extensions, primarily because they maintain the tree-level value of the electroweak $\rho$-parameter, in good agreement with experimental observations. The extensively studied two Higgs doublet model (2HDM) [1] has provided valuable insights. More recently, there has been a surge of interest in the investigation of three Higgs doublet models (3HDMs) [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], where the scalar sector encompasses three Higgs doublets. These models show great promise as they possess the essential ingredients to address the challenges posed by dark matter and the baryonic asymmetry of the Universe. With no unambiguous signs of New Physics in general, and of extra exotic scalars in particular, BSM phenomenology is faced with an ever-increasing and ever more restrictive list of (direct and indirect) experimental constraints, in addition to any theoretical and self-consistency constraints. From a model building perspective, this means that the allowed regions of BSM parameter spaces are effectively _shrinking_, making finding such regions ever more difficult or outright impractical, even if those regions host points which are very good fits to the data. To mitigate this, BSM phenomenologists often simplify the problem in two possible ways. The first approach is to simplify the constraints and leave some questions unanswered, to be eventually addressed by an ultraviolet completion. The second approach relies on simplifying the sampling space by changing the priors from which the parameters are drawn, usually by restricting the parameter space to a subregion where known valid points had been found before or for which there is a limiting case, such as the alignment limit in multi-Higgs models.
While in the former case one ends up with an incomplete model, in the latter case one is left with the worrisome prospect of missing out on possible phenomenological signatures. In recent years, Artificial Intelligence (AI) in general, and Machine Learning (ML) in particular, have received considerable attention from the High Energy Physics (HEP) community, with a wide range of applications [17]. Of particular interest to this work are the ongoing attempts to explore the parameter space of highly constrained and multidimensional BSM scenarios, where sampling points that respect theoretical and experimental bounds poses a great challenge. The first attempts to mitigate this problem using AI/ML are based on using supervised classifiers and regressors to produce estimates of BSM predictions of physical observables and physical quantities [18, 19, 20, 21, 22], bypassing the computational overhead often associated with these. Other approaches leverage the active learning framework to track the ground truth of the observables to draw the boundary of the allowed regions of the parameter space [23, 24]. However, we point out that the usage of AI/ML for BSM parameter space studies is not restricted to exploration, as generative models have been studied to provide a possible way of replicating points from valid regions [25] or to gain new insights through latent space visualisation [26]. A different approach was presented in [27], where different AI-based exploration algorithms were used to explore the parameter space of the cMSSM (four parameters) and the pMSSM (16 parameters) by reframing the problem as a black-box optimisation problem.
In such an approach, exploration starts with just a few random points, from which iterative algorithms progressively suggest better points through trial and error. Although the Physics cases in [27] were not especially realistic, as only Higgs mass and Dark Matter relic density constraints were used, the methodology provided orders of magnitude of sampling efficiency improvement over random sampling, while still capturing a global picture of the parameter space. In this paper, we extend and build on top of the methodology of [27]. In that work, an Evolutionary Strategy algorithm was observed to _eagerly_ converge to the valid region of the parameter space and to stop exploring once converged. To mitigate this, in [27] the algorithm was endowed with a restart strategy to draw a more global picture of the valid region of the parameter space. In this work, we will take a different approach by incorporating a _novelty reward_ into the black-box optimisation problem to drive the exploration algorithm away from the regions already explored. As we will see, our approach retains the benefits of drastically improving sampling efficiency while gaining a novel approach to chart the valid region of the parameter space. Although the proposed approach is general to any BSM parameter space (in fact, the methodology presented in this paper is applicable to _any_ sampling problem in highly constrained multidimensional spaces, not only to BSM phenomenology), we will use our novel methodology to perform a realistic search on the highly constrained $Z_{3}$ 3HDM model, where all known and possible constraints will be considered. This poses a terrific challenge for sampling valid points from the parameter space, which has led previous studies to consider sampling around the alignment limit.
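A minimal sketch of the idea, assuming a toy constraint function and a $k$-nearest-neighbour novelty term (the actual implementation, constraint pipeline, algorithm, and hyper-parameters in this work differ):

```python
import numpy as np

rng = np.random.default_rng(3)

def constraint_score(x):
    # toy stand-in for the combined theoretical/experimental constraints:
    # 0 inside the "valid region" (here, the unit ball), positive outside
    return max(0.0, float(np.linalg.norm(x)) - 1.0)

def novelty(x, archive, k=5):
    # mean distance to the k nearest previously visited points
    if not archive:
        return 0.0
    d = np.sort([float(np.linalg.norm(x - a)) for a in archive])
    return float(np.mean(d[:k]))

archive = []
parent = rng.uniform(-2, 2, size=4)
for _ in range(200):                      # a bare-bones (1+1)-ES loop
    child = parent + 0.2 * rng.normal(size=4)
    # minimise the constraint violation, but reward moving away from the archive
    if (constraint_score(child) - 0.1 * novelty(child, archive)
            <= constraint_score(parent) - 0.1 * novelty(parent, archive)):
        parent = child
    archive.append(parent.copy())
```

Without the novelty term the loop simply stalls wherever it first reaches the valid region; with it, points that sit far from the archive are preferred among equally valid candidates, so exploration continues after convergence.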
Therefore, the $Z_{3}$ 3HDM provides an ideal scenario not only to develop our methodology, but also to do new Physics by exploring points beyond random sampling strategies around the alignment limit, and to discover novel phenomenological realisations of the $Z_{3}$ 3HDM that have been obscured by such strategies in the past. This paper is organised as follows. In section II we present the $Z_{3}$ 3HDM model, which is the Physics subject of our study. In section III we present its constraints, both theoretical and experimental. In section IV we outline the random sampling strategy near the alignment limit, against which we will compare our methodology. In section V we introduce the AI scan strategy, redefining the scan as black-box optimisation, and the _novelty reward_ based on density estimation. In section VI we present and analyse the results obtained with our methodology, showcasing the versatility of our approach. Finally, in section VII we conclude our discussion and point out novel directions of work. ## II The $Z_{3}$ 3HDM Model ### II.1 Scalar Sector For the potential of the $Z_{3}$ 3HDM model, denoted $V_{Z_{3}}$, we use the conventions of [6].
The potential, invariant under the $Z_{3}$ transformation $\phi_{i}\to\phi_{i}^{\prime}=(S_{Z_{3}})_{ij}\phi_{j}$, can be expressed as: $V_{Z_{3}}=V_{2}+V_{4},$ (1) with the quartic part represented by: $\displaystyle V_{4}$ $\displaystyle=\lambda_{1}(\phi_{1}^{\dagger}\phi_{1})^{2}+\lambda_{2}(\phi_{2}^{\dagger}\phi_{2})^{2}+\lambda_{3}(\phi_{3}^{\dagger}\phi_{3})^{2}+\lambda_{4}(\phi_{1}^{\dagger}\phi_{1})(\phi_{2}^{\dagger}\phi_{2})+\lambda_{5}(\phi_{1}^{\dagger}\phi_{1})(\phi_{3}^{\dagger}\phi_{3})$ $\displaystyle\quad+\lambda_{6}(\phi_{2}^{\dagger}\phi_{2})(\phi_{3}^{\dagger}\phi_{3})+\lambda_{7}(\phi_{1}^{\dagger}\phi_{2})(\phi_{2}^{\dagger}\phi_{1})+\lambda_{8}(\phi_{1}^{\dagger}\phi_{3})(\phi_{3}^{\dagger}\phi_{1})+\lambda_{9}(\phi_{2}^{\dagger}\phi_{3})(\phi_{3}^{\dagger}\phi_{2})$ $\displaystyle\quad+\left[\lambda_{10}(\phi_{1}^{\dagger}\phi_{2})(\phi_{1}^{\dagger}\phi_{3})+\lambda_{11}(\phi_{1}^{\dagger}\phi_{2})(\phi_{3}^{\dagger}\phi_{2})+\lambda_{12}(\phi_{1}^{\dagger}\phi_{3})(\phi_{2}^{\dagger}\phi_{3})+\text{h.c.}\right].$ (2) The quadratic part, denoted as $V_{2}$, is given by: $V_{2}=m_{11}^{2}\phi_{1}^{\dagger}\phi_{1}+m_{22}^{2}\phi_{2}^{\dagger}\phi_{2}+m_{33}^{2}\phi_{3}^{\dagger}\phi_{3}+\left[m_{12}^{2}(\phi_{1}^{\dagger}\phi_{2})+m_{13}^{2}(\phi_{1}^{\dagger}\phi_{3})+m_{23}^{2}(\phi_{2}^{\dagger}\phi_{3})+\text{h.c.}\right],$ (3) which includes the terms $m_{12}^{2}$, $m_{13}^{2}$, and $m_{23}^{2}$ responsible for breaking the symmetry softly. Following spontaneous symmetry breaking (SSB), the three doublets can be parameterised in terms of their component fields as: $\phi_{i}=\begin{pmatrix}w_{i}^{\dagger}\\\ (v_{i}+x_{i}+i\,z_{i})/\sqrt{2}\end{pmatrix}\,\,,\qquad(i=1,2,3)$ (4) where $v_{i}/\sqrt{2}$ corresponds to the vacuum expectation value (vev) of the neutral component of $\phi_{i}$. It is assumed that the scalar sector of the model explicitly and spontaneously conserves CP.
Under this assumption, all parameters in the scalar potential are real, and the vevs $v_{1}$, $v_{2}$, $v_{3}$ are also real. The scalar potential in eq. 1 contains eighteen parameters, and the vevs are parameterised as follows: $v_{1}=v\cos\beta_{1}\cos\beta_{2}\,,\qquad v_{2}=v\sin\beta_{1}\cos\beta_{2}\,,\qquad v_{3}=v\sin\beta_{2},$ (5) leading to the Higgs basis [28, 29, 30] obtained by the following rotation, $\begin{pmatrix}H_{0}\\\ R_{1}\\\ R_{2}\end{pmatrix}=\mathcal{O}_{\beta}\begin{pmatrix}x_{1}\\\ x_{2}\\\ x_{3}\end{pmatrix}=\begin{pmatrix}c_{\beta_{2}}c_{\beta_{1}}&c_{\beta_{2}}s_{\beta_{1}}&s_{\beta_{2}}\\\ -s_{\beta_{1}}&c_{\beta_{1}}&0\\\ -c_{\beta_{1}}s_{\beta_{2}}&-s_{\beta_{1}}s_{\beta_{2}}&c_{\beta_{2}}\end{pmatrix}\begin{pmatrix}x_{1}\\\ x_{2}\\\ x_{3}\end{pmatrix},$ (6) where we have used the shorthand notation $c_{x}\equiv\cos x$, $s_{x}\equiv\sin x$. Orthogonal matrices, denoted as R, P and Q, diagonalise the squared mass matrices in the CP-even scalar, CP-odd scalar, and charged scalar sectors. These matrices are crucial for transforming the weak basis into the physical mass basis for states with well-defined masses. Although this has already been discussed before [6, 10, 31], for completeness and to fix our notation, we give here the rotations that relate the mass eigenstates to the weak basis states. For the neutral scalar sector, the mass terms can be extracted through the following rotation, $\begin{pmatrix}h_{1}\\\ h_{2}\\\ h_{3}\end{pmatrix}=\mathcal{O}_{\alpha}\begin{pmatrix}x_{1}\\\ x_{2}\\\ x_{3}\end{pmatrix},$ (7) where we take $h_{1}\equiv h_{125}$ to be the 125 GeV Higgs boson found at the LHC.
The form chosen for $\mathcal{O}_{\alpha}\equiv\textbf{R}$ is $\textbf{R}\equiv\mathcal{O}_{\alpha}=\mathcal{R}_{3}.\mathcal{R}_{2}.\mathcal{R}_{1},$ (8) where the matrices $\mathcal{R}_{i}$ are $\mathcal{R}_{1}=\begin{pmatrix}c_{\alpha_{1}}&s_{\alpha_{1}}&0\\\ -s_{\alpha_{1}}&c_{\alpha_{1}}&0\\\ 0&0&1\end{pmatrix}\,,\quad\mathcal{R}_{2}=\begin{pmatrix}c_{\alpha_{2}}&0&s_{\alpha_{2}}\\\ 0&1&0\\\ -s_{\alpha_{2}}&0&c_{\alpha_{2}}\end{pmatrix}\,,\quad\mathcal{R}_{3}=\begin{pmatrix}1&0&0\\\ 0&c_{\alpha_{3}}&s_{\alpha_{3}}\\\ 0&-s_{\alpha_{3}}&c_{\alpha_{3}}\end{pmatrix}\,.\quad$ (9) For the charged and pseudoscalar sectors, the physical scalars can be obtained via the following $3\times 3$ rotations, $\begin{pmatrix}G^{0}\\\ A_{1}\\\ A_{2}\end{pmatrix}=\mathcal{O}_{\gamma_{1}}\mathcal{O}_{\beta}\begin{pmatrix}z_{1}\\\ z_{2}\\\ z_{3}\end{pmatrix},\qquad\begin{pmatrix}G^{+}\\\ H_{1}^{+}\\\ H_{2}^{+}\end{pmatrix}=\mathcal{O}_{\gamma_{2}}\mathcal{O}_{\beta}\begin{pmatrix}w_{1}^{\dagger}\\\ w_{2}^{\dagger}\\\ w_{3}^{\dagger}\end{pmatrix},$ (10) where, the rotation matrices are given by $\mathcal{O}_{\gamma_{1}}=\begin{pmatrix}1&0&0\\\ 0&c_{\gamma_{1}}&-s_{\gamma_{1}}\\\ 0&s_{\gamma_{1}}&c_{\gamma_{1}}\end{pmatrix},\qquad\mathcal{O}_{\gamma_{2}}=\begin{pmatrix}1&0&0\\\ 0&c_{\gamma_{2}}&-s_{\gamma_{2}}\\\ 0&s_{\gamma_{2}}&c_{\gamma_{2}}\end{pmatrix}.$ (11) For later use, we define the matrices P and Q as the combinations that connect the weak basis to the physical mass basis for the CP odd and charged Higgs scalars, respectively, $\textbf{P}\equiv\mathcal{O}_{\gamma_{1}}\mathcal{O}_{\beta},\qquad\textbf{Q}\equiv\mathcal{O}_{\gamma_{2}}\mathcal{O}_{\beta}.$ (12) As the states in the physical basis have well-defined masses, we can obtain relations between the set $\displaystyle\left\\{v,\beta_{1},\beta_{2},m_{h1},m_{h2},m_{h3},m_{A1},m_{A2},m_{H_{1}^{\pm}},m_{H_{2}^{\pm}},\alpha_{1},\alpha_{2},\alpha_{3},\gamma_{1},\gamma_{2}\right\\},$ (13) and the potential parameters in eq. 
1, as shown in Ref. [6, 10, 31]. ### II.2 Higgs-Fermion Yukawa Interactions In the Type-I models considered here (for a detailed discussion of all the types of Higgs-Fermion couplings that lead to Natural Flavour Conservation (NFC), see Ref. [12]), fermion fields are unaffected by the $Z_{3}$ transformation, allowing them to couple only to $\phi_{3}$. The Yukawa couplings to fermions are expressed compactly as: $\mathscr{L}_{\rm Y}\ni-\frac{m_{f}}{v}\bar{f}(a^{f}_{j}+i\,b^{f}_{j}\gamma_{5})fh_{j},$ (14) where $h_{j}\equiv(h_{1},h_{2},h_{3},A_{1},A_{2})_{j}$ represents the physical Higgs fields. For completeness, we list the couplings $a_{j}^{f}$ and $b_{j}^{f}$ here [12]. We have, $\displaystyle a_{j}^{f}\to$ $\displaystyle\frac{\textbf{R}_{j,3}}{\hat{v_{3}}},\qquad\qquad j=1,2,3\qquad\text{for all leptons and down quarks},$ $\displaystyle b_{j}^{f}\to$ $\displaystyle\frac{\textbf{P}_{j-2,3}}{\hat{v_{3}}},\qquad\quad j=4,5\quad\qquad\text{for all leptons and down quarks},$ $\displaystyle a_{j}^{f}\to$ $\displaystyle\frac{\textbf{R}_{j,3}}{\hat{v_{3}}},\qquad\qquad j=1,2,3\qquad\text{for all up quarks},$ $\displaystyle b_{j}^{f}\to$ $\displaystyle-\frac{\textbf{P}_{j-2,3}}{\hat{v_{3}}},\quad\quad j=4,5\quad\qquad\text{for all up quarks},$ (15) For the charged Higgs bosons, $H_{1}^{\pm}$ and $H_{2}^{\pm}$, the Yukawa couplings to fermions are expressed as: $\displaystyle\mathscr{L}_{\rm Y}$ $\displaystyle\ni$ $\displaystyle\frac{\sqrt{2}}{v}\bar{\psi}_{d_{i}}\left[m_{\psi_{d_{i}}}V_{ji}^{\ast}\,\eta_{k}^{L}P_{L}+m_{\psi_{u_{j}}}V_{ji}^{\ast}\,\eta_{k}^{R}P_{R}\right]\psi_{u_{j}}H_{k}^{-}$ (16) $\displaystyle+\frac{\sqrt{2}}{v}\bar{\psi}_{u_{i}}\left[m_{\psi_{d_{j}}}V_{ij}\,\eta_{k}^{L}P_{R}+m_{\psi_{u_{i}}}V_{ij}\,\eta_{k}^{R}P_{L}\right]\psi_{d_{j}}H_{k}^{+},$ where $(\psi_{u_{i}},\psi_{d_{i}})$ is $(u_{i},d_{i})$ for quarks or $(\nu_{i},\ell_{i})$ for leptons.
The coefficients $\eta_{k}^{\ell\,L}$, $\eta_{k}^{\ell\,R}$, $\eta_{k}^{q\,L}$, and $\eta_{k}^{q\,R}$ are $\eta_{k}^{\ell\,L}=-\frac{\textbf{Q}_{k+1,3}}{\hat{v_{3}}}\,,\quad\eta_{k}^{\ell\,R}=0\,,\quad\eta_{k}^{q\,L}=-\frac{\textbf{Q}_{k+1,3}}{\hat{v_{3}}}\,,\quad\eta_{k}^{q\,R}=\frac{\textbf{Q}_{k+1,3}}{\hat{v_{3}}}\,,\quad k=1,2\,.$ (17) ## III Constraints In this section, we outline the various constraints necessary to impose theoretical and phenomenological consistency on the model parameters. The specifics of these constraints in the context of the 3HDM are well established, as documented in previous works [10, 12, 16]. For brevity, we provide a brief list here, deferring further elaboration to appendix A. From a phenomenological standpoint, our primary objective is to ensure the existence of an SM-like Higgs, identifiable as the scalar boson detected at the LHC. As demonstrated in [6], achieving this involves staying close to the “alignment limit”, characterised in the 3HDM by the conditions: $\alpha_{1}=\beta_{1}\,,\qquad\alpha_{2}=\beta_{2}\,.$ (18) In this limit, the lightest CP-even scalar, denoted as $h$, exhibits exact SM-like couplings at the tree level, automatically satisfying constraints from Higgs signal strengths. However, our interest lies in exploring permissible deviations from this precise alignment. To this end, we use the signal strength formalism, comparing the results with the experimental limits [32]. Subsequently, we must address constraints stemming from electroweak precision parameters, specifically $S$, $T$, and $U$. We use the analytic expressions of Ref. [33], contrasting them with the fit values provided in Ref. [34]. Notably, similar to the 2HDM scenario, we can bypass the $T$-parameter constraints by imposing [14]: $m_{C1}=m_{A1}\,,\qquad m_{C2}=m_{A2}\,,\qquad\gamma_{1}=\gamma_{2}\,,$ (19) as we will explain in section IV. Additionally, we incorporate constraints arising from flavour data, as detailed in appendix A.
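As a numerical sanity check of the alignment conditions of eq. 18, using the rotation conventions of eqs. 8–9 (the angle values below are illustrative): setting $\alpha_{1}=\beta_{1}$ and $\alpha_{2}=\beta_{2}$ gives a tree-level $h_{125}$ coupling modifier $a_{1}^{f}=\textbf{R}_{1,3}/\hat{v}_{3}=1$ for any $\alpha_{3}$.

```python
import numpy as np

def R_matrix(a1, a2, a3):
    # R = R3 . R2 . R1, as in eqs. (8)-(9)
    c, s = np.cos, np.sin
    R1 = np.array([[c(a1), s(a1), 0], [-s(a1), c(a1), 0], [0, 0, 1]])
    R2 = np.array([[c(a2), 0, s(a2)], [0, 1, 0], [-s(a2), 0, c(a2)]])
    R3 = np.array([[1, 0, 0], [0, c(a3), s(a3)], [0, -s(a3), c(a3)]])
    return R3 @ R2 @ R1

# illustrative angles; alignment sets alpha_1 = beta_1, alpha_2 = beta_2
beta1, beta2, alpha3 = 0.8, 0.5, 0.3
R = R_matrix(beta1, beta2, alpha3)
v3_hat = np.sin(beta2)                   # v_3 / v = sin(beta_2), from eq. (5)
a_h125 = R[0, 2] / v3_hat                # a_1^f = R_{1,3} / v3_hat, from eq. (15)
```

Since $\textbf{R}_{1,3}=s_{\alpha_{2}}$, the modifier reduces to $s_{\alpha_{2}}/s_{\beta_{2}}$, which is exactly 1 at alignment; the scan in section IV explores deviations from this limit.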
Direct searches at the LHC for the heavy non-standard scalars are also considered, employing HiggsTools-1.1.3 following Ref. [35], which provides a comprehensive list of relevant experimental searches. For theoretical constraints, we insist on the perturbativity of Yukawa couplings, perturbative unitarity, and BFB (bounded from below) conditions. The implementation details for these constraints can be found in appendix A. ## IV Random Scan Strategy We developed a dedicated code specifically for the $Z_{3}$ constrained 3HDM (the Feynman rules for this model were derived with the software FeynMaster [36, 37] and automatically imported into the code), building upon our earlier codes [38, 39, 10]. A thorough exploration of the parameter space was conducted using eq. 13. Our fixed inputs remained $v=246\ \text{GeV}$ and $m_{h_{1}}=125\ \text{GeV}$. Random values were assigned within the following ranges: $\displaystyle\alpha_{1}\,,\alpha_{2}\,,\alpha_{3}\,,\gamma_{1}\,,\gamma_{2}\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right];\qquad\tan{\beta_{1}}\,,\tan{\beta_{2}}\in\left[0.3,10\right];$ $\displaystyle m_{H_{1}}\equiv m_{h_{2}}\,,m_{H_{2}}\equiv m_{h_{3}}\in\left[125,1000\right]\ \text{GeV};$ (20) $\displaystyle m_{A_{1}}\,,m_{A_{2}}\,,m_{H_{1}^{\pm}}\,,m_{H_{2}^{\pm}}\in\left[100,1000\right]\ \text{GeV};$ (21) $\displaystyle m^{2}_{12},m^{2}_{13},m^{2}_{23}\in\left[\pm 10^{-1},\pm 10^{7}\right]\ \text{GeV}^{2}\,,$ (22) where the last expression applies only to the soft masses that are not obtained as derived quantities (see Ref.[10] for the complete expressions). However, this extensive scan exhibited low efficiency (as detailed in table 3 below). Recognising the significance of alignment in the 3HDM [6, 10, 9], where alignment is defined by the lightest Higgs scalar having Standard Model (SM) couplings, we imposed constraints to enhance efficiency. Initially, aligning $\alpha_{1}$ with $\beta_{1}$ and $\alpha_{2}$ with $\beta_{2}$ (eq. 18) did not yield enough good points. Ref. 
[9] introduced an additional condition: $\gamma_{1}=\gamma_{2}=-\alpha_{3},\quad m_{H_{1}}=m_{A_{1}}=m_{H_{1}^{\pm}},\quad m_{H_{2}}=m_{A_{2}}=m_{H_{2}^{\pm}}\,.$ (23) This, alongside eq. 18, led to a symmetric form of the quartic part of the potential [40, 9]. In fact, these conditions simplified the potential to $V_{\rm Sym,Lim}=\lambda_{\rm SM}\left[(\phi_{1}^{\dagger}\phi_{1})+(\phi_{2}^{\dagger}\phi_{2})+(\phi_{3}^{\dagger}\phi_{3})\right]^{2}\,,$ (24) where $\lambda_{\rm SM}=\frac{m_{h}^{2}}{2v^{2}},$ (25) is the SM quartic Higgs coupling. All $\lambda_{i}$ vanish or are expressed in terms of $\lambda_{\rm SM}$. The validity of eq. 18 and eq. 23 also implies that the soft masses can be solved for explicitly, that is, they are no longer independent parameters (see Ref.[10] for the complete expressions). It should now be clear why all such points are good points: due to alignment, the LHC results on the $h_{125}$ are easily obeyed, whereas perturbative unitarity, the STU parameters, and the other constraints are automatically satisfied. To facilitate efficiency and consider deviations from strict alignment, two conditions were introduced [10]. The first, denoted “Al-1”, allowed a percentage deviation: $\frac{\alpha_{1}}{\beta_{1}},\frac{\alpha_{2}}{\beta_{2}}\in[0.5,1.5]\,.\ \ \ \textbf{(Al-1)}$ (26) The condition “Al-2” was more stringent, combining Al-1 with six additional conditions: $\frac{\alpha_{1}}{\beta_{1}},\frac{\alpha_{2}}{\beta_{2}},\frac{\gamma_{2}}{\gamma_{1}},\frac{-\alpha_{3}}{\gamma_{1}},\frac{m_{A_{1}}}{m_{H_{1}}},\frac{m_{H_{1}^{\pm}}}{m_{H_{1}}},\frac{m_{A_{2}}}{m_{H_{2}}},\frac{m_{H_{2}^{\pm}}}{m_{H_{2}}}\in[0.5,1.5]\,.\ \ \ \textbf{(Al-2)}$ (27) These conditions, especially Al-2, improved the generation of meaningful points beyond the SM, even with a departure from strict alignment. 
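The sampling ranges of eqs. 20–22 and the Al-1 check of eq. 26 can be sketched in a few lines of Python. This is a minimal illustration, not the dedicated code used in the paper; in particular, reading the soft mass-squared range $[\pm 10^{-1},\pm 10^{7}]\ \text{GeV}^{2}$ as a log-uniform draw on the magnitude with a random sign, and sorting the two heavy CP-even masses, are our assumptions.

```python
import math
import random

def sample_point(rng):
    """Draw one candidate parameter point in the ranges of eqs. 20-22.

    The signed soft mass-squared range is sampled here log-uniformly in
    magnitude with a random sign -- an assumption, since the paper does
    not spell out the measure.
    """
    p = {}
    for name in ("alpha1", "alpha2", "alpha3", "gamma1", "gamma2"):
        p[name] = rng.uniform(-math.pi / 2, math.pi / 2)
    p["beta1"] = math.atan(rng.uniform(0.3, 10.0))  # tan(beta) in [0.3, 10]
    p["beta2"] = math.atan(rng.uniform(0.3, 10.0))
    # Heavy CP-even masses in [125, 1000] GeV; mass ordering assumed here.
    p["m_h2"], p["m_h3"] = sorted(rng.uniform(125.0, 1000.0) for _ in range(2))
    for name in ("m_A1", "m_A2", "m_Hp1", "m_Hp2"):
        p[name] = rng.uniform(100.0, 1000.0)        # GeV
    for name in ("m12sq", "m13sq", "m23sq"):
        mag = 10.0 ** rng.uniform(-1, 7)            # log-uniform magnitude
        p[name] = rng.choice((-1.0, 1.0)) * mag     # GeV^2, random sign
    return p

def passes_al1(p):
    """Al-1 condition of eq. 26: alpha_i / beta_i within [0.5, 1.5]."""
    return all(0.5 <= p[a] / p[b] <= 1.5
               for a, b in (("alpha1", "beta1"), ("alpha2", "beta2")))
```

Since the mixing angles are drawn over the full $[-\pi/2,\pi/2]$ range while $\beta_{1,2}>0$, most random points fail `passes_al1`, which is the low-efficiency problem that motivates the constrained scans above.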
## V Artificial Intelligence Black-Box Optimiser Scan Strategy To quickly explore the parameter space, we will employ the AI black-box optimisation approach to parameter space sampling first presented in [27]. In this approach, a constraint function, $C$, is introduced, $C(\mathcal{O})=\max(0,-\mathcal{O}+\mathcal{O}_{LB},\mathcal{O}-\mathcal{O}_{UB})\,,$ (28) where $\mathcal{O}$ is the value of an observable (or a constrained quantity), $\mathcal{O}_{LB}$ is its lower bound (i.e. its lowest allowed value), and $\mathcal{O}_{UB}$ its upper bound (i.e. its highest allowed value). Here, $\mathcal{O}$ is obtained by some computational routine that maps the parameter space to physical quantities, where the details of such a routine are irrelevant, so it can be treated as a black box. If the value of $\mathcal{O}$ is within its lower and upper bounds, $C(\mathcal{O})$ returns $0$; otherwise it returns a positive number that measures _how far_ the value of $\mathcal{O}$ is from its allowed interval. $\mathcal{O}$ depends on the parameters of the model, $\theta=(\alpha_{1},\beta_{1},\dots)\in\mathcal{P}$ (where $\mathcal{P}$ is the parameter space defined by eqs. 20–22), that is, $\mathcal{O}=\mathcal{O}(\theta)$, which implies that $C(\mathcal{O})=C(\mathcal{O}(\theta))=C(\theta)$. Therefore, the set of valid points, $\mathcal{V}$, that satisfy a constraint can be defined in terms of $\theta$ as $\mathcal{V}=\left\\{\theta\in\mathcal{P}:\ C(\theta)=0\right\\}\ .$ (29) Since $C(\theta)$ is both vanishing and minimal in $\mathcal{V}$, the same set can be defined through the optimisation statement $\mathcal{V}=\left\\{\theta^{*}:\ \theta^{*}\in\mathcal{P}\text{ s.t. }\theta^{*}=\arg\min_{\theta}C(\theta)\right\\}\ .$ (30) Therefore, the task of finding the points in the parameter space that satisfy the constraints is the same as finding the points that minimise $C(\mathcal{O})$. 
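Eq. 28 is a direct one-liner; a sketch of how such a constraint term behaves:

```python
def constraint(value, lower, upper):
    """Eq. 28: zero when `value` lies in [lower, upper], otherwise the
    distance from `value` to the nearest edge of the allowed interval."""
    return max(0.0, lower - value, value - upper)
```

For example, a quantity bounded to $[0.8, 1.2]$ contributes nothing at $1.0$ and roughly $0.3$ at $1.5$, so the returned number grows linearly with the size of the violation.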
When faced with multiple constraints on multiple observables or constrained quantities, $\mathcal{O}_{i}$, we can combine them into a single _loss function_, $L$, which we wish to minimise, $L(\theta)=\sum_{i=1}^{N_{C}}C(\mathcal{O}_{i}(\theta)),$ (31) where the sum runs over all the $N_{C}$ constraints discussed in section III, and $L\geq 0\ \forall_{\theta}$ with $L=0$ if and only if all constraints are met. We note that the quantities $\mathcal{O}_{i}$ need not be genuine observables, such as the $\mu_{if}$ signal strengths measured at the LHC: for example, the theoretical constraints related to BFB and unitarity conditions are included as $\mathcal{O}_{i}$ with the respective required bounds. The ability to mix measurements, limits, and theoretical constraints in the same loss function is a key strength of this methodology. Although the $\mathcal{O}_{i}$ include both observables and other constrained quantities, we will often abuse terminology and refer to all $\mathcal{O}_{i}$ as observables, and to the space they span as the observable space, $\mathcal{O}$. ### V.1 The Search Algorithm: Covariance Matrix Adaptation Evolution Strategy Having framed parameter space sampling as a black-box optimisation problem, we need to choose which AI black-box optimisation algorithm should perform the task. In [27], three different algorithms were considered: a Bayesian optimisation algorithm, a genetic algorithm, and an evolutionary strategy algorithm. Each realised a different balance of the so-called exploration (how much of the parameter space is explored) vs. exploitation (how fast the algorithm converges to $\mathcal{V}$) trade-off. In this work, we will use the evolutionary strategy algorithm, which provides the fastest convergence. The evolutionary strategy algorithm in question is the Covariance Matrix Adaptation Evolution Strategy (CMAES) [41, 42]. Evolution Strategies (ES) are powerful numerical optimisation algorithms from the broader field of Evolutionary Computing (EC). 
EC algorithms are characterised by an iterative process in which candidate solutions for a problem are tested and the best ones are used to generate new solutions. In our case, the candidate solutions are points in the parameter space, and their suitability (i.e. their _fitness_) is measured by the loss function, eq. 31. As opposed to Genetic Algorithms, ES do not make use of genes to generate new candidate points. Instead, in ES, new candidates are _sampled_ from a distribution, the parameters of which are set by the best candidates from previous iterations, called generations. In CMAES, the distribution is a highly localised multivariate normal. This normal distribution is initialised with its centre at a random point in the parameter space, and its covariance matrix is set to the identity multiplied by an overall scaling constant $\sigma$. A generation of $\lambda$ candidates is sampled from the multivariate normal and their fitness is evaluated with eq. 31. Next, the $\lambda$ candidates are sorted from best to worst, and the $\mu$ best candidates are used to compute a new mean of the normal distribution, as well as to approximate its covariance matrix. Intuitively, the change in mean progresses the algorithm in the direction of steepest descent of the loss function, just like a first-order optimisation method would, and the covariance matrix approximates the (local) Hessian of the loss function, just like a second-order optimisation method would. The difference, however, is that CMAES _does not_ compute derivatives of the loss function, and it is therefore suitable for nonconvex, ill-conditioned, and even discontinuous loss functions. This feature makes CMAES converge very quickly on a wide variety of optimisation problems. We warn, however, that the intuitive description of CMAES presented above hides many of its inner workings, which are not relevant for the study at hand, and we point the interested reader to the original references provided. 
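The full CMAES update (covariance adaptation, step-size control) is involved; as a toy illustration of the sample–rank–recombine loop described above, here is a deliberately simplified $(\mu/\lambda)$ evolution strategy with an isotropic sampling distribution and a crude fixed $\sigma$ decay. This is a sketch of the general ES idea only, not the CMAES implementation used in the paper.

```python
import random
import statistics

def simple_es(loss, x0, sigma=1.0, lam=12, mu=6, generations=200, seed=0):
    """Toy (mu/lambda) evolution strategy with isotropic sampling.

    Each generation: sample `lam` candidates around the current mean,
    rank them by the loss, and recombine the `mu` best into a new mean.
    Real CMAES additionally adapts a full covariance matrix and the
    step size sigma; see the references cited in the text.
    """
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(generations):
        pop = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(lam)]
        pop.sort(key=loss)                          # best (lowest loss) first
        best = pop[:mu]
        mean = [statistics.fmean(col) for col in zip(*best)]
        sigma *= 0.95                               # fixed decay, not CMAES
        if loss(mean) == 0.0:                       # all constraints satisfied
            break
    return mean

# Example loss: drive a point into the box [0.8, 1.2]^2 with eq. 28 style terms.
def box_loss(x):
    return sum(max(0.0, 0.8 - v, v - 1.2) for v in x)
```

Starting far from the valid box, e.g. at `[5.0, -3.0]`, the mean drifts towards the zero-loss region within a few dozen generations, which mirrors the fast convergence (and limited exploration) of CMAES discussed below.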
### V.2 The Novelty Reward: Histogram Based Outlier System In [27] CMAES was shown to converge very fast compared to other AI black-box search algorithms. However, it was also observed that CMAES has a limited exploration capacity due to the highly localised nature of the multivariate normal from which new candidate solutions are drawn. To mitigate this, in [27] many independent runs of CMAES with restarting heuristics were performed to draw a more global picture of the valid region of the parameter space. In this work, we present a novel approach to promote exploration by adding a _novelty reward_ to the loss function. To achieve this, we compute the density of valid points already found and add it to the loss function as a penalty. In this way, the loss function is minimal not only when the constraints are satisfied, but also away from valid regions that have already been found, which pushes CMAES to explore new regions, effectively working as a _novelty reward_. The addition of a penalty to the loss function in eq. 31 might produce new local minima. For example, consider the addition of various penalties, $p_{j}$, each taking values $p_{j}\in[0,1]$. This creates a competition between the penalties $p_{j}$ and the constraint terms $C(\mathcal{O}_{i})$ when $\sum_{i}C(\mathcal{O}_{i})\simeq\sum_{j}p_{j}$, producing local minima away from $\sum_{i}C(\mathcal{O}_{i})=0$ and therefore spoiling our attempt to find good points. In order to prevent this, we artificially raise the value of the loss function outside the valid region by one so that such competition never arises, i.e. 
we consider a new version of $L$, $\tilde{L}$, $\tilde{L}(\theta)=\begin{cases}1+L(\theta)&\text{if }L(\theta)>0\\\ 0&\text{if }L(\theta)=0\end{cases}\ ,$ (32) which guarantees that the total loss function, $L_{T}$, including $N_{p}$ penalties, $L_{T}(\theta)=\tilde{L}(\theta)+\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}p_{i}\ ,$ (33) is still positive semi-definite and such that for a valid point we have $\sum_{i}C(\mathcal{O}_{i})=0\Rightarrow 0\leq L_{T}\leq 1$. In this way, the density penalty cannot compete with the constraints, since for invalid points we always have $L_{T}>1$. Having defined how penalties can be added to the loss function without spoiling our implementation of a black-box optimisation algorithm to find valid points, we now have to choose how to compute and quantify the penalty that produces the _novelty reward_. The first thing we need to address is how to compute the point density. This task is known in the Machine Learning (ML) literature as _density estimation_, and for large multidimensional datasets it is very challenging. Furthermore, not only do we want to estimate the point density accurately, we also do not want the density estimation to be prohibitively slow to compute. After some preliminary exploratory experimentation, we identified the Histogram Based Outlier System (HBOS) [43] as a fast and easy-to-implement solution. (Other possibilities were explored, such as One-Class Support Vector Machines, Isolation Forests, and Kernel Density Estimation, but with a considerable increase in computational cost. A systematic study of alternative novelty reward models is left for future work.) HBOS has also been previously explored in the context of model-independent searches for new physics using anomaly detection [44]. HBOS works by fitting a histogram with a fixed number of bins, $N_{bins}$, to each dimension, i.e. to each parameter. 
A density penalty for a point $\theta$ is obtained by summing the heights, $h_{j}$, of the bins in which the values of the parameters, $\theta_{j}$, fall. (The usage of histograms suggests that HBOS suffers from the so-called _curse of dimensionality_. In our studies below, we will see that this manifests non-trivially as an interplay between CMAES exploration and the geometry and topology of the valid region of the parameter space.) The penalties are normalised to $p\in[0,1]$, so that a novel point has penalty $0$ and a point too similar to those already seen has maximal penalty $1$. Furthermore, we notice that the penalty over the parameter space density need not be computed over all parameters, $\theta=(\alpha_{1},\beta_{1},\dots)$, but can be _focused_ on only a subset of these, $\\{\theta_{j}\\}$ – this is especially useful to promote focused exploration in parameters of interest. Whilst the discussion above focusses on the parameter space density penalty, it can be extended to other spaces. Of particular interest, and included in our study, is the space of physical quantities, $\mathcal{O}$. This will allow us to explore not only novel areas of the parameter space, but – perhaps more importantly – novel areas of the observable space, i.e. to explore novel phenomenological aspects of the model. To do so, we need to train a penalty $p$ not on the values of the parameters, $\theta$, but on their resulting physical quantities, $\\{\mathcal{O}_{i}\\}$, where the set $\\{\mathcal{O}_{i}\\}$ can comprise all or a subset of the $\mathcal{O}_{i}$. In our scans, we will include the novelty reward both in the parameter space and in the observable space, and in each case we will study penalties focused on subsets of the parameters and the observables. ### V.3 Further Implementation Details We now discuss some implementation details of the ideas presented above. We implemented CMAES using `DEAP` - Distributed Evolutionary Algorithms in Python [45]. 
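The per-dimension histogram penalty and the shifted loss of eqs. 32–33 can be sketched as follows. This is a minimal illustration of the idea, not the `pyod` HBOS implementation used in the paper, and it recomputes bin occupancies on the fly rather than fitting histograms once per generation.

```python
def histogram_penalty(point, archive, lows, highs, n_bins=100):
    """Density penalty for `point` from 1D histograms of previously
    found valid points (`archive`), normalised to [0, 1].

    Sums, over dimensions, the occupancy of the bin the point falls in,
    so crowded regions are penalised and novel ones are not.
    """
    if not archive:
        return 0.0
    d = len(point)
    total = 0.0
    for j in range(d):
        width = (highs[j] - lows[j]) / n_bins
        idx = min(int((point[j] - lows[j]) / width), n_bins - 1)
        count = sum(1 for q in archive
                    if min(int((q[j] - lows[j]) / width), n_bins - 1) == idx)
        total += count / len(archive)   # bin occupancy in [0, 1]
    return total / d                    # average over dimensions -> [0, 1]

def total_loss(raw_loss, penalty):
    """Eqs. 32-33 with a single penalty: the +1 shift guarantees that
    invalid points always score above 1, so the novelty penalty never
    competes with the constraints."""
    l_tilde = 0.0 if raw_loss == 0.0 else 1.0 + raw_loss
    return l_tilde + penalty
```

A point landing in the same bins as the whole archive receives the maximal penalty of 1, while a point in untouched bins receives 0, which is exactly the novelty-reward behaviour described above.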
The function $C$ was slightly modified so as to force all values of $C(\mathcal{O}_{i})$ to be nominally similar. To achieve this, we implemented in the code $C(\mathcal{O}_{i})\to\log(C(\mathcal{O}_{i})+1)$, which retains the same properties: it is positive semi-definite, continuous, and monotonically increasing away from the allowed interval. Furthermore, to prevent any constraint $C(\mathcal{O}_{i})$ from dominating over any other, we rescale them at each generation to be bounded by $[0,1]$ using the `scikit-learn` [46] `MinMaxScaler` before computing $\tilde{L}$ and the final loss, eq. 33. We used the `pyod` [47] implementation of HBOS [43], and set $N_{bins}=100$, observing a considerable computational overhead for higher values. (The Evolutionary Strategy with Novelty Reward implementation is made available at https://gitlab.com/miguel.romao/evolutionary-strategy-novelty-detection-3hdm.) The constraints that were checked in our main computational loop are listed in appendix A. For collider limits on novel scalars, we used `HiggsTools` version `1.1.3` [35]. As we perform the signal strength, $\mu_{ij}$, checks in our main computational loop, we only use the `HiggsBounds` functionality of `HiggsTools`, implemented using the `HiggsBounds` version `5` input files via the Python script provided by the `HiggsTools` authors, and used the `HiggsBounds` dataset version `1.4`. For each scan, we ran a total of $100$ independent runs, each with a maximum of $2000$ generations. We use the CMAES default parameters, which set the population size, $\lambda$, and the number of best candidates, $\mu$, using a heuristic. The values for our problem were automatically set to $\lambda=12$ and $\mu=6$, which means that each scan will evaluate at most $2000\times 12\times 100=2.4\times 10^{6}$ points. CMAES has very few parameters to be defined by the user: only the initial mean of the multivariate normal and the overall scale of the covariance, $\sigma$. 
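The two numerical conditioning steps just described — compressing each constraint with $C\to\log(C+1)$ and min-max rescaling per generation — can be sketched without `scikit-learn` as below. This mirrors the behaviour of `MinMaxScaler` on one generation's worth of constraint values; it is an illustration, not the code used in the paper.

```python
import math

def condition_constraints(raw_per_point):
    """Condition one generation of constraint values.

    raw_per_point: list of per-candidate constraint vectors
    [[C_1, ..., C_N], ...]. Applies C -> log(C + 1), then min-max
    rescales each constraint column across the generation to [0, 1]
    (mirroring MinMaxScaler), so no single constraint dominates the
    summed loss. Columns with no spread are mapped to 0.
    """
    logged = [[math.log(c + 1.0) for c in row] for row in raw_per_point]
    n = len(logged[0])
    out = [[0.0] * n for _ in logged]
    for j in range(n):
        col = [row[j] for row in logged]
        lo, hi = min(col), max(col)
        span = hi - lo
        for i, row in enumerate(logged):
            out[i][j] = 0.0 if span == 0.0 else (row[j] - lo) / span
    return out
```

Note that, as with `MinMaxScaler`, the column minimum is mapped to 0 even when that constraint is violated by every candidate; the rescaled values feed the ranking within a generation, not the absolute validity check.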
For each run, we set the mean to a random point in the parameter space and initialise CMAES with $\sigma=1$. Furthermore, we follow the methodology in [27] and define all operations related to CMAES and density estimation not over the parameter space, but over a hypercube $[0,1]^{d_{\mathcal{P}}}$, where $d_{\mathcal{P}}$ is the number of parameters, that maps to the parameter space $\mathcal{P}$. This allows us to treat all parameters on an equal nominal footing, avoiding any potential pathologies arising from different parameters spanning many different orders of magnitude. The final loss to be optimised to explore the parameter space is $L_{T}(\theta)=\tilde{L}(\theta)+\frac{1}{2}(p_{\mathcal{P}}(\\{\theta_{j}\\})+p_{\mathcal{O}}(\\{\mathcal{O}_{j}(\theta)\\})),$ (34) where $p_{\mathcal{P}}(\\{\theta_{j}\\})$ is the density penalty computed over the subset of parameters $\\{\theta_{j}\\}$, effectively working as a novelty reward in $\mathcal{P}$, and $p_{\mathcal{O}}(\\{\mathcal{O}_{j}(\theta)\\})$ is the density penalty computed over the subset of observables $\\{\mathcal{O}_{j}\\}$, effectively working as a novelty reward in $\mathcal{O}$. As discussed previously, we present different scans for different choices of $\\{\theta_{j}\\}$ and $\\{\mathcal{O}_{j}\\}$ to be included in the computation of $p_{\mathcal{P}}$ and $p_{\mathcal{O}}$, to promote _focused_ scans. ## VI Analysis and results We now present and analyse the results of multiple scans using the ideas presented in section V, together with two random sampling scan strategies: purely random over the whole parameter space, and sampling within $50\%$ of the alignment limit as defined by eq. 27. We present two analyses: one where the `HiggsBounds` constraints from `HiggsTools` were not included in the loss function, and one where they were. 
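The hypercube trick described above — optimising over $[0,1]^{d_{\mathcal{P}}}$ and mapping back to $\mathcal{P}$ — amounts to a per-parameter transform. A sketch, where the signed log-uniform treatment of the soft masses squared (splitting the unit interval into a negative and a positive branch) is our assumption about one plausible mapping, not a description of the paper's code:

```python
import math

def to_physical(u, specs):
    """Map a point u in the unit hypercube [0,1]^d to the parameter space.

    specs is a list of ('lin', lo, hi) for linear ranges and
    ('slog', min_mag, max_mag) for signed log-uniform ranges such as the
    soft masses squared in [±1e-1, ±1e7] GeV^2 (our reading of eq. 22).
    """
    out = []
    for ui, (kind, a, b) in zip(u, specs):
        if kind == "lin":
            out.append(a + ui * (b - a))            # affine map to [a, b]
        else:  # 'slog': [0, 0.5) -> negative branch, [0.5, 1] -> positive
            sign = -1.0 if ui < 0.5 else 1.0
            t = (ui % 0.5) / 0.5                    # rescale each half to [0, 1)
            mag = a * (b / a) ** t                  # log-uniform magnitude
            out.append(sign * mag)
    return out
```

Running the optimiser in the unit cube means a single $\sigma$ is meaningful for all coordinates at once, which is the "equal nominal footing" motivation given above.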
The scan without `HiggsTools` has a lower computational overhead and faster CMAES convergence (to be discussed below), which allowed us to experiment with our methodology. The impact of `HiggsTools` on points obtained without including it in the loop is then studied. We subsequently perform a second analysis with `HiggsTools` in the loop, to show how one can include multiple constraints from different sources and still explore the whole parameter space of the model. We start with scans that do not take the `HiggsTools` results into account in the loss function. All scans are performed over the whole $16$-dimensional parameter space and all have the same $61$ constraints. We then include `HiggsTools` in the optimisation loop by adding the respective contribution to the loss function. All scans and their details can be seen in table 1. As will be discussed in section VI.4, `HiggsTools` reduces the number of successful runs by a factor of 2, and we therefore allow for 200 instead of 100 runs in these scans. 
| Sampling | Scan | $d_{\mathcal{P}}$ | $N_{C}$ | $p_{\mathcal{P}}$ | $p_{\mathcal{O}}$ |
| --- | --- | --- | --- | --- | --- |
| Random | Completely random | $16$ | $61$ | N/A | N/A |
| Random | Around alignment: Al-2 | $16$ | $61$ | N/A | N/A |
| CMAES | No penalty (vanilla) | $16$ | $61$ | None | None |
| CMAES | Parameter space penalty | $16$ | $61$ | $16$ parameters | None |
| CMAES | Constraint space penalty | $16$ | $61$ | None | $61$ quantities $\mathcal{O}_{i}$ |
| CMAES | $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ focus | $16$ | $61$ | $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ | None |
| CMAES | $\mu_{ggF,\gamma\gamma}$ and $\mu_{ggF,Z\gamma}$ focus | $16$ | $61$ | None | $\mu_{ggF,\gamma\gamma}$, $\mu_{ggF,Z\gamma}$ |
| CMAES | $m_{H_{1}^{+}}$ and $m_{H_{2}^{+}}$ focus | $16$ | $61$ | $m_{H_{1}^{+}}$, $m_{H_{2}^{+}}$ | None |
| CMAES w/ HT | Parameter space penalty | $16$ | $67$ | $17$ parameters | None |
| CMAES w/ HT | Constraint space penalty | $16$ | $67$ | None | $67$ quantities $\mathcal{O}_{i}$ |
| CMAES w/ HT | $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ focus | $16$ | $67$ | $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ | None |
| CMAES w/ HT | $\mu_{ggF,\gamma\gamma}$ and $\mu_{ggF,Z\gamma}$ focus | $16$ | $67$ | None | $\mu_{ggF,\gamma\gamma}$, $\mu_{ggF,Z\gamma}$ |
| CMAES w/ HT | $m_{H_{1}^{+}}$ and $m_{H_{2}^{+}}$ focus | $16$ | $67$ | $m_{H_{1}^{+}}$, $m_{H_{2}^{+}}$ | None |

Table 1: List of scans performed, where $d_{\mathcal{P}}$ is the number of parameters, $N_{C}$ the number of constraints, $p_{\mathcal{P}}$ the novelty reward over the parameter space, and $p_{\mathcal{O}}$ the novelty reward over the space of physical quantities $\mathcal{O}_{i}$. ### VI.1 Rewarding Exploration in the Parameter Space We first study the implementation of CMAES with and without the parameter space reward, to show the enhanced exploration capabilities of CMAES when a parameter density penalty is included in the loss function. In fig. 1 we show the scatter plot of the $(\sin(\alpha_{1}-\beta_{1}),\sin(\alpha_{2}-\beta_{2}))$ plane for different runs. In particular, we exhibit the difficulty random sampling has in finding valid points, with fig. 1(a) showing only 23 valid points, of which only one passed the `HiggsBounds` constraints. 
These were obtained from a scan that sampled an estimated $\mathcal{O}(10^{13})$ points. (We can only estimate the number of points, as it would have been prohibitive to store all non-valid points. We therefore measured how long the scan took to process a few thousand points and kept a loose track of the random sampling run to produce the estimate.) In fig. 1(b) we show the points obtained by sampling within 50% of the alignment limit, where the allowed points are highly constrained, with $\alpha_{1}\simeq\beta_{1}$, an expected result due to the highly constraining bounds from collider measurements of the Standard Model-like Higgs boson decay channels. In the same plot, we can also observe the boundaries imposed by the alignment limit, as $|\sin(\alpha_{2}-\beta_{2})|\simeq 0.5$. In the next plot, fig. 1(c), we show the result of the CMAES scan without further exploration. We observe a funnelling of the results into a single region, $\alpha_{1},\ \beta_{1},\ \alpha_{2},\ \beta_{2}\simeq 0$, clearly providing little coverage of the parameter space, although still providing far more valid points than the random sampler. This lack of exploration of CMAES was first observed in [27] and is easily understood from an algorithmic point of view, as CMAES does not have built-in mechanisms to escape a minimum (global or local). (In [27] CMAES was endowed with a restart strategy, which mitigates this and allowed CMAES to draw a more complete picture of the valid regions of the parameter space. In this paper, we have not implemented this, as our focus is on developing a novelty-reward-driven exploration.) Figure 1: $(\sin(\alpha_{1}-\beta_{1}),\sin(\alpha_{2}-\beta_{2}))$ scatter plots for different runs without novelty reward: (a) random sampling; (b) alignment limit sampling; (c) CMAES without exploration; (d) CMAES with parameter novelty reward; (e) CMAES with parameter novelty reward focused on $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$. 
A point surviving HiggsTools is represented in green, otherwise in red. Despite the seemingly lacklustre result, the points in fig. 1(c) obtained by CMAES were found in a quick run of only around $\mathcal{O}(10^{3})$ attempts, providing around ten orders of magnitude improvement in sampling efficiency over the random scan (see section VI.4 for a more detailed discussion of convergence metrics and performance). This allows us to change the sampling logic so as to explore the parameter space once the sampler converges into a valid subregion. As explained in the preceding sections, this is achieved by including a parameter density penalty. In fig. 1(d) we show the result for the CMAES runs when activating the novelty reward for all parameters, i.e. using a density penalty over all parameters. We immediately see a much larger region of the parameter space scanned, especially beyond the alignment limit in the $\sin(\alpha_{1}-\beta_{1})$ direction. We can also observe the first _artefact_ of this methodology: sequences of points, akin to the trail of a paintbrush, in this plane. These trails are in fact the paths that CMAES has covered while exploring the parameter space away from previously found points. By introducing the parameter penalty over the whole parameter space, we were able to find novel points away from the alignment limit. However, because the penalty is computed over all of the parameter space, CMAES has no incentive to explore _interesting_ regions, as it can reduce the density penalty by spreading across parameters which have little impact on the constraints. (This can be seen as a variation of the _curse of dimensionality_.) To mitigate this, we _focus_ the parameter density penalty on the four parameters described by these scatter plots: $\alpha_{1},\beta_{1},\alpha_{2},\beta_{2}$. The resulting points can be seen in fig. 
1(e), where we see that CMAES was able to spread its exploration even further in the $(\sin(\alpha_{1}-\beta_{1}),\sin(\alpha_{2}-\beta_{2}))$ plane. More interestingly, the points that subsequently pass `HiggsBounds`, shown in green, cover a much larger region than those obtained by sampling around the alignment limit, although the latter arguably covers $\sin(\alpha_{2}-\beta_{2})$ more uniformly across different values of $\sin(\alpha_{1}-\beta_{1})$. The above scans were produced without checking for the constraints coming from `HiggsBounds` provided by `HiggsTools`. The survival rate against `HiggsTools` of the points obtained in these scans is $3\%-5\%$, or, in other words, a factor of $1/20$ or less. Furthermore, the execution time with `HiggsTools` increases by a factor of around $3$. To first approximation, checking `HiggsBounds` in the loop can therefore slow down the process of finding good points by an expected factor of $60$ or more. On the other hand, as we can see in fig. 1, not using `HiggsTools` leads to too many invalid points, and depriving CMAES of this information will only prevent it from finding points that survive `HiggsBounds`. In fig. 2 we present the first scans with `HiggsTools` in the loop to check for `HiggsBounds` constraints. The scans presented in fig. 2(a) and fig. 2(b) are directly analogous to those presented in fig. 1(d) and fig. 1(e), respectively. In both cases we observe a far wider coverage of the parameter space than before, showcasing the importance of providing `HiggsTools` feedback to CMAES. More importantly, both runs covered the space of valid points around the alignment limit in this plane, but went far beyond it in the $\alpha_{1},\beta_{1}$ subspace. 
Figure 2: $(\sin(\alpha_{1}-\beta_{1}),\sin(\alpha_{2}-\beta_{2}))$ scatter plots for different runs with novelty reward and HiggsTools constraints included in the loss function: (a) CMAES with parameter novelty reward; (b) CMAES with parameter novelty reward focused on $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$. We now turn to the masses of the charged scalars, which are constrained by direct searches. In fig. 3 we show the points obtained in the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ plane. The logic is similar to the previous discussion, with the difference that the last plot, fig. 3(e), shows the points from a scan where the penalty was _focused_ on the charged scalar masses instead of the mixing angles. From the scan around the alignment limit, fig. 3(b), we can observe how `HiggsBounds` affects the valid region, especially for small values of the scalar masses. Interestingly, in fig. 3(c) we see that CMAES has provided more coverage of this cross section of the parameter space than before. This can be easily interpreted: CMAES works akin to a gradient descent algorithm, but with performance enhanced by the approximation of the local second derivative. This means that CMAES _rolls down_ the loss function with _momentum_, following the quickest path to its minimum. This preference for a quick convergence path explains why different CMAES runs provide similar values of the most constrained parameters, as it is through them that a path needs to be found to minimise the loss function. This _eagerness_ to converge is a feature of CMAES, which sits on the _exploitive_ side of the _exploration-exploitation_ trade-off, as previously discussed [27]. Figure 3: $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ scatter plots for different runs without novelty reward: (a) random sampling; (b) alignment limit sampling; (c) CMAES without exploration; (d) CMAES with parameter novelty reward; (e) CMAES with parameter novelty reward focused on $(m_{H^{+}_{1}},m_{H^{+}_{2}})$. 
A point surviving HiggsTools is represented in green, otherwise in red. In fig. 3(d) and fig. 3(e) we show the results of the scans with the parameter density penalty over all parameters and focused on the charged masses, respectively. We observe that both were able to cover more of the parameter space than the alignment limit random scan, but the scan focused on the charged scalar masses covered more of the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ plane, especially in the $m_{H^{+}_{1/2}}\gtrsim 100$ GeV limits. This result further shows that focussing the exploration reward on subsets of parameters can help uncover novel regions overlooked by traditional scans, although in this case most of the points with $m_{H^{+}_{1/2}}\gtrsim 100$ GeV did not survive `HiggsBounds`. Analogously to the discussion above on the mixing angles $\alpha_{1},\ \beta_{1},\ \alpha_{2},\ \beta_{2}$, we now present in fig. 4 the results with `HiggsTools` included in the loop to check `HiggsBounds`. We see that both with the unfocused, fig. 4(a), and charged-mass-focused, fig. 4(b), novelty reward, CMAES is able to cover a much larger region of the parameter space than the alignment limit sampling. Furthermore, the valid points found also span a larger region than those in fig. 3 that survived the `HiggsBounds` constraints, highlighting the importance of including `HiggsTools` in the loop. Figure 4: $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ scatter plots for different runs with novelty reward and HiggsTools constraints included in the loss function: (a) CMAES with parameter novelty reward; (b) CMAES with parameter novelty reward focused on $(m_{H^{+}_{1}},m_{H^{+}_{2}})$. The paths taken by CMAES while exploring the parameter space are very prominent in the scans just discussed. To better understand how CMAES explores, in fig. 5 we show the path traversed by a run projected onto the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ plane. 
This run converged at generation number $129$, with values $(m_{H^{+}_{1}},m_{H^{+}_{2}})\simeq(218.6,151.4)$ GeV. At generation 129, the overall scale of the covariance matrix, given by $\sigma$ in the CMAES algorithm, is $\sigma\simeq 0.002$, a value much smaller than the initial value of $\sigma=1$. Once the run converged, the density penalty was added to the loss function, forcing CMAES to explore new values of the parameters, as can be observed in the left pane. As it explores, CMAES might be slowed down by the penalty; this leads the algorithm to increase $\sigma$ to find new good points farther away. On the right pane we see this dynamical adaptation of $\sigma$ by CMAES, with higher (lower) values of $\sigma$ leading to less (more) localised sampling. This ability to adaptively increase $\sigma$ when slowed down also gives CMAES the capacity to escape local minima, and in our case it provides a way of forcing CMAES to move away from where it has been. Figure 5: Path of a CMAES scan with focused parameter density penalty in the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ plane. Left: colour representing the generation number. Right: colour representing $\sigma$, the overall scale of the covariance matrix of CMAES. #### VI.1.1 The $m_{H^{+}_{1,2}}\leq 150$ GeV Region The above results exhibit a peculiar feature that warrants further discussion. Upon closer inspection of the points that survive `HiggsBounds` when comparing fig. 3 to fig. 4, we see that the scans without `HiggsTools` in the loop appear to have two _islands_ of points at $m_{H^{+}_{1/2}}\sim 140$ GeV, which the scans with `HiggsTools` in the loop missed. We present a zoomed-in look at this region in fig. 6, where we only show the points that have passed the `HiggsBounds` constraints. This suggests that we have not completely mitigated the excessive _eagerness_ of CMAES, which might lead us to miss multimodal solutions, i.e., disjoint valid regions of the parameter space. 
Figure 6: $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ scatter plot zoomed in on $m_{H^{+}_{1,2}}\leq 150$ GeV for different runs. Only points passing HiggsBounds constraints are shown. To better understand whether CMAES is being driven away from this region by its _eagerness_ to converge, we performed a dedicated scan where we restricted the parameter space to $m_{H^{+}_{1,2}}\leq 150$ GeV, with all other parameter bounds unchanged. We present the result in fig. 7, where we notice that, if restricted to that region, CMAES will explore it extensively. Furthermore, we notice that the points at $m_{H^{+}_{1/2}}\sim 140$ GeV, which above seemed to populate two disjoint regions, do not form isolated _islands_ of the valid parameter space. There are two important conclusions to draw from this. The first is that the empty regions of the scatter plot of valid points produced by CMAES do not equate to regions without valid points. This means that one has to be very careful when interpreting these _seemingly empty_ regions without studying them in detail. The second is that when one focusses on studying these regions, one can find a completely different picture than assumed. In this case the previous results, both from the alignment limit scan and from CMAES without `HiggsTools` in the loop, suggested that there are multiple disjoint regions of valid points in the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ plane for $m_{H^{+}_{1/2}}\sim 140$ GeV, whereas a closer inspection teaches us that this is not the case. Figure 7: $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ scatter plot for a CMAES run with parameter space restricted to $m_{H^{+}_{1,2}}\leq 150$ GeV. ### VI.2 Rewarding Exploration in the Observable Space So far we have shown how we can improve the parameter space coverage by providing CMAES with a novelty reward in the parameter space, implemented by turning on a density penalty computed on the parameter values. 
However, a more interesting avenue is to apply the novelty reward to the observable space, $\mathcal{O}$, as this will allow us to assess whether there is new phenomenology obscured by traditional random sampling techniques. (We abuse terminology by calling observables all the physical quantities that are constrained. This is not strictly correct, as many constraints are theoretical, and some parameters, namely the masses, are themselves physical observables. The purpose of this section is to study how we can achieve exploration through the constrained quantities and observables.) In our first study we want to assess the impact of using a novelty reward in the observable space versus the novelty reward in the parameter space studied above. In fig. 8 we show the $(\mu_{ggF,\gamma\gamma},\mu_{ggF,ZZ})$ scatter plots for different scans without `HiggsTools` in the loop. Similarly to the previous discussions on parameter space coverage, we see that CMAES without further exploration, fig. 8(c), provides a narrower coverage of the observable space than the alignment limit sampling strategy, adding to the intuition that CMAES alone is too eager to converge to be a reliable tool to draw a complete picture of the Physics. This changes considerably once we turn on the parameter space novelty reward already studied, which also leads to a greater exploration of the observable space, as can be seen in fig. 8(d). This is easy to interpret, as forcing CMAES to explore the parameter space will always impact the values of the physical quantities of the model. We notice, however, that this is a byproduct of the parameter space exploration, as in this case CMAES does not have an explicit _incentive_ to produce new observable values. In fig. 8(e) we show the result of turning on the density penalty in the observable space, therefore explicitly forcing CMAES to find _points with different phenomenology_. 
The result is stunningly different from all the other scans, with CMAES finding points with far more diverse observable values than any of the other scans considered so far. Of particular interest, we observe how CMAES can find points with $\mu_{ggF,\gamma\gamma}\simeq 1.2$, a region completely obscured under alignment limit random sampling, painting a very different picture of what phenomenology the $Z_{3}$ 3HDM model can have. (a) Random sampling (b) Alignment limit sampling (c) CMAES without exploration (d) CMAES with parameter novelty reward (e) CMAES with observable novelty reward Figure 8: $(\mu_{ggF,\gamma\gamma},\mu_{ggF,ZZ})$ scatter plot for different runs without HiggsTools in the loop. A point surviving HiggsTools is represented in green, otherwise in red. Having shown how an observable space penalty can drive CMAES exploration into novel phenomenological realisations of the model, we now perform the scan with `HiggsTools` in the loop to endow CMAES with information about the `HiggsBounds` constraints (themselves added to the loss function and to the penalty). The resulting $(\mu_{ggF,\gamma\gamma},\mu_{ggF,ZZ})$ scatter plot is shown in fig. 9, where we observe how CMAES was able to find points across all allowed values (up to $2\sigma$ agreement with the experimental measurement) for $\mu_{ggF,\gamma\gamma}$ with $0.89\lesssim\mu_{ggF,ZZ}\lesssim 1.025$, while completely _rediscovering_ the possible values produced by the alignment limit sampling strategy. This result highlights the power and versatility of our methodology to find new phenomenological realisations of a model. Figure 9: $(\mu_{ggF,\gamma\gamma},\mu_{ggF,ZZ})$ scatter plot for CMAES with novelty reward in observable space and HiggsTools constraints included in the loss function. Just like we could _focus_ the parameter density penalty on a subset of parameters, we can also focus the observable density penalty on a subset of constraints, allowing one to explore to what extent the model explains certain experimental results. 
For example, ATLAS and CMS have recently released their measurements of the Higgs decaying to $Z\gamma$ [48], with $\mu_{Z\gamma}=2.2\pm 0.7$, which is just compatible with the Standard Model within $2\sigma$. One can then ask whether the $Z_{3}$ 3HDM discussed in this work could explain such a high value of $\mu_{Z\gamma}$, considering that, without additional states (to have such a situation one has to go beyond the NHDM; see, for instance, the discussion in ref. [49]), the Higgs decay channels are considerably correlated, preventing any particular $\mu_{ij}$ from being large while all others remain small. To study this, we performed a scan with a focused observable density penalty over $(\mu_{ggF,\gamma\gamma},\mu_{ggF,Z\gamma})$, which we present in fig. 10 alongside the scatter plot obtained by the CMAES run with the observable density computed over all constraints. Perhaps surprisingly, we see that the scan with the focused density penalty, fig. 10(b), has covered a smaller region than the one with the density penalty computed using all constraints, fig. 10(a). A possible interpretation is that the focused density is too constraining, preventing CMAES from finding other ways to populate this plane around the other constraints. Conversely, the run with the penalty over all constraints is less demanding for CMAES when exploring this subspace, as CMAES can reduce the penalty by spreading the possible constraint values elsewhere, eventually finding a new route to new values in the $(\mu_{ggF,\gamma\gamma},\mu_{ggF,Z\gamma})$ plane. In other words, although when projected onto the $(\mu_{ggF,\gamma\gamma},\mu_{ggF,Z\gamma})$ plane the valid region appears simply connected, the overall geometry and topology of the valid region of the parameter space are likely far more intricate, with focused scans obfuscating these nuances. 
This interplay between a focused density, the availability of paths for CMAES to explore, and the topological and geometrical details of the valid region is an aspect of our methodology that will be further explored in future work. (a) CMAES with observable novelty reward (b) CMAES with observable novelty reward focused on $(\mu_{ggF,\gamma\gamma},\mu_{ggF,Z\gamma})$ Figure 10: $(\mu_{ggF,\gamma\gamma},\mu_{ggF,Z\gamma})$ scatter plot for CMAES with focused novelty reward in observable space and HiggsTools constraints included in the loss function. ### VI.3 Using Points as Seeds for New Runs The scans performed so far have highlighted the versatility of our methodology in exploring parameter (and observable) spaces. However, the runs performed are independent of each other, i.e., while each run has its own parameter/observable density estimator, this is only trained using valid points found during that run alone. One can then entertain the idea of reusing the information of previous scans to guide new runs in regions of interest. In this section, we explore this idea and provide an example of its implementation by choosing valid points from the previous scans as _seeds_ for new runs. Recalling that CMAES can be initialised with an explicit mean, i.e. starting position, and an overall scale of the covariance matrix, $\sigma$, we can then use a valid point as the starting position of a new scan. In order to start exploring the vicinity of our starting position, $\sigma$ cannot be too large, and we found that setting it to $\sigma=0.01$ guarantees that CMAES starts already at the minimum of the constraint loss function. Seed points were identified by running HBOS on the entire collection of valid points (left pane of fig. 11). For this concrete example, we evaluated the density only on the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ subspace and identified the $1\%$ outliers (middle pane of fig. 
11), i.e., the points representing the least explored parts of the valid region of the parameter space. We notice some of the shortcomings of HBOS as the density estimator in this plot: given that HBOS fits a histogram to each dimension to compute the density, a point might be in a relatively sparse region but might not be picked as an outlier if its components fall in populated bins. For example, we see that the outliers are not necessarily at the rim (convex hull) of the space, but in regions where there are few points in both $m_{H^{+}_{1}}$ and $m_{H^{+}_{2}}$. This same shortcoming of HBOS is present in the scans with novelty reward, leaving room for improvement to be explored in future work. Figure 11: $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ scatter plot for the seeded run. Left: The whole collection of valid points obtained by the other scans. Middle: The $1\%$ outliers classified by HBOS. Right: New points obtained by the seeded runs started at points randomly selected from the $1\%$ outliers. With the most outlying valid points identified, we ran $100$ scans, each seeded by a point randomly chosen from the $1\%$ outlier subset, with the density penalty also making use of the outliers to guide the new scans away from the already explored regions. We did not use the whole sample of valid points to train the density estimator as it comprises over 4 million valid points, which would considerably slow down the scan. In the right pane of fig. 11 we show the resulting valid points found by the seeded scans, where we observe that CMAES was able to explore even further away from the previously charted valid region. Clearly, one could now use the new points as new seeds in repeated iterations to explore this subsection of the parameter space even further, or any other section of it or of the observable space, in order to draw an even more global picture of the valid region. 
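The marginal-histogram blind spot of HBOS described above is easy to demonstrate with a minimal sketch (a simplified HBOS-style score, not the implementation used in our scans; the data and binning below are illustrative): two diagonal clusters make every one-dimensional marginal dense at both cluster coordinates, so a point in the empty off-diagonal corner scores as less of an outlier than a point that is far out in every dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two diagonal clusters: every 1D marginal is dense near 0 AND near 1.
data = np.vstack([rng.normal(0.0, 0.05, (500, 2)),
                  rng.normal(1.0, 0.05, (500, 2))])

def hbos_score(point, data, bins=20, lo=-0.5, hi=1.5):
    """HBOS-style outlier score: sum over dimensions of -log(marginal density)."""
    score = 0.0
    for d in range(data.shape[1]):
        hist, edges = np.histogram(data[:, d], bins=bins, range=(lo, hi),
                                   density=True)
        idx = int(np.clip(np.searchsorted(edges, point[d]) - 1, 0, bins - 1))
        score += -np.log(hist[idx] + 1e-12)
    return score

off_diag = np.array([0.0, 1.0])   # empty corner in 2D, dense in each marginal
truly_far = np.array([1.4, 1.4])  # sparse in every marginal
# The genuinely isolated off-diagonal point scores as LESS of an outlier:
assert hbos_score(off_diag, data) < hbos_score(truly_far, data)
```

A joint (multidimensional) density estimate would flag the off-diagonal point correctly, at a correspondingly higher training cost.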
The caveat of only using _chained_ scans is that one can only explore regions that are connected to the seed, a detail which must be kept in mind when employing this strategy. Finally, we notice that the scatter plot appears to have some vertical and horizontal strips with fewer points; this is an artefact of HBOS, which draws a histogram with $100\times 100$ bins in this subspace, impacting the density value along horizontal and vertical strips of width comparable to the bin width. ### VI.4 Convergence Metrics Having discussed how density penalties can be used to enhance the CMAES exploration of the parameter space, we now turn to another aspect of our methodology: the convergence speed. Recalling that CMAES operates by minimising the total loss function, $L_{T}$ from eq. 31, we show in fig. 12 how its value decreases sharply after just a few generations, where we also provide the values for random generations for comparison. More precisely, after just 100 generations, totalling around just 1200 points, CMAES has nearly converged to the valid region of the parameter space. Figure 12: Total loss as a function of generation. Only the first 500 generations are shown. The random sampler curves are over random generations of $12$ points, the same population size as CMAES. The shaded regions represent $0.95$ confidence intervals computed using a bootstrap of 100 runs. Despite this suggestive plot, not all scans converge within the budget, which we set to 100 runs of 2000 generations for each case in table 1 without `HiggsTools`, and to 200 runs of 2000 generations for the cases with `HiggsTools`. (One could alternatively increase the budget for the `HiggsTools` cases by increasing the number of generations to 4000.) Intuitively, more runs provide a more global picture, whereas longer runs allow for longer explorations of the valid region. The choice between allocating more budget to one over the other depends on the intended study. 
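The non-competing combination of constraint terms and novelty reward that $L_{T}$ is built around can be sketched schematically as follows (this illustrates the design described in the text, not the exact form of eq. 31; the gating logic and names are assumptions):

```python
import numpy as np

def total_loss(constraint_losses, novelty_penalty, exploration_on):
    """Schematic combined loss: constraint violations dominate while the point
    is invalid; the density (novelty) penalty only contributes once every
    constraint is satisfied, so the two terms never compete."""
    violation = float(np.sum(np.maximum(constraint_losses, 0.0)))
    if violation > 0.0:          # invalid point: minimise constraints first
        return violation
    return novelty_penalty if exploration_on else 0.0

# An invalid point is scored purely on its constraint violations:
assert total_loss([0.5, -1.0], novelty_penalty=0.9, exploration_on=True) == 0.5
# A valid point is scored on how "non-novel" it is, once exploration is on:
assert total_loss([-1.0, -2.0], novelty_penalty=0.9, exploration_on=True) == 0.9
```

With this gating, turning the penalty on after convergence pushes the optimiser out of the region it has already populated without ever rewarding constraint violations.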
We present these metrics in table 2 alongside statistics on the fraction of valid points that are within the alignment cases, AL-1 from eq. 26 and AL-2 from eq. 27. We see that, while most points are within AL-1, only a minority are within AL-2. More interestingly, we observe how the scan with novelty reward in the $\alpha_{1},\ \beta_{1},\ \alpha_{2},\ \beta_{2}$ subspace of the parameter space has produced the most points away from the alignment limit. This can be visually understood in fig. 1 and fig. 2, where it is clear that the novelty reward is guiding CMAES to values of $\alpha_{1},\ \beta_{1}$ that are beyond the alignment limit bounds. This is a feature of the versatility of our methodology, as we can perform dedicated scans explicitly away from previously considered priors and regions of the parameter space.

Sampling | Scan | Converged Runs | Within AL-1 | Within AL-2
CMAES | No penalty (Vanilla) | 97 out of 100 | $1.00$ | $1.5\times 10^{-2}$
CMAES | Parameter space penalty | 95 out of 100 | $0.90$ | $5.1\times 10^{-3}$
CMAES | Constraint space penalty | 90 out of 100 | $0.99$ | $1.8\times 10^{-4}$
CMAES | $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ focus | 94 out of 100 | $0.85$ | $5.9\times 10^{-3}$
CMAES | $\mu_{ggF,\gamma\gamma}$ and $\mu_{ggF,Z\gamma}$ focus | 91 out of 100 | $1.00$ | $4.2\times 10^{-5}$
CMAES | $m_{H_{1}^{+}}$ and $m_{H_{2}^{+}}$ focus | 90 out of 100 | $0.95$ | $6.6\times 10^{-3}$
CMAES w/ HT | Parameter space penalty | 92 out of 200 | $0.92$ | $1.1\times 10^{-2}$
CMAES w/ HT | Constraint space penalty | 111 out of 200 | $0.94$ | $1.4\times 10^{-2}$
CMAES w/ HT | $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ focus | 102 out of 200 | $0.84$ | $4.1\times 10^{-2}$
CMAES w/ HT | $\mu_{ggF,\gamma\gamma}$ and $\mu_{ggF,Z\gamma}$ focus | 101 out of 200 | $0.93$ | $4.1\times 10^{-3}$
CMAES w/ HT | $m_{H_{1}^{+}}$ and $m_{H_{2}^{+}}$ focus | 91 out of 200 | $0.98$ | $5.3\times 10^{-2}$

Table 2: Convergence and coverage statistics of the different scans presented in table 1. 
While the first CMAES rows correspond to scans without HiggsTools in the loop, the corresponding fractions of points in both alignment cases are computed using points that have passed HiggsBounds constraints after the scan. As CMAES converges, it will start to find good points. This can be seen in fig. 13, where we observe that after around 100 generations for the runs without `HiggsTools` in the loop, and after around 200 generations for the runs with `HiggsTools` in the loop, CMAES starts finding valid points. Figure 13: Number of valid points found as a function of generation. Only the first 500 generations are shown. The random sampler curves are over random generations of $12$ points, the same population size as CMAES. The shaded regions represent $0.95$ confidence intervals computed using a bootstrap of 100 runs.

Sampling | Before HT: Points | Before HT: Efficiency | After HT: Points | After HT: Efficiency
AL-1 | 21 in $4.4\times 10^{12}$ | $\mathcal{O}(10^{-11})$ | 0 in $4.4\times 10^{12}$ | $<\mathcal{O}(10^{-12})$
AL-2 | 13 701 in $10^{10}$ | $\mathcal{O}(10^{-6})$ | 510 in $10^{10}$ | $\mathcal{O}(10^{-8})$
Random | 23 in $10^{13}$ | $\mathcal{O}(10^{-12})$ | 1 in $10^{13}$ | $\mathcal{O}(10^{-13})$

Table 3: Sampling efficiencies of random sampling strategies. We note that for the completely random scan the numbers are estimated. In fig. 14 we show the distribution of the generation at which the first valid point was found, for the runs with and without `HiggsTools` in the loop. We see that checking for `HiggsBounds` _postpones_ the discovery of the first good point by a factor of around $2$ in terms of generation number. However, this is far better than with random sampling (both purely random and around the alignment limit), where at most $5\%$ of the points that respect all other constraints survive `HiggsBounds` constraints (table 3). 
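The orders-of-magnitude gap between these random-sampling efficiencies and the CMAES trial counts can be reproduced with simple arithmetic (the trial count below is an illustrative mid-range value):

```python
import math

# One CMAES run typically needs O(250-750) generations of 12 points each to
# find its first valid point (without HiggsBounds in the loop).
cmaes_trials = 500 * 12
cmaes_efficiency = 1 / cmaes_trials          # roughly 1.7e-4

random_efficiency = 1e-12                    # estimated, from table 3

gain = math.log10(cmaes_efficiency / random_efficiency)
print(round(gain))  # about 8 orders of magnitude
```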
Additionally, we note that the number of trial points needed to find a good valid point is around $\mathcal{O}(250-750)\times 12$ when not including `HiggsBounds` constraints in the loop, and $\mathcal{O}(400-1300)\times 12$ when including `HiggsBounds` constraints in the loop. Recall that the sampling efficiency of the random sampler is estimated to be $\mathcal{O}(10^{-12})$ and $\mathcal{O}(10^{-13})$, respectively, which means that our methodology improves the sampling efficiency by around 8 orders of magnitude, even before considering the near perfect sampling efficiency after convergence, during the density penalty-guided exploration phase. Unsurprisingly, the improvement over the AL-2 alignment limit sampling strategy is more modest, around four orders of magnitude when considering `HiggsBounds` constraints, but we reiterate that this only pertains to the first convergence and that our algorithm can then continue to explore the region found and go beyond the alignment limit bounds, as was highlighted in the previous section. Figure 14: Distribution of the generation with the first valid point. The distributions shown are obtained from all runs with and without HiggsTools in the loop. So far, we have seen the improvements to convergence provided by CMAES in terms of the number of points tried, observing massive improvements over random sampling. On the other hand, the methodology presented in this work is only useful if it also provides a speed-up in terms of wall time, i.e. the time elapsed from the user's perspective. In fig. 15 we present a variation of fig. 14, but in terms of elapsed time instead of generations. We see that CMAES without `HiggsBounds` constraints in the loop tends to find points within $\mathcal{O}(100-500)$ seconds, i.e. in minutes, while when including `HiggsBounds` constraints this increases to $\mathcal{O}(750-2250)$ seconds. 
The slowdown is easily understood: using `HiggsTools` slows down the evaluation of a point by a factor of around 3 (see below for more details), and since converging on `HiggsBounds` constraints delays CMAES in finding a good point by a factor of around 2 (see the discussion above), we expect an overall wall time delay of around a factor of $6$, compatible with what we see here. Figure 15: Distribution of the elapsed time until the first valid point. The distributions shown are obtained from all runs with and without HiggsTools in the loop. To better understand the impact of the different components of our methodology on the total time, we present in table 4 the times taken by the different steps of the loop for CMAES and the random sampler, with and without `HiggsTools` in the loop. In this table, generation time represents the time needed to perform all the steps of a generation, including evaluation time, i.e. the time needed to compute all observables (including `HiggsBounds`, when applicable), train the density estimator (when applicable), and perform diverse housekeeping tasks such as saving intermediate results and keeping track of run metrics. The overall housekeeping overhead can be assessed from the random sampler rows, as these have no overhead related to CMAES or to density estimation; it is around 0.015 (0.019) seconds without (with) `HiggsTools`. (The larger housekeeping overhead associated with HiggsTools is due to the presence of more metrics to keep track of and larger intermediate files to save.) The most important observation to take from this table is that the overall overhead of our methodology, including that associated with CMAES and density estimation, is at most around $10\%$ of the total generation time for the CMAES runs without `HiggsTools`. 
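The roughly $10\%$ figure can be checked directly against the mean times reported in table 4:

```python
# Mean per-generation times for the CMAES rows of table 4, in seconds.
generation_no_ht, overhead_no_ht = 0.48, 0.055   # without HiggsTools
generation_ht, overhead_ht = 1.6, 0.068          # with HiggsTools

frac_no_ht = overhead_no_ht / generation_no_ht   # ~0.11 of a generation
frac_ht = overhead_ht / generation_ht            # ~0.04: relatively smaller
assert frac_no_ht < 0.12 and frac_ht < 0.05
```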
Once we include `HiggsTools` in the loop, the overall generation time increases 3-fold but the overhead remains mostly the same, showing that our methodology provides even greater gains for problems with a slow evaluation time. Additionally, we see that our choice of HBOS as the density estimator corresponds to a minor fraction of the overhead. One can notice, however, that the standard deviation of the density estimator training time is greater than its mean; this is because HBOS has a linear computational complexity in the number of valid points, effectively becoming slower to train the more valid points we have found. Improvements to the density estimator are left for future work.

Sampler | HT in Loop | Generation (sec) | Evaluation (sec) | Density Estimator Training (sec) | Overhead (sec)
CMAES | False | $0.48\pm 0.16$ | $0.43\pm 0.15$ | $(3.9\pm 23)\times 10^{-3}$ | $0.055\pm 0.043$
CMAES | True | $1.6\pm 0.3$ | $1.6\pm 0.2$ | $(1.9\pm 22)\times 10^{-3}$ | $0.068\pm 0.048$
Random | False | $0.33\pm 0.06$ | $0.31\pm 0.06$ | N/A | $0.015\pm 0.003$
Random | True | $1.4\pm 0.2$ | $1.3\pm 0.2$ | N/A | $0.019\pm 0.012$

Table 4: Comparison of times taken by different parts of the loop for CMAES and the random sampler, with and without HiggsTools in the loop. The times refer to a generation of 12 points in every case. The values presented are the mean $\pm$ one standard deviation over the scans falling in each of the four categories. ## VII Conclusions In this paper, we have developed a novel approach to explore the highly constrained multidimensional parameter space of the $Z_{3}$ 3HDM, defined in section II with the constraints discussed in section III and appendix A, going beyond the alignment limit priors presented in section IV, by combining the exploration power of CMAES with a Machine Learning estimator for point density. 
It is important to note that, while the subject of study in this paper was the $Z_{3}$ 3HDM parameter space, our approach is general and applicable to any Physics case, providing a solution to the difficulty of sampling good points in highly constrained multidimensional parameter spaces. In section V we introduced our strategy, using CMAES, a powerful evolutionary strategy, in combination with HBOS, a fast ML model for density estimation. Our approach guarantees that the density-based _novelty reward_ does not compete in the loss function with the constraints on the model and pushes CMAES to explore the parameter space once converged. Importantly, we showed how our methodology is versatile, as we can turn on the _novelty reward_ in the parameter space or in the observable space, where the phenomenology is realised. Additionally, the _novelty reward_ can be computed by estimating the density in only a subset of parameters and/or observables, allowing for quick focused scans on regions of interest. In section VI we presented the results of multiple scans performed with our methodology, each with a different combination of parameters and/or observables on which the density penalty was computed. We showed how our approach can effortlessly go beyond the alignment limit sampling strategies, finding valid points in regions of the parameter space hitherto ignored by such sampling strategies. More precisely, using the novelty reward in the parameter space, both over all parameters and over subsets, in section VI.1 we found that it is easy to go beyond the alignment limit in the $(\alpha_{1},\beta_{1})$ plane. 
In the same analysis, we showed how our methodology also exposes regions of heavy scalar masses, even though most of those points end up excluded by `HiggsBounds`. We explored this further in section VI.1.1 by restricting the parameter space to $m_{H^{+}_{1,2}}\leq 150$ GeV, finding a considerably different picture of that region of the parameter space than the one obtained from alignment limit sampling. While we set out to explore the parameter space with CMAES combined with a _novelty reward_, the Physics of the model resides in its space of observables and physical quantities. In section VI.2 we presented the results of scans where the density penalty was computed in the observable space instead of the parameter space. The results uncover novel possible phenomenological realisations of the 3HDM, an important contribution of this work that would not have been possible without our AI-based scan. In particular, we find that it is possible to accommodate Higgs decay signal strengths larger than one, up to their current upper experimental bounds, a phenomenological signature not captured by alignment limit random sampling strategies. Given the versatility in exploring different observables, we set out to study whether the $Z_{3}$ 3HDM can explain the recent measurement of $\mu_{Z\gamma}\simeq 2.2$ by ATLAS and CMS [48], finding that $\mu_{Z\gamma}\lesssim 1.1$ in the $Z_{3}$ 3HDM. This is not a surprising result, given that decay signal strengths are highly correlated in this model and one cannot get arbitrarily high values for one of them without spoiling the experimental measurements of the remainder (for other possibilities see, for instance, the discussion in [49]). Finally, in section VI.4 we discussed the convergence metrics of our algorithm and compared them with the pure and alignment limit random sampling strategies. 
Our methodology is orders of magnitude faster (both in number of tried points and in wall time) than the random sampling strategies, providing a solution to the random sampling efficiency problem in highly constrained multidimensional parameter spaces. Although the methodology presented in this work provides an impressive speedup and efficiency improvement when compared to random sampling strategies, we have encountered some shortcomings and less appealing characteristics that we want to improve in future work. First, the methodology used in the analyses employs independent runs, each with its own density estimator from which the _novelty reward_ is derived. An alternative approach is to share information across runs to ensure the novelty of the exploration. In section VI.3 we showed how such a strategy could be implemented: we identified the least populated parts of the valid region of the parameter space (in the $(m_{H^{+}_{1}},m_{H^{+}_{2}})$ subspace) found by our scans and then used some of the points in that region as seeds for new runs. The resulting new points were significantly different from the ones found previously, highlighting the potential for even further exploration by chaining runs together, a methodological detail that can be improved in the future. Second, in the same study, we encountered some artefacts arising from the _binning_ nature of HBOS, which can, in principle, be mitigated by using a different density estimator (or a different novelty detector). In our early exploration, we tried a variety of alternatives, all significantly slower than HBOS, which would make our methodology impractical. Producing a better way of assigning the _novelty reward_ could solve the binning problem and any manifestation of the _curse of dimensionality_ produced by it. Third, we have observed that our methodology might not explore all possible regions, as CMAES intuitively follows a _path of fastest descent_. 
This was particularly clear in section VI.1.1, where we addressed the overlooked region of small charged scalar masses. By restricting the parameter space, we were able to populate that region easily, but the fact that it was not explored in the first place shows that we need to be careful when interpreting _empty_ regions as regions without valid points. Lastly, we have observed that the geometrical and topological details of the valid region of the parameter space might impact the possible exploration paths of CMAES. This can have a profound effect on the results when there are disjoint regions of the parameter space supporting good points. We leave to future work the development of a way to confidently assess whether the scans are capable of capturing multimodal valid regions. Finally, our methodology opens up the possibility of a complete exploration of other $N$HDM (or any other BSM Physics) parameter spaces in light of the current highly constraining experimental results and theoretical conditions. As our work shows, this could lead to novel phenomenological realisations of these models and, ultimately, to the possibility of novel experimental signatures. We leave this phenomenological study for the future. ## Acknowledgments We would like to welcome to this world and dedicate this work to Leo David Nascimento Crispim Romão. We are very thankful to Fernando Abreu de Souza, Nuno Filipe Castro, Andreas Karle, and Werner Porod for valuable and fruitful discussions. MCR is supported by the STFC under Grant No. ST/T001011/1. MCR thanks the Southampton HEP group for the hospitality and access to the infrastructure. This work is supported in part by the Portuguese Fundação para a Ciência e Tecnologia (FCT) under Contracts CERN/FIS-PAR/0002/2021, UIDB/00777/2020, and UIDP/00777/2020; these projects are partially funded through POCTI (FEDER), COMPETE, QREN, and the EU. 
## Appendix A Description of the various constraints In this appendix, we summarise the constraints that have to be satisfied for a point in parameter space to be considered a valid point. As these have already been discussed in great detail in a series of papers [10, 12, 16], here we just give a brief review and indicate where to look for further information. We list the constraints in the order in which they are applied in the code. ### A.1 The $\kappa$’s formalism We found it useful to select points that are already close to the LHC constraints using the $\kappa$’s formalism. We require them to be within 3$\sigma$ of the LHC data [50]. The expressions for the $\kappa$’s for the different types of fermion couplings in the 3HDM are given in [12]. As in this work we consider only Type I, we have for the fermions, $\kappa_{U}=\frac{\sin(\alpha_{2})}{\sin(\beta_{2})},\quad\kappa_{D}=\frac{\sin(\alpha_{2})}{\sin(\beta_{2})},\quad\kappa_{L}=\frac{\sin(\alpha_{2})}{\sin(\beta_{2})}.$ (35) The couplings with the vector bosons give, for all types, $\kappa_{W}=\cos(\alpha_{2})\cos(\alpha_{1}-\beta_{1})\cos(\beta_{2})+\sin(\alpha_{2})\sin(\beta_{2})\,,$ (36) which gives $\kappa_{W}=1$ when $\alpha_{1}=\beta_{1}$ and $\alpha_{2}=\beta_{2}$. We should note that the points are subsequently tested against the signal strengths, so this constraint is applied with a large interval (3$\sigma$) just to make the selection faster. ### A.2 Bounded From Below (BFB) The scalar potential has to be BFB. As explained in ref. [12], finding the necessary and sufficient conditions for this to happen is a difficult task. For the 3HDM, they are only known for a few cases with high symmetry in the potential. For the $Z_{3}$ 3HDM that we consider here, the best we can do is to use sufficient conditions. We refer to ref. [12] for the details of the implementation. 
### A.3 Oblique parameters $S,T,U$ To discuss the effect of the electroweak precision parameters $S$, $T$ and $U$, we use the expressions in [33] and the experimental summary in [34, 51]. The expressions for the needed matrices $V$ ($3\times 6$) and $U$ ($3\times 3$) are [12], $V=\begin{pmatrix}i\textbf{P}^{T}_{11}&\textbf{R}^{T}_{11}&\textbf{R}^{T}_{12}&\textbf{R}^{T}_{13}&i\textbf{P}^{T}_{12}&i\textbf{P}^{T}_{13}\\\ i\textbf{P}^{T}_{21}&\textbf{R}^{T}_{21}&\textbf{R}^{T}_{22}&\textbf{R}^{T}_{23}&i\textbf{P}^{T}_{22}&i\textbf{P}^{T}_{23}\\\ i\textbf{P}^{T}_{31}&\textbf{R}^{T}_{31}&\textbf{R}^{T}_{32}&\textbf{R}^{T}_{33}&i\textbf{P}^{T}_{32}&i\textbf{P}^{T}_{33}\end{pmatrix},$ (37) and $U=\textbf{Q}^{T},$ (38) where the matrices $\textbf{R},\textbf{P},\textbf{Q}$ were defined before. ### A.4 Unitarity A valid point in the parameter space must also satisfy the perturbative unitarity constraints. These can be expressed as constraints on the $\lambda_{i}$ potential parameters. For the different symmetry-constrained 3HDMs these are fully given in [11], to which we refer the reader for further details. ### A.5 The signal strengths $\mu_{ij}$ The LHC results on the 125 GeV Higgs boson are normally given in terms of the signal strengths, $\displaystyle\mu_{if}=\left(\frac{\sigma_{i}^{\text{3HDM}}(pp\to h)}{\sigma_{i}^{\text{SM}}(pp\to h)}\right)\left(\frac{\text{BR}^{\text{3HDM}}(h\to f)}{\text{BR}^{\text{SM}}(h\to f)}\right)\,,$ (39) where the subscript ‘$i$’ denotes the production mode and the subscript ‘$f$’ denotes the decay channel of the SM-like Higgs scalar. The relevant production mechanisms include gluon fusion ($ggF$), vector boson fusion ($VBF$), associated production with a vector boson ($VH$, $V=W$ or $Z$), and associated production with a pair of top quarks ($ttH$). The SM cross section for the gluon fusion process is calculated using HIGLU [52], and for the other production mechanisms we use the prescription of Ref. [53]. 
The calculated $\mu_{if}$ are required to be within 2$\sigma$ of the LHC results [32]. ### A.6 Constraints from flavour data In the Type-I 3HDM there are, by construction, no FCNCs at tree level. Therefore, the only new-physics contribution at one-loop order to observables such as $b\to s\gamma$ and the neutral meson mass differences will come from the charged scalar Yukawa couplings. We follow [9], where it was shown that the constraints coming from the meson mass differences tend to exclude very low values of $\tan\beta_{1,2}$. Therefore, we only consider $\tan\beta_{1,2}>0.3\,,$ (40) to safeguard ourselves from the constraints coming from the neutral meson mass differences. To deal with the constraints resulting from $b\to s\gamma$, we follow the procedure described in Refs. [39, 10, 54] and impose the restriction $2.87\times 10^{-4}<\text{BR}(B\to X_{s}\gamma)<3.77\times 10^{-4}\,,$ (41) which represents the $3\sigma$ experimental limit. As in the 2HDM, for the case of Type-I, this does not put strong constraints on the charged Higgs masses. ### A.7 Perturbativity of the Yukawa couplings We need to ensure the perturbativity of the Yukawa couplings. For the Type-I Yukawa structure, the top, bottom, and tau Yukawa couplings are given by $\displaystyle y_{t}=\frac{\sqrt{2}\,m_{t}}{v\sin\beta_{2}}\;,\quad y_{b}=\frac{\sqrt{2}\,m_{b}}{v\sin\beta_{2}}\;,\quad y_{\tau}=\frac{\sqrt{2}\,m_{\tau}}{v\sin\beta_{2}}\;,$ (42) which follow from our convention that only $\phi_{3}$ couples to up-type quarks, down-type quarks, and charged leptons. To maintain the perturbativity of the Yukawa couplings, we impose $\lvert y_{t}\rvert,\lvert y_{b}\rvert,\lvert y_{\tau}\rvert<\sqrt{4\pi}$. For our case, these constraints are all satisfied once we take into account the lower bound on $\tan\beta_{2}$ in eq. (40). ## References * [1] G.C. Branco, P.M. Ferreira, L. Lavoura, M.N. Rebelo, M. Sher and J.P. Silva, _Theory and phenomenology of two-Higgs-doublet models_ , _Phys. 
Rept._ 516 (2012) 1 [1106.0034]. * [2] V. Keus, S.F. King and S. Moretti, _Three-Higgs-doublet models: symmetries, potentials and Higgs boson masses_ , _JHEP_ 01 (2014) 052 [1310.8253]. * [3] I.P. Ivanov and E. Vdovin, _Classification of finite reparametrization symmetry groups in the three-Higgs-doublet model_ , _Eur. Phys. J. C_ 73 (2013) 2309 [1210.6553]. * [4] A. Pilaftsis, _Symmetries for standard model alignment in multi-Higgs doublet models_ , _Phys. Rev. D_ 93 (2016) 075012 [1602.02017]. * [5] A.G. Akeroyd, S. Moretti, K. Yagyu and E. Yildirim, _Light charged Higgs boson scenario in 3-Higgs doublet models_ , _Int. J. Mod. Phys. A_ 32 (2017) 1750145 [1605.05881]. * [6] D. Das and I. Saha, _Alignment limit in three Higgs-doublet models_ , _Phys. Rev. D_ 100 (2019) 035021 [1904.03970]. * [7] J.M. Alves, F.J. Botella, G.C. Branco and M. Nebot, _Extending trinity to the scalar sector through discrete flavoured symmetries_ , _Eur. Phys. J. C_ 80 (2020) 710 [2005.13518]. * [8] H.E. Logan, S. Moretti, D. Rojas-Ciofalo and M. Song, _CP violation from charged Higgs bosons in the three Higgs doublet model_ , _JHEP_ 07 (2021) 158 [2012.08846]. * [9] M. Chakraborti, D. Das, M. Levy, S. Mukherjee and I. Saha, _Prospects for light charged scalars in a three-Higgs-doublet model with Z3 symmetry_ , _Phys. Rev. D_ 104 (2021) 075033 [2104.08146]. * [10] R. Boto, J.C. Romão and J.P. Silva, _Current bounds on the type-Z Z3 three-Higgs-doublet model_ , _Phys. Rev. D_ 104 (2021) 095006 [2106.11977]. * [11] M.P. Bento, J.C. Romão and J.P. Silva, _Unitarity bounds for all symmetry-constrained 3HDMs_ , _JHEP_ 08 (2022) 273 [2204.13130]. * [12] R. Boto, J.C. Romão and J.P. Silva, _Bounded from below conditions on a class of symmetry constrained 3HDM_ , _Phys. Rev. D_ 106 (2022) 115010 [2208.01068]. * [13] R. Plantey, O.M. Ogreid, P. Osland, M.N. Rebelo and M.A. Solberg, _Weinberg’s 3HDM potential with spontaneous CP violation_ , _Phys. Rev. D_ 108 (2023) 075029 [2208.13594]. 
* [14] D. Das, M. Levy, P.B. Pal, A.M. Prasad, I. Saha and A. Srivastava, _Democratic three-Higgs-doublet models: The custodial limit and wrong-sign Yukawa coupling_ , _Phys. Rev. D_ 107 (2023) 055035 [2301.00231]. * [15] A. Kunčinas, O.M. Ogreid, P. Osland and M.N. Rebelo, _Complex S 3-symmetric 3HDM_, _JHEP_ 07 (2023) 013 [2302.07210]. * [16] R. Boto, D. Das, L. Lourenco, J.C. Romao and J.P. Silva, _Fingerprinting the type-Z three-Higgs-doublet models_ , _Phys. Rev. D_ 108 (2023) 015020 [2304.13494]. * [17] M. Feickert and B. Nachman, _A Living Review of Machine Learning for Particle Physics_ , 2102.02770. * [18] S. Caron, J.S. Kim, K. Rolbiecki, R. Ruiz de Austri and B. Stienen, _The BSM-AI project: SUSY-AI–generalizing LHC limits on supersymmetry with machine learning_ , _Eur. Phys. J. C_ 77 (2017) 257 [1605.02797]. * [19] J. Ren, L. Wu, J.M. Yang and J. Zhao, _Exploring supersymmetry with machine learning_ , _Nucl. Phys. B_ 943 (2019) 114613 [1708.06615]. * [20] F. Staub, _xBIT: an easy to use scanning tool with machine learning abilities_ , 1906.03277. * [21] B.S. Kronheim, M.P. Kuchera, H.B. Prosper and A. Karbo, _Bayesian Neural Networks for Fast SUSY Predictions_ , _Phys. Lett. B_ 813 (2021) 136041 [2007.04506]. * [22] A. Hammad, M. Park, R. Ramos and P. Saha, _Exploration of parameter spaces assisted by machine learning_ , _Comput. Phys. Commun._ 293 (2023) 108902 [2207.09959]. * [23] S. Caron, T. Heskes, S. Otten and B. Stienen, _Constraining the Parameters of High-Dimensional Models with Active Learning_ , _Eur. Phys. J. C_ 79 (2019) 944 [1905.08628]. * [24] M.D. Goodsell and A. Joury, _Active learning BSM parameter spaces_ , _Eur. Phys. J. C_ 83 (2023) 268 [2204.13950]. * [25] J. Hollingsworth, M. Ratz, P. Tanedo and D. Whiteson, _Efficient sampling of constrained high-dimensional theoretical spaces with machine learning_ , _Eur. Phys. J. C_ 81 (2021) 1138 [2103.06957]. * [26] J. Baretz, N. Carrara, J. Hollingsworth and D. 
Whiteson, _Visualization and efficient generation of constrained high-dimensional theoretical parameter spaces_ , _JHEP_ 11 (2023) 062 [2305.12225]. * [27] F.A. de Souza, M. Crispim Romão, N.F. Castro, M. Nikjoo and W. Porod, _Exploring parameter spaces with artificial intelligence and machine learning black-box optimization algorithms_ , _Phys. Rev. D_ 107 (2023) 035004 [2206.09223]. * [28] H. Georgi and D.V. Nanopoulos, _Suppression of Flavor Changing Effects From Neutral Spinless Meson Exchange in Gauge Theories_ , _Phys. Lett. B_ 82 (1979) 95. * [29] J.F. Donoghue and L.F. Li, _Properties of Charged Higgs Bosons_ , _Phys. Rev. D_ 19 (1979) 945. * [30] F.J. Botella and J.P. Silva, _Jarlskog - like invariants for theories with scalars and fermions_ , _Phys. Rev. D_ 51 (1995) 3870 [hep-ph/9411288]. * [31] R. Boto, _Symmetry-constrained Multi-Higgs Doublet Models_ , Master’s thesis, IST, Univ. Lisbon, 19 January 2021, * [32] ATLAS collaboration, _A detailed map of Higgs boson interactions by the ATLAS experiment ten years after the discovery_ , _Nature_ 607 (2022) 52 [2207.00092]. * [33] W. Grimus, L. Lavoura, O.M. Ogreid and P. Osland, _A Precision constraint on multi-Higgs-doublet models_ , _J. Phys._ G35 (2008) 075001 [0711.4022]. * [34] Gfitter Group collaboration, _The global electroweak fit at NNLO and prospects for the LHC and ILC_ , _Eur. Phys. J. C_ 74 (2014) 3046 [1407.3792]. * [35] H. Bahl, T. Biekötter, S. Heinemeyer, C. Li, S. Paasch, G. Weiglein et al., _HiggsTools: BSM scalar phenomenology with new versions of HiggsBounds and HiggsSignals_ , _Comput. Phys. Commun._ 291 (2023) 108803 [2210.09332]. * [36] D. Fontes and J.C. Romao, _FeynMaster: a plethora of Feynman tools_ , _Comput. Phys. Commun._ 256 (2020) 107311 [1909.05876]. * [37] D. Fontes and J.C. Romão, _Renormalization of the C2HDM with FeynMaster 2_ , _JHEP_ 06 (2021) 016 [2103.06281]. * [38] D. Fontes, J.C. Romão and J.P. 
Silva, _$h\rightarrow Z\gamma$ in the complex two Higgs doublet model_, _JHEP_ 12 (2014) 043 [1408.2534]. * [39] R.R. Florentino, J.C. Romão and J.P. Silva, _Off diagonal charged scalar couplings with the Z boson: Zee-type models as an example_ , _Eur. Phys. J. C_ 81 (2021) 1148 [2106.08332]. * [40] N. Darvishi and A. Pilaftsis, _Classifying Accidental Symmetries in Multi-Higgs Doublet Models_ , _Phys. Rev. D_ 101 (2020) 095008 [1912.00887]. * [41] N. Hansen, _The CMA evolution strategy: a comparing review_ , Springer (2006). * [42] N. Hansen, _The CMA Evolution Strategy: A Tutorial_ , 1604.00772. * [43] M. Goldstein and A.R. Dengel, _Histogram-based outlier score (hbos): A fast unsupervised anomaly detection algorithm_ , 2012, https://api.semanticscholar.org/CorpusID:3590788. * [44] M. Crispim Romão, N.F. Castro and R. Pedro, _Finding New Physics without learning about it: Anomaly Detection as a tool for Searches at Colliders_ , _Eur. Phys. J. C_ 81 (2021) 27 [2006.05432]. * [45] F.-A. Fortin, F.-M. De Rainville, M.-A.G. Gardner, M. Parizeau and C. Gagné, _Deap: Evolutionary algorithms made easy_ , _The Journal of Machine Learning Research_ 13 (2012) 2171. * [46] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel et al., _Scikit-learn: Machine learning in python_ , _the Journal of machine Learning research_ 12 (2011) 2825. * [47] Y. Zhao, Z. Nasrullah and Z. Li, _Pyod: A python toolbox for scalable outlier detection_ , 1901.01588. * [48] ATLAS collaboration, _Evidence for the Higgs boson decay to a Z boson and a photon at the LHC_ , ATLAS-CONF-2023-025, CERN, Geneva (May, 2023). * [49] R. Boto, D. Das, J.C. Romao, I. Saha and J.P. Silva, _New physics interpretations for nonstandard values of $h\to Z\gamma$_, 2312.13050. * [50] ATLAS Collaboration collaboration, _Combined measurements of Higgs boson production and decay using up to 80 fb -1 of proton–proton collision data at $\sqrt{s}=$ 13 TeV collected with the ATLAS experiment_, Tech. Rep. 
ATLAS-CONF-2018-031, CERN, Geneva (Jul, 2018). * [51] Particle Data Group collaboration, _Review of Particle Physics_ , _PTEP_ 2022 (2022) 083C01. * [52] M. Spira, _HIGLU: A program for the calculation of the total Higgs production cross-section at hadron colliders via gluon fusion including QCD corrections_ , hep-ph/9510347. * [53] LHC Higgs Cross Section Working Group collaboration, _Handbook of LHC Higgs Cross Sections: 4. Deciphering the Nature of the Higgs Sector_ , 1610.07922. * [54] A.G. Akeroyd, S. Moretti, T. Shindou and M. Song, _CP asymmetries of ${\overline{B}}\to X_{s}/X_{d}\gamma$ in models with three Higgs doublets_, _Phys. Rev. D_ 103 (2021) 015035 [2009.05779].
# Characterization and thermometry of dissipatively stabilized steady states G. S. Grattan A. M. Liguori-Schremp<EMAIL_ADDRESS>D. Rodriguez Perez E. Kapit Physics Department, Colorado School of Mines 1523 Illinois Street Golden, CO 80401 W. Jones P. Graf National Renewable Energy Laboratory 15013 Denver W Pkwy Golden, CO 80401 ###### Abstract In this work we study the properties of dissipatively stabilized steady states of noisy quantum algorithms, exploring the extent to which they can be well approximated as thermal distributions, and proposing methods to extract the effective temperature T. We study an algorithm called the Relaxational Quantum Eigensolver (RQE), which is one of a family of algorithms that attempt to find ground states and balance error in noisy quantum devices. In RQE, we weakly couple a second register of auxiliary “shadow” qubits to the primary system in Trotterized evolution, thus engineering an approximate zero-temperature bath by periodically resetting the auxiliary qubits during the algorithm’s runtime. Balancing the infinite temperature bath of random gate error, RQE returns states with an average energy equal to a constant fraction of the ground state. We probe the steady states of this algorithm for a range of base error rates, using several methods for estimating both T and deviations from thermal behavior. In particular, we both confirm that the steady states of these systems are often well-approximated by thermal distributions, and show that the same resources used for cooling can be adopted for thermometry, yielding a fairly reliable measure of the temperature. These methods could be readily implemented in near-term quantum hardware, and for stabilizing and probing Hamiltonians where simulating approximate thermal states is hard for classical computers. ††preprint: Quantum Science & Tech. 
2024 - RQE ## I Introduction Many important applications of quantum computation involve preparing and/or approximating the ground state of quantum Hamiltonians, for which many algorithms have been proposed, ranging from variational methods such as the Variational Quantum Eigensolver (VQE) [1, 2] to techniques based on adiabatic processes [3, 4] to the implementation of effective imaginary time evolution [5, 6]. All these methods present substantial drawbacks or limitations: for variational methods, performance depends on the quality of the variational ansatz [7, 8] and on classical optimization to minimize the energy; for adiabatic processes, the convergence time grows without bound with system size when phase transitions are encountered along the trajectory; and imaginary time evolution requires intermediate steps of tomography and costly classical post-processing. On present and near-term, noisy quantum computers, all of these problems pale in comparison to the issue of noise [9, 10, 11]. As current and near-term quantum devices are irreducibly noisy, controlling the strength and nature of the coupling of a quantum system to the environment is of paramount importance in the development of quantum technology. Though counter-intuitive at first glance, coupling to the environment can actually yield substantial advantages when the system-bath couplings and bath structure are tuned appropriately [12, 13]. One might expect the coupling to the environment to increase the system’s entropy, and thus to reduce its “quantum efficiency.” And indeed, the reduction of coupling to uncontrolled degrees of freedom of the environment has enabled much of the progress in quantum information processing, particularly in solid state systems [14, 15, 16]. However, carefully engineering the bath coupled to a system in specific ways can effectively reduce the system’s entropy. 
To this end, there are different approaches, including cases in which the bath is effectively very cold and thus acts as an entropy and/or energy “dump” [17, 18], or in which it plays the role of a Maxwell’s demon, as in digital error correction codes [19, 20]. Indeed, dissipatively stabilized states, and thermal states in particular, are a novel alternative to fault-tolerant error correction in near-term devices, with important theoretical and experimental developments in the last few years [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. Further, there is evidence that these states can be very difficult to reproduce classically [42, 41, 43]. Significant questions remain about the detailed structure of these states, and to find practical uses for them it is important to better understand them. An effective way to study them is within the framework of quantum thermodynamics, an emerging field of physics that unites concepts of quantum information with concepts from thermodynamics such as entropy, work, heat, and temperature [44, 45, 46]. Some practical applications include, for instance, temperature estimation and improved metrology methods within the context of quantum thermometry using “impurity probes” in many-body systems [47, 48, 49, 50, 51, 52, 53, 54, 55]. In this work, using extensive numerical simulations and theory, we explore dissipatively stabilized states within the framework of quantum thermodynamics, by applying our algorithm to two 1D many-body Hamiltonians, namely the ferromagnetic Ising model with a transverse field and the Heisenberg model on a ring. The algorithm presented here is called the Relaxational Quantum Eigensolver (RQE), consistent with its first proposal in [56, 57], and is formulated with near-term devices in mind. 
We first show that the steady state reached by our algorithm returns an average energy equal to a constant fraction of the ground state energy (assuming random states have energy zero), a fraction which steadily improves as the error rate decreases. Then, we find that in many cases, dissipatively stabilized states obtained from our algorithm are well approximated by thermal states, with a temperature set by balancing error rates. In general, we do not know a priori that the dissipatively stabilized states are thermal; indeed, undoubtedly there are cases in which they would not be. However, for the two 1D many-body Hamiltonians considered in our work, it is a reasonable assumption, which we confirm through detailed simulations in this work. But even when the steady state of the system is well approximated as thermal [58, 59, 60, 45, 61], the question of how to measure that temperature is nontrivial. This work presents three methods to characterize the temperature of the dissipatively stabilized state obtained from our algorithm, one of which efficiently infers it from the populations of the qubits used as the cold bath. Moreover, this new thermometry technique is particularly resource efficient, as it requires only a small bath while also providing a way to estimate a system’s temperature without the need for computationally expensive methods based on exact diagonalization, or experimentally expensive procedures such as full state tomography. In the remainder of this paper, we first describe the algorithm used to study the steady state of two many-body Hamiltonians of interest. Then, we present the results obtained to characterize the dissipatively stabilized states, first showing the approximation of the ground-state energy and then presenting our thermometry studies. 
In doing so we benchmark our new method, quantify its accuracy and identify potential mechanisms that can cause it to report overly high temperatures (relative to the real effective temperature of the many-body state). Finally, we draw conclusions and offer an outlook for future work. ## II RQE Algorithm Description To set up our RQE algorithm, we first choose a quantum many-body Hamiltonian $H_{P}$ that governs the primary qubits and that we wish to minimize. We then couple the primary qubits weakly to a second register of $N_{S}=N_{P}/2$ ancillary, or “shadow”, qubits [17, 62, 34, 39]; the shadow qubits act as an engineered bath that removes excitations from the “primary” system, acting as an approximate error correction mechanism. The shadow qubit Hamiltonian $H_{S}$ is given by $H_{S}=\sum_{j=1}^{N_{S}}\frac{\omega_{S_{j}}}{2}\sigma^{z}_{S_{j}}$ (1) with $\omega_{S_{j}}$ being the energy of the $j$-th shadow qubit; the coupling between primary and shadow qubits is described by the Hamiltonian $H_{PS}(t)=\sum_{jk}\Omega_{jk}(t)O_{P_{j}}O_{S_{k}}$ (2) with $\Omega_{jk}(t)$ being the primary-shadow interaction energy coupling primary qubit $j$ to shadow qubit $k$, and $O_{P_{j}}$, $O_{S_{k}}$ Pauli operators acting on the $j$-th primary qubit and $k$-th shadow qubit, respectively. Unless otherwise specified, the Pauli operators for the primary-shadow interactions in this work are taken as $\sigma^{y}_{P_{j}}\sigma^{y}_{S_{k}}$. The total system Hamiltonian is $H_{T}(t)=H_{P}+H_{S}+H_{PS}(t)$. In order to implement time evolution under the (total) Hamiltonian on a gate-model quantum computer, we must Trotterize it with a small timestep $dt$. 
If $dt$ is sufficiently small, though, we can approximate evolution by the so-called Trotter decomposition $e^{iH(t)dt}\ket{\psi}\approx\big{(}e^{iH_{P}dt}e^{iH_{S}dt}e^{iH_{PS}dt}+O(dt^{2})\big{)}\ket{\psi}$ (3) This Trotter decomposition allows us to implement a continuous time evolution Hamiltonian with a digitized, gate-based model. As we are focused on near-term NISQ implementations, higher-order approaches [63] that can be implemented to mitigate Trotter error are counterproductive here due to gate error, and instead we use this simpler layering structure. ### II.1 RQE algorithm details A schematic of the RQE algorithm is shown in Figure 1: we take the initial state of the composite system as $\ket{\psi_{in}}\equiv\ket{\psi_{0,P}}\otimes\ket{0_{S}\cdots 0_{S}}$ with the shadow qubits prepared in the totally polarized state $\ket{0\cdots 0}$. Then, at time $t=0$, the following sequence of gates is applied (we refer to this sequence as a single algorithmic layer) [56]: 1. Evolve the system under $\ket{\psi}\rightarrow e^{-2i\pi dtH_{P}}\ket{\psi}$, appropriately Trotterized; 2. Apply the $Z$ rotations which define the shadow qubit Hamiltonian, $\ket{\psi}\rightarrow e^{-2i\pi dtH_{S}}\ket{\psi}$; this can be done in parallel with step (1); 3. Apply the primary-shadow qubit interaction term, $\ket{\psi}\rightarrow e^{-2i\pi dtH_{PS}(t)}\ket{\psi}$; 4. If $t$ is a multiple of $t_{PS}$, then reset all the shadow qubits to their ground state $\ket{0\cdots 0}$; by periodically resetting the auxiliary qubits during the algorithm’s runtime we can effectively engineer an approximate zero-temperature bath which balances the effects of the infinite temperature bath from random gate error; 5. If $t=t_{f}$, then halt and enact the appropriate gate sequence to measure $H_{P}$; otherwise, update $t\rightarrow t+dt$ and return to step (1). 
To study the thermal properties of the system, a thermometry step (shown schematically in Figure 2) is implemented by performing a measurement at the end of the last reset cycle and obtaining the shadow qubit population for state $\ket{1}$ instead of resetting the shadow qubits to state $\ket{0\cdots 0}$. This thermometry step is performed only at the end, not during the algorithm, because the shadow qubit energy is being varied in sweeps, which is more efficient for cooling, but makes it impossible to determine how much energy was extracted from the primary system when a $\ket{1}$ is measured. This step is used in the third method to evaluate the primary system’s temperature by fitting the average temperature $\langle T\rangle$ from the Fermi-Dirac distribution, as explained below, and requires multiple time evolutions. This thermometry step is key to the analysis of the dissipatively stabilized states obtained from our algorithm, and we compare the thermometry results with two other methods used as benchmarks to characterize the temperature of the steady state of our RQE algorithm. The details of the three methods to estimate the temperature are explained in the next section. Figure 1: Schematic of the RQE algorithm (top) with high-level details of the algorithm implementation through Trotterized evolution (bottom): the entire bottom figure is inside each $RQE_{n}$ cycle. $H_{P}$ is the problem Hamiltonian acting on the primary qubit register, $H_{S}(t)$ acts on the shadow qubit register, and $H_{PS}(t)$ is the Hamiltonian governing the interaction between our primary and shadow qubit registers. We Trotterize the total Hamiltonian $H(t)=H_{P}+H_{S}(t)+H_{PS}(t)$ and evolve for some period of time before resetting the shadow qubits to the $|0\rangle$ state. 
We can tune terms in $H_{S}(t)$ and $H_{PS}(t)$ to optimize the transfer of energy and entropy from our primary qubits to our shadow qubits, which are subsequently reset to reduce the energy of the combined system. Figure 2: Schematic of the thermometry step where we sample shadow-qubit energies $\omega_{Sj}$ from a uniform distribution (top left), run RQE, and measure the shadow qubits rather than resetting them (bottom). By repeating this we can extract a thermal distribution relating the probability of measuring an excited shadow qubit to its corresponding energy (top right). Figure 3: Ratio between the energy expectation value $\langle H_{P}\rangle$ for the steady state of the RQE algorithm and the energy of the ground state $E_{GS}$ of the antiferromagnetic Heisenberg chain, as a function of ring size $N_{P}$ for $N_{r}=40$ (left) and number of resets $N_{r}$ for $N_{P}=10$ (right), for different values of the two-qubit gate error rate $p_{2}$. For the problems studied in this work, random states have energy zero. Figure 4: Thermometry of the dissipatively stabilized Heisenberg chain. Energy expectation value $\langle H_{P}\rangle$ as a function of temperature $T$ (left plot); fidelity measure $F_{KL}$ as a function of temperature $T$ (right plot); for both plots, the number of primary qubits is $N_{P}=10$ and we show results for two values of error rate $p_{2}=0.0001,\,0.001$ and two circuit depths ($N_{r}=10,\,40$). The variational parameters of the RQE algorithm are: * Algorithm time variable $t$ and global timestep $dt$; the latter is defined relative to the problem Hamiltonian: if in each timestep the system evolves via an appropriately Trotterized $\ket{\psi}\rightarrow e^{-i\alpha H_{P}}$, then $\alpha\equiv 2\pi dt$. 
Note that $dt$ must be chosen carefully to balance Trotter error and gate error, so in this work, we chose $dt$ to be small enough to eliminate Trotter error while large enough to meaningfully evolve the system [64]; * Runtime of the algorithm $t_{f}$, which defines the number of algorithmic layers of gates $N_{L}\equiv t_{f}/dt$; * Time duration of a primary-shadow interaction pulse, $t_{PS}$, in which the primary-shadow interaction is ramped up and down before the shadow qubits are measured or reset; this defines the number of algorithm layers per pulse $N_{L,PS}\equiv t_{PS}/dt$. For the cycles per reset in this work, after trying a few different options, we found that for our given $dt$, 20 was a reasonable value, yielding an evolution long enough for the system to thermalize without proliferating errors dominating the system. In comparison to other work [39], we use longer cycles, the coupling in our algorithm is ramped up and down (instead of kept constant), and we focus on slightly lower levels of simulated noise; * Number of times the shadow qubits are reset to their ground state, $N_{r}$; * Shadow-qubit energies $\omega_{S_{j}}$: in our normal RQE algorithm, they are chosen similarly to [34], performing a downhill sweep for these energy values from 6 to 0.75 for $3/4$ of the evolution time and then holding the values at 0.75 for the remainder of the time. 
However, in a thermometry step as seen in Figure 2, we randomly sample shadow-qubit energies from a uniform distribution, $\omega_{Sj}\sim U(1,6)$, enabling us to extract the distributions shown in Figures 2 & 5; * Primary-shadow interaction energies $\Omega_{jk}(t)$: for this interaction energy coupling primary-qubit $j$ to shadow-qubit $k$ we use a function that smoothly ramps up and down, specifically implementing a dome function $\Omega_{PS}(t)\equiv 4t(1-t)$, where $t\equiv(k+1)/(N_{L,PS}+1)$ with $k$ being the Trotter step index before reset; in our simulations we take this dome function to have a phase sum (i.e. area under the curve) of $\sum_{t=0}^{t_{PS}}\Omega_{PS}(t)dt=\pi/4$; * Error rate per operation $p_{j}$: generally, this includes the single-qubit error rate $p_{1}$, two-qubit gate error rate $p_{2}$, measurement error rate $p_{M}$, and reset error rate $p_{R}$. Typically $p_{1}\approx p_{2}/10$ as a phenomenological model (more structured error models, such as the loss/dephasing model in transmon qubits, can potentially be corrected more efficiently); $p_{2}$ represents the error per composite two-qubit unitary, e.g. $e^{i\gamma\sigma^{z}_{j}\sigma^{z}_{k}}$. In particular, in our algorithm we simulate depolarizing noise by randomly choosing one qubit for each 2-qubit gate appended and applying one of the Pauli matrices with probability $p_{2}/3$. The parameter choices of our algorithm balance algorithmic efficiency and efficacy. The choices of $dt$, $N_{L,PS}$, and $N_{r}$ are directly related to circuit depth, the simulation runtime, and Trotter error. The initial values of many parameters were intuitively guided and later refined through variational methods to achieve the results presented in this work. One of the key factors was finding the balance between the Trotter timestep $dt$ and the number of layers in an RQE cycle $N_{L,PS}$ to minimize Trotter error while minimizing the expectation value of the prepared state. 
In our work we found that $dt=0.0667$ and $N_{L,PS}=20$ balanced these criteria fairly well, though obviously the best choice depends on the problem Hamiltonian $H_{P}$, base error rate and other details. Additionally, our parameter choices for the shadow qubits and reset rates dictate how well our engineered bath can cool the system. Similarly, the initial choices of the shadow qubit energies $\omega_{S_{j}}$, reset count $N_{r}$, and coupling strengths $\Omega_{jk}(t)$ were theoretically guided and later varied during experimentation to optimize algorithmic efficacy. ## III Ground state approximation We first consider the degree to which RQE is able to approximate the ground states of these systems. The first system we consider is a ring of $N_{P}$ primary qubits in a 1D ferromagnetic Ising model with transverse field, described by the primary Hamiltonian $H_{P}$ $H_{P}=-J\Big{(}\sum_{j=1}^{N_{P}-1}\sigma^{z}_{j}\sigma^{z}_{j+1}+\sigma^{z}_{1}\sigma^{z}_{N_{P}}\Big{)}-\kappa\sum_{j=1}^{N_{P}}\sigma^{x}_{j}$ (4) with ferromagnetic energy scale $J$ (set to 1 for simplicity) and transverse field strength $\kappa<1$. The second system we consider is a ring of $N_{P}$ primary qubits in an antiferromagnetic Heisenberg model, described by the primary Hamiltonian $\displaystyle H_{P}=J\sum_{j=1}^{N_{P}-1}\big{(}\sigma^{x}_{j}\sigma^{x}_{j+1}+\sigma^{y}_{j}\sigma^{y}_{j+1}+\sigma^{z}_{j}\sigma^{z}_{j+1}\big{)}+J\big{(}\sigma^{x}_{1}\sigma^{x}_{N_{P}}+\sigma^{y}_{1}\sigma^{y}_{N_{P}}+\sigma^{z}_{1}\sigma^{z}_{N_{P}}\big{)}.$ (5) In Figure 3 the ratio between the energy expectation value $\langle H_{P}\rangle$ for the steady state of the RQE algorithm and the energy of the ground state $E_{GS}$ of the 1D antiferromagnetic Heisenberg ring is plotted as a function of the number of primary qubits $N_{P}$ (left) and as a function of the number of resets $N_{r}$ (right). In general, for a random state the energy is zero. 
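For reference, the denominator $E_{GS}$ in the ratio of Figure 3 can be obtained for small rings by exact diagonalization. Below is a minimal NumPy sketch (the helper names are ours) building the two ring Hamiltonians with the periodic identification site $N_{P}+1\equiv 1$; note that in Pauli (rather than spin-1/2) units the 4-site Heisenberg ring has $E_{GS}=-8J$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(ops, n):
    """Tensor-product operator on an n-qubit register; ops maps site -> 2x2 matrix."""
    out = np.array([[1.0 + 0j]])
    for s in range(n):
        out = np.kron(out, ops.get(s, I2))
    return out

def tfim_ring(n, J=1.0, kappa=0.75):
    """Ferromagnetic transverse-field Ising ring, eq. (4)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H -= J * embed({i: Z, (i + 1) % n: Z}, n)   # periodic bond (i, i+1)
        H -= kappa * embed({i: X}, n)               # transverse field
    return H

def heisenberg_ring(n, J=1.0):
    """Antiferromagnetic Heisenberg ring, eq. (5)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        for P in (X, Y, Z):
            H += J * embed({i: P, (i + 1) % n: P}, n)
    return H

E_gs = np.linalg.eigvalsh(heisenberg_ring(4))[0]
print(E_gs)  # -8.0 for the 4-site ring in Pauli units
```

Dense diagonalization like this is only practical up to $N_{P}\sim 14$ or so, which is one reason the shadow-qubit thermometry of Section IV is attractive for larger systems.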
The energy expectation value of the steady state of the RQE algorithm approximates the ground-state energy, with results improving for decreasing error rate $p_{2}$, as expected. These results align well with other recent works [34, 39], and improve both with increasing number of primary qubits $N_{P}$ and with higher-depth circuits (i.e. increasing number of resets $N_{r}$). To understand why RQE both returns a constant fraction approximation of the ground state energy and does not reach the exact ground state for zero error (for the parameters chosen), let us consider the algorithm as an approximation to continuous time evolution. Random gate error introduces an error rate $\Gamma_{P}$ that heats the system toward infinite temperature; the primary-shadow interaction increases this to $\Gamma_{P}\left(1+\epsilon\right)+\Gamma_{R}^{\prime}$, where the factor of $\epsilon$ comes from increasing circuit complexity per timestep and $\Gamma_{R}^{\prime}$ is a small off-resonant heating rate from the shadow qubits themselves, which vanishes as $\Omega_{PS}\to 0$ but is nonzero for the parameters chosen here (more on this below). Balancing this rate, the shadow qubits induce an average excitation removal or cooling rate $\Gamma_{R}$ which depends on the algorithm parameters and not gate error. Hence, with appropriate tuning, we can obtain a substantial cooling rate $\Gamma_{R}\gg\Gamma_{P}$ which should allow our primary system to eventually thermalize and approximate the ground state of the corresponding many-body Hamiltonian $H_{P}$. Maximizing $\Gamma_{R}$ is thus key, and can be achieved by leveraging resonant transitions, varying the shadow-qubit energies $\omega_{Sj}$ during the algorithm’s runtime. 
We can thus expect the long-time residual energy to scale roughly as $\Delta E\approx N_{P}\frac{\Gamma_{P}(1+\epsilon)+\Gamma_{R}^{\prime}}{\Gamma_{R}}$ and therefore the energy of the steady state of the RQE algorithm to approximate the ground-state energy to a constant fraction. In general, for some systems such as quantum spin glasses, the time to equilibrate may be prohibitively long [66, 67]; it is empirically fairly short here but this is not a universal property of the algorithm. Figure 5: Logarithmically scaled shadow-qubit energy distributions, used to infer the shadow-qubit temperature $T_{S}$. Antiferromagnetic Heisenberg Chain (left); Ferromagnetic transverse field Ising model (TFIM) with $\kappa=0.75$ (center) and $\kappa=0.90$ (right); all with $N_{P}=10$, $N_{r}=40$, and $p_{2}=0.0001$. Figure 6: Temperatures extracted from the three different methods (explained in Section IV) plotted versus system size: $T_{F_{KL}}$ from the fidelity measure $F_{KL}$; $T(E)$ from the energy expectation values; $T_{S}$ from the shadow qubit populations. Ferromagnetic Transverse Field Ising Model with $\kappa=0.75$ (top left) and $\kappa=0.90$ (top right); Antiferromagnetic Heisenberg ring (bottom left); Antiferromagnetic XXZ with primary-bath operator $O_{P_{j}}O_{S_{k}}\equiv\sigma^{y}_{P_{j}}\sigma^{y}_{S_{k}}$ and $J_{y}=0.5$ (bottom right). All four systems are plotted for high depth circuits, i.e. $N_{r}$ = 40, with time step $dt=0.0667$ and number of algorithm layers per pulse $N_{L,PS}=20$; the primary-shadow qubit coupling is $2:1$ and the shadow-qubit energies are randomly sampled from a uniform distribution, $\omega_{S_{j}}\sim U(1,6)$. The data is from 1000 random trajectories per point. 
Figure 7: Temperature extracted using the three different methods (i.e., $T$ from the energy expectation value $\langle H_{P}\rangle$; $T$ from the distribution that maximizes the fidelity measure $F_{KL}$ in the $Z$-basis; $T$ from the shadow qubit populations) plotted versus system size: Here we slightly modify the thermometry method by randomly selecting the primary-bath operator $O_{P_{j}}O_{S_{k}}$ from $\{\sigma^{x}_{P_{j}}\sigma^{x}_{S_{k}},\sigma^{y}_{P_{j}}\sigma^{y}_{S_{k}},\sigma^{z}_{P_{j}}\sigma^{z}_{S_{k}}\}$ and averaging over multiple samples (left); Temperature overestimation ratio, $2T_{S}/(T_{F_{KL}}+T_{E(T)})$, plotted versus system size comparing our traditional primary-bath operator $O_{P_{j}}O_{S_{k}}\equiv\sigma^{y}_{P_{j}}\sigma^{y}_{S_{k}}$ to the randomly sampled operator (right). Both plots are for the TFIM with $\kappa=0.90$ and $N_{r}=40$. The three thermometry methods as well as the overestimation are discussed in Section IV. ## IV Thermometry We now turn to the thermometry of these dissipatively stabilized states. In order to better understand the thermal behavior of the steady state obtained from the RQE algorithm, we use three methods to characterize the temperature of the dissipatively stabilized state, namely:

1. inferring $T$ from the energy expectation value $\langle H_{P}\rangle$;
2. inferring $T$ from the distribution that minimizes the K-L divergence between the true thermal density matrix and measurements of bitstrings in the $Z$-basis;
3. inferring $T$ from the shadow qubit populations.

While the first two methods are both based on exact diagonalization of the primary system Hamiltonian, the third method is applicable on real quantum hardware and to cases where exact diagonalization is not possible. 
The first two methods are used to calibrate our expectations and to check the results obtained from our RQE algorithm with the third method: indeed, as shown in Figure 6, the temperatures extracted from the three different methods are broadly consistent with each other. For the first method, in particular, we assume that the steady state is thermal: while in general this is not always the case, for the two 1D many-body Hamiltonians considered in our work it is a reasonable assumption, which is also confirmed by the results obtained with the K-L divergence method presented below. Thus, we can use an operational definition of temperature (as in [45]), consider the system consisting only of primary qubits, and fully diagonalize it. From the exact diagonalization, we obtain $2^{N_{P}}$ energy eigenvalues $\{E_{j}\}_{j=1,\dots,2^{N_{P}}}$. Then, for each temperature value $T$ in a given range, we compute the partition function $Z=\sum_{j=1}^{2^{N_{P}}}e^{-E_{j}/T}$ and the thermal expectation value of the energy as a function of the temperature $T$: $E(T)=\frac{\sum_{j=1}^{2^{N_{P}}}E_{j}e^{-E_{j}/T}}{Z}$ (6) We then identify the specific temperature $T^{*}$ at which the curve $E(T)$ crosses the energy expectation value $\langle H_{P}\rangle$ obtained from the RQE algorithm. We can also use this result from the exact diagonalization as a point of comparison for the temperature inferred from the shadow-qubit populations, as explained in the third method below. For the second method, we start by measuring the distance between the final state obtained from the RQE algorithm and the thermal distribution obtained from the transverse-field Ising model with a fixed number of primary qubits on a ring. We measure this distance with the Kullback-Leibler divergence (a.k.a. 
K-L divergence or relative entropy), which quantifies how close one statistical distribution $P$ is to a given reference distribution $Q$ [68, 69]. In our specific case, the distributions are discrete; the reference distribution $Q$ is the thermal state obtained from exact diagonalization of the primary qubits Hamiltonian, and $P$ is the $Z$-basis distribution obtained from the RQE algorithm, whose distance from $Q$ we want to calculate. We use the standard definition of the K-L divergence: $D_{KL}(P\;||\;Q)\equiv\sum_{j=1}^{2^{N_{P}}}P(j)\log\bigg{(}\frac{P(j)}{Q(j)}\bigg{)}$ (7) From this definition, as in [43], we want to find the temperature (Figure 4) which maximizes the normalized fidelity measure $F_{KL}\equiv 1-\frac{D_{KL}(\rho_{therm}||\rho_{RQE})}{D_{KL}(\rho_{IUR}||\rho_{RQE})}$ (8) where $\rho_{therm}=e^{-H/T}/\text{Tr}(e^{-H/T})$ is the ideal thermal state, $\rho_{RQE}$ is the dissipatively stabilized state obtained from our algorithm, and $\rho_{IUR}$ is the incoherent uniform random distribution in which all bitstrings have probability $2^{-N_{P}}$. The fidelity measure $F_{KL}$ is normalized by the incoherent uniform random distribution since this is the distribution we would obtain in a high-depth circuit with no error correction. We use this method as a second check of the system’s temperature since it is closer to what can be done on hardware, though it is expensive, as it requires many samples to converge (and requires us to calculate $\rho_{therm}$ for comparison). However, it is much more efficient than full state tomography once the system size is large. As shown in Figure 4, the temperatures extracted from this method and from the average energy show excellent quantitative agreement, confirming the thermal character of these states. 
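The two benchmark methods can be sketched in a few lines of numpy. This is an illustrative reimplementation of Eqs. (6)-(8), not the authors' code; the grid search over $T$ and the $\epsilon$ regularization of empty bins are our own choices:

```python
import numpy as np

def thermal_energy(energies, T):
    """Eq. (6): E(T) = sum_j E_j e^{-E_j/T} / Z, with k_B = 1.
    Energies are shifted by their minimum for numerical stability;
    the shift cancels in the ratio."""
    E = np.asarray(energies, dtype=float)
    w = np.exp(-(E - E.min()) / T)
    return float(np.sum(E * w) / np.sum(w))

def temperature_from_energy(energies, E_meas, T_grid=np.linspace(0.01, 10, 2000)):
    """Method 1: find T* where the monotone curve E(T) crosses the measured <H_P>."""
    curve = np.array([thermal_energy(energies, T) for T in T_grid])
    return float(T_grid[int(np.argmin(np.abs(curve - E_meas)))])

def kl_divergence(p, q, eps=1e-12):
    """Eq. (7): D_KL(P || Q); eps regularizes bitstrings with zero counts."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def fidelity_kl(p_therm, p_rqe, n_qubits):
    """Eq. (8): F_KL, normalized by the incoherent uniform random distribution."""
    p_iur = np.full(2**n_qubits, 2.0**-n_qubits)
    return 1.0 - kl_divergence(p_therm, p_rqe) / kl_divergence(p_iur, p_rqe)
```

Method 2 then scans $T$, recomputing the thermal $Z$-basis distribution at each value and keeping the maximizer of `fidelity_kl`.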
Finally, with the third method we estimate the temperature of the steady state obtained from the RQE algorithm by fitting the average temperature $\langle T\rangle$ from the number of shadow qubits in the excited state $\ket{1}$ using the Fermi-Dirac distribution $\langle n\rangle=\frac{e^{-\omega_{S_{j}}/(k_{B}T)}}{1+e^{-\omega_{S_{j}}/(k_{B}T)}}$ (9) Since the shadow qubits act as a bath draining energy from the system of primary qubits by extracting excitations, the number of shadow qubits found in the state $\ket{1}$ at the end of the algorithm can be used as a measure of the temperature of the primary system (given fixed shadow qubit energies $\omega_{S_{j}}$). We apply each of the three methods both to the 1D ferromagnetic Ising model with transverse field and to the antiferromagnetic Heisenberg ring. For both models, we performed simulations at different circuit depths (i.e. with the number of resets ranging from $N_{r}=10$ up to $N_{r}=40$), for error rates $p_{2}=0,\,0.0001,\,0.001$, and for rings with a varying number of primary qubits $N_{P}=4,\dots,13$. For the Ising model, we considered two different values of the transverse field strength, $\kappa=0.75,\,0.9$. In Figure 4 we show results from the first two (i.e. the benchmark) methods for the Heisenberg model. As can be seen there, $F_{KL}$ as defined in (8) reaches a maximum value, showing that the state reached with the RQE algorithm indeed approaches the thermal distribution. Moreover, the temperature at which the normalized fidelity measure $F_{KL}$ reaches its maximum value is consistent with the temperature $T(E)$ obtained from the energy expectation values. It is important to note that for the TFIM the temperature does not reach zero exactly. One of the physical causes of this is the nature of the excitations in the system. 
In fact, the elementary excitations in the ferromagnetic phase are non-local and topological, since they are domain walls, and as such they can only be created and destroyed in pairs in the bulk [34, 35, 60]. Thus, topological excitations cannot be removed by local operations; two or more such excitations need to coincide in order to be removed. This makes it effectively harder to cool quantum systems with non-local (topological) excitations, and challenging to prepare topological ground states on quantum simulators. For the third method, we have performed simulations for different ring sizes, with primary qubit range $N_{P}=4,\dots,13$, varying the number of resets $N_{r}$ of the shadow qubits and error rates $p_{2}=0.0001,\,0.001$. In Figure 5 we show the logarithmically scaled shadow-qubit energy distributions, used to infer the shadow-qubit temperature $T_{S}$ for three cases, all with $N_{P}=10$, $N_{r}=40$, and $p_{2}=0.0001$: antiferromagnetic Heisenberg ring (left); ferromagnetic transverse field Ising model (TFIM) with $\kappa=0.75$ (center) and $\kappa=0.90$ (right). These extrapolated temperature values are fairly consistent with those obtained from the other two methods, as shown also in Figure 6. In particular, for the 1D ferromagnetic Ising model with transverse field we see a relatively constant overestimation of the thermometry temperature $T_{S}$, which remains consistent at fixed error rate over varying circuit depths; whereas for the antiferromagnetic Heisenberg ring with $N_{P}>6$ we get more accurate predictions of the thermometry temperature $T_{S}$, improving as system size increases. An intuitive explanation for the temperature overestimation can be given based on concepts from [12]. As the interaction strength is not strictly zero, the probability of the primary-shadow interaction inducing “off-resonant” transitions (i.e. interactions where $dE_{P}+dE_{S}\neq 0$) is also non-zero. 
The rate $\Gamma_{\pm}(\Delta)$ of those processes is suppressed only polynomially at low order in the energy mismatch $\Delta$, and it is symmetric about $\Delta=0$ [12]: $\Gamma_{\pm}(\Delta)=\frac{\Omega_{jk}^{2}\omega_{S_{j}}/2}{(\Omega_{jk}^{2}\pm\Delta)^{2}+\omega_{S_{j}}^{2}/16}$ (10) In contrast, the expected shadow-qubit population $\langle n\rangle$ vs. $E$ (as a function of temperature) is an exponential distribution; therefore, if we add some “uncertainty” from such resonance effects, that blurring will symmetrically increase the expected population above a given energy just as much as it increases the population below it, which in turn will artificially increase $T$ in fitting. Furthermore, a gate error which excites a shadow qubit will lead to a higher temperature measurement (though with $p_{1}=p_{2}/10$ this will be rare). Thus, on balance we expect that this method will tend to overestimate the temperature. This argument does not explain, however, why the shadow-qubit estimated temperatures display a much larger relative overestimation for the TFIM than in our Heisenberg simulations (see Figure 6). While we expect the topological character of elementary excitations plays a role here, it turns out the choice of operator ($Y_{j}$ by default) used for the system-bath coupling also contributes significantly to the overestimation. Specifically, the TFIM in our notation is composed of $ZZ$ interactions with a transverse field along $X$, whereas the Heisenberg chain is isotropic and its Hamiltonian does not have any “preferred directions” in operator space. While given infinite time the primary and shadow qubits should perfectly thermalize (assuming, of course, the primary system is thermal in the first place), our thermometry method only collects excitations for a single cycle. 
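The blurring argument can be made concrete with a toy model (our own sketch, not the paper's simulation): fit Eq. (9) with $k_{B}=1$ to clean Fermi-Dirac populations and to populations blurred by a symmetric spread in the effective energy, standing in for off-resonant transitions; the blurred fit comes out hotter. The blur width and sample counts below are arbitrary illustrative choices:

```python
import numpy as np

def fd_population(omega, T):
    """Eq. (9) with k_B = 1: excited-state fraction of a shadow qubit at energy omega."""
    return 1.0 / (1.0 + np.exp(omega / T))

def fit_temperature(omegas, pops, T_grid=np.linspace(0.1, 5.0, 2000)):
    """Grid-search least-squares fit of T_S (a stand-in for a proper fitter)."""
    sse = [np.sum((fd_population(omegas, T) - pops) ** 2) for T in T_grid]
    return float(T_grid[int(np.argmin(sse))])

rng = np.random.default_rng(0)
T_true, sigma = 0.8, 0.5              # true temperature; symmetric blur width
omegas = np.linspace(1.0, 6.0, 40)    # shadow-qubit energies, cf. U(1, 6)

# Symmetric off-resonant "uncertainty": each qubit effectively samples the
# population at omega + delta, with delta drawn symmetrically about zero.
blurred = np.array([fd_population(w + rng.normal(0.0, sigma, 4000), T_true).mean()
                    for w in omegas])

T_clean = fit_temperature(omegas, fd_population(omegas, T_true))
T_blurred = fit_temperature(omegas, blurred)  # biased upward by the symmetric blur
```

Because the population decays convexly in $\omega$ over this range, a symmetric energy blur raises the average population at each $\omega$, and the fit absorbs this as a higher $T$, consistent with the overestimation described above.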
It stands to reason that if particular operator choices bias the transfer of excitations (as a function of energy or any other parameter), then this may contribute significantly to “error” in thermometry from short-time measures such as ours. To address the difference in the accuracy of the temperature extrapolated from the thermometry method for the TFIM vs. the Heisenberg model, we performed simulations for the XXZ model, keeping all parameters the same as for the Heisenberg ring except for decreasing the exchange energy between nearest neighbors in the $y$ direction, i.e. $J_{x}=J_{z}=1$ but $J_{y}=0.5$. In this case, as seen in the lower right panel of Figure 6, the accuracy of the temperature extrapolated from the thermometry method is intermediate: not as good as for the Heisenberg model, but better than for the TFIM. To further confirm these expectations, we performed simulations for the TFIM with $\kappa=0.9$, randomly sampling the primary-shadow interaction operators $O_{P_{j}}O_{S_{k}}$ from $\{\sigma^{x}_{P_{j}}\sigma^{x}_{S_{k}},\sigma^{y}_{P_{j}}\sigma^{y}_{S_{k}},\sigma^{z}_{P_{j}}\sigma^{z}_{S_{k}}\}$, as shown in Figure 7: as can be clearly seen there, the estimate of the temperature inferred from the RQE thermometry method improves when the interaction operators are randomly sampled from this set rather than fixed at $O_{P_{j}}O_{S_{k}}\equiv\sigma^{y}_{P_{j}}\sigma^{y}_{S_{k}}$. We present all these simulations to note and carefully illustrate how operator choice is a potential source of error in thermometry through this method. 
We note, of course, that the choice of primary-shadow interaction operator is another variational parameter for our algorithm, and the choice which is best for cooling the primary qubits (we chose $Y_{j}$ because it led to the lowest energies in our early simulations used to set parameters) may not be the best for accurately measuring the temperature using the dissipative elements. ## V Conclusions & Outlook In this work, we characterized the dissipatively stabilized states of our relaxational quantum eigensolver (RQE) algorithm by estimating their temperature using three different methods, which yield broadly consistent results both for the 1D ferromagnetic Ising model with transverse field and for the antiferromagnetic Heisenberg ring. The first two methods, minimizing the normalized K-L divergence between the distribution obtained by $Z$-basis measurements and a thermal average, and estimating temperature from the mean energy, require exact diagonalization of the system (or similarly computationally expensive methods) to create the ideal distributions to which our dissipative algorithm results are compared. They are thus not suitable for achieving advantage in quantum simulation, but they are a useful tool to characterize these states at small scales and set expectations, and both measures returned temperatures that were very close to each other (and likely within sampling error) in all cases, showing that the steady states of these two problems are indeed thermal to good approximation. To provide a more efficient measure of the temperature, we introduced a thermometry method based on the dissipative elements themselves (“shadow qubits” in the language of our algorithm). We showed that while this method tends to overestimate the temperature on general grounds, and care must be taken in operator choices to reduce this effect, it nonetheless can provide qualitatively and often quantitatively good thermometry results (using the first two methods as a benchmark). 
This thermometry method is simple and scalable, and could be implemented on real quantum hardware to measure the temperature of dissipatively stabilized steady states at beyond-classical scales. For instance, it could be applied to non-stoquastic primary Hamiltonians $H_{P}$ (where quantum Monte Carlo has a sign problem), for which preparing thermal states can be classically hard. Moreover, given that this new thermometry technique uses only $O(N_{P})$ shadow qubits (i.e. a small external bath), one of its key values is resource efficiency; it represents a novel use of the quantum resources of the system, since the same shadow qubits that provide an approximate error correction mechanism can be used for thermometry as well. Moreover, we also calculated the ratio between the energy expectation value $\langle H_{P}\rangle$ for the steady state of the RQE algorithm and the energy of the ground state $E_{GS}$ for the models studied, showing that the energy of the steady state of the RQE algorithm reaches a constant fraction of the ground-state energy, with results improving for decreasing error rate $p_{2}$, as expected, and in good agreement with previous works [34, 39]. Our results improve both with increasing number of primary qubits $N_{P}$ and with higher-depth circuits (i.e. increasing number of resets $N_{r}$), demonstrating that we approached the asymptotic steady states of these systems in reasonably short order. These results also suggest that this method could prove to be an efficient shortcut to simulating thermal distributions on fault tolerant machines. All that said, it is important to emphasize that even in the noise-free case (where there was no error in the primary qubits), the temperature did not reach zero in our algorithm as formulated, due to stray excitations created by off-resonant interactions between the primary and shadow qubits, and the topological character of excitations in the TFIM specifically. 
In future work, it would be interesting to further explore the fine tuning of algorithm parameters with both noise and system size, though the fact that our results converged to fixed approximation ratios and average temperatures suggests that scaling a single parameter set for each model and error rate works well in practice. More illuminating would be the application of this thermometry algorithm on real quantum hardware, and in systems with more complex geometries. We only explored one-dimensional problems in this work to ensure we could get decent scaling estimates from numerical simulations, but our algorithm can be applied in any dimension. On real quantum hardware with long-ranged connectivity (either naturally or through error corrected encodings), this method could even be applied to infinite-range problems such as quantum spin glasses. In the worst case, the thermalization timescales in such systems are expected to be very long at large $N$, but with rich attendant physics that could be probed in new ways through our methods. ## Acknowledgements We would like to thank Erez Berg, Vadim Oganesyan, Eleanor Rieffel, David Schuster, Norm Tubman, and Paul Varosy for valuable discussions of the issues in this work. This work was supported by the Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. The SQMS Center supported EK’s advisory role in this project, and ALS and GG’s theoretical and computational research. This work was also supported by the National Science Foundation through grant PHY-1653820, and the Army Research Office through Grant No. W911NF-17-S0001. ## References * Dallaire-Demers _et al._ [2020] P.-L. Dallaire-Demers, M. Stęchły, J. F. Gonthier, N. T. Bashige, J. Romero, and Y. Cao, An application benchmark for fermionic quantum simulations (2020), arXiv:2003.01862 [quant-ph] . * Tilly _et al._ [2022] J. Tilly, H. Chen, S. Cao, D. Picozzi, K. Setia, Y. Li, E. Grant, L. Wossnig, I. Rungger, G. H. 
Booth, and J. Tennyson, The variational quantum eigensolver: A review of methods and best practices, Physics Reports 986, 1–128 (2022). * Farhi _et al._ [2000] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, Quantum computation by adiabatic evolution (2000), arXiv:quant-ph/0001106 [quant-ph] . * Albash and Lidar [2018] T. Albash and D. A. Lidar, Adiabatic quantum computation, Reviews of Modern Physics 90, 10.1103/revmodphys.90.015002 (2018). * Motta _et al._ [2019] M. Motta, C. Sun, A. T. K. Tan, M. J. O’Rourke, E. Ye, A. J. Minnich, F. G. S. L. Brandão, and G. K.-L. Chan, Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution, Nature Physics 16, 205–210 (2019). * Jouzdani _et al._ [2022] P. Jouzdani, C. W. Johnson, E. R. Mucciolo, and I. Stetcu, Alternative approach to quantum imaginary time evolution, Physical Review A 106, 10.1103/physreva.106.062435 (2022). * Fedorov _et al._ [2021] D. A. Fedorov, B. Peng, N. Govind, and Y. Alexeev, Vqe method: A short survey and recent developments (2021), arXiv:2103.08505 [quant-ph] . * Lee _et al._ [2023] S. Lee, J. Lee, H. Zhai, Y. Tong, A. M. Dalzell, A. Kumar, P. Helms, J. Gray, Z.-H. Cui, W. Liu, M. Kastoryano, R. Babbush, J. Preskill, D. R. Reichman, E. T. Campbell, E. F. Valeev, L. Lin, and G. K.-L. Chan, Evaluating the evidence for exponential quantum advantage in ground-state quantum chemistry, Nature Communications 14, 10.1038/s41467-023-37587-6 (2023). * Shaib _et al._ [2023] A. Shaib, M. H. Naim, M. E. Fouda, R. Kanj, and F. Kurdahi, Efficient noise mitigation technique for quantum computing, Sci Rep. 13, 3912 (2023). * Urbanek _et al._ [2021] M. Urbanek, B. Nachman, V. R. Pascuzzi, A. He, C. W. Bauer, and W. A. de Jong, Mitigating depolarizing noise on quantum computers with noise-estimation circuits, Phys. Rev. Lett. 127, 270502 (2021). * Resch and Karpuzcu [2021] S. Resch and U. R. 
Karpuzcu, Benchmarking quantum computers and the impact of quantum noise, ACM Computing Surveys 54, 1 (2021). * Kapit [2017] E. Kapit, The upside of noise: engineered dissipation as a resource in superconducting circuits, Quantum Science and Technology 2, 033002 (2017). * Harrington _et al._ [2022a] P. M. Harrington, E. J. Mueller, and K. W. Murch, Engineered dissipation for quantum information science, Nature Reviews Physics 4, 660–671 (2022a). * Koch _et al._ [2007] J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Charge-insensitive qubit design derived from the cooper pair box, Physical Review A 76, 10.1103/physreva.76.042319 (2007). * Manucharyan _et al._ [2009] V. E. Manucharyan, J. Koch, L. I. Glazman, and M. H. Devoret, Fluxonium: Single cooper-pair circuit free of charge offsets, Science 326, 113–116 (2009). * Gyenis _et al._ [2021] A. Gyenis, A. Di Paolo, J. Koch, A. Blais, A. A. Houck, and D. I. Schuster, Moving beyond the transmon: Noise-protected superconducting quantum circuits, PRX Quantum 2, 10.1103/prxquantum.2.030101 (2021). * Kapit _et al._ [2014] E. Kapit, M. Hafezi, and S. H. Simon, Induced self-stabilization in fractional quantum hall states of light, Physical Review X 4, 10.1103/physrevx.4.031039 (2014). * Zaletel _et al._ [2021] M. P. Zaletel, A. Kaufman, D. M. Stamper-Kurn, and N. Y. Yao, Preparation of low entropy correlated many-body states via conformal cooling quenches, Physical Review Letters 126, 10.1103/physrevlett.126.103401 (2021). * Vedral [2000] V. Vedral, Landauer’s erasure, error correction and entanglement, Proc. R. Soc. Lond. A 456, 969–984 (2000). * Rex [2017] A. Rex, Maxwell’s demon—a historical review, Entropy 19, 240 (2017). * Boykin _et al._ [2002] P. O. Boykin, T. Mor, V. Roychowdhury, F. Vatan, and R. Vrijen, Algorithmic cooling and scalable nmr quantum computers, Proceedings of the National Academy of Sciences 99, 3388–3393 (2002). 
* Verstraete _et al._ [2008] F. Verstraete, M. M. Wolf, and J. I. Cirac, Quantum computation, quantum state engineering, and quantum phase transitions driven by dissipation (2008), arXiv:0803.1447 [quant-ph] . * Cormick _et al._ [2013] C. Cormick, A. Bermudez, S. F. Huelga, and M. B. Plenio, Dissipative ground-state preparation of a spin chain by a structured environment, New Journal of Physics 15, 073027 (2013). * Geerlings _et al._ [2013] K. Geerlings, Z. Leghtas, I. M. Pop, S. Shankar, L. Frunzio, R. J. Schoelkopf, M. Mirrahimi, and M. H. Devoret, Demonstrating a driven reset protocol for a superconducting qubit, Physical Review Letters 110, 10.1103/physrevlett.110.120501 (2013). * Kimchi-Schwartz _et al._ [2016] M. Kimchi-Schwartz, L. Martin, E. Flurin, C. Aron, M. Kulkarni, H. Tureci, and I. Siddiqi, Stabilizing entanglement via symmetry-selective bath engineering in superconducting qubits, Physical Review Letters 116, 10.1103/physrevlett.116.240503 (2016). * Wang [2017] H. Wang, Quantum algorithm for preparing the ground state of a system via resonance transition, Scientific Reports 7, 10.1038/s41598-017-16396-0 (2017). * Kaplan _et al._ [2017] D. B. Kaplan, N. Klco, and A. Roggero, Ground states via spectral combing on a quantum computer (2017), arXiv:1709.08250 [quant-ph] . * Magnard _et al._ [2018] P. Magnard _et al._ , Fast and unconditional all-microwave reset of a superconducting qubit, Phys. Rev. Lett. 121, 060502 (2018). * Metcalf _et al._ [2020] M. Metcalf, J. E. Moussa, W. A. de Jong, and M. Sarovar, Engineered thermalization and cooling of quantum many-body systems, Physical Review Research 2, 10.1103/physrevresearch.2.023214 (2020). * Polla _et al._ [2021] S. Polla, Y. Herasymenko, and T. E. O’Brien, Quantum digital cooling, Physical Review A 104, 10.1103/physreva.104.012414 (2021). * Raghunandan _et al._ [2020] M. Raghunandan, F. Wolf, C. Ospelkaus, P. O. Schmidt, and H. 
Weimer, Initialization of quantum simulators by sympathetic cooling, Science Advances 6, 10.1126/sciadv.aaw9268 (2020). * Feng _et al._ [2022] J.-J. Feng, B. Wu, and F. Wilczek, Quantum computing by coherent cooling, Physical Review A 105, 10.1103/physreva.105.052601 (2022). * Harrington _et al._ [2022b] P. M. Harrington, E. J. Mueller, and K. W. Murch, Engineered dissipation for quantum information science, Nature Reviews Physics 4, 660 (2022b). * Matthies _et al._ [2023] A. Matthies, M. Rudner, A. Rosch, and E. Berg, Programmable adiabatic demagnetization for systems with trivial and topological excitations (2023), arXiv:2210.17256 [quant-ph] . * Kishony _et al._ [2023] G. Kishony, M. S. Rudner, A. Rosch, and E. Berg, Gauged cooling of topological excitations and emergent fermions on quantum simulators (2023), arXiv:2310.16082 [cond-mat.str-el] . * Mi _et al._ [2024] X. Mi, A. A. Michailidis, S. Shabani, K. C. Miao, P. V. Klimov, J. Lloyd, and e. a. Rosenberg, E., Stable quantum-correlated many-body states via engineered dissipation, Science 383, 1332 (2024). * Lambert _et al._ [2023] N. Lambert, M. Cirio, J. dong Lin, P. Menczel, P. Liang, and F. Nori, Fixing detailed balance in ancilla-based dissipative state engineering (2023), arXiv:2310.12539 [quant-ph] . * Sannia _et al._ [2023] A. Sannia, F. Tacchino, I. Tavernelli, G. L. Giorgi, and R. Zambrini, Engineered dissipation to mitigate barren plateaus (2023), arXiv:2310.15037 [quant-ph] . * Mi _et al._ [2023] X. Mi, A. A. Michailidis, S. Shabani, K. C. Miao, P. V. Klimov, J. Lloyd, E. Rosenberg, R. Acharya, I. Aleiner, T. I. Andersen, M. Ansmann, F. Arute, K. Arya, A. Asfaw, J. Atalaya, J. C. Bardin, A. Bengtsson, G. Bortoli, A. Bourassa, J. Bovaird, L. Brill, M. Broughton, B. B. Buckley, D. A. Buell, T. Burger, B. Burkett, N. Bushnell, Z. Chen, B. Chiaro, D. Chik, C. Chou, J. Cogan, R. Collins, P. Conner, W. Courtney, A. L. Crook, B. Curtin, A. G. Dau, D. M. Debroy, A. D. T. Barba, S. Demura, A. D. Paolo, I. 
# Rings whose invertible elements are weakly nil-clean Peter Danchev Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria<EMAIL_ADDRESS><EMAIL_ADDRESS>, Omid Hasanzadeh Department of Mathematics, Tarbiat Modares University, 14115-111 Tehran Jalal AleAhmad Nasr, Iran<EMAIL_ADDRESS>, Arash Javan Department of Mathematics, Tarbiat Modares University, 14115-111 Tehran Jalal AleAhmad Nasr, Iran<EMAIL_ADDRESS>and Ahmad Moussavi Department of Mathematics, Tarbiat Modares University, 14115-111 Tehran Jalal AleAhmad Nasr, Iran<EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract. We study those rings in which all invertible elements are weakly nil-clean, calling them UWNC rings. This somewhat extends results due to Karimi-Mansoub et al. in Contemp. Math. (2018), where rings in which all invertible elements are nil-clean were considered, abbreviated there as UNC rings. Specifically, our main achievements are that the triangular matrix ring ${\rm T}_{n}(R)$ over a ring $R$ is UWNC precisely when $R$ is UNC. Besides, the notions UWNC and UNC coincide when $2\in J(R)$. We also describe $2$-primal UWNC rings by proving that these are precisely the rings $R$ with $J(R)={\rm Nil}(R)$ such that $U(R)=\pm 1+{\rm Nil}(R)$. In particular, the polynomial ring $R[x]$ in one variable $x$ is UWNC exactly when $R$ is UWNC. Some other relevant assertions in this direction are established as well. ###### Key words and phrases: nil-clean element (ring), weakly nil-clean element (ring), UU-rings, weakly UU-rings ###### 2010 Mathematics Subject Classification: 16S34, 16U60 ## 1\. Introduction and Major Concepts Throughout the current paper, let $R$ be an associative but not necessarily commutative ring with identity element, usually denoted by $1$.
Standardly, for such a ring $R$, the letters $U(R)$, $\rm{Nil}(R)$ and ${\rm Id}(R)$ stand for the set of invertible elements (also termed the unit group of $R$), the set of nilpotent elements and the set of idempotent elements in $R$, respectively. Likewise, $J(R)$ denotes the Jacobson radical of $R$, and ${\rm Z}(R)$ denotes the center of $R$. The ring of $n\times n$ matrices over $R$ and the ring of $n\times n$ upper triangular matrices over $R$ are denoted by ${\rm M}_{n}(R)$ and ${\rm T}_{n}(R)$, respectively. Standardly, a ring is said to be abelian if each of its idempotents is central, that is, ${\rm Id}(R)\subseteq{\rm Z}(R)$. For all other notions and notations not explicitly explained here, we refer to the classical source [17] or to the research sources cited in the bibliography. However, for completeness of the exposition and for the reader’s convenience, we recall the following basic notions. ###### Definition 1.1 ([13]). Let $R$ be a ring. An element $r\in R$ is said to be nil-clean if there is an idempotent $e\in R$ and a nilpotent $b\in R$ such that $r=e+b$. Such an element $r$ is further called strongly nil-clean if the existing idempotent and nilpotent can be chosen such that $be=eb$. A ring is called nil-clean (respectively, strongly nil-clean) if each of its elements is nil-clean (respectively, strongly nil-clean). ###### Definition 1.2 ([12],[3],[6]). A ring $R$ is said to be weakly nil-clean provided that, for any $a\in R$, there exists an idempotent $e\in R$ such that $a-e$ or $a+e$ is nilpotent. A ring $R$ is said to be strongly weakly nil-clean provided that, for any $a\in R$, $a$ or $-a$ is strongly nil-clean. ###### Definition 1.3 ([4],[11]). A ring is called UU if all of its units are unipotent, that is, $U(R)\subseteq 1+{\rm Nil}(R)$ (and so $1+{\rm Nil}(R)=U(R)$). ###### Definition 1.4 ([9]). A ring $R$ is called weakly UU and abbreviated as $WUU$ if $U(R)={\rm Nil}(R)\pm 1$.
This is equivalent to the condition that every unit can be presented as either $n+1$ or $n-1$, where $n\in{\rm Nil}(R)$. ###### Definition 1.5 ([14]). A ring $R$ is called UNC if each of its units is nil-clean. Our key working instrument is the following one. ###### Definition 1.6. A ring $R$ is called UWNC if each of its units is weakly nil-clean. In [14], the authors investigated UNC rings, i.e., those rings whose units are nil-clean. Our plan is to expand this substantially by developing machinery useful for this purpose. In fact, we use some non-standard techniques from ring and matrix theories, as well as compare and contrast the established results with those from [9]. It is worth noticing that some closely related background material can also be found in the sources [10] and [18]. The next constructions, which exhibit some proper inclusions between the ring classes, are worthwhile. ###### Example 1.7. 1. (i) Any nil-clean ring is weakly nil-clean, but the converse is not true in general. For instance, $\mathbb{Z}_{3}$ is weakly nil-clean but is not nil-clean. 2. (ii) Any UU ring is WUU, but the converse is not true in general. For instance, $\mathbb{Z}$ is a WUU ring but is not UU. 3. (iii) Any UU ring and any nil-clean ring are UNC, but the converse is not true. For instance, the direct sum $\mathbb{Z}_{2}[t]\oplus{\rm M}_{2}(\mathbb{Z}_{2})$ is a UNC ring which is neither UU nor nil-clean. 4. (iv) Any WUU ring and any weakly nil-clean ring are UWNC, but the converse is not true in general. For instance, the direct sum $\mathbb{Z}\oplus{\rm M}_{2}(\mathbb{Z}_{2})$ is a UWNC ring which is neither WUU nor weakly nil-clean. 5. (v) Any UNC ring is UWNC, but the converse is not true in general. For instance, all of the rings $\mathbb{Z}_{3}$, $\mathbb{Z}_{6}$, $\mathbb{Z}_{3}\oplus{\rm M}_{2}(\mathbb{Z}_{2})$ are UWNC but are not UNC.
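Several of the concrete claims in Example 1.7 can be checked by brute force in the finite rings $\mathbb{Z}_{n}$, where the units, idempotents and nilpotents are enumerable. The following script is an illustrative sketch; the helper functions are ad hoc for this illustration, not taken from any library.

```python
# Brute-force checks over Z_n; ad-hoc helpers for this illustration only.
from math import gcd

def idempotents(n):
    return {e for e in range(n) if (e * e) % n == e}

def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def units(n):
    return {u for u in range(n) if gcd(u, n) == 1}

def is_nil_clean(a, n):
    # a = e + b with e idempotent and b nilpotent
    return any((e + b) % n == a for e in idempotents(n) for b in nilpotents(n))

def is_weakly_nil_clean(a, n):
    # a = e + b or a = -e + b
    E, N = idempotents(n), nilpotents(n)
    return any((s * e + b) % n == a for e in E for b in N for s in (1, -1))

# Example 1.7(i): Z_3 is weakly nil-clean but not nil-clean (here 2 = -1 + 0).
assert all(is_weakly_nil_clean(a, 3) for a in range(3))
assert not is_nil_clean(2, 3)

# Example 1.7(v): Z_3 and Z_6 are UWNC but not UNC.
for n in (3, 6):
    assert all(is_weakly_nil_clean(u, n) for u in units(n))
    assert not all(is_nil_clean(u, n) for u in units(n))
```

The same brute-force style extends to any finite commutative ring presented by its addition and multiplication tables.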
(Diagram of the implications among the classes nil-clean, weakly nil-clean, UU, WUU, UNC and UWNC established in Example 1.7.) Our further programme is the following: In the next section, we obtain some main properties of the newly defined class of UWNC rings – the main results here are Proposition 2.14 as well as Theorems 2.21, 2.23 and 2.33, respectively. In the subsequent third section, we explore UWNC group rings – the main results here are Propositions 3.3 and 3.5, respectively. ## 2\. Basic Properties of UWNC Rings We start here with the following obvious technicality. ###### Proposition 2.1. A unit $u$ of a ring $R$ is strongly weakly nil-clean if, and only if, $u\in\pm 1+{\rm Nil}(R)$. In particular, $R$ is a WUU ring if, and only if, every unit of $R$ is strongly weakly nil-clean. We continue our work with the next two technical claims as follows. ###### Proposition 2.2. Let $R$ be a UWNC ring and $S$ a UNC ring. Then, $R\times S$ is a UWNC ring. ###### Proof. Choose $(u,v)\in U(R\times S)=U(R)\times U(S)$. Thus, there exist an idempotent $e\in R$ and a nilpotent $n\in R$ such that $u=e+n$ or $u=-e+n$. We now distinguish two cases: Case I. Write $u=e+n$. Then, we have an idempotent $f\in S$ and a nilpotent $n^{\prime}\in S$ such that $v=f+n^{\prime}$. Thus, $(u,v)=(e,f)+(n,n^{\prime})$, where $(e,f)\in{\rm Id}(R\times S)$ and $(n,n^{\prime})\in{\rm Nil}(R\times S)$. Case II. Write $u=-e+n$. Then, we have an idempotent $f\in S$ and a nilpotent $n^{\prime}\in S$ such that $-v=f+n^{\prime}$. Thus, $(u,v)=-(e,f)+(n,-n^{\prime})$, where $(e,f)\in{\rm Id}(R\times S)$ and $(n,-n^{\prime})\in{\rm Nil}(R\times S)$. Therefore, $(u,v)$ is either the sum or the difference of a nilpotent and an idempotent in $R\times S$, whence we get the desired result. ∎ ###### Proposition 2.3. Let $\\{R_{i}\\}$ be a family of rings. Then, the direct product $R=\prod R_{i}$ of the rings $R_{i}$ is UWNC if, and only if, each $R_{i}$ is UWNC and at most one of them is not UNC. ###### Proof. ($\Rightarrow$). Obviously, each $R_{i}$ is UWNC.
Suppose now $R_{i_{1}}$ and $R_{i_{2}}$ $(i_{1}\neq i_{2})$ are not UNC. Then, there exist $u_{i_{j}}\in U(R_{i_{j}})\quad(j=1,2)$ such that $u_{i_{1}}\in U(R_{i_{1}})$ and $-u_{i_{2}}\in U(R_{i_{2}})$ are both not nil-clean. Choosing $u=(u_{i})$, where $u_{i}=1$ whenever $i\neq i_{j}\quad(j=1,2)$, we infer that neither $u$ nor $-u$ is the sum of an idempotent and a nilpotent, as required to get a contradiction. Consequently, at most one of the rings $R_{i}$ is not UNC. ($\Leftarrow$). Assume that $R_{i_{0}}$ is a UWNC ring and all the other $R_{i}$ are UNC. So, a simple check gives that $\prod_{i\neq i_{0}}R_{i}$ is UNC. According to Proposition 2.2, we conclude that $R$ is a UWNC ring. ∎ However, the property of being UWNC is not closed under taking (internal, external) direct sums, as the next construction illustrates. ###### Example 2.4. The ring $\mathbb{Z}_{3}$ is a UWNC ring, but $\mathbb{Z}_{3}\times\mathbb{Z}_{3}$ is not UWNC. Three further helpful affirmations are the following. ###### Corollary 2.5. Let $L=\prod_{i\in I}R_{i}$ be the direct product of rings $R_{i}\cong R$ and $|I|\geq 2$. Then, $L$ is a UWNC ring if, and only if, $L$ is a UNC ring if, and only if, $R$ is a UNC ring. ###### Corollary 2.6. For any $n\geq 2$, the ring $R^{n}$ is UWNC if, and only if, $R^{n}$ is UNC if, and only if, $R$ is UNC. ###### Proposition 2.7. Let $R$ be a UWNC ring. If $T$ is a factor-ring of $R$ such that all units of $T$ lift to units of $R$, then $T$ is a UWNC ring. ###### Proof. Suppose that $f:R\rightarrow T$ is a surjective ring homomorphism. Let $v\in U(T)$. Then, there exists $u\in U(R)$ such that $v=f(u)$ and $u=\pm e+n$, where $e\in{\rm Id}(R)$ and $n\in{\rm Nil}(R)$. Therefore, we have $v=\pm f(e)+f(n)$, where $f(e)\in{\rm Id}(T)$ and $f(n)\in{\rm Nil}(T)$, as needed. ∎ We now offer the validity of the following statement. ###### Theorem 2.8. Let $R$ be a ring and $I$ a nil-ideal of $R$. 1.
(i) $R$ is a UWNC ring if, and only if, $J(R)$ is nil and $\dfrac{R}{J(R)}$ is a UWNC ring. 2. (ii) $R$ is a UWNC ring if, and only if, $\dfrac{R}{I}$ is a UWNC ring. ###### Proof. 1. (i) Let $R$ be a UWNC ring and suppose $x\in J(R)$ and $x\notin{\rm Nil}(R)$. Since $1+x\in U(R)$, it must be that $1+x=-e+n$, where $n\in{\rm Nil}(R)$ and $e\in{\rm Id}(R)$, because if $1+x=e+n$, then we would have $x\in{\rm Nil}(R)$, which is a contradiction. So, $2+x\in{\rm Nil}(R)$. Similarly, since $1+x^{2}\in U(R)$, we deduce that $2+x^{2}\in{\rm Nil}(R)$. Hence, $(2+x^{2})-(2+x)=x^{2}-x=-x(1-x)\in{\rm Nil}(R).$ But $1-x\in U(R)$, whence $x\in{\rm Nil}(R)$, a contradiction. Thus, $J(R)$ is nil. Now, letting $\bar{u}\in U\left(\bar{R}=\dfrac{R}{J(R)}\right)$, we obtain $u\in U(R)$, because units lift modulo $J(R)$. Therefore, $u=\pm e+n$, where $e\in{\rm Id}(R)$ and $n\in{\rm Nil}(R)$. So, we have $\bar{u}=\pm\bar{e}+\bar{n}$, where $\bar{e}\in{\rm Id}(\bar{R})$ and $\bar{n}\in{\rm Nil}(\bar{R})$. Thus, $\dfrac{R}{J(R)}$ is a UWNC ring, as promised. Conversely, let $u\in U(R)$. Then, $\bar{u}\in U(\bar{R})$ and write $\bar{u}=\pm\bar{e}+\bar{n}$, where $\bar{e}\in{\rm Id}(\bar{R})$ and $\bar{n}\in{\rm Nil}(\bar{R})$. As $J(R)$ is nil, idempotents of $\dfrac{R}{J(R)}$ can be lifted to idempotents of $R$. So, we can assume that $e^{2}=e\in R$. Moreover, one inspects that $n\in R$ is nilpotent. Thus, for some $j\in J(R)$, $u=\pm e+n+j=\pm e+(n+j)$ is weakly nil-clean, because $n+j\in{\rm Nil}(R)$, as expected. 2. (ii) The proof is similar to that of (i), so we omit the details. ∎ Given a ring $R$ and a bi-module ${}_{R}M_{R}$, the trivial extension of $R$ by $M$ is the ring $T(R,M)=R\oplus M$ with the usual addition and the following multiplication: $(r_{1},m_{1})(r_{2},m_{2})=(r_{1}r_{2},r_{1}m_{2}+m_{1}r_{2})$. This is isomorphic to the ring of all matrices $\begin{pmatrix}r&m\\\ 0&r\end{pmatrix}$, where $r\in R$ and $m\in M$, and the usual matrix operations are used.
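The trivial extension construction can likewise be probed by brute force in a toy case. The sketch below is an illustration under the assumption $R=M=\mathbb{Z}_{3}$, with ad-hoc helper names; it enumerates $T(\mathbb{Z}_{3},\mathbb{Z}_{3})$ and confirms that every unit is weakly nil-clean, consistent with $\mathbb{Z}_{3}$ being UWNC.

```python
# Toy trivial extension T(R, M) with R = M = Z_3; pairs (r, m) multiply as
# (r1, m1)(r2, m2) = (r1*r2, r1*m2 + m1*r2).  Ad-hoc names for illustration.
n = 3
elems = [(r, m) for r in range(n) for m in range(n)]
one, zero = (1, 0), (0, 0)

def mul(x, y):
    return ((x[0] * y[0]) % n, (x[0] * y[1] + x[1] * y[0]) % n)

units = {x for x in elems if any(mul(x, y) == one for y in elems)}
idem = {x for x in elems if mul(x, x) == x}

def is_nilpotent(x):
    p = x
    for _ in range(2 * n):          # enough iterations for this tiny ring
        if p == zero:
            return True
        p = mul(p, x)
    return p == zero

nil = {x for x in elems if is_nilpotent(x)}

def weakly_nil_clean(x):
    return any(((s * e[0] + b[0]) % n, (s * e[1] + b[1]) % n) == x
               for e in idem for b in nil for s in (1, -1))

# The units are exactly the pairs (r, m) with r invertible in Z_3, and each
# one is weakly nil-clean, so T(Z_3, Z_3) is UWNC.
assert units == {(r, m) for r in (1, 2) for m in range(n)}
assert all(weakly_nil_clean(u) for u in units)
```

Here the nilpotents are exactly the pairs $(0,m)$, reflecting the nil-ideal $T(0,M)$ used in the proofs above.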
As an immediate consequence, we yield: ###### Corollary 2.9. Let $R$ be a ring and $M$ a bi-module over $R$. Then, the following hold: 1. (i) The trivial extension ${\rm T}(R,M)$ is a UWNC ring if, and only if, $R$ is a UWNC ring. 2. (ii) For $n\geq 2$, the quotient-ring $\dfrac{R[x]}{\langle x^{n}\rangle}$ is a UWNC ring if, and only if, $R$ is a UWNC ring. 3. (iii) For $n\geq 2$, the quotient-ring $\dfrac{R[[x]]}{\langle x^{n}\rangle}$ is a UWNC ring if, and only if, $R$ is a UWNC ring. ###### Proof. 1. (i) Set $A={\rm T}(R,M)$ and consider $I:={\rm T}(0,M)$. It is not too hard to verify that $I$ is a nil-ideal of $A$ such that $\dfrac{A}{I}\cong R$. So, the result follows directly from Theorem 2.8. 2. (ii) Put $A=\dfrac{R[x]}{\langle x^{n}\rangle}$. Considering $I:=\dfrac{\langle x\rangle}{\langle x^{n}\rangle}$, we obtain that $I$ is a nil-ideal of $A$ such that $\dfrac{A}{I}\cong R$. So, the result follows automatically from Theorem 2.8. 3. (iii) Knowing that the isomorphism $\dfrac{R[x]}{\langle x^{n}\rangle}\cong\dfrac{R[[x]]}{\langle x^{n}\rangle}$ holds, point (iii) follows automatically from (ii). ∎ Consider $R$ to be a ring and $M$ to be a bi-module over $R$. Let ${\rm DT}(R,M):=\\{(a,m,b,n)|a,b\in R,m,n\in M\\}$ with addition defined componentwise and multiplication defined by $(a_{1},m_{1},b_{1},n_{1})(a_{2},m_{2},b_{2},n_{2})=(a_{1}a_{2},a_{1}m_{2}+m_{1}a_{2},a_{1}b_{2}+b_{1}a_{2},a_{1}n_{2}+m_{1}b_{2}+b_{1}m_{2}+n_{1}a_{2}).$ Then, ${\rm DT}(R,M)$ is a ring which is isomorphic to ${\rm T}({\rm T}(R,M),{\rm T}(R,M))$. Also, we have ${\rm DT}(R,M)=\left\\{\begin{pmatrix}a&m&b&n\\\ 0&a&0&b\\\ 0&0&a&m\\\ 0&0&0&a\end{pmatrix}|a,b\in R,m,n\in M\right\\}.$ We have the following isomorphism of rings: $\dfrac{R[x,y]}{\langle x^{2},y^{2}\rangle}\rightarrow{\rm DT}(R,R)$ defined by $a+bx+cy+dxy\mapsto\begin{pmatrix}a&b&c&d\\\ 0&a&0&c\\\ 0&0&a&b\\\ 0&0&0&a\end{pmatrix}.$ We thereby derive the following. ###### Corollary 2.10.
Let $R$ be a ring and $M$ a bi-module over $R$. Then the following statements are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) ${\rm DT}(R,M)$ is a UWNC ring. 3. (iii) ${\rm DT}(R,R)$ is a UWNC ring. 4. (iv) $\dfrac{R[x,y]}{\langle x^{2},y^{2}\rangle}$ is a UWNC ring. Another two consequences of interest are the following ones: ###### Corollary 2.11. Let $R$, $S$ be rings and let $M$ be an $(R,S)$-bi-module. If $T=\begin{pmatrix}R&M\\\ 0&S\end{pmatrix}$ is a UWNC ring, then both $R,S$ are UWNC rings. The converse holds provided one of the rings $R$ or $S$ is UNC and the other is UWNC. ###### Proof. Set $I:=\left(\begin{array}[]{ll}0&M\\\ 0&0\end{array}\right)$. A routine inspection shows that this is a nil-ideal in $T$ with $\dfrac{T}{I}\cong R\times S$. Therefore, the result follows by applying Proposition 2.3 and Theorem 2.8. ∎ Let $\alpha$ be an endomorphism of $R$ and $n$ a positive integer. Nasr-Isfahani defined in [19] the skew triangular matrix ring as follows: ${\rm T}_{n}(R,\alpha)=\left\\{\left.\begin{pmatrix}a_{0}&a_{1}&a_{2}&\cdots&a_{n-1}\\\ 0&a_{0}&a_{1}&\cdots&a_{n-2}\\\ 0&0&a_{0}&\cdots&a_{n-3}\\\ \ddots&\ddots&\ddots&\vdots&\ddots\\\ 0&0&0&\cdots&a_{0}\end{pmatrix}\right|a_{i}\in R\right\\}$ with addition point-wise and multiplication given by: $\displaystyle\begin{pmatrix}a_{0}&a_{1}&a_{2}&\cdots&a_{n-1}\\\ 0&a_{0}&a_{1}&\cdots&a_{n-2}\\\ 0&0&a_{0}&\cdots&a_{n-3}\\\ \ddots&\ddots&\ddots&\vdots&\ddots\\\ 0&0&0&\cdots&a_{0}\end{pmatrix}\begin{pmatrix}b_{0}&b_{1}&b_{2}&\cdots&b_{n-1}\\\ 0&b_{0}&b_{1}&\cdots&b_{n-2}\\\ 0&0&b_{0}&\cdots&b_{n-3}\\\ \ddots&\ddots&\ddots&\vdots&\ddots\\\ 0&0&0&\cdots&b_{0}\end{pmatrix}=$ $\displaystyle\begin{pmatrix}c_{0}&c_{1}&c_{2}&\cdots&c_{n-1}\\\ 0&c_{0}&c_{1}&\cdots&c_{n-2}\\\ 0&0&c_{0}&\cdots&c_{n-3}\\\ \ddots&\ddots&\ddots&\vdots&\ddots\\\ 0&0&0&\cdots&c_{0}\end{pmatrix},$ where $c_{i}=a_{0}\alpha^{0}(b_{i})+a_{1}\alpha^{1}(b_{i-1})+\cdots+a_{i}\alpha^{i}(b_{0}),~{}~{}1\leq i\leq n-1.$ We denote the
elements of ${\rm T}_{n}(R,\alpha)$ by $(a_{0},a_{1},\ldots,a_{n-1})$. If $\alpha$ is the identity endomorphism, then ${\rm T}_{n}(R,\alpha)$ is a subring of the upper triangular matrix ring ${\rm T}_{n}(R)$. All of the above guarantees the validity of the following statement. ###### Corollary 2.12. Let $R$ be a ring. Then, the following are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) ${\rm T}_{n}(R,\alpha)$ is a UWNC ring. ###### Proof. Choose $I:=\left\\{\left.\begin{pmatrix}0&a_{12}&\ldots&a_{1n}\\\ 0&0&\ldots&a_{2n}\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\ldots&0\end{pmatrix}\right|a_{ij}\in R\quad(i<j)\right\\}.$ Then, one easily verifies that $I^{n}=0$ and $\dfrac{{\rm T}_{n}(R,\alpha)}{I}\cong R$. Consequently, Theorem 2.8 applies to get the desired result. ∎ Let $\alpha$ be an endomorphism of $R$. We denote by $R[x,\alpha]$ the skew polynomial ring whose elements are the polynomials over $R$, the addition is defined as usual, and the multiplication is defined by the equality $xr=\alpha(r)x$ for any $r\in R$. Thus, there is a ring isomorphism $\varphi:\dfrac{R[x,\alpha]}{\langle x^{n}\rangle}\rightarrow{\rm T}_{n}(R,\alpha),$ given by $\varphi(a_{0}+a_{1}x+\ldots+a_{n-1}x^{n-1}+\langle x^{n}\rangle)=(a_{0},a_{1},\ldots,a_{n-1})$ with $a_{i}\in R$, $0\leq i\leq n-1$. So, one finds that ${\rm T}_{n}(R,\alpha)\cong\dfrac{R[x,\alpha]}{\langle x^{n}\rangle}$, where $\langle x^{n}\rangle$ is the ideal generated by $x^{n}$. We, thus, extract the following claim. ###### Corollary 2.13. Let $R$ be a ring with an endomorphism $\alpha$ such that $\alpha(1)=1$. Then, the following are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) $\dfrac{R[x,\alpha]}{\langle x^{n}\rangle}$ is a UWNC ring. 3. (iii) $\dfrac{R[[x,\alpha]]}{\langle x^{n}\rangle}$ is a UWNC ring. The next assertion is pivotal. ###### Proposition 2.14. Let $R$ be a ring. Then, the following are equivalent: 1. (i) $R$ is a UNC ring. 2.
(ii) ${\rm T}_{n}(R)$ is a UNC ring for all $n\in\mathbb{N}$. 3. (iii) ${\rm T}_{n}(R)$ is a UNC ring for some $n\in\mathbb{N}$. 4. (iv) ${\rm T}_{n}(R)$ is a UWNC ring for some $n\geq 2$. ###### Proof. (i) $\Rightarrow$ (ii). This follows employing [14, Corollary 2.6]. (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv). These two implications are trivial, so we omit the details. (iv) $\Rightarrow$ (i). Method 1: Let $u\in U(R)$ and choose $A=\begin{pmatrix}u&&&\ast\\\ &-u&&\\\ &&\ddots&\\\ 0&&&1\end{pmatrix}\in U({\rm T}_{n}(R)).$ By hypothesis, we can find an idempotent $\begin{pmatrix}e_{1}&&&\ast\\\ &e_{2}&&\\\ &&\ddots&\\\ 0&&&e_{n}\end{pmatrix}$ and a nilpotent $\begin{pmatrix}w_{1}&&&\ast\\\ &w_{2}&&\\\ &&\ddots&\\\ 0&&&w_{n}\end{pmatrix}$ such that $A=\pm\begin{pmatrix}e_{1}&&&\ast\\\ &e_{2}&&\\\ &&\ddots&\\\ 0&&&e_{n}\end{pmatrix}+\begin{pmatrix}w_{1}&&&\ast\\\ &w_{2}&&\\\ &&\ddots&\\\ 0&&&w_{n}\end{pmatrix}.$ It now follows that $u=e_{1}+w_{1}$ or $u=e_{2}-w_{2}$. Clearly, $e_{1}$, $e_{2}$ are idempotents and $w_{1}$, $w_{2}$ are nilpotents in $R$, thus proving point (i). Method 2: Setting $I:=\\{(a_{ij})\in{\rm T}_{n}(R)|a_{ii}=0\\}$, we obtain that it is a nil-ideal in ${\rm T}_{n}(R)$ with $\dfrac{{\rm T}_{n}(R)}{I}\cong R^{n}$. Therefore, Theorem 2.8 and Corollary 2.6 are applicable to get the pursued result. ∎ We know that the direct sum $\mathbb{Z}_{2}[x]\oplus{\rm M}_{2}(\mathbb{Z}_{2})$ is a UWNC ring that is neither WUU nor weakly nil-clean. In this vein, the following example concretely demonstrates an indecomposable UWNC ring that is neither WUU nor weakly nil-clean. ###### Example 2.15. Let $R={\rm M}_{n}(\mathbb{Z}_{2})$ with $n\geq 2$, $S=\mathbb{Z}_{2}[x]$ and $M=S^{n}$. Then, the formal triangular matrix ring $T:=\begin{pmatrix}R&M\\\ 0&S\end{pmatrix}$ is an indecomposable UNC ring invoking [14, Example 2.7]. So, $T$ is an indecomposable UWNC ring.
But since $R$ is not a WUU ring and $S$ is not a weakly nil-clean ring, it plainly follows that $T$ is neither a WUU ring nor a weakly nil-clean ring, as claimed. It was proved in [9, Proposition 2.25] that any unital subring of a WUU ring is again a WUU ring. But, curiously, a subring of a UWNC ring may not be a UWNC ring as the next example shows. ###### Example 2.16. Let $T={\rm M}_{2}(\mathbb{Z}_{2})$ and $u=\begin{pmatrix}0&1\\\ 1&1\end{pmatrix}\in T$. Then, one sees that $u^{3}=1$. Now, let $R$ be the unital subring of $T$ generated by $u$. Therefore, one calculates that $R=\\{a1+bu+cu^{2}|a,b,c\in\mathbb{Z}\\}=\\{0,1,u,u^{2},1+u,1+u^{2},1+u+u^{2},u+u^{2}\\}.$ But, we have that $1+u=u^{2}$, $1+u^{2}=u$, $1+u+u^{2}=0$ and $u+u^{2}=1$, so we deduce $R=\\{0,1,u,u^{2}\\}$. It is now easy to see that ${\rm Nil}(R)=\\{0\\}$, so that $R$ is reduced. As $u^{2}\neq u$, the unit $u$ is manifestly not weakly nil-clean in $R$ utilizing [3, Theorem 20], and so $R$ is not a UWNC ring, although $T$ is, as asserted. ###### Proposition 2.17. The following two statements are valid: 1. (i) If $R$ is a weakly nil-clean ring, then ${\rm Z}(R)$ is strongly weakly nil-clean. 2. (ii) If $R$ is a UWNC ring, then ${\rm Z}(R)$ is a WUU ring. ###### Proof. 1. (i) Let $a\in{\rm Z}(R)$. Then, $a\in R$ is weakly nil-clean and central, so $a$ is strongly weakly nil-clean in $R$. Thus, $a\pm a^{2}\in{\rm Nil}(R)$ by [5, Theorem 2.1]. But, $a\pm a^{2}\in{\rm Z}(R)$, so that $a\pm a^{2}\in{\rm Nil}(R)\cap{\rm Z}(R)\subseteq{\rm Nil}({\rm Z}(R)).$ Hence, ${\rm Z}(R)$ is strongly weakly nil-clean, as required. 2. (ii) The proof is analogous to that of (i). ∎ ###### Proposition 2.18. For any ring $R$, the power series ring $R[[x]]$ is not UWNC. ###### Proof. Note the principal fact that the Jacobson radical of $R[[x]]$ is not nil (see, e.g., [17]). Thus, in view of Theorem 2.8, $R[[x]]$ is not a UWNC ring, as expected. ∎ ###### Lemma 2.19. Let $R$ be a ring. Then, the following two points are equivalent: 1. (i) $R$ is a UNC ring. 2.
(ii) $R$ is a UWNC ring and $2\in J(R)$. ###### Proof. (i) $\Longrightarrow$ (ii). Evidently, $R$ is a UWNC ring. Also, we have $2\in J(R)$ in virtue of [14, Lemma 2.4]. (ii) $\Rightarrow$ (i). Notice that $\dfrac{R}{J(R)}$ is of characteristic $2$, because $2\in J(R)$, and so $a=-a$ for every $a\in\dfrac{R}{J(R)}$. That is why $\dfrac{R}{J(R)}$ is a UNC ring, and thus we can apply [14, Theorem 2.5], since $J(R)$ is nil in view of Theorem 2.8. ∎ Recall that an element $r$ in a ring $R$ is said to be unipotent if $r-1$ is nilpotent. The following technical claim is elementary, but rather applicable in the sequel. ###### Lemma 2.20. Let $R$ be a ring and let $r\in R$ be the sum of an idempotent and a nilpotent. If $r^{2}=1$, then $r$ is unipotent. ###### Proof. Write $r=e+n$ with $e\in{\rm Id}(R)$ and $n\in{\rm Nil}(R)$. Set $f:=1-e$ and $x:=n(n+1)\in{\rm Nil}(R)$. Taking into account the equality $fn=f(r-e)=fr$, we compute that $fx=fn(n+1)=fr(r-e+1)=fr(r+f)=fr^{2}+frf=f+frf,$ and, similarly, that $xf=f+frf$. Hence, $fx=xf$, so that $x$ is a nilpotent which commutes with $f$, $e$, $n$ and $r$, respectively. Accordingly, $f=fr^{2}=fr\cdot r=fnr=fx(1+n)^{-1}r=f(1+n)^{-1}r\cdot x$ is a nilpotent, and hence $f=0$, as desired. ∎ Our next main result, which sounds quite surprising, is the following. ###### Theorem 2.21. Let $R$ be a ring and $2\in U(R)$. Then, the following two items are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) $R$ is a WUU ring. ###### Proof. (ii) $\Rightarrow$ (i). This is pretty obvious, so we omit the argument. (i) $\Rightarrow$ (ii). First, we show that $R$ is an abelian ring. To this goal, let $e^{2}=e\in R$, and let $a=1-2e$. Then, it is obviously true that $a^{2}=1$. Since $R$ is UWNC, either $a$ or $-a$ is nil-clean. By virtue of Lemma 2.20, one has that $a\in 1+{\rm Nil}(R)$ or $a\in-1+{\rm Nil}(R)$. If $a\in 1+{\rm Nil}(R)$, then $2e\in{\rm Nil}(R)$, and so $e\in{\rm Nil}(R)$. This implies that $e=0$.
If, however, $a\in-1+{\rm Nil}(R)$, then $2(1-e)\in{\rm Nil}(R)$, whence $1-e\in{\rm Nil}(R)$. This forces that $e=1$. Therefore, $R$ has only trivial idempotents. Thus, $R$ is abelian, as asserted. Now, let $u\in U(R)$, so $u=\pm e+n$, where $e\in{\rm Id}(R)$ and $n\in{\rm Nil}(R)$. If $u=e+n$, so $e=u-n\in U(R)$, then $e=1$. If, however, $u=-e+n$, so $e=n-u\in U(R)$, then $e=1$. Therefore, $u\in\pm 1+{\rm Nil}(R)$. Finally, one concludes that $R$ is WUU, as formulated. ∎ As an immediate consequence, we derive: ###### Corollary 2.22. Let $R$ be an abelian ring. Then, the following are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) $R$ is a WUU ring. Appealing to [9], a commutative ring $R$ is a WUU ring if, and only if, so is $R[x]$. In what follows, we present a generalization of this result. Standardly, the prime radical ${\rm N}(R)$ of a ring $R$ is defined to be the intersection of the prime ideals of $R$. It is known that ${\rm N}(R)={\rm Nil}_{\ast}(R)$, the lower nil-radical of $R$. A ring $R$ is called a $2$-primal ring if ${\rm N}(R)$ coincides with ${\rm Nil}(R)$. For an endomorphism $\alpha$ of a ring $R$, $R$ is called $\alpha$-compatible if, for any $a,b\in R$, $ab=0\Longleftrightarrow a\alpha(b)=0$, and in this case $\alpha$ is clearly injective. We now arrive at our third chief result. ###### Theorem 2.23. Let $R$ be a 2-primal ring and $\alpha$ an endomorphism of $R$ such that $R$ is $\alpha$-compatible. The following conditions are equivalent: 1. (i) $R[x,\alpha]$ is a UWNC ring. 2. (ii) $R[x,\alpha]$ is a WUU ring. 3. (iii) $R$ is a WUU ring. 4. (iv) $R$ is a UWNC ring. 5. (v) $J(R)={\rm Nil}(R)$ and $U(R)=\pm 1+J(R)$. ###### Proof. (ii) $\Rightarrow$ (i) and (iii) $\Rightarrow$ (iv). Straightforward. (i) $\Rightarrow$ (iv). As $\dfrac{R[x,\alpha]}{\langle x\rangle}\cong R$ and all units of $\dfrac{R[x,\alpha]}{\langle x\rangle}$ lift to units of $R[x,\alpha]$, the implication easily holds. (ii) $\Longrightarrow$ (iii).
We argue as in the proof of (i) $\Longrightarrow$ (iv). (iv) $\Rightarrow$ (v). As $R$ is 2-primal, we have ${\rm Nil}(R)\subseteq J(R)$, so that $J(R)={\rm Nil}(R)$ bearing in mind Theorem 2.8. Let $u\in U(R)$, so by hypothesis we have $u=\pm e+n$, where $e\in{\rm Id}(R)$ and $n\in{\rm Nil}(R)=J(R)$. If $u=e+n$, so $e=u-n\in U(R)$, and thus $e=1$. If, however, $u=-e+n$, so $e=n-u\in U(R)$, and thus $e=1$. Therefore, we obtain $u\in\pm 1+J(R)$, and hence $U(R)=\pm 1+J(R)$, as required. (v) $\Rightarrow$ (ii). As $R$ is a $2$-primal ring, with the aid of (v) we have $J(R)={\rm Nil}(R)=\mathrm{Nil}_{\ast}(R)$. Thus, the quotient-ring $\dfrac{R}{J(R)}$ is a reduced ring. Moreover, it is easy to see that $\alpha({\rm Nil}(R))\subseteq{\rm Nil}(R)$, so $\alpha(J(R))\subseteq J(R)$ and $\bar{\alpha}:\dfrac{R}{J(R)}\rightarrow\dfrac{R}{J(R)}$, defined by $\bar{\alpha}(\bar{a})=\overline{\alpha(a)}$, is an endomorphism of $\dfrac{R}{J(R)}$. We next show that $\dfrac{R}{J(R)}$ is $\bar{\alpha}$-compatible. That is, we must show that, for any $a+J(R),b+J(R)\in\dfrac{R}{J(R)}$, the equivalence $(a+J(R))(b+J(R))=J(R)\Leftrightarrow(a+J(R))\bar{\alpha}(b+J(R))=J(R)$ holds. Equivalently, we have to show that, for any $a,b\in R$, the equivalence $ab\in{\rm Nil}(R)\Leftrightarrow a\alpha(b)\in{\rm Nil}(R)$ is true. But this equivalence has been established in the proof of Claims 1 and 2 in [1, Theorem 3.6]. As $\dfrac{R}{J(R)}$ is a reduced factor-ring and is also $\bar{\alpha}$-compatible, with [7, Corollary 2.12] at hand we have $U\left(\dfrac{R}{J(R)}[x,\bar{\alpha}]\right)=U\left(\dfrac{R}{J(R)}\right),$ which is equal to $\\{\pm\bar{1}\\}$ by assumption. So, $\dfrac{R}{J(R)}[x,\bar{\alpha}]\cong\dfrac{R[x,\alpha]}{J(R)[x,\alpha]}$ is a WUU ring. Also, [7, Lemma 2.2] tells us that $\mathrm{Nil}_{*}(R[x,\alpha])=\mathrm{Nil}_{*}(R)[x,\alpha].$ Therefore, $J(R)[x,\alpha]=\mathrm{Nil}_{*}(R[x,\alpha]),$ which is manifestly nil.
Hence, [9, Propositions 2.5 and 2.6] ensure that $R[x,\alpha]$ is a WUU ring, as asked for. ∎ As a direct consequence, we deduce: ###### Corollary 2.24. Let $R$ be a 2-primal ring. Then, the following are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) $R[x]$ is a UWNC ring. 3. (iii) $J(R)={\rm Nil}(R)$ and $U(R)=\pm 1+J(R)$. The following criterion is worthy of documentation. ###### Proposition 2.25. Suppose $R$ is a commutative ring. Then, $R[x]$ is a UWNC ring if, and only if, $R$ is a UWNC ring. ###### Proof. Method 1: For the necessity, let $R[x]$ be a UWNC ring. We know that $\dfrac{R[x]}{\langle x\rangle}\cong R$. It, therefore, suffices to show that $\dfrac{R[x]}{\langle x\rangle}$ is a UWNC ring. To this aim, choosing $u+\langle x\rangle\in U\left(\dfrac{R[x]}{\langle x\rangle}\right)$, we derive $u\in U(R[x])$, because all units of $\dfrac{R[x]}{\langle x\rangle}$ lift to units of $R[x]$. Thus, we write $u=\pm e+n$, where $e\in{\rm Id}(R[x])$ and $n\in{\rm Nil}(R[x])$. So, $u+\langle x\rangle=\pm(e+\langle x\rangle)+(n+\langle x\rangle),$ where $e+\langle x\rangle\in{\rm Id}\left(\dfrac{R[x]}{\langle x\rangle}\right)$ and $n+\langle x\rangle\in{\rm Nil}\left(\dfrac{R[x]}{\langle x\rangle}\right)$. Consequently, $u+\langle x\rangle$ is a weakly nil-clean element and $\dfrac{R[x]}{\langle x\rangle}$ is a UWNC ring, as desired. For the sufficiency, write $f=a_{0}+a_{1}x+\ldots+a_{n}x^{n}\in U(R[x]),$ so $a_{0}\in U(R)$ and $a_{1},\ldots,a_{n}\in{\rm Nil}(R)$. By hypothesis, we have $a_{0}=\pm e+n$, where $e\in{\rm Id}(R)$ and $n\in{\rm Nil}(R)$. This allows us to infer that $f=(\pm e+n)+a_{1}x+\cdots+a_{n}x^{n}=\pm e+(n+a_{1}x+\cdots+a_{n}x^{n}),$ where $e\in{\rm Id}(R[x])$ and $n+a_{1}x+\cdots+a_{n}x^{n}\in{\rm Nil}(R[x])$. Finally, $f$ is a weakly nil-clean element, thus establishing the result. Method 2: It is well known that every commutative ring is a $2$-primal ring, and hence the result can be deduced from Corollary 2.24.
∎ Incidentally, we are able to prove the following curious statement. ###### Proposition 2.26. Let $R$ be a ring, and $m,n\geq 1$. If the matrix rings ${\rm M}_{n}(R)$ and ${\rm M}_{m}(R)$ are both UWNC, then so is the triangular matrix ring ${\rm T}_{n+m}(R)$. ###### Proof. Let $V\in U({\rm T}_{n+m}(R))$ be the $(n+m)\times(n+m)$ triangular matrix which we will write in the block decomposition form as follows $V=\begin{pmatrix}V_{11}&A_{12}\\\ 0&V_{22}\end{pmatrix}$, where $V_{11}\in U({\rm M}_{n}(R))$, $V_{22}\in U({\rm M}_{m}(R))$ and $A_{12}$ is an appropriately sized rectangular matrix. By hypothesis, there exist idempotent matrices $E_{1}$, $E_{2}$ and nilpotent matrices $N_{1}$, $N_{2}$ in ${\rm M}_{n}(R)$ and ${\rm M}_{m}(R)$ such that $V_{11}=\pm E_{1}+N_{1}$ and $V_{22}=\pm E_{2}+N_{2}$. Thus, we obtain the decomposition $\displaystyle\begin{pmatrix}V_{11}&A_{12}\\\ 0&V_{22}\end{pmatrix}$ $\displaystyle=\begin{pmatrix}\pm E_{1}+N_{1}&A_{12}\\\ 0&\pm E_{2}+N_{2}\end{pmatrix}$ $\displaystyle=\pm\begin{pmatrix}E_{1}&0\\\ 0&E_{2}\end{pmatrix}+\begin{pmatrix}N_{1}&A_{12}\\\ 0&N_{2}\end{pmatrix}.$ Since $E_{1}$, $E_{2}$ are idempotents, an easy verification guarantees that $\begin{pmatrix}E_{1}&0\\\ 0&E_{2}\end{pmatrix}$ is an idempotent. It is also readily seen that $\begin{pmatrix}N_{1}&A_{12}\\\ 0&N_{2}\end{pmatrix}$ is nilpotent. Thus, the above decomposition is the desired weakly nil-clean decomposition. ∎ Let $A$, $B$ be two rings and $M$, $N$ be an $(A,B)$-bi-module and a $(B,A)$-bi-module, respectively. Also, we consider the bilinear maps $\phi:M\otimes_{B}N\rightarrow A$ and $\psi:N\otimes_{A}M\rightarrow B$ that satisfy the following properties: $Id_{M}\otimes_{B}\psi=\phi\otimes_{A}Id_{M},Id_{N}\otimes_{A}\phi=\psi\otimes_{B}Id_{N}.$ For $m\in M$ and $n\in N$, define $mn:=\phi(m\otimes n)$ and $nm:=\psi(n\otimes m)$. 
Now the $4$-tuple $R=\begin{pmatrix}A&M\\\ N&B\end{pmatrix}$ becomes an associative ring under the obvious matrix operations; it is called a Morita context ring. Denote the two-sided ideals ${\rm Im}\,\phi$ and ${\rm Im}\,\psi$ by $MN$ and $NM$, respectively; they are called the trace ideals of the Morita context (compare with [2] as well). We now have at our disposal all the ingredients necessary to establish the following statement. ###### Proposition 2.27. Let $R=\left(\begin{array}[]{ll}A&M\\\ N&B\end{array}\right)$ be a Morita context ring such that $MN$ and $NM$ are nilpotent ideals of $A$ and $B$, respectively. If $R$ is a UWNC ring, then $A$ and $B$ are UWNC rings. The converse holds provided one of $A$ or $B$ is UNC and the other is UWNC. ###### Proof. Canonically, $\dfrac{M}{M_{0}}$ is an $\left(\dfrac{A}{J(A)},\dfrac{B}{J(B)}\right)$-bi-module and $\dfrac{N}{N_{0}}$ is a $\left(\dfrac{B}{J(B)},\dfrac{A}{J(A)}\right)$-bi-module, and this induces a Morita context $\begin{pmatrix}\dfrac{A}{J(A)}&\dfrac{M}{M_{0}}\\\ \dfrac{N}{N_{0}}&\dfrac{B}{J(B)}\end{pmatrix}$, where the context products are given by $(x+M_{0})(y+N_{0})=xy+J(A),(y+N_{0})(x+M_{0})=yx+J(B)$ for all $x\in M$ and $y\in N$. Thus, an appeal to [2, Lemma 4.5] guarantees that $\dfrac{R}{J(R)}\cong\begin{pmatrix}\dfrac{A}{J(A)}&\dfrac{M}{M_{0}}\\\ \dfrac{N}{N_{0}}&\dfrac{B}{J(B)}\end{pmatrix}.$ However, as $MN$ and $NM$ are nilpotent ideals of $A$ and $B$, respectively, we then have that $MN\subseteq J(A)$ and $NM\subseteq J(B)$. Therefore, in view of [20], we argue that $J(R)=\begin{pmatrix}J(A)&M\\\ N&J(B)\end{pmatrix}$ and hence $\dfrac{R}{J(R)}\cong\dfrac{A}{J(A)}\times\dfrac{B}{J(B)}$. Since $R$ is a UWNC ring, the factor $\dfrac{R}{J(R)}$ is a UWNC ring and $J(R)$ is nil by Theorem 2.8, so it follows that both $\dfrac{A}{J(A)}$ and $\dfrac{B}{J(B)}$ are UWNC. As $J(R)$ is nil, $J(A)$ and $J(B)$ are nil too. Thus, $A$ and $B$ are UWNC as well. 
Conversely, assuming that $A$ is a UNC ring and $B$ is a UWNC ring, we conclude that $\dfrac{R}{J(R)}$ is a UWNC ring by a combination of [14, Theorem 2.5], Theorem 2.8 and Proposition 2.2. It then suffices to show that $J(R)$ is nil. To this end, suppose $r=\begin{pmatrix}a&m\\\ n&b\end{pmatrix}\in J(R)$. Then, $a\in J(A)$, $b\in J(B)$. By virtue of Theorem 2.8, both ideals $J(A)$ and $J(B)$ are nil. Thus, we can find $k\in\mathbb{N}$ such that $a^{k}=0$ in $A$ and $b^{k}=0$ in $B$. So, $\begin{pmatrix}a&m\\\ n&b\end{pmatrix}^{k+1}\in\begin{pmatrix}MN&M\\\ N&NM\end{pmatrix}.$ Clearly, $\begin{pmatrix}MN&M\\\ N&NM\end{pmatrix}^{2}=\begin{pmatrix}MN&(MN)M\\\ (NM)N&NM\end{pmatrix}.$ Moreover, for any $j\in\mathbb{N}$, one easily checks that $\begin{pmatrix}MN&M\\\ N&NM\end{pmatrix}^{2j}=\begin{pmatrix}MN&(MN)M\\\ (NM)N&NM\end{pmatrix}^{j}=\begin{pmatrix}(MN)^{j}&(MN)^{j}M\\\ (NM)^{j}N&(NM)^{j}\end{pmatrix}.$ By hypothesis, we may assume that $(MN)^{p}=0$ in $A$ and $(NM)^{p}=0$ in $B$. Therefore, $\begin{pmatrix}MN&M\\\ N&NM\end{pmatrix}^{2p}=0.$ Consequently, $\begin{pmatrix}a&m\\\ n&b\end{pmatrix}^{2p(k+1)}=0$, and so $J(R)$ is indeed nil, as desired. ∎ We now state the following. ###### Example 2.28. Consider the Morita context $R=\begin{pmatrix}\mathbb{Z}_{4}&\mathbb{Z}_{4}\\\ 2\mathbb{Z}_{4}&\mathbb{Z}_{4}\end{pmatrix}$, where the context products are the same as the product in $\mathbb{Z}_{4}$. Then, we claim that $R$ is UWNC. Since $\mathbb{Z}_{4}$ is obviously UNC, we are done by Proposition 2.27. This substantiates our claim. 
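The claim in Example 2.28 can also be checked exhaustively, since the ring in question has only $4\cdot 4\cdot 2\cdot 4=128$ elements. The following Python sketch (an illustration, not part of the proof) enumerates the units, idempotents and nilpotents of $R=\begin{pmatrix}\mathbb{Z}_{4}&\mathbb{Z}_{4}\\\ 2\mathbb{Z}_{4}&\mathbb{Z}_{4}\end{pmatrix}$ by brute force and verifies that every unit admits a weakly nil-clean decomposition $u=\pm e+n$:

```python
import itertools
import numpy as np

MOD = 4

# Elements of R = [[Z4, Z4], [2Z4, Z4]]: the (2,1) entry ranges over 2*Z4 = {0, 2}.
R = [np.array([[a, m], [n, b]]) for a, m, n, b
     in itertools.product(range(4), range(4), (0, 2), range(4))]

mul = lambda x, y: (x @ y) % MOD
eq = lambda x, y: np.array_equal(x, y)
I = np.eye(2, dtype=int)

# Units: elements with a two-sided inverse, found by exhaustive search.
units = [x for x in R if any(eq(mul(x, y), I) and eq(mul(y, x), I) for y in R)]
idems = [x for x in R if eq(mul(x, x), x)]
# x is nilpotent iff x^4 = 0: mod 2 the image squares to 0, so x^2 lies in 2*R
# and hence x^4 = 0 mod 4.
nils = [x for x in R if eq(mul(mul(x, x), mul(x, x)), 0 * I)]

def weakly_nil_clean(u):
    return any(eq(u, (s * e + n) % MOD) for s in (1, -1) for e in idems for n in nils)

assert len(units) == 32            # a matrix is a unit iff both diagonal entries are odd
assert all(weakly_nil_clean(u) for u in units)
```

The same brute force run on $\mathbb{Z}_{4}$ alone confirms that it is UNC: its units are $1=1+0$ and $3=1+2$, with $1$ idempotent and $2$ nilpotent.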
Given a ring $R$ and a central element $s$ of $R$, the $4$-tuple $\begin{pmatrix}R&R\\\ R&R\end{pmatrix}$ becomes a ring with addition defined component-wise and with multiplication defined by $\begin{pmatrix}a_{1}&x_{1}\\\ y_{1}&b_{1}\end{pmatrix}\begin{pmatrix}a_{2}&x_{2}\\\ y_{2}&b_{2}\end{pmatrix}=\begin{pmatrix}a_{1}a_{2}+sx_{1}y_{2}&a_{1}x_{2}+x_{1}b_{2}\\\ y_{1}a_{2}+b_{1}y_{2}&sy_{1}x_{2}+b_{1}b_{2}\end{pmatrix}.$ This ring is denoted by ${\rm K}_{s}(R)$. A Morita context $\begin{pmatrix}A&M\\\ N&B\end{pmatrix}$ with $A=B=M=N=R$ is called a generalized matrix ring over $R$. It was observed by Krylov in [16] that a ring $S$ is a generalized matrix ring over $R$ if, and only if, $S={\rm K}_{s}(R)$ for some $s\in{\rm Z}(R)$. Here $MN=NM=sR$, so $MN\subseteq J(A)\Longleftrightarrow s\in J(R)$, $NM\subseteq J(B)\Longleftrightarrow s\in J(R)$, and $MN$, $NM$ are nilpotent $\Longleftrightarrow s$ is nilpotent. As three corollaries, we extract: ###### Corollary 2.29. Let $R$ be a ring and $s\in{\rm Z}(R)\cap{\rm Nil}(R)$. If ${\rm K}_{s}(R)$ is a UWNC ring, then $R$ is a UWNC ring. The converse holds, provided $R$ is a UNC ring. Another construction of interest is the following one. ###### Example 2.30. As $\mathbb{Z}_{4}$ is a UNC ring, it follows from Corollary 2.29 that the generalized matrix ring ${\rm K}_{(2)}(\mathbb{Z}_{4})=\\{\begin{pmatrix}a&b\\\ c&d\end{pmatrix}|a,b,c,d\in\mathbb{Z}_{4}\\}$ is a UWNC ring. Following Tang and Zhou (cf. 
[21]), for $n\geq 2$ and for $s\in{\rm Z}(R)$, the $n\times n$ formal matrix ring over $R$ defined by $s$, and denoted by ${\rm M}_{n}(R;s)$, is the set of all $n\times n$ matrices over $R$ with the usual addition of matrices and with multiplication defined below: For $(a_{ij})$ and $(b_{ij})$ in ${\rm M}_{n}(R;s)$, $(a_{ij})(b_{ij})=(c_{ij}),\quad\text{where}~{}~{}c_{ij}=\sum_{k}s^{\delta_{ikj}}a_{ik}b_{kj}.$ Here, $\delta_{ikj}=1+\delta_{ij}-\delta_{ik}-\delta_{kj}$, where $\delta_{ij}$, $\delta_{ik}$, $\delta_{kj}$ are the Kronecker delta symbols. We, therefore, have the following. ###### Corollary 2.31. Let $R$ be a ring and $s\in{\rm Z}(R)\cap{\rm Nil}(R)$. If ${\rm M}_{n}(R;s)$ is a UWNC ring, then $R$ is a UWNC ring. The converse holds, provided $R$ is a UNC ring. ###### Proof. If $n=1$, then ${\rm M}_{n}(R;s)=R$. So, in this case, there is nothing to prove. Let $n=2$. By the definition of ${\rm M}_{n}(R;s)$, we have ${\rm M}_{2}(R;s)\cong{\rm K}_{s^{2}}(R)$. Clearly, $s^{2}\in{\rm Nil}(R)\cap{\rm Z}(R)$, so the claim holds for $n=2$ with the help of Corollary 2.29. To proceed by induction, assume now that $n>2$ and that the claim holds for ${\rm M}_{n-1}(R;s)$. Set $A:={\rm M}_{n-1}(R;s)$. 
Then, ${\rm M}_{n}(R;s)=\begin{pmatrix}A&M\\\ N&R\end{pmatrix}$ is a Morita context, where $M=\begin{pmatrix}M_{1n}\\\ \vdots\\\ M_{n-1,n}\end{pmatrix}\quad\text{and}\quad N=(M_{n1}\dots M_{n,n-1})$ with $M_{in}=M_{ni}=R$ for all $i=1,\dots,n-1,$ and $\displaystyle\psi:N\otimes M\rightarrow R,\quad n\otimes m\mapsto snm$ $\displaystyle\phi:M\otimes N\rightarrow A,\quad m\otimes n\mapsto smn.$ Besides, for $x=\begin{pmatrix}x_{1n}\\\ \vdots\\\ x_{n-1,n}\end{pmatrix}\in M$ and $y=(y_{n1}\dots y_{n,n-1})\in N$, we write $xy=\begin{pmatrix}s^{2}x_{1n}y_{n1}&sx_{1n}y_{n2}&\dots&sx_{1n}y_{n,n-1}\\\ sx_{2n}y_{n1}&s^{2}x_{2n}y_{n2}&\dots&sx_{2n}y_{n,n-1}\\\ \vdots&\vdots&\ddots&\vdots\\\ sx_{n-1,n}y_{n1}&sx_{n-1,n}y_{n2}&\dots&s^{2}x_{n-1,n}y_{n,n-1}\end{pmatrix}\in sA$ and $yx=s^{2}y_{n1}x_{1n}+s^{2}y_{n2}x_{2n}+\dots+s^{2}y_{n,n-1}x_{n-1,n}\in s^{2}R.$ Since $s$ is nilpotent, we see that $MN$ and $NM$ are nilpotent too. Thus, we obtain that $\frac{{\rm M}_{n}(R;s)}{J({\rm M}_{n}(R;s))}\cong\frac{A}{J(A)}\times\frac{R}{J(R)}.$ Finally, the induction hypothesis and Proposition 2.27 yield the claim. ∎ A Morita context $\begin{pmatrix}A&M\\\ N&B\end{pmatrix}$ is called trivial if the context products are trivial, i.e., $MN=0$ and $NM=0$. In this case, $\begin{pmatrix}A&M\\\ N&B\end{pmatrix}\cong{\rm T}(A\times B,M\oplus N),$ the trivial extension of $A\times B$ by $M\oplus N$ (see [15]). What we can now offer is the following. ###### Corollary 2.32. If the trivial Morita context $\begin{pmatrix}A&M\\\ N&B\end{pmatrix}$ is a UWNC ring, then $A$, $B$ are UWNC rings. The converse holds if one of the rings $A$ or $B$ is UWNC and the other is UNC. ###### Proof. One readily checks that the isomorphisms $\begin{pmatrix}A&M\\\ N&B\end{pmatrix}\cong{\rm T}(A\times B,M\oplus N)\cong\begin{pmatrix}A\times B&M\oplus N\\\ 0&A\times B\end{pmatrix}$ hold. Then, the rest of the proof follows by combining Corollary 2.9 and Proposition 2.3. 
∎ We now intend to prove the following. ###### Theorem 2.33. Let $R$ be a local ring. Then, the following are equivalent: 1. (i) $R$ is a UWNC ring. 2. (ii) $R$ is a weakly nil-clean ring. ###### Proof. (i) $\Rightarrow$ (ii). As $R$ is local, one finds that ${\rm Id}(R)=\\{0,1\\}$, and so $R$ is abelian. Therefore, $R$ is WUU owing to Corollary 2.22. Also, $R$ is clean, because every local ring is clean. Thus, $R$ is a clean WUU ring, and we apply [9, Corollary 2.15] to find that $R$ is strongly weakly nil-clean, and so it is weakly nil-clean. (ii) $\Rightarrow$ (i). It is clear. ∎ We say that $B$ is a unital subring of a ring $A$ if $B\subseteq A$, $1_{A}\in B$ and, for any $x,y\in B$, both $x-y\in B$ and $xy\in B$. Let $A$ be a ring and $B$ a unital subring of $A$, and denote by $R[A,B]$ the set $\\{(a_{1},\ldots,a_{n},b,b,\ldots):a_{i}\in A,b\in B,1\leq i\leq n,n\in\mathbb{N}\\}$. Then, $R[A,B]$ forms a ring under the usual component-wise addition and multiplication. We, thereby, establish the following. ###### Proposition 2.34. Let $A$ be a ring and $B$ a unital subring of $A$. Then, the following are equivalent: 1. (i) $A$ and $B$ are UWNC. 2. (ii) $R[A,B]$ is UWNC. ## 3\. UWNC Group Rings We begin here with the following simple but useful technicality. ###### Lemma 3.1. Let $R$ and $S$ be rings and $i:R\rightarrow S$, $\varepsilon:S\rightarrow R$ be ring homomorphisms such that $\varepsilon i=id_{R}$. 1. (i) $\varepsilon({\rm Nil}(S))={\rm Nil}(R)$, $\varepsilon(U(S))=U(R)$ and $\varepsilon({\rm Id}(S))={\rm Id}(R)$. 2. (ii) If $S$ is a UWNC ring, then $R$ is a UWNC ring. 3. (iii) If $R$ is a UWNC ring and $\text{ker}\varepsilon\subseteq{\rm Nil}(S)$, then $S$ is a UWNC ring. ###### Proof. 1. (i) Clearly, the inclusions $\varepsilon({\rm Nil}(S))\subseteq{\rm Nil}(R)$, $\varepsilon(U(S))\subseteq U(R)$ and $\varepsilon({\rm Id}(S))\subseteq{\rm Id}(R)$ are valid. 
On the other hand, we also have that ${\rm Nil}(R)=\varepsilon i({\rm Nil}(R))\subseteq\varepsilon({\rm Nil}(S))$, $U(R)=\varepsilon i(U(R))\subseteq\varepsilon(U(S))$ and ${\rm Id}(R)=\varepsilon i({\rm Id}(R))\subseteq\varepsilon({\rm Id}(S))$. 2. (ii) Method 1: Let $S$ be a UWNC ring. Choose $u\in U(R)=\varepsilon(U(S))$, so $u=\varepsilon(v)$, where $v\in U(S)$. Thus, we can write $v=\pm e+q$, where $e\in{\rm Id}(S)$, $q\in{\rm Nil}(S)$. Therefore, $u=\varepsilon(v)=\varepsilon(\pm e+q)=\pm\varepsilon(e)+\varepsilon(q),$ where $\varepsilon(e)\in{\rm Id}(R)$ and $\varepsilon(q)\in{\rm Nil}(R)$ exploiting (i). Method 2: Let $S$ be a UWNC ring. Hence, $U(S)=\pm{\rm Id}(S)+{\rm Nil}(S)$, whence by (i) one has that $U(R)=\varepsilon(U(S))=\varepsilon(\pm{\rm Id}(S)+{\rm Nil}(S))=\pm\varepsilon({\rm Id}(S))+\varepsilon({\rm Nil}(S))=\pm{\rm Id}(R)+{\rm Nil}(R),$ as required. 3. (iii) If $R$ is a UWNC ring, point (i) ensures that $U(S)=\varepsilon^{-1}(U(R))=\varepsilon^{-1}(\pm{\rm Id}(R)+{\rm Nil}(R))=\pm{\rm Id}(S)+{\rm Nil}(S)+\text{ker}\varepsilon=\pm{\rm Id}(S)+{\rm Nil}(S),$ as required. ∎ ###### Remark 3.2. It is a routine technical exercise to see that Lemma 3.1 (ii) and (iii) remain true for WUU rings. Suppose now that $G$ is an arbitrary group and $R$ is an arbitrary ring. As usual, $RG$ stands for the group ring of $G$ over $R$. The homomorphism $\varepsilon:RG\rightarrow R$, defined by $\varepsilon(\displaystyle\sum_{g\in G}a_{g}g)=\displaystyle\sum_{g\in G}a_{g}$, is called the augmentation map of $RG$ and its kernel, denoted by $\Delta(G)$, is called the augmentation ideal of $RG$. ###### Proposition 3.3. Let $R$ be a ring and $G$ a group. If $RG$ is a UWNC ring, then $R$ is a UWNC ring. The converse holds if $\Delta(G)\subseteq{\rm Nil}(RG)$. ###### Proof. Let us consider the inclusion $i:R\rightarrow RG$, given by $i(r)=\displaystyle\sum_{g\in G}a_{g}g$, where $a_{1_{G}}=r$ and $a_{g}=0$ provided $g\neq 1_{G}$. 
It is easy to check that the map $i$ is a ring monomorphism and thus $R$ can also be viewed as a subring of $RG$. Furthermore, it suffices to apply Lemma 3.1 (ii) and (iii) to get the claim. ∎ ###### Corollary 3.4. Let $R$ be a ring and $G$ a group. If $RG$ is a WUU ring, then $R$ is a WUU ring. The converse holds if $\Delta(G)\subseteq{\rm Nil}(RG)$. ###### Proof. It follows at once from Proposition 3.3 and Remark 3.2. ∎ A group $G$ is called locally finite if every finitely generated subgroup of $G$ is finite. Let $p$ be a prime number. A group $G$ is called a $p$-group if the order of each element of $G$ is a power of $p$. We finish off our results with the following statement. ###### Proposition 3.5. Let $R$ be a UWNC ring with $p\in{\rm Nil}(R)$ and let $G$ be a locally finite $p$-group, where $p$ is a prime. Then, $RG$ is a UWNC ring. ###### Proof. Referring to [8, Proposition 16], one verifies that $\Delta(G)$ is nil. Now, the assertion follows from the obvious isomorphism $\dfrac{RG}{\Delta(G)}\cong R$ and Theorem 2.8. ∎ ###### Remark 3.6. It is easily seen that Proposition 3.5 holds also for WUU rings. ## 4\. Open Questions We close the work with the following challenging conjectures and problems. A ring $R$ is called uniquely weakly nil-clean, provided that $R$ is a weakly nil-clean ring in which every nil-clean element is uniquely nil-clean (see [6]). ###### Conjecture 4.1. A ring $R$ is a WUU ring if, and only if, every unit of $R$ is uniquely weakly nil-clean. ###### Conjecture 4.2. A ring $R$ is strongly weakly nil-clean if, and only if, it is a semi-potent WUU ring. ###### Problem 4.3. Is a clean, UWNC ring a weakly nil-clean ring? ###### Problem 4.4. Characterize semi-perfect UWNC rings. Are they weakly nil-clean? ###### Problem 4.5. Suppose that $R$ is a ring and $n\in\mathbb{N}$. Find a criterion for when the full $n\times n$ matrix ring ${\rm M}_{n}(R)$ is UWNC. Funding: The work of the first-named author, P.V. 
Danchev, is partially supported by the project Junta de Andalucía under Grant FQM 264, and by the BIDEB 2221 of TÜBİTAK. ## References * [1] A. Alhevaz, A. Moussavi, and M. Habibi, On rings having McCoy-like conditions, Commun. Algebra 40 (2012), 1195-1221. * [2] R. Barati, A. Moussavi, and A. Abyzov, Rings whose elements are sums of $m$-potents and nilpotents, Commun. Algebra 50 (10) (2022), 4437-4459. * [3] S. Breaz, P. Danchev, and Y. Zhou, Rings in which every element is either a sum or a difference of a nilpotent and an idempotent, J. Algebra Appl. 15 (8) (2016). * [4] G. Calugareanu, UU rings, Carpathian J. Math. 31 (2) (2015), 157-163. * [5] H. Chen and M. Sheibani, Strongly weakly nil-clean rings, J. Algebra Appl. 16 (12) (2017). * [6] H. Chen and M. Sheibani, Theory of Clean Rings and Matrices, World Scientific Publishing Company, 2022. * [7] W. Chen, On constant products of elements in skew polynomial rings, Bull. Iranian Math. Soc. 41 (2) (2015), 453-462. * [8] I. G. Connell, On the group ring, Can. J. Math. 15 (1963), 650-685. * [9] P. V. Danchev, Weakly UU rings, Tsukuba J. Math. 40 (1) (2016), 101-118. * [10] P. V. Danchev, Weakly JU rings, Missouri J. Math. Sci. 29 (2) (2017), 184-196. * [11] P. V. Danchev and T. Y. Lam, Rings with unipotent units, Publ. Math. (Debrecen) 88 (3-4) (2016), 449-466. * [12] P. V. Danchev and W. Wm. McGovern, Commutative weakly nil clean unital rings, J. Algebra 425 (5) (2015), 410-422. * [13] A. J. Diesl, Nil clean rings, J. Algebra 383 (2013), 197-211. * [14] A. Karimi-Mansoub, T. Kosan and Y. Zhou, Rings in which every unit is a sum of a nilpotent and an idempotent, Contemp. Math. 715 (2018). * [15] M. T. Kosan, The P. P. property of trivial extensions, J. Algebra Appl. 14 (2015). * [16] P. A. Krylov, Isomorphism of generalized matrix rings, Algebra Logic 47 (4) (2008), 258-262. * [17] T.-Y. Lam, A First Course in Noncommutative Rings, Graduate Texts in Mathematics, 131, Springer-Verlag, New York, 1991. * [18] A. 
Mimouni, On the Jacobson unit-like rings, Rocky Mountain J. Math. 54 (4) (2024). * [19] A. R. Nasr-Isfahani, On skew triangular matrix rings, Commun. Algebra 39 (11) (2011), 4461-4469. * [20] G. Tang, C. Li, and Y. Zhou, Study of Morita contexts, Commun. Algebra 42 (4) (2014), 1668-1681. * [21] G. Tang and Y. Zhou, A class of formal matrix rings, Linear Algebra Appl. 438 (2013), 4672-4688.
# Optimal Transport Divergences induced by Scoring Functions††thanks: The authors thank Tobias Fissler for fruitful and stimulating discussions on the topic. SP gratefully acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) with funding reference numbers DGECR-2020-00333 and RGPIN-2020-04289. SV is grateful to Research Foundation Flanders (FWO) for financial support with funding reference number FWO SBO S006721N Silvana M. Pesenti Department of Statistical Sciences, University of Toronto, Canada<EMAIL_ADDRESS>Steven Vanduffel Department of Economics and Political Science, Vrije Universiteit Brussel, Belgium<EMAIL_ADDRESS> ###### Abstract We employ scoring functions, used in statistics for eliciting risk functionals, as cost functions in the Monge-Kantorovich (MK) optimal transport problem. This gives rise to a rich variety of novel asymmetric MK divergences, which subsume the family of Bregman-Wasserstein divergences. We show that for distributions on the real line, the comonotonic coupling is optimal for the majority of the new divergences. Specifically, we derive the optimal coupling of the MK divergences induced by functionals including the mean, generalised quantiles, expectiles, and shortfall measures. Furthermore, we show that while any elicitable law-invariant convex risk measure gives rise to infinitely many MK divergences, the comonotonic coupling is simultaneously optimal. The novel MK divergences, which can be efficiently calculated, open an array of applications in robust stochastic optimisation. We derive sharp bounds on distortion risk measures under a Bregman-Wasserstein divergence constraint, and solve for cost-efficient portfolio strategies under benchmark constraints. 
###### keywords: Optimal transport, risk measures, scoring functions, elicitability, coupling, Wasserstein distance, asymmetric optimal transport ## 1 Introduction This work connects optimal transport with the statistics of point forecasts and risk measure theory to derive asymmetric optimal transport divergences. Initially developed by Gaspard Monge in the 18th century to establish the most efficient way of moving piles of soil from one location to another, optimal transport (OT) theory has evolved into a flourishing mathematical discipline that studies the optimal transportation of one distribution to another; see e.g. the monographs [19] and [23]. The theory of OT has numerous applications in various distinct disciplines including economics, engineering, and the natural sciences. For instance, economists use OT to model the flow of goods and resources in financial markets. OT also appears in causal inference, partial identification, and the analysis of distributional treatment effects in, e.g., biostatistics. In image processing and computer vision, OT is used for solving matching and registration problems, whereas in the domain of fluid dynamics it optimises the flow of gases. The most commonly used and best understood OT distance is the (2-)Wasserstein distance, which arises as the minimiser of the Monge-Kantorovich OT problem with the symmetric cost function $c(z_{1},z_{2})=(z_{1}-z_{2})^{2}$. In various problems of interest, however, asymmetry may be desired. A case in point is the quantification of model ambiguity in optimal portfolio strategies, where a decision maker assigns larger costs to potential losses than to equally large gains (if any). The literature on asymmetric cost functions is scarce, with the exception of the recently introduced Bregman-Wasserstein (BW) divergences [5]. 
A BW divergence is the minimiser of the Monge-Kantorovich OT problem in which the cost function is a Bregman divergence generated by a strictly convex function $\phi(\cdot)$; we refer to [20] for discussions and information geometric interpretations. The BW divergence is an asymmetric generalisation of the Wasserstein distance, recovered for $\phi(x)=x^{2}$, that allows the modelling of dissimilarities. The study of elicitability is a fast growing field in statistics, and at its core are scoring functions that incentivise truthful predictions and allow for forecast comparison, model comparison (backtesting), and model calibration [14, 9]. In sensitivity analysis, scoring functions are utilised for defining sensitivity measures which quantify the sensitivity of an elicitable risk measure to perturbations in the model’s input factors [10]. The best-known family of scoring functions are the Bregman divergences, which elicit the mean; here, a functional is called elicitable if it is a minimiser of an expected score, see Definition 2.2. Other elicitable functionals are quantiles, expectiles, and shortfall risk measures; tools used in risk management. Scoring functions are by nature asymmetric, making them ideal candidates for asymmetric cost functions in the Monge-Kantorovich OT problem. Indeed, we propose novel asymmetric Monge-Kantorovich (MK) divergences where the OT cost functions are statistical scoring functions. Since a Bregman divergence elicits the mean and gives rise to a BW divergence, our new MK divergences can be seen as generalisations of BW divergences, and thus of the Wasserstein distance. In addition to scoring functions that elicit the mean, we study scoring functions that elicit the quantile, the expectile, and law-invariant convex risk measures. Interestingly, we find that most of the introduced MK divergences are attained by the comonotonic coupling. 
Furthermore, as an elicitable functional possesses infinitely many scoring functions, and thus gives rise to infinitely many MK divergences, the comonotonic optimal coupling is typically simultaneously optimal. Using the celebrated Osband’s principle in statistics, we propose ways to create novel MK divergences that are attained by the anti- or comonotonic coupling. Furthermore, we prove that MK divergences induced by any law-invariant elicitable convex risk measure are attained by the comonotonic coupling. Finally, we discuss two applications to robust stochastic optimisation. First, we derive sharp bounds on distortion risk measures when admissible distributions belong to a BW-ball around a reference distribution, thus significantly generalising recent results of [2], who solve this problem for the special case of a Wasserstein ball. Second, we find the cheapest portfolio strategy under the constraint that the distribution of terminal wealth lies within a BW-ball around a benchmark distribution. This paper is organised as follows. Section 2 introduces the MK divergences after reviewing the statistical concepts of elicitability and scoring functions and the relevant topics in OT. Section 3 is devoted to MK divergences induced by elicitable risk functionals such as the quantile, expectile, and shortfall risk measure. We find that for distributions on the real line the majority of the new MK divergences are attained by the comonotonic coupling. Applications of the new divergences to risk measure bounds, significantly generalising recent results by [2], and to portfolio management are provided in Section 4. ## 2 Monge-Kantorovich divergences induced by scoring functions ### 2.1 Elicitability and scoring functions We first review the statistical concepts of elicitability and scoring functions by following the traditional statistical notation and decision theoretic setup, see e.g., [14] and [9]. 
For this let $(\Omega,{\mathcal{F}},{\mathbb{P}})$ be a complete probability space and denote by ${\mathcal{L}}:={\mathcal{L}}(\Omega,{\mathcal{F}},{\mathds{R}})$ the space of all random variables. The cumulative distribution function (cdf) of a random variable $X\in{\mathcal{L}}$ is denoted by $F_{X}(\cdot):={\mathbb{P}}(X\leq\cdot)$, and we write ${\mathcal{M}^{0}}:={\mathcal{M}^{0}}({\mathds{R}})$ to denote the space of all cdfs on ${\mathds{R}}$. For a cdf $F\in{\mathcal{M}^{0}}$, we define its corresponding (left-continuous) quantile function by ${\breve{F}}(u):=\inf\\{y\in{\mathds{R}}~{}|~{}F(y)\geq u\\}$, $u\in[0,1]$, with the convention that $\inf\emptyset=+\infty$. Throughout, we will use the notation ${\mathcal{M}}\subseteq\bar{{\mathcal{M}}}\subseteq{\mathcal{M}^{0}}$ to denote sub-classes of cdfs. ###### Definition 2.1 (Scoring function). A scoring function (or score) is a measurable map $S\colon{\mathsf{A}}\times{\mathds{R}}\to[0,\infty]$, where ${\mathsf{A}}\subseteq{\mathds{R}}$ is called the action domain. For a given functional $T\colon\bar{{\mathcal{M}}}\to{\mathsf{A}}$ and a sub-class ${\mathcal{M}}\subseteq\bar{{\mathcal{M}}}$, the scoring function $S$ may satisfy the following properties: 1. $(i)$ $S$ is _${\mathcal{M}}$ -consistent_ for $T$, if for all $F\in{\mathcal{M}}$ and for all $z\in{\mathsf{A}}$ (1) $\int S\big{(}T(F),y\big{)}\,{\mathrm{d}}F(y)\;\leq\;\int S(z,y)\,{\mathrm{d}}F(y)\,.$ 2. $(ii)$ $S$ is _strictly_ ${\mathcal{M}}$-consistent for $T$, if it is ${\mathcal{M}}$-consistent for $T$ and if the inequality in (1) is strict for all $z\neq T(F)$. Throughout, we make the following non-restrictive assumption on the considered scoring functions. ###### Assumption (Normalisation of scores). Let $S$ be an ${\mathcal{M}}$-consistent score for $T$ and denote by $\delta_{y}$, $y\in{\mathds{R}}$, point measures. Then it holds that 1. $(i)$ $S\big{(}T(\delta_{y}),y\big{)}<S(z,y)$ for all $z\neq T(\delta_{y})$ and $y\in{\mathds{R}}$, and 2. 
$(ii)$ $S\big{(}T(\delta_{y}),y\big{)}=0$ for all $y\in{\mathds{R}}$. The normalisation assumption means that the scores are strictly consistent on the space of point measures and that the scores are normalised to $S(y,y)=0$. Note that any score $S$ satisfying $(i)$ can be normalised to fulfil $(ii)$, by setting $\tilde{S}(z,y):=S(z,y)-S(T(\delta_{y}),y)$. Moreover, the normalisation assumption leads to a unique characterisation of the families of scores of elicitable functionals, see e.g., Propositions 3.1, 3.4, and 3.9. ###### Definition 2.2 (Elicitability). A functional $T\colon\bar{{\mathcal{M}}}\to{\mathsf{A}}$ is 1-elicitable on ${\mathcal{M}}\subseteq\bar{{\mathcal{M}}}$, if there exists a strictly ${\mathcal{M}}$-consistent scoring function $S$ for $T$. (More generally, a functional $T$ is called $k$-elicitable, $k\in{\mathds{N}}$, if there exists a strictly ${\mathcal{M}}$-consistent scoring function $S\colon{\mathsf{A}}^{k}\times{\mathds{R}}\to{\mathds{R}}$, ${\mathsf{A}}^{k}\subseteq{\mathds{R}}^{k}$, and a ${\boldsymbol{z}}^{*}\in{\mathds{R}}^{k-1}$, such that $({\boldsymbol{z}}^{*},T(F))=\operatorname*{argmin}_{{\boldsymbol{z}}\in{\mathsf{A}}^{k}}\int S({\boldsymbol{z}},y){\mathrm{d}}F(y)$, for all $F\in{\mathcal{M}}$. There are many functionals that are $k$-elicitable but not $1$-elicitable, see e.g., [11].) Moreover, the functional $T$ has the following representation on ${\mathcal{M}}$ (2) $T(F)=\operatorname*{argmin}_{z\in{\mathsf{A}}}\int S(z,y)\,{\mathrm{d}}F(y)\,,\quad\forall\;F\in{\mathcal{M}}\,.$ By Equation (2), a 1-elicitable functional is a Bayes act and a minimiser of an expected score [14]. It is well known that the squared loss $S(z,y)=(z-y)^{2}$ elicits the mean. The squared Euclidean distance, however, is not the only strictly consistent score for the mean. Indeed, from (2) we see that a 1-elicitable functional $T$ has infinitely many strictly consistent scores. 
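The argmin representation (2) for the mean can be checked numerically. The sketch below (the sample, seed, and grid are arbitrary illustrative choices, not from the paper) verifies that the empirical expected score is minimised at the sample mean for two strictly consistent scores for the mean: the squared loss and the Bregman score generated by the strictly convex function $\phi(x)=x^{4}$ (cf. Definition 2.3):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)   # a skewed sample, mean ≈ 2

def bregman_score(z, y, phi, dphi):
    # S_phi(z, y) = B_phi(y, z) = phi(y) - phi(z) - phi'(z)(y - z), cf. (6)
    return phi(y) - phi(z) - dphi(z) * (y - z)

grid = np.linspace(0.5, 5.0, 2001)
for phi, dphi in [(lambda x: x**2, lambda x: 2 * x),       # squared loss
                  (lambda x: x**4, lambda x: 4 * x**3)]:   # another strictly convex phi
    expected = [bregman_score(z, y, phi, dphi).mean() for z in grid]
    z_star = grid[int(np.argmin(expected))]
    # the minimiser of the empirical expected score is (up to grid resolution)
    # the sample mean, for every Bregman score
    assert abs(z_star - y.mean()) < 0.01
```

The minimiser is the same for both scores even though the two expected-score surfaces differ, which is exactly the point of elicitability: the functional, not the score, is identified by the argmin.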
In particular, the class of strictly consistent scoring functions for the mean is given by the so-called Bregman divergences, which we recall next. ###### Definition 2.3 (Bregman divergence). Let $\phi\colon{\mathds{R}}\to{\mathds{R}}$ be a convex function. The Bregman divergence associated with $\phi$ is defined as $B_{\phi}\big{(}z_{1},z_{2}\big{)}:=\phi(z_{1})-\phi(z_{2})-\phi^{\prime}(z_{2})(z_{1}-z_{2})\,,\quad z_{1},z_{2}\in{\mathds{R}}\,,$ where $\phi^{\prime}(z):=\frac{d}{dz}\phi(z)$ denotes the derivative of $\phi$. A Bregman divergence $B_{\phi}(z_{1},z_{2})$ can be seen as a measure of the deviation of $z_{2}$ from $z_{1}$. Note that in order to have a mathematical divergence (i.e., $B_{\phi}(z_{1},z_{2})=0$ if and only if $z_{1}=z_{2}$), $\phi$ needs to be strictly convex. While for the choice $\phi(z)=z^{2}$ the Bregman divergence coincides with the squared Euclidean distance, i.e., $B_{\phi}(z_{1},z_{2})=(z_{1}-z_{2})^{2}$, in general, the Bregman divergence is not symmetric. ### 2.2 Monge-Kantorovich divergences induced by scoring functions Next, we use scoring functions as cost functions in the Monge-Kantorovich optimal transport (OT) problem. In what follows, we call a function $c\colon{\mathds{R}}^{2}\to{\mathds{R}}_{+}$, ${\mathds{R}}_{+}:=[0,\infty)$, that is lower semi-continuous, a cost function. We recall the traditional Monge-Kantorovich optimisation problem and refer the reader to the books [21, 23] for further details. ###### Definition 2.4 (Monge-Kantorovich optimal transport problem). Let $c\colon{\mathds{R}}^{2}\to{\mathds{R}}_{+}$ be a cost function. 
Then the Monge-Kantorovich optimisation problem with respect to the cdfs $F_{1}\in{\mathcal{M}^{0}}$ and $F_{2}\in{\mathcal{M}^{0}}$ is given by (3) $\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}c\big{(}z_{1},z_{2}\big{)}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\},$ where $\Pi(F_{1},F_{2})$ denotes the set of all bivariate cdfs with marginal cdfs $F_{1}$ and $F_{2}$, respectively. A bivariate cdf that attains the infimum in (3) is called an optimal coupling, which exists for any choice of cost function, see e.g., Theorem 1.7 in [21]. For the cost function $c(z_{1},z_{2}):=|z_{1}-z_{2}|^{p}$, $p\geq 1$, we obtain the well-known $p$-Wasserstein distance $\displaystyle W_{p}(F_{1},F_{2}):$ $\displaystyle=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\left(\,\int_{{\mathds{R}}^{2}}|z_{1}-z_{2}|^{p}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right)^{\frac{1}{p}}\right\\}$ $\displaystyle=\left(\int_{0}^{1}\left|{\breve{F}}_{1}(u)-{\breve{F}}_{2}(u)\right|^{p}{\mathrm{d}}u\right)^{\frac{1}{p}}\,,$ where ${\breve{F}}_{i}$ is the quantile function of $F_{i}$, $i=1,2$, and where the last equality holds for cdfs on the real line, indicating that the comonotonic coupling is optimal [7]. This work introduces new asymmetric _Monge-Kantorovich (MK) divergences_ – divergences on the space of cdfs – that are defined by optimisation problem (3), where the cost functions are scoring functions. Thus, we not only introduce new MK divergences but also provide a novel perspective on scoring functions as OT cost functions. Specifically, we consider cost functions $c(z_{1},z_{2}):=S(z_{2},z_{1})$, for consistent scoring functions $S$. Note that the arguments of $c$ and $S$ are exchanged; this is due to different notational conventions in statistics and optimal transport, see e.g. Equation (6). Our choice is justified in that we obtain the Bregman-Wasserstein divergence when choosing any consistent scoring function for the mean functional, see e.g., [20]. 
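For two uniform discrete laws on finitely many points, the infimum in (3) can be computed by brute force: by the Birkhoff-von Neumann theorem, the extreme points of the coupling polytope are then permutation couplings. The following sketch (with arbitrary illustrative support points) confirms that, for the squared cost, the rank-matching (comonotonic) permutation is optimal and reproduces the quantile formula for $W_{2}$:

```python
import itertools
import numpy as np

x = np.array([0.3, -1.2, 2.5, 0.9])   # support of F1, uniform weights 1/4
y = np.array([1.1, -0.4, 0.2, 3.0])   # support of F2, uniform weights 1/4

cost = lambda p: sum((x[i] - y[j]) ** 2 for i, j in enumerate(p))

# Brute force over all couplings: for uniform 4-point marginals it suffices
# to minimise over the 4! permutation couplings.
best = min(itertools.permutations(range(4)), key=cost)

# Comonotonic permutation: pair the i-th smallest x with the i-th smallest y.
como = np.empty(4, dtype=int)
como[np.argsort(x)] = np.argsort(y)

assert best == tuple(como)
# Squared 2-Wasserstein distance via the quantile formula:
w2_sq = np.mean((np.sort(x) - np.sort(y)) ** 2)
assert np.isclose(cost(best) / 4, w2_sq)
```

The same brute force works for any cost function, which is how the asymmetric divergences introduced next can be explored on small examples.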
We formulate the following definition. ###### Definition 2.5 (Monge-Kantorovich divergence). Let $S$ be a ${\mathcal{M}}$-consistent scoring function for a functional $T$. Then the Monge-Kantorovich (MK) divergence induced by $S$ from the cdf $F_{1}\in{\mathcal{M}^{0}}$ to the cdf $F_{2}\in{\mathcal{M}^{0}}$ is given by (4) ${\mathscr{S}}(F_{1},F_{2}):=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}S\big{(}z_{2},z_{1}\big{)}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}\,.$ We term a bivariate cdf that attains the infimum in (4) an optimal coupling. An MK divergence is not necessarily a divergence in the mathematical sense. It is non-negative and satisfies ${\mathscr{S}}(F_{1},F_{1})=0$; however, additional assumptions on the score $S$ are needed to ensure that ${\mathscr{S}}(F_{1},F_{2})=0$ implies $F_{1}=F_{2}$, see e.g., [20] for the BW divergence. Furthermore, an MK divergence is in general not symmetric, in contrast to, e.g., the $p$-Wasserstein distance. Clearly, the MK divergence depends on the choice of scoring function; however, for conciseness of the exposition, we refrain from writing ${\mathscr{S}}_{S}$ whenever the scoring function is clear from the context. The assumption to normalise the score, Definition 2.1 $(ii)$, does not affect the optimal coupling, as normalisation is achieved by subtracting from the score a function of $z_{2}$ only; see also the discussion after Definition 2.1. If the cost function is the Bregman divergence, i.e., when in (3) one considers $c(z_{1},z_{2})=B_{\phi}(z_{1},z_{2})$, we obtain the Bregman-Wasserstein (BW) divergence [20] (5) ${\mathscr{B}}_{\phi}(F_{1},F_{2}):=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}B_{\phi}(z_{1},\,z_{2})\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}\,,$ which reduces to the squared 2-Wasserstein distance for $B_{\phi}$ being the squared loss, i.e. $\phi(x)=x^{2}$.
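For intuition, the MK divergence (4) between two equally weighted empirical measures can be computed exactly as an assignment problem, which also makes the asymmetry visible. The following sketch is our own (function names and the choice $\phi=\exp$ are ours, not from the paper) and uses the Bregman score as the cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mk_divergence(score, x, y):
    """MK divergence from the empirical cdf of x to that of y, for the
    cost c(z1, z2) = S(z2, z1), solved exactly as an assignment problem."""
    cost = score(y[None, :], x[:, None])  # cost[i, j] = S(y_j, x_i)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def bregman_score(z, y, phi=np.exp, dphi=np.exp):
    # S_phi(z, y) = B_phi(y, z) = phi(y) - phi(z) - phi'(z)(y - z)
    return phi(y) - phi(z) - dphi(z) * (y - z)

rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 1.0, 100), rng.normal(0.5, 1.0, 100)
d12 = mk_divergence(bregman_score, x, y)
d21 = mk_divergence(bregman_score, y, x)
assert d12 >= 0.0 and d21 >= 0.0   # MK divergences are non-negative
assert not np.isclose(d12, d21)    # and in general asymmetric
```

For the symmetric choice `phi = lambda z: z**2` the two directions would instead agree, recovering the squared 2-Wasserstein distance.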
## 3 Optimal couplings for MK divergences This section is devoted to MK divergences induced by scores of elicitable risk functionals and their optimal couplings. Any elicitable risk functional, such as the mean, quantiles, and expectiles, admits infinitely many consistent scores, and thus gives rise to an infinite family of MK divergences. Furthermore, while the MK divergences are different for each score that elicits the risk functional, we show that the optimal couplings are the same. ### 3.1 Bregman score as cost function The most commonly used 1-elicitable functional is the mean, whose consistent scoring functions are the Bregman scores. ###### Proposition 3.1 (Elicitability of Mean – [14]). Let ${\mathcal{M}}$ denote the class of cdfs with finite mean. If $\phi$ is a (strictly) convex function with subgradient $\phi^{\prime}$ and if $\int|\phi(y)|\,\mathrm{d}F(y)<\infty$ for all $F\in{\mathcal{M}}$, then the scoring function (6) $S_{\phi}(z,y):=B_{\phi}(y,z)\,,\quad z,y\in{\mathds{R}}\,,$ is (strictly) ${\mathcal{M}}$-consistent for the mean. Moreover, on the class of compactly supported measures, any (strictly) consistent score for the mean which is continuously differentiable in its first argument and which satisfies $S(y,y)=0$ is necessarily of the form (6). ###### Theorem 3.2 (Optimal coupling for BW-divergence). The optimal coupling of any BW-divergence is the comonotonic coupling, i.e., $\big{(}{\breve{F}}_{1}(U),{\breve{F}}_{2}(U)\big{)}$, for $U\sim U(0,1)$, is an optimal coupling. In the language of OT, the comonotonic coupling implies that the _optimal transport map_ of (5), i.e., the deterministic function mapping $F_{1}$ to $F_{2}$, is given by $\alpha(x):={\breve{F}}_{2}\big{(}F_{1}(x)\big{)}$. We refer the reader to [20], who characterise the optimal transport map of the BW divergence for multivariate cdfs. Here, we provide for completeness a constructive proof that is valid for univariate cdfs. ###### Proof 3.3.
Let $\phi$ be convex, then the BW-divergence becomes $\displaystyle{\mathscr{B}}_{\phi}(F_{1},F_{2})$ $\displaystyle=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}\phi(z_{1})-\phi(z_{2})-\phi^{\prime}(z_{2})(z_{1}-z_{2})\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}$ $\displaystyle=\int_{\mathds{R}}\phi(z_{1}){\mathrm{d}}F_{1}(z_{1})+\int_{\mathds{R}}\left(\phi^{\prime}(z_{2})z_{2}-\phi(z_{2})\right){\mathrm{d}}F_{2}(z_{2})$ $\displaystyle\quad-\sup_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}\phi^{\prime}(z_{2})z_{1}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}\,.$ Thus, to find the optimal coupling, we only have to solve the supremum. Rewriting in terms of random variables, we have (7) $\sup_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}\phi^{\prime}(z_{2})z_{1}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}=\sup\;\,{\mathbb{E}}\left[\phi^{\prime}(Z_{2})Z_{1}\,\right]\,,\quad Z_{i}\sim F_{i}\,,\;i=1,2\,,$ where the supremum on the right-hand side is over all copulae of $(Z_{1},Z_{2})$. Since $\phi^{\prime}$ is increasing, it is well-known that the above supremum is attained by the comonotonic coupling. Denoting the quantile function of $F_{i}$ by ${\breve{F}}_{i}$, $i=1,2$, and since the quantile function of $\phi^{\prime}(Z_{2})$ is $\phi^{\prime}({\breve{F}}_{2}(\cdot))$, we obtain $\sup\;\,{\mathbb{E}}\left[\phi^{\prime}(Z_{2})Z_{1}\,\right]={\mathbb{E}}\left[\phi^{\prime}\left({\breve{F}}_{2}(U)\right)\,{\breve{F}}_{1}(U)\right]\,,\quad U\sim U(0,1)\,,$ which concludes the proof. ### 3.2 Scores of generalised quantiles Next, we consider quantiles and generalised quantiles. Quantiles are elicitable with the family of scoring functions called the generalised piecewise linear scores. ###### Proposition 3.4 (Elicitability of Quantile – [14]).
If $g$ is an increasing function and $\alpha\in(0,1)$, then the scoring function (8) $S_{g}(z,y):=\big{(}{\mathds{1}}_{\\{y\leq z\\}}-\alpha\big{)}\big{(}g(z)-g(y)\big{)}\,,\qquad z,y\in{\mathds{R}}\,,$ is ${\mathcal{M}}$-consistent for the $\alpha$-quantile ${\breve{F}}(\alpha)$ if $\int|g(y)|\,\mathrm{d}F(y)<\infty$ for all $F\in{\mathcal{M}}$. If $g$ is strictly increasing and if for all $F\in{\mathcal{M}}$, $F\big{(}{\breve{F}}(\alpha)+\epsilon\big{)}>\alpha$ for all $\epsilon>0$, then (8) is strictly ${\mathcal{M}}$-consistent. Moreover, on the class of compactly supported measures, any consistent scoring function for ${\breve{F}}(\alpha)$ which is continuous in its first argument, which admits a continuous derivative for all $z\neq y$, and which satisfies $S(y,y)=0$, is necessarily of the form (8). Hereafter, we show that the comonotonic coupling is also optimal for the generalised piecewise linear score as cost function, i.e., when in (3) we choose the cost function $c(z_{1},z_{2}):=S_{g}(z_{2},z_{1})$. ###### Theorem 3.5 (Optimal coupling for generalised piecewise linear scores). The optimal coupling of the MK minimisation problem induced by any consistent generalised piecewise linear score is the comonotonic coupling. ###### Proof 3.6.
Let $g$ be increasing, then the MK divergence induced by the score (8) is $\displaystyle{\mathscr{S}}(F_{1},F_{2})$ $\displaystyle=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}\big{(}{\mathds{1}}_{\\{z_{1}\leq z_{2}\\}}-\alpha\big{)}\big{(}g(z_{2})-g(z_{1})\big{)}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}$ $\displaystyle=\alpha\int_{\mathds{R}}g(z_{1}){\mathrm{d}}F_{1}(z_{1})-\alpha\int_{\mathds{R}}g(z_{2}){\mathrm{d}}F_{2}(z_{2})$ $\displaystyle\quad+\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}{\mathds{1}}_{\\{z_{1}\leq z_{2}\\}}\big{(}g(z_{2})-g(z_{1})\big{)}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}\,.$ Noting that for increasing $g$ it holds that ${\mathds{1}}_{\\{z_{1}\leq z_{2}\\}}\big{(}g(z_{2})-g(z_{1})\big{)}=\big{(}g(z_{2})-g(z_{1})\big{)}_{+}$, and rewriting the infimum in terms of random variables, we obtain $\displaystyle\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}\big{(}g(z_{2})-g(z_{1})\big{)}_{+}\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}=\inf\;{\mathbb{E}}\left[\,\big{(}g(Z_{2})-g(Z_{1})\big{)}_{+}\,\right]\,,$ where the infimum is over all copulae of $(Z_{1},Z_{2})$. Note that for any bivariate random vector $(X,Y)$, $X\sim F_{X}$, $Y\sim F_{Y}$, it holds that [16] (9) $X^{a}+Y^{a}\prec_{cx}X+Y\prec_{cx}X^{c}+Y^{c}\,,$ where $\prec_{cx}$ denotes inequality in convex order and where $(X^{a},Y^{a})$ and $(X^{c},Y^{c})$ denote the antitonic and comonotonic pairs, respectively, with marginal distributions $F_{X}$ and $F_{Y}$, that is $(X^{a},Y^{a}):=(F_{X}^{-1}(U),F_{Y}^{-1}(1-U))$ and $(X^{c},Y^{c})=(F_{X}^{-1}(U),F_{Y}^{-1}(U))$, $U\sim U(0,1)$. As $g$ is increasing, $(g(Z_{2}^{c}),-g(Z_{1}^{c}))$ is an antitonic pair and the first inequality in (9) implies that $g(Z_{2}^{c})-g(Z_{1}^{c})\prec_{cx}g(Z_{2})-g(Z_{1}).$ Since $f(x)=x_{+}$ is a convex function, we obtain that $\inf\;{\mathbb{E}}\left[\,\big{(}g(Z_{2})-g(Z_{1})\big{)}_{+}\,\right]={\mathbb{E}}\left[\,\left(g\big{(}{\breve{F}}_{2}(U)\big{)}-g\big{(}{\breve{F}}_{1}(U)\big{)}\right)_{+}\,\right]\,,\quad U\sim U(0,1)\,,$ which concludes the proof.
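Theorem 3.5 can be checked numerically on empirical measures (a sketch of ours; the choices of `g`, $\alpha$, and the sample distributions are arbitrary): the assignment optimum for the cost $c(z_{1},z_{2})=S_{g}(z_{2},z_{1})$ coincides with the value of the sorted, i.e. comonotonic, pairing.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gpl_score(z, y, alpha=0.8, g=np.arctan):
    """Generalised piecewise linear score (8), consistent for the
    alpha-quantile when g is increasing."""
    return ((y <= z).astype(float) - alpha) * (g(z) - g(y))

rng = np.random.default_rng(2)
x, y = rng.standard_t(5, 120), rng.normal(1.0, 1.0, 120)

cost = gpl_score(y[None, :], x[:, None])  # cost[i, j] = S_g(y_j, x_i)
rows, cols = linear_sum_assignment(cost)
opt = cost[rows, cols].mean()             # value of an optimal coupling

como = gpl_score(np.sort(y), np.sort(x)).mean()  # comonotonic pairing
assert np.isclose(opt, como)
```

The same check with an antitonic pairing (sorting one sample in reverse) would give a strictly larger value.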
There exist many generalisations of quantiles, such as $L^{p}$-quantiles, $M$-quantiles, and $\Lambda$-quantiles. We show in the sequel that consistent scores of these generalised quantiles give rise to the comonotonic coupling being optimal for (4). In this section, we discuss the $\Lambda$-quantile, while the $L^{p}$- and $M$-quantiles are considered in Section 3.5. For a monotone and right-continuous function $\Lambda\colon{\mathds{R}}\to[\underline{\lambda},\,\overline{\lambda}]$, $0<\underline{\lambda}<\overline{\lambda}<1$, the $\Lambda$-quantile is defined by [4] (10) $T_{\Lambda}(F):=\inf\big{\\{}y\in{\mathds{R}}~{}:~{}F(y)>\Lambda(y)\big{\\}}\,.$ The $\Lambda$-quantile is elicitable with score (11) $S(z,y)=(z-y)_{+}-\int_{y}^{z}\Lambda(s)\,{\mathrm{d}}s\,,$ on the space of cdfs that admit exactly one crossing point with $\Lambda(\cdot)$. ###### Theorem 3.7 (Optimal coupling for $\Lambda$-quantile score). The optimal coupling of the MK minimisation problem induced by the score given in (11) is the comonotonic coupling. ###### Proof 3.8. As $\partial_{z}S(z,y)={\mathds{1}}_{\\{y\leq z\\}}-\Lambda(z)$ is decreasing in $y$, for all $z$, and $\partial_{y}S(z,y)=\Lambda(y)-{\mathds{1}}_{\\{y\leq z\\}}$ is decreasing in $z$, for all $y$, the score (11) is submodular, that is, for all $z_{1}\leq z_{2}$ and $y_{1}\leq y_{2}$, it holds that $S(z_{1},y_{1})+S(z_{2},y_{2})\leq S(z_{1},y_{2})+S(z_{2},y_{1}).$ Since the comonotonic coupling is optimal for submodular cost functions [19], the result follows. ### 3.3 Expectile score The comonotonic coupling is also optimal when the cost function is a score that elicits the $\alpha$-expectile. ###### Proposition 3.9 (Elicitability of Expectiles – [14]). Let ${\mathcal{M}}$ denote the class of cdfs with finite mean.
If $\phi$ is (strictly) convex with subgradient $\phi^{\prime}$ and if $\int|\phi(y)|\,\mathrm{d}F(y)<\infty$ as well as $\int|y|\,\mathrm{d}F(y)<\infty$ for all $F\in{\mathcal{M}}$, then the scoring function (12) $S(z,y)=\big{|}{\mathds{1}}_{\\{y\leq z\\}}-\alpha\big{|}\,B_{\phi}(y,z)\,,\quad z,y\in{\mathds{R}}\,,$ is (strictly) ${\mathcal{M}}$-consistent for the expectile. (The expectile was first introduced in [17] as the minimiser of (2) for the scoring function (12) with $B_{\phi}(y,z)=(z-y)^{2}$.) Moreover, on the class of compactly supported measures, any (strictly) consistent score for the expectile which is continuously differentiable in its first argument and which satisfies $S(y,y)=0$ is necessarily of the form (12). ###### Theorem 3.10 (Optimal coupling for $\alpha$-expectile scores). The optimal coupling of the MK minimisation problem induced by any consistent score given in (12) is the comonotonic coupling. ###### Proof 3.11. The MK divergence induced by the scoring function (12) is given as $\displaystyle{\mathscr{S}}(F_{1},F_{2})$ $\displaystyle=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\left\\{\,\int_{{\mathds{R}}^{2}}\big{|}{\mathds{1}}_{\\{z_{1}\leq z_{2}\\}}-\alpha\big{|}\,B_{\phi}(z_{1},z_{2})\,\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\,\right\\}$ $\displaystyle=\inf_{\pi\in\Pi(F_{1},\,F_{2})}\;\bigg{\\{}\int_{{\mathds{R}}^{2}}(1-\alpha)B_{\phi}(z_{1},z_{2}){\mathds{1}}_{\\{z_{1}\leq z_{2}\\}}\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\ $ $\displaystyle\hskip 75.0pt+\int_{{\mathds{R}}^{2}}\alpha B_{\phi}(z_{1},z_{2}){\mathds{1}}_{\\{z_{1}>z_{2}\\}}\pi({\mathrm{d}}z_{1},{\mathrm{d}}z_{2})\bigg{\\}}\,.$ Convexity of $\phi$ implies that $B_{\phi}(z_{1},z_{2})$ is submodular. Hence also the function $(1-\alpha)B_{\phi}(z_{1},z_{2}){\mathds{1}}_{\\{z_{1}\leq z_{2}\\}}+\alpha B_{\phi}(z_{1},z_{2}){\mathds{1}}_{\\{z_{1}>z_{2}\\}}$ is submodular. As the comonotonic coupling is optimal for submodular cost functions [19], the result follows.
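The same numerical check works for the expectile score (a sketch under our own naming; the quadratic choice $B_{\phi}(y,z)=(y-z)^{2}$, $\alpha=0.9$, and the sample distributions are arbitrary):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def expectile_score(z, y, alpha=0.9):
    """Score (12) with the squared loss: |1{y<=z} - alpha| * (y - z)^2."""
    return np.abs((y <= z).astype(float) - alpha) * (y - z) ** 2

rng = np.random.default_rng(3)
x, y = rng.normal(0.0, 1.0, 100), rng.exponential(1.0, 100)

cost = expectile_score(y[None, :], x[:, None])  # cost[i, j] = S(y_j, x_i)
rows, cols = linear_sum_assignment(cost)
opt = cost[rows, cols].mean()

como = expectile_score(np.sort(y), np.sort(x)).mean()  # comonotonic value
assert np.isclose(opt, como)
```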
### 3.4 Shortfall score A popular class of risk measures is the family of shortfall risk measures. We first recall their definition and the scoring functions that elicit shortfall risk measures. ###### Definition 3.12 (Shortfall risk measure). Let $\ell\colon{\mathds{R}}\to{\mathds{R}}$ be an increasing and non-constant function satisfying $\ell(w)<0$ whenever $w<0$, and $\ell(w)>0$ whenever $w>0$. Then the shortfall risk measure $T_{\ell}$ is defined by (13) $T_{\ell}(F):=\inf\left\\{x\in{\mathds{R}}~{}\Big{|}~{}\int\ell(w-x)\,dF(w)\leq 0\right\\}\,,$ whenever the infimum exists. If furthermore $\ell$ is left-continuous and strictly increasing on either $(-\infty,{\varepsilon})$ or $({\varepsilon},+\infty)$, for ${\varepsilon}>0$, then $T_{\ell}$ is 1-elicitable with strictly consistent score [1] (14) $S(z,y)=\int_{0}^{y-z}\ell(s)\,ds\,.$ ###### Theorem 3.13 (Optimal coupling for shortfall scores). The optimal coupling of the MK minimisation problem induced by the scoring function given in (14) is the comonotonic coupling. ###### Proof 3.14. The scoring function in (14) is submodular: $\partial_{z}S(z,y)=-\ell(y-z)$ is decreasing in $y$, since $\ell$ is increasing. The result thus follows as the comonotonic coupling is optimal for submodular cost functions [19]. ### 3.5 Decomposable scores Next, we study a family of scores that elicits different risk functionals including quantiles, expectiles, and $M$- and $L^{p}$-quantiles. ###### Theorem 3.15 (Optimal coupling for decomposable scores). Assume the scoring function is of the form (15) $S(z,y)=\phi(|z-y|)\left(\alpha\,{\mathds{1}}_{\\{y>z\\}}+\beta\,{\mathds{1}}_{\\{y\leq z\\}}\right)\,,$ where $\alpha,\beta\in[0,1]$ and $\phi\colon{\mathds{R}}_{+}\to{\mathds{R}}_{+}$ is an increasing convex function satisfying $\phi(0)=0$. Then the optimal coupling of the MK minimisation problem induced by any score given in (15) is the comonotonic coupling. ###### Proof 3.16.
We rewrite the scoring function as $S(z,y)=\phi\big{(}|z-y|\big{)}\left(\alpha\,{\mathds{1}}_{\\{y>z\\}}+\beta\,{\mathds{1}}_{\\{y\leq z\\}}\right)=\alpha\,\phi\big{(}(y-z)_{+}\big{)}+\beta\,\phi\big{(}(z-y)_{+}\big{)}\,.$ As the function $\phi(x_{+})$ is convex, the result follows using similar arguments as in the proof of Theorem 3.5. The scoring function (15) elicits the $\alpha$-quantile for the choice $\phi(x)=x$ and $\beta=1-\alpha$, and the $\alpha$-expectile with $\phi(x)=x^{2}$ and $\beta=1-\alpha$. More generally, the score (15) for any arbitrary convex $\phi$ and $\beta=1-\alpha$ elicits the so-called $M$-quantiles, defined via (2) whenever they exist, see e.g., [3]. $M$-quantiles subsume $L^{p}$-quantiles, which correspond to $\phi(x)=x^{p}$, $p\in[1,\infty)$, see [6]. ### 3.6 Osband’s transformation of scores Osband’s principle is a well-known tool in statistics for creating new elicitable functionals [14]. Here, we use Osband’s principle to characterise the optimal coupling induced by consistent scores for monotone transformations of elicitable functionals. ###### Proposition 3.17 (Osband’s principle for OT). Let $T$ be a strictly monotone transformation of a 1-elicitable functional $\tilde{T}$, i.e. $T:=g\circ\tilde{T}$ for $g\colon{\mathds{R}}\to{\mathds{R}}$ strictly monotone, and denote by $\tilde{S}$ a (strictly) ${\mathcal{M}}$-consistent scoring function for $\tilde{T}$. Then $T$ is 1-elicitable with (strictly) ${\mathcal{M}}$-consistent scoring function (16) $S(z,y):=\tilde{S}\big{(}g^{-1}(z),y\big{)}\,.$ Let the corresponding MK minimisation problem of $\tilde{S}$ be attained by the comonotonic coupling. If further $g$ is increasing (decreasing), then the optimal coupling of the MK minimisation problem induced by the score (16) is the comonotonic (antitonic) coupling. ###### Proof 3.18.
By Osband’s principle, the functional $T:=g\circ\tilde{T}$ for any bijective function $g\colon{\mathds{R}}\to{\mathds{R}}$ is 1-elicitable with scoring function given in (16), see e.g., Theorem 4 in [14]. If $g$ is strictly increasing, then $\frac{d}{dx}g^{-1}(x)=\big{\\{}g^{\prime}\,\big{(}g^{-1}(x)\big{)}\big{\\}}^{-1}>0$, where $g^{\prime}(x):=\frac{d}{dx}g(x)$. Thus, the optimal coupling is comonotonic. If $g$ is strictly decreasing, then $\frac{d}{dx}g^{-1}(x)<0$ and the antitonic coupling is optimal. Osband’s principle can be used to create new MK divergences. Indeed, any strictly monotonic transformation of an elicitable risk functional leads to a new MK divergence, where the optimal coupling follows from Proposition 3.17. Here, we give an example: the reciprocal of a risk functional. ###### Corollary 3.19 (Inverse of risk functionals). Let $\tilde{T}$ be a 1-elicitable risk functional and assume that the corresponding MK minimisation problem is attained by the comonotonic coupling. Consider the functional $T(F):=\frac{1}{\tilde{T}(F)}\,.$ Then the MK minimisation problem induced by any consistent score for $T$ is attained by the antitonic coupling. ###### Proof 3.20. This follows by Proposition 3.17 with $T(\cdot)=g\big{(}\tilde{T}(\cdot)\big{)}$, where $g(x)=\frac{1}{x}$ is decreasing (on $(0,\infty)$). The next application of Osband’s principle concerns transformations of the distribution. For this, let $F_{Z}$ denote the distribution of a random variable $Z$. ###### Proposition 3.21 (Osband’s principle for transformed distributions). Let $\tilde{T}$ be a 1-elicitable risk functional. For a function $h\colon{\mathds{R}}\to{\mathds{R}}$, consider the risk functional $T(F_{Y}):=\tilde{T}\left(F_{h(Y)}\right)$. Assume that the MK minimisation problem induced by a consistent score $\tilde{S}$ for $\tilde{T}$ is attained by the comonotonic coupling.
If $h$ is increasing (decreasing), then the optimal coupling of the MK minimisation problem induced by a consistent score of $T$ is the comonotonic (antitonic) coupling. ###### Proof 3.22. A score $\tilde{S}$ is strictly consistent for $\tilde{T}$ applied to $F_{h(Y)}$ if and only if it holds for all $z\neq\tilde{T}(F_{h(Y)})$ that (17) $\int\tilde{S}\Big{(}\tilde{T}\big{(}F_{h(Y)}\big{)},\tilde{y}\Big{)}\,{\mathrm{d}}F_{h(Y)}(\tilde{y})\;\leq\;\int\tilde{S}\left(z,\tilde{y}\right)\,{\mathrm{d}}F_{h(Y)}(\tilde{y})\,.$ The inequalities (17) are equivalent to (18) $\int\tilde{S}\Big{(}T\big{(}F_{Y}\big{)},\,h(y)\Big{)}\,{\mathrm{d}}F_{Y}(y)\leq\int\tilde{S}\big{(}z,\,h(y)\big{)}\,{\mathrm{d}}F_{Y}(y)\,,$ for all $z\neq T(F_{Y})$. Thus, the score $S(z,y):=\tilde{S}\big{(}z,h(y)\big{)}$ is a consistent score for $T(F_{Y})$. The remainder follows using similar arguments as in the proof of Proposition 3.17. An immediate example of Proposition 3.21 is the functional $T(F_{Y})={\mathbb{E}}\left[\frac{1}{Y}\right]$, which is elicitable and induces MK divergences for which the antitonic coupling is simultaneously optimal for any consistent score of $T$. Using Osband’s principles for OT, we derive the optimal coupling for the scores of entropic risk measures.
The entropic risk measure with parameter $\gamma>0$, also known as the exponential premium principle in actuarial science [13], is defined for $F\in{\mathcal{M}^{0}}$ by $T^{\gamma}(F):=\frac{1}{\gamma}\,\log\,\int e^{\gamma y}\,{\mathrm{d}}F(y)\,.$ Any (strictly) ${\mathcal{M}}$-consistent scoring function (under mild regularity conditions) for the entropic risk measure satisfying $S(y,y)=0$ is given by $S(z,y)=\phi\big{(}e^{\gamma y}\big{)}-\phi\big{(}e^{\gamma z}\big{)}+\phi^{\prime}\big{(}e^{\gamma z}\big{)}\left(e^{\gamma z}-e^{\gamma y}\right)\,,$ where $\phi\colon{\mathds{R}}\to{\mathds{R}}$ is (strictly) convex and $\int|\phi\big{(}e^{\gamma y}\big{)}|\,{\mathrm{d}}F(y)<\infty$ for all $F\in{\mathcal{M}}$; for more details, see Appendix A1 in [10]. ###### Corollary 3.23 (Optimal coupling for entropic scores). The optimal coupling of the MK minimisation problem induced by any consistent scoring function for the entropic risk measure is the comonotonic coupling. ###### Proof 3.24. Denote by $\tilde{T}(G):=\int x\,{\mathrm{d}}G(x)$ the expectation functional with consistent score $S_{\phi}$, the Bregman score, i.e. (6). Then, it holds that $T^{\gamma}(F):=g\circ\tilde{T}\big{(}F_{h(Y)}\big{)}\,,$ where $g(x):=\frac{1}{\gamma}\log(x)$ and $h(x):=e^{\gamma x}$. By Osband’s principles, see the proofs of Proposition 3.17 and Proposition 3.21, a consistent score for $T^{\gamma}$ is thus given by $S(z,y)=S_{\phi}\left(g^{-1}(z),\,h(y)\right)=\phi(e^{\gamma y})-\phi(e^{\gamma z})-\phi^{\prime}(e^{\gamma z})(e^{\gamma y}-e^{\gamma z})\,,$ which is indeed the (strictly) consistent scoring function for the entropic risk measure given above. Finally, applying Proposition 3.17 and Proposition 3.21 concludes the proof. ### 3.7 Scores of law-invariant risk measures In this section we consider MK divergences derived from elicitable convex and coherent risk measures.
We show that the optimal coupling of the MK divergence induced by any strictly consistent score that elicits a coherent or convex risk measure is the comonotonic coupling. For this, denote by $T\colon{\mathcal{M}}^{\infty}\to{\mathds{R}}$ a law-invariant risk measure, where ${\mathcal{M}}^{\infty}\subseteq{\mathcal{M}}^{0}$ is the space of all cdfs with bounded support. A law-invariant risk measure can equivalently be defined on the space of essentially bounded random variables ${\mathcal{L}}^{\infty}:={\mathcal{L}}^{\infty}(\Omega,{\mathcal{F}},{\mathbb{P}})$, as a functional $T\colon{\mathcal{L}}^{\infty}\to{\mathds{R}}$, by setting $T[X]:=T(F_{X})$, whenever $X$ has cdf $F_{X}$. We use the notation $T(\cdot)$ when the risk measure is viewed as a function of cdfs and $T[\cdot]$ when applied to random variables. We say a law-invariant risk measure $T$ is 1. $(i)$ monotone: if $T[X]\leq T[Y]$ whenever $X\leq Y$ ${\mathbb{P}}$-a.s., $X,Y\in{\mathcal{L}}^{\infty}$, 2. $(ii)$ translation invariant: if $T[X+m]=T[X]+m$, for all $X\in{\mathcal{L}}^{\infty}$ and $m\in{\mathds{R}}$, 3. $(iii)$ positive homogeneous: if $T[\lambda\,X]=\lambda\,T[X]$, for all $X\in{\mathcal{L}}^{\infty}$ and $\lambda\geq 0$, 4. $(iv)$ convex: if $T\big{[}\lambda\,X+(1-\lambda)\,Y]\leq\lambda\,T[X]+(1-\lambda)\,T[Y]$, for all $X,Y\in{\mathcal{L}}^{\infty}$ and $\lambda\in[0,1]$. A law-invariant functional $T$ is called a convex risk measure if it satisfies the properties $(i)$, $(ii)$, and $(iv)$, and a coherent risk measure if it additionally fulfils $(iii)$. For discussions and interpretation of these properties, we refer the reader to [12] and references therein. ###### Definition 3.25. We consider the following additional properties on scoring functions: 1. $(i)$ $S(z,y)=0$ if and only if $z=y$, 2. $(ii)$ $S(z,y)$ is continuous in both $z$ and $y$, 3. $(iii)$ $S(z,y)$ is increasing in $z$ for $z>y$, and decreasing in $z$ for $z<y$, 4.
$(iv)$ for all $z$ in a neighbourhood of 0, $S(z,y)\leq\psi(y)$, where $\psi\colon{\mathds{R}}\to[1,+\infty)$ is a continuous gauge function. Property $(iii)$ is called accuracy rewarding in the statistical literature [15], since it implies that if $T(F)<z_{1}<z_{2}$ or $T(F)>z_{1}>z_{2}$, then $\int S\big{(}T(F),\,y\big{)}\,{\mathrm{d}}F(y)<\int S\big{(}z_{1},\,y\big{)}\,{\mathrm{d}}F(y)<\int S\big{(}z_{2},\,y\big{)}\,{\mathrm{d}}F(y)\,.$ Thus, for accuracy rewarding scores the further away the estimates $z$ are from the truth $T(F)$, the larger are their expected scores. ###### Theorem 3.26 (Coherent Risk Measures). Let $T$ be a 1-elicitable coherent risk measure and $S$ any strictly ${\mathcal{M}}^{\infty}$-consistent score for $T$ that satisfies the properties in Definition 3.25. Then, the optimal coupling of the MK minimisation problem induced by the score $S$ is the comonotonic coupling. ###### Proof 3.27. The class of 1-elicitable coherent risk measures coincides with the $\alpha$-expectiles where $\alpha\in[0.5,1]$, see Theorem 4.9 in [1], or Corollary 12 in [22]. Applying Theorem 3.10 concludes the proof. ###### Theorem 3.28 (Convex Risk Measures). Let $T$ be a 1-elicitable convex risk measure and $S$ any strictly ${\mathcal{M}}^{\infty}$-consistent score for $T$ that satisfies the properties in Definition 3.25. Then, the optimal coupling of the MK minimisation problem induced by the score $S$ is the comonotonic coupling. ###### Proof 3.29. The class of convex risk measures that are 1-elicitable coincides with the class of shortfall risk measures $T_{\ell}$ with convex loss function $\ell$, see Theorem 4.6 in [1]. Applying Theorem 3.13 concludes the proof. ## 4 Applications ### 4.1 Worst-case distortion risk measures Worst-case distortion risk measures are often used in robust stochastic optimisation, see e.g., [2] and [18]. We call $g\colon[0,1]\to[0,1]$ a distortion function, if it is non-decreasing and satisfies $g(0)=0$ and $g(1)=1$. 
A distortion risk measure evaluated at a cdf $G$ with quantile function ${\breve{G}}$ is defined as the Choquet integral $H_{g}({\breve{G}}):=-\int_{-\infty}^{0}1-g(1-G(x))\,\mathrm{d}x+\int_{0}^{+\infty}g\big{(}1-G(x)\big{)}\,\mathrm{d}x\,,$ in which at least one of the two integrals is finite. If $g$ is absolutely continuous, then the distortion risk measure $H_{g}$ has representation (19) $\displaystyle H_{g}({\breve{G}})$ $\displaystyle=\int_{0}^{1}\gamma(u){\breve{G}}(u)\,\mathrm{d}u\,,$ with weight function $\gamma(u):=\partial_{-}g(x)|_{x=1-u}$, $0<u<1$, which satisfies $\int_{0}^{1}\gamma(u)\mathrm{d}u=1$ and where $\partial_{-}$ denotes the derivative from the left. In the sequel we assume that representation (19) holds and that $\int_{0}^{1}|\gamma(u)|^{2}\mathrm{d}u<+\infty$. The class of distortion risk measures is broad and contains the majority of risk measures used in financial risk management practice, including the quantile (Value-at-Risk) and the average of upper quantiles (Tail Value-at-Risk). Let $F$ be a given reference cdf and consider the optimisation problem (20) $\max_{{\breve{G}}\in{\breve{\mathcal{M}}}}\;H_{g}\big{(}{\breve{G}}\big{)}\,,\quad s.t.\quad{\mathscr{B}}_{\phi}\big{(}G,\,F\big{)}\,\leq\,{\varepsilon}\,,$ where ${\breve{\mathcal{M}}}$ is the set of left-continuous quantile functions on ${\mathds{R}}$. Optimisation problems of the form (20) are of great interest in financial risk management, as risk measures and the values thereof drive decision making (e.g., through solvency requirements), but typically are subject to distributional uncertainty, here quantified via the BW divergence. Specifically, in the optimisation problem (20) one aims to determine the worst-case value a distortion risk measure can attain when the underlying cdf belongs to a set of cdfs that are close, in the BW divergence, to a reference cdf $F$. The special case of optimisation problem (20) for $\phi(x)=x^{2}$, i.e.
when the BW divergence reduces to the squared 2-Wasserstein distance, was recently solved in [2]. Here, we significantly extend this result. ###### Theorem 4.1 (Worst-case Distortion Risk Measures). Assume that the distortion function $g$ is strictly concave and that $\phi$ is strictly convex. The optimal quantile function to the optimisation problem (20) is given by (21) ${\breve{G}}_{\lambda^{*}}(u):=\left(\phi^{\prime}\right)^{-1}\left(\phi^{\prime}\big{(}{\breve{F}}(u)\big{)}+\frac{1}{\lambda^{*}}\gamma(u)\right)\,,$ where $\lambda^{*}>0$ is the unique solution to ${\mathscr{B}}_{\phi}\big{(}G_{\lambda},\,F\big{)}={\varepsilon}$. ###### Proof 4.2. A solution to problem (20) must attain a BW divergence ${\varepsilon}_{0}$, $0\leq{\varepsilon}_{0}\leq{\varepsilon}$, and thus solve, for all $\lambda>0$, the optimisation problem $\displaystyle\operatorname*{argmax}_{{\breve{G}}\in{\breve{\mathcal{M}}}}\;\int_{0}^{1}{\breve{G}}(u)\gamma(u)-\lambda\left(\phi\big{(}{\breve{G}}(u)\big{)}-\phi\big{(}{\breve{F}}(u)\big{)}-\phi^{\prime}\big{(}{\breve{F}}(u)\big{)}{\breve{G}}(u)\ +\ \phi^{\prime}\big{(}{\breve{F}}(u)\big{)}{\breve{F}}(u)-{\varepsilon}_{0}\right)\,.$ Equivalently, the solution solves for all $\lambda>0$ the optimisation problem $\operatorname*{argmin}_{{\breve{G}}\in{\breve{\mathcal{M}}}}\;\int_{0}^{1}\phi\big{(}{\breve{G}}(u)\big{)}-k_{\lambda}(u){\breve{G}}(u)du\,,$ where $k_{\lambda}(u):=\phi^{\prime}\big{(}{\breve{F}}(u)\big{)}+\frac{1}{\lambda}\gamma(u).$ Note that for any given function $k_{\lambda}(\cdot),\lambda>0$, the function $\phi\big{(}{\breve{G}}(u)\big{)}-k_{\lambda}(u){\breve{G}}(u)$ is convex in ${\breve{G}}(u)$. By direct optimisation and noting that $k_{\lambda}(u)$ is an increasing function, we thus obtain that a solution to (20) is of the form (22) ${\breve{G}}_{\lambda}(u):=\left(\phi^{\prime}\right)^{-1}\left(\phi^{\prime}\big{(}{\breve{F}}(u)\big{)}+\frac{1}{\lambda}\gamma(u)\right)\,,$ for some $\lambda>0$.
Furthermore, $\lambda_{1}<\lambda_{2}$ implies that ${\breve{G}}_{\lambda_{1}}(u)>{\breve{G}}_{\lambda_{2}}(u)$. As a consequence, it also holds that $H_{g}({\breve{G}}_{\lambda_{1}})>H_{g}({\breve{G}}_{\lambda_{2}})$ and moreover that (recall that $\phi$ is convex) ${\mathscr{B}}_{\phi}(G_{\lambda_{1}},F)>{\mathscr{B}}_{\phi}(G_{\lambda_{2}},F)$. Hence, for a solution to be optimal, $\lambda^{*}$ must satisfy ${\mathscr{B}}_{\phi}(G_{\lambda^{*}},F)={\varepsilon}$. The existence of $\lambda^{*}$ follows since ${\mathscr{B}}_{\phi}(G_{\lambda},F)$ is continuously decreasing in $\lambda>0$, $\lim_{\lambda\to 0}{{\mathscr{B}}_{\phi}(G_{\lambda},F)}=\infty$, and $\lim_{\lambda\to\infty}{{\mathscr{B}}_{\phi}(G_{\lambda},F)}=0.$ Moreover, $\lambda^{*}$ is unique. ### 4.2 Cheapest payoffs Let $S_{T}$ denote the non-negative random value of a risky asset at time $T>0.$ Further, consider a bank account earning the continuously compounded risk-free interest rate $r\in\mathbb{R}$. We define the set of payoffs $\mathcal{X}:=\Big{\\{}g(S_{T})~{}|~{}\,g:{\mathds{R}}_{+}\to{\mathds{R}}_{+}\text{ is measurable and }{\mathbb{E}}_{\mathbb{Q}}\big{[}|g(S_{T})|\big{]}<\infty\Big{\\}}\,,$ where ${\mathbb{Q}}$ is a pricing measure equivalent to ${\mathbb{P}}$, and ${\mathbb{E}}_{\mathbb{Q}}[\cdot]$ denotes the expectation under ${\mathbb{Q}}$. For any payoff $X\in\mathcal{X}$, its initial cost is (23) $c(X):=e^{-rT}{\mathbb{E}}_{{\mathbb{Q}}}[X]={\mathbb{E}}_{\mathbb{P}}[\xi X]\,,$ where the last equality follows by defining the state-price density $\xi:=e^{-rT}\frac{d{\mathbb{Q}}}{d{\mathbb{P}}}$. We assume that the state-price density $\xi$ is continuously distributed and denote its cdf under ${\mathbb{P}}$ by $F_{\xi}$. In the sequel, all cdfs of random variables are taken with respect to $\mathbb{P}$. We consider the problem of finding the cheapest payoff under the constraint that its cdf is close to a benchmark.
Specifically, we require that its cdf lies within a BW-ball around a benchmark cdf $F$, that is, we consider the optimisation problem (24) $\min_{X\in\mathcal{X}}\;c(X)\,,\quad s.t.\quad{\mathscr{B}}_{\phi}\big{(}F_{X},\,F\big{)}\,\leq\,{\varepsilon}\,.$ A special case of (24) is considered in [8] who, instead of the BW divergence, consider payoffs with fixed cdf $F$. That is, they solve (25) $\min_{X\in\mathcal{X}}\;c(X)\,,\quad s.t.\quad F_{X}\equiv F\,,$ and show that the unique solution to optimisation problem (25) is given by $X^{*}:={\breve{F}}\big{(}1-F_{\xi}(\xi)\big{)}$. The payoff $X^{*}$ is called cost-efficient as it is decreasing in $\xi$, and thus yields the cheapest payoff with given cdf. Next, we provide the solution to optimisation problem (24). ###### Theorem 4.3 (Cheapest payoffs). The solution to optimisation problem (24) is given by (26) $X^{*}:={{\breve{G}}_{\lambda^{*}}}\big{(}1-F_{\xi}(\xi)\big{)}\,,$ in which (27) ${\breve{G}}_{\lambda^{*}}(u):=\left(\phi^{\prime}\right)^{-1}\left(\phi^{\prime}\big{(}{\breve{F}}(u)\big{)}-\frac{1}{\lambda^{*}}\,{{\breve{F}_{\xi}}}(1-u)\right)\,,$ where $\lambda^{*}>0$ is the unique solution to ${\mathscr{B}}_{\phi}\big{(}G_{\lambda^{*}},\,F\big{)}={\varepsilon}$. ###### Proof 4.4. We first show that any solution has to be cost-efficient. To this end, let $X$ be a solution to optimisation problem (24) with cdf $G$. Define the random variable $Y:={\breve{G}}\big{(}1-F_{\xi}(\xi)\big{)}$, which has cdf $G$, and thus its cdf lies within the BW-ball. Moreover, unless $Y=X$, ${\mathbb{P}}$-a.s., $Y$ is strictly cheaper than $X$, as $Y$ is cost-efficient [8]. Hence, an optimal solution has to be cost-efficient. Furthermore, from (23) it follows that the cost of any cost-efficient payoff $Y$ with cdf $G$ can be expressed as (28) $c(Y)=-\int_{0}^{1}\gamma(u)\,{\breve{G}}(u)\,du\,,$ where $\gamma(u):=-{{\breve{F}_{\xi}}}(1-u)$ is an increasing function, so that minimising the cost amounts to maximising $\int_{0}^{1}\gamma(u)\,{\breve{G}}(u)\,du$.
We observe that solving optimisation problem (24) is equivalent to solving an optimisation problem of the form (20). Next, we apply Theorem 4.1, as an inspection of its proof reveals that Theorem 4.1 also holds when the (increasing) weighting function $\gamma$ is negative. ## References * [1] F. Bellini and V. Bignozzi, On elicitable risk measures, Quant. Finance, 15 (2015), pp. 725–733, https://doi.org/10.1080/14697688.2014.946955. * [2] C. Bernard, S. M. Pesenti, and S. Vanduffel, Robust distortion risk measures, Forthcoming in Mathematical Finance, (2023), https://doi.org/10.1111/mafi.12414. * [3] J. Breckling and R. Chambers, M-quantiles, Biometrika, 75 (1988), pp. 761–771, https://doi.org/10.1093/biomet/75.4.761. * [4] M. Burzoni, I. Peri, and C. M. Ruffo, On the properties of the Lambda value at risk: robustness, elicitability and consistency, Quantitative Finance, 17 (2017), pp. 1735–1743, https://doi.org/10.1080/14697688.2017.1297535. * [5] G. Carlier and C. Jimenez, On Monge's problem for Bregman-like cost functions, Journal of Convex Analysis, 14 (2007), p. 647. * [6] Z. Chen, Conditional ${L}_{p}$-quantiles and their application to the testing of symmetry in non-parametric regression, Statistics & Probability Letters, 29 (1996), pp. 107–115, https://doi.org/10.1016/0167-7152(95)00163-8. * [7] G. Dall’Aglio, Sugli estremi dei momenti delle funzioni di ripartizione doppia, Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 10 (1956), pp. 35–74. * [8] P. H. Dybvig, Inefficient dynamic portfolio strategies or how to throw away a million dollars in the stock market, The Review of Financial Studies, 1 (1988), pp. 67–88, https://doi.org/10.1093/rfs/1.1.67. * [9] T. Fissler, R. Frongillo, J. Hlavinová, and B. Rudloff, Forecast evaluation of quantiles, prediction intervals, and other set-valued functionals, Electronic Journal of Statistics, 15 (2021), pp. 1034–1084, https://doi.org/10.1214/21-EJS1808. * [10] T. 
Fissler and S. M. Pesenti, Sensitivity measures based on scoring functions, European Journal of Operational Research, 307 (2023), pp. 1408–1423, https://doi.org/10.1016/j.ejor.2022.10.002. * [11] T. Fissler and J. F. Ziegel, Higher order elicitability and Osband’s principle, Annals of Statistics, 44 (2016), pp. 1680–1707, https://doi.org/10.1214/16-AOS1439. * [12] H. Föllmer and A. Schied, Stochastic finance: an introduction in discrete time, Walter de Gruyter, 2011, https://doi.org/10.1515/9783110463453. * [13] H. U. Gerber, On additive premium calculation principles, ASTIN Bulletin: The Journal of the IAA, 7 (1974), pp. 215–222, https://doi.org/10.1017/S0515036100006061. * [14] T. Gneiting, Making and Evaluating Point Forecasts, Journal of the American Statistical Association, 106 (2011), pp. 746–762, https://www.jstor.org/stable/41416407. * [15] N. S. Lambert, D. M. Pennock, and Y. Shoham, Eliciting properties of probability distributions, in Proceedings of the 9th ACM Conference on Electronic Commerce, 2008, pp. 129–138, https://doi.org/10.1145/1386790.1386813. * [16] I. Meilijson and A. Nádas, Convex majorization with an application to the length of critical paths, Journal of Applied Probability, 16 (1979), pp. 671–677, https://doi.org/10.2307/3213097. * [17] W. K. Newey and J. L. Powell, Asymmetric least squares estimation and testing, Econometrica: Journal of the Econometric Society, (1987), pp. 819–847, https://www.jstor.org/stable/1911031. * [18] S. M. Pesenti and S. Jaimungal, Portfolio optimization within a Wasserstein ball, SIAM Journal on Financial Mathematics, 14 (2023), pp. 1175–1214, https://doi.org/10.1137/22M1496803. * [19] S. T. Rachev and L. Rüschendorf, Mass Transportation Problems: Volume I: Theory, vol. 1, Springer Science & Business Media, 1998. * [20] C. Rankin and T.-K. L. Wong, Bregman-Wasserstein divergence: geometry and applications, arXiv preprint arXiv:2302.05833, (2023). * [21] F. 
Santambrogio, Optimal transport for applied mathematicians, Birkäuser, NY, 55 (2015), p. 94. * [22] I. Steinwart, C. Pasin, R. Williamson, and S. Zhang, Elicitation and identification of properties, in Conference on Learning Theory, PMLR, 2014, pp. 482–526. * [23] C. Villani, Optimal transport: Old and new, vol. 338, Springer Science & Business Media, 2008.
# Onset of slow dynamics in dense suspensions of active colloids Antina Ghosh1, Sayan Maity1, Vijayakumar Chikkadi1 1 Indian Institute of Science Education and Research Pune, Pune 411008, India ###### Abstract Slow relaxation and heterogeneous dynamics are characteristic features of glasses. The presence of glassy dynamics in nonequilibrium systems, such as active matter, is of significant interest due to its implications for living systems and material science. In this study, we use dense suspensions of self-propelled Janus particles moving on a substrate to investigate the onset of slow dynamics. Our findings show that dense active suspensions exhibit several hallmark features of slow dynamics similar to systems approaching equilibrium. The relaxation time fits well with the Vogel-Fulcher-Tammann (VFT) equation, and the system displays heterogeneous dynamics. Furthermore, increasing the activity leads to faster relaxation of the system, and the glass transition density predicted by the VFT equation shifts to higher densities. Measurements of the cage length and persistence length reveal that they are of the same order over the range of activities explored in our study. These results are in agreement with recent particle simulations. ## I Introduction When molecular liquids are cooled, their timescales of structural relaxation increase. If crystallization is avoided by employing rapid cooling rates, most liquids fall out of equilibrium and enter a metastable phase of supercooled liquids. Subsequently, they form glasses at temperatures lower than the glass transition temperature $T_{g}$ [1, 2]. A similar phenomenon occurs in Brownian colloidal suspensions, where the colloidal glass transition is controlled by the particle density [3, 4, 5, 6]. As density increases, particle motion becomes increasingly hindered by the cages formed by neighboring particles, leading to a slowdown in dynamics. 
In recent years, the emergence of slow dynamics in active matter has garnered considerable attention due to its relevance to living systems [7]. For instance, the collective motion of cells in tissues during embryo development [8], wound healing [9], and cancer metastasis exhibits glass-like features such as slow relaxation and heterogeneous dynamics [10, 11]. Bacterial cell cytoplasm shares several properties with glass-forming liquids [12]. A central question in these investigations is whether systems driven far from equilibrium display slow glassy dynamics, and how it compares to that of systems close to equilibrium [13]. In this article, we shed new light on some aspects of slow dynamics in dense active matter by performing experiments using a monolayer of active colloids. Self-propelled particles have served as model systems for understanding various aspects of dense active matter [14, 15]. They have been studied extensively in simulations using two well-established models of active Brownian particles (ABPs) [16, 17] and active Ornstein-Uhlenbeck particles (AOUPs) [18, 19]. In these models the particles undergo overdamped motion in a viscous fluid, and their self-propulsion is described using a force that remains either constant or evolves over time. In the dilute limit, both models depict a persistent random walker, with the mean-square displacement (MSD) given by: $\left<\Delta r^{2}(t)\right>=6Tt+6T_{a}\left[\tau_{p}\big{(}e^{-t/\tau_{p}}-1\big{)}+t\right]$, where $T$ represents the thermodynamic temperature, $T_{a}$ is the active temperature, and $\tau_{p}$ is the persistence time of the random walker [20]. The active temperature is $T_{a}=f^{2}\tau_{p}/3$ for ABPs. In the limit of short times when $t<<\tau_{p}$, the MSD $\left<\Delta r^{2}(t)\right>\approx 6Tt+3T_{a}t^{2}/\tau_{p}$, indicating a quadratic dependence on $t$ due to ballistic motion. 
However, in the limit of long times $t>>\tau_{p}$, the MSD $\left<\Delta r^{2}(t)\right>\approx 6(T_{a}+T)t\equiv 6T_{eff}t$, which increases linearly with $t$, highlighting diffusive motion. These dilute-limit predictions were first confirmed by Howse and coworkers [20] and have since been validated by other experiments using colloidal models [21, 14, 22]. Investigations into active matter systems at high densities and the onset of slow dynamics have primarily focused on theoretical models and numerical simulations. The earliest studies [23, 13, 24] revealed that slow dynamics in active systems exhibit all the essential features of supercooled liquids approaching an equilibrium glass transition. These features include caging, dynamical slowing down, non-exponential time correlation functions, and dynamic heterogeneity. Particle simulations of active systems based on hard-sphere models have demonstrated a monotonic effect of activity, showing that an increase in activity pushes the glass transition to higher densities [23, 24]. However, subsequent simulations using Lennard-Jones potentials, which include an attractive component, revealed a non-monotonic effect of activity. In these simulations, activity could either enhance or suppress the fluidization of the system [25, 26, 27]. Further simulations conducted by Berthier and coworkers [28], using a model that allowed continuous interpolation from hard-sphere-like repulsive interactions to Lennard-Jones-type models with an attractive component, clarified the contradictory effects of activity. In the hard-sphere limit at low temperatures and moderate densities, activity had a monotonic impact on the relaxation time of the system, leading to faster relaxation with increased activity. However, at moderate temperatures and high densities (resembling Lennard-Jones models), activity exhibited a non-monotonic impact on relaxation. 
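The two regimes of the dilute-limit MSD quoted above can be verified numerically; the sketch below uses hypothetical values for $T$, $T_{a}$, and $\tau_{p}$:

```python
import math

T, Ta, tau_p = 0.5, 2.0, 3.0   # thermal temperature, active temperature, persistence time

def msd(t):
    # <Delta r^2(t)> = 6 T t + 6 T_a [ tau_p (exp(-t / tau_p) - 1) + t ]
    return 6.0 * T * t + 6.0 * Ta * (tau_p * (math.exp(-t / tau_p) - 1.0) + t)

t_short, t_long = 1e-3, 1e4
ballistic = 6.0 * T * t_short + 3.0 * Ta * t_short**2 / tau_p   # t << tau_p
diffusive = 6.0 * (T + Ta) * t_long                             # t >> tau_p: slope 6 T_eff
```

Expanding the exponential to second order recovers the ballistic approximation, while for $t\gg\tau_{p}$ the bracket reduces to $t$ up to a constant offset, giving the diffusive slope $6T_{eff}$.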
This study showed that interparticle interactions and the structural changes induced by activity have a significant influence on the onset of slow glassy dynamics. A recent study [29] identified that the long-time diffusion constant increases with increasing persistence time when it is small compared to the cage length. An opposite effect is observed when the cage length is smaller than the persistence length. Active glasses are reported to display other interesting features, including multiple aging regimes for highly persistent forces [30, 31], and tunable fragility due to doping of active particles in passive glasses [26]. Experimental investigations of this topic are scarce. The earliest experimental reports of glassy dynamics in active matter involved living systems consisting of tissues [8, 9, 10, 11]. As far as synthetic systems are concerned, a recent study [32] reported the role of topological defects and the re-entrant effect of activity in dense active matter consisting of granular ellipsoidal particles. The only experimental study with colloidal spheres was conducted by Klongvessa and co-workers [33, 34], who reported a non-monotonic influence of activity on the dynamics of the system. Their primary findings were that increasing activity leads to faster relaxation in an ergodic state at lower densities. However, in a non-ergodic state at higher densities, activity shows a non-monotonic effect on relaxation time. Initially, relaxation time increases with higher activity levels, but decreases again at higher activity intensities. The platinum-coated gold colloids used in their study exhibit characteristics similar to soft particles, such as poly(N-isopropyl acrylamide) (PNiPAM) microgel particles [35]. It is important to clarify whether these conclusions apply generically to quasi-hard particle systems. 
Recent simulations [29, 36] suggest that the relative sizes of the cage and the persistence length of active particles critically influence the activity's impact. However, these numerical results lack experimental validation. Furthermore, the nature of slow dynamics in quasi-hard particle systems remains to be explored. In this article, we present an experimental investigation of the onset of slow relaxation in a dense layer of photoactive colloids. The strength of activity is varied by tuning the intensity of UV light. In the absence of activity, the relaxation time of the system increases monotonically with density, suggesting the quasi-hard-particle nature of our system. Our experiments reveal a monotonic effect of activity on the relaxation time over the range of densities and activities studied. Increasing the activity leads to faster relaxation. The relaxation time over a range of densities and at various magnitudes of activity is well described by a Vogel-Fulcher-Tammann (VFT) relation. The glass transition densities obtained from the VFT fits reveal that increasing the activity shifts the glass transition density to higher values. The examination of structural changes and the length scale of cooperative motion points to activity-induced faster relaxation. The investigation of the ratio of cage size to persistence length reveals that our results are in agreement with simulation results [23, 29, 28]. ## II Experimental systems The active colloids in our experiments are light-driven, self-propelled Janus particles composed of SiO$_{2}$ and anatase TiO$_{2}$ halves [37]. The details of the experimental methods followed are given in the supplementary information. To obtain amorphous configurations, we have used a binary mixture of particles with diameters of $\sigma_{s}=2.64~{}\mu m$ and $\sigma_{l}=3.12~{}\mu m$. The ratio of small to large particles in our system is in the range 2:1, and all the analysis was performed using small particles. 
Fig. 1a shows an image of the dense monolayer of particles, obtained by sedimenting particles in a circular cavity of $30\mu m$ depth, fabricated using photolithography techniques. The diameter of the circular cavity is $150\mu m$. An array of such cavities was created to study the system over a range of densities. Only the particles in the central, unshaded region were considered for analysis to avoid the influence of the walls. When the Janus particles are dispersed in an aqueous hydrogen peroxide solution (H$_{2}$O$_{2}$, $3\%$, pH $\sim 7$), they display passive Brownian diffusion in the absence of UV light. However, upon illumination with UV light of wavelength $365~{}nm$ the Janus colloids show self-propulsion, giving rise to ballistic motion on short time scales and diffusive motion on long time scales. The propulsion arises from the photocatalytic decomposition of H$_{2}$O$_{2}$ at the TiO$_{2}$ surface by UV-promoted electron–hole pairs [38, 37]. Since the rate of photocatalytic decomposition near the TiO$_{2}$ surface depends on the intensity of light, the velocity of the Janus particles increases with UV power. The details about these aspects are provided in the supplementary information. The rotational diffusion time of the particles is $\tau_{R}\sim 3s$, which was measured from the MSD of particles (see supplementary information). The UV power is varied in our experiments to obtain velocities in the range of $0.05\mu m/s$ to $0.2\mu m/s$. The strength of activity in our study is expressed using an effective temperature, $T_{eff}$; see supplementary information for details. The density of particles is represented by the area fraction $\phi\sim N\pi\sigma^{2}/(4A)$, where $A$ is the area of the field of view. Figure 1: Relaxation of the passive Brownian system in the absence of activity, corresponding to $\frac{T_{eff}}{T_{0}}=1$. (a) A bright-field image of a circular cavity with binary colloids. 
The diameter of the cavity is $\sim 150\mu m$ and its depth is $\sim 30\mu m$. The particles in the unshaded region of diameter $90\mu m$ are considered for analysis. (b) Self part of the intermediate scattering function $F_{s}(k,t)$, where $k=2\pi/\sigma_{s}$, over a range of area fractions of the colloidal particles from $\phi\sim 0.55-0.72$. The x-axis is scaled by the rotational diffusion time scale $\tau_{R}$. (c) The relaxation time $\tau_{\alpha}$ is plotted as a function of the area fraction of colloidal particles. The thick line is a Vogel-Fulcher-Tammann fit to the data of the form $\tau_{\alpha}=\tau_{R}~{}\textnormal{exp}\left[A/(\phi_{c}-\phi)\right]$. ## III Onset of slow dynamics in the absence of activity When the density of the system is small, the particles diffuse freely and the relaxation is fast. With increasing density, the particles' motion is hindered by the cage formed by their nearest neighbors, leading to the onset of slow dynamics in the system. The dynamics of particles within the cage and their escape from it are well captured by the self part of the intermediate scattering function, $F_{s}(k,t)$, given by $F_{s}(k,t)=\left<\frac{1}{N}\sum_{i=1}^{N}\textnormal{exp}\left[j\textbf{k}\cdot\Delta\mathbf{r}_{i}(\tau)\right]\right>,$ (1) where $k=2\pi/\sigma_{s}$ and $\Delta\mathbf{r}(\tau)$ is the cage-relative displacement [39] defined with respect to the $N_{i}$ nearest neighbors in the following way: $\Delta\mathbf{r}_{i}(\tau)=\mathbf{r}_{i}(t+\tau)-\mathbf{r}_{i}(t)-\frac{1}{N_{i}}\sum_{j}^{N_{i}}[\mathbf{r}_{j}(t+\tau)-\mathbf{r}_{j}(t)].$ (2) Previous studies [39, 40] have demonstrated long-wavelength fluctuations in two-dimensional disordered colloidal systems leading to large particle displacements without neighbor changes. Investigating the displacement of particles relative to their first neighbors provided a consistent understanding of the relaxation dynamics and the glass transition in both two and three dimensions. 
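A minimal numerical sketch of Eqs. (1) and (2) is given below; the synthetic random-walk trajectories, the fixed six-nearest-neighbor lists, and the single wavevector direction are simplifications of the actual analysis:

```python
import numpy as np

# Cage-relative displacements and the self-ISF on synthetic 2-D trajectories.
rng = np.random.default_rng(0)
N, steps, sigma = 60, 101, 1.0
k = 2.0 * np.pi / sigma
pos = np.cumsum(0.05 * rng.standard_normal((steps, N, 2)), axis=0)

# six nearest neighbors at t = 0 (a real analysis would use Voronoi/Delaunay cages)
d0 = np.linalg.norm(pos[0][:, None] - pos[0][None, :], axis=-1)
neighbors = [np.argsort(d0[i])[1:7] for i in range(N)]

def cage_relative(tau, t=0):
    """Eq. (2): displacement of i minus the mean displacement of its cage."""
    d = pos[t + tau] - pos[t]
    return np.array([d[i] - d[nb].mean(axis=0) for i, nb in enumerate(neighbors)])

def F_s(tau):
    """Eq. (1) for a single wavevector k along x (an isotropic average would use many)."""
    dr = cage_relative(tau)
    return float(np.cos(k * dr[:, 0]).mean())
```

As the lag time grows, the cage-relative displacements spread out and $F_{s}(k,t)$ decays from 1 toward 0, mirroring the decay used to define $\tau_{\alpha}$.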
Therefore, the displacements in our study are computed relative to their first nearest neighbors, which we refer to as relative displacements in the rest of the article. The main panel of Fig. 1b presents the $F_{s}(k,t)$ computed using relative displacements over a range of area fractions from $\phi\sim 0.55-0.72$ in the absence of activity, where the motion is purely Brownian. The slowing down of the dynamics is apparent from the slower decay of $F_{s}(k,t)$ with increasing density. The relaxation time scale $\tau_{\alpha}$ is estimated from the time taken for $F_{s}(k,t)$ to decay by a factor of $1/e$. It is shown as a function of the area fraction of colloids in Fig. 1c. The thick line in the figure shows a Vogel-Fulcher-Tammann fit of the form $\tau_{\alpha}=\tau_{R}~{}\textnormal{exp}\left[\frac{A}{(\phi_{c}-\phi)}\right]$, where $\phi_{c}$ is the critical area fraction at which $\tau_{\alpha}$ diverges. The best-fit procedure yields a critical density of $\phi_{c}=0.79\pm 0.012$, which agrees reasonably with earlier simulation studies [41]. The non-exponential relaxation of the density correlation is apparent from the form of $F_{s}(k,t)$, and it arises from the multiple relaxation time scales of the system. To confirm the onset of slow glassy dynamics in our system, we have tested for the signatures of aging at $\phi\sim 0.72$ and $0.55$, the two extreme densities in our study. The mean square displacement (MSD) of particles was plotted for different waiting times. The dynamics were found to depend on the waiting time at $\phi\sim 0.72$, while at $\phi\sim 0.55$ the MSD curves collapsed onto each other. See supplementary information for details. These results establish the onset of glassy dynamics at higher densities in our system. Figure 2: Relaxation of active systems. 
(a) The self part of the intermediate scattering function $F_{s}(k,t)$ at various effective temperatures ranging from $\sim 1-4.44$, and at two different area fractions $\phi\sim 0.72$ and $\phi\sim 0.55$. Different marker symbols are used to distinguish the effective temperatures, and the colors distinguish area fractions. (b) The relaxation time $\tau_{\alpha}$ of the system is presented as a function of the area fraction of colloids at various effective temperatures. The marker symbols indicate experimental data and the thick lines are the VFT fits to the curves. (c) The critical area fraction $\phi_{c}$ obtained from the fits in (b) is shown as a function of the effective temperature $T_{eff}$. ## IV Influence of activity on the dynamics We present the influence of activity on the dynamics of the system in this section. As discussed earlier, the activity of our system is quantified using an effective temperature, and it is controlled by the intensity of UV light. The effective temperature is varied from $T_{eff}\sim 1-4.44$, and the intermediate scattering function $F_{s}(k,t)$ shown in Fig. 2a at $\phi\sim 0.72$ and $0.55$ reveals a monotonic decrease of the relaxation time $\tau_{\alpha}$ with increasing activity. What is also apparent is that the activity has a stronger effect at $\phi\sim 0.72$ compared to $\phi\sim 0.55$. The relaxation time $\tau_{\alpha}$ decreases by a factor of $3$ at $\phi\sim 0.72$. We next present the effect of activity over a range of area fractions for various effective temperatures in Fig. 2b. The lines show a VFT fit to the curves. The fit provides an estimate of the critical density $\phi_{c}$ at which $\tau_{\alpha}$ diverges. We plot $\phi_{c}$ as a function of the effective temperature $T_{eff}$ in Fig. 2c. Clearly, the critical area fraction $\phi_{c}$ is pushed to higher densities with increasing activity over the entire range of area fractions and activities investigated in our experiments. 
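The VFT extraction of $\phi_{c}$ used above can be sketched as a one-parameter grid search over candidate $\phi_{c}$, with the amplitude $A$ obtained by linear least squares at each candidate; the data below are synthetic, with arbitrarily chosen true parameters and noise level, and $\tau_{R}$ is assumed known:

```python
import numpy as np

# Recover the VFT parameters tau_alpha = tau_R * exp[A / (phi_c - phi)]
# from noisy synthetic relaxation-time data.
rng = np.random.default_rng(1)
tau_R, A_true, phi_c_true = 3.0, 0.35, 0.79
phi = np.linspace(0.55, 0.72, 9)
tau = (tau_R * np.exp(A_true / (phi_c_true - phi))
       * np.exp(0.02 * rng.standard_normal(phi.size)))   # 2% multiplicative noise

y = np.log(tau / tau_R)                 # linear in x = 1/(phi_c - phi) for fixed phi_c
best = None
for phi_c in np.linspace(0.73, 0.90, 1701):
    x = 1.0 / (phi_c - phi)
    A = float(x @ y) / float(x @ x)     # least-squares slope through the origin
    sse = float(np.sum((y - A * x) ** 2))
    if best is None or sse < best[0]:
        best = (sse, phi_c, A)
sse, phi_c_fit, A_fit = best
```

Because $\ln\tau_{\alpha}$ is extremely sensitive to $\phi_{c}$ near the divergence, even this crude search pins down the critical density tightly when the noise is small.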
These results are in agreement with simulations of active particles performed using hard-sphere-like interactions [23]. Unlike earlier experiments [33, 34] using Janus colloids as active particles, which showed a non-monotonic variation of relaxation times $\tau_{\alpha}$ due to activity in the nonergodic states, our experiments reveal a monotonic variation over the range of effective temperatures explored. These results highlight the fact that interparticle interactions play a significant role when considering the effect of activity on the onset of slow dynamics. Figure 3: Structural changes due to activity. The pair correlation function $g(r)$ of the system at different effective temperatures is presented at $\phi\sim 0.72$ (a) and $\phi\sim 0.55$ (b). The insets show a magnified view of the first peak. ## V Structural changes due to activity ### V.1 Pair correlation function In this section, we investigate the effect of activity on the structure. We elucidate the effect of activity by computing the pair correlation function of small particles in the system, $g_{\alpha\beta}(r)=\frac{A}{N_{\alpha}N_{\beta}}\left<\sum_{i}^{N_{\alpha}}\sum_{j\neq i}^{N_{\beta}}\delta(r-|\mathbf{r}_{i}-\mathbf{r}_{j}|)\right>$ (3) where $N_{\alpha}$ and $N_{\beta}$ are the numbers of particles of the two species. Only small particles were considered for all the calculations, so we denote $g_{ss}(r)$ simply as $g(r)$. Fig. 3a presents $g(r)$ for different effective temperatures at $\phi\sim 0.72$. The effect is apparent from the variations in the height and position of the peaks. A magnified view of the first peak in the inset shows a decrease in the height of the peak with increasing activity. Even though the changes are small, there is a systematic variation due to activity. Similar changes are observed in the second peak too. All these observations point to restructuring of particles to achieve higher diffusivity in the system. 
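Eq. (3) can be sketched numerically for a single species ($\alpha=\beta$); the ideal-gas configuration in a periodic box below, for which $g(r)\approx 1$, is an illustrative stand-in for the experimental monolayer:

```python
import numpy as np

# 2-D pair-correlation function: bin pair distances and normalize by the
# ideal-gas expectation for each annulus.
rng = np.random.default_rng(2)
L, N = 40.0, 1500
pts = rng.uniform(0.0, L, size=(N, 2))

def g_of_r(pts, L, r_max, nbins=36):
    diff = pts[:, None, :] - pts[None, :, :]
    diff -= L * np.round(diff / L)              # minimum-image convention (periodic box)
    d = np.linalg.norm(diff, axis=-1)
    d = d[np.triu_indices(len(pts), k=1)]       # unordered pairs, no self-terms
    counts, edges = np.histogram(d, bins=nbins, range=(0.0, r_max))
    r_lo, r_hi = edges[:-1], edges[1:]
    shell = np.pi * (r_hi**2 - r_lo**2)         # 2-D annulus areas
    rho = len(pts) / L**2
    ideal = 0.5 * len(pts) * rho * shell        # expected pair counts if g(r) = 1
    return 0.5 * (r_lo + r_hi), counts / ideal

r, g = g_of_r(pts, L, r_max=4.5)
```

On a real dense configuration the same routine would resolve the first- and second-neighbor peaks whose heights the figure tracks as a function of activity.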
The main panel in Fig. 3b presents the changes in $g(r)$ due to activity at a lower area fraction of $\phi\sim 0.55$. The corresponding inset presents a magnified view of the first peak. The variations in peak height appear to be smaller. A comparison of the changes in Figs. 3a and 3b suggests that the effect of activity on the structure is pronounced at higher densities. Figure 4: Effect of activity on the local hexagonal ordering. (a) & (b) Snapshots of the local hexagonal order of particles at an area fraction of $\phi\sim 0.72$ and varying effective temperatures. The particles are color coded based on their value of $\psi_{6}$, and the panels in (a) and (b) are at effective temperatures $T_{eff}=1.0\textnormal{ and }4.44$, respectively. (c) The main panel shows the average $\psi_{6}$ as a function of $T_{eff}$ at area fractions $\phi\sim 0.72$ and $0.55$. Different symbols are used to distinguish the area fractions. The inset is the histogram of $\psi_{6}$ at $\phi\sim 0.72$ and $T_{eff}=1$. ### V.2 Local ordering of particles We continue our investigation of the effect of activity on the structure by studying the local ordering of particles. In particular, we quantify the hexagonal order using the local bond-orientational order parameter [42] defined as, $\psi_{6}^{i}=\frac{1}{6}\sum_{j=1}^{N_{f}}\textnormal{exp}(i6\theta_{ij}),$ (4) where $N_{f}$ is the number of first nearest neighbors and $\theta_{ij}$ is the angle made by the bond between particles $i$ and $j$ with the $x$-axis. The local $\psi_{6}$ calculations were performed using all kinds of particles. A color-coded representation of the local ordering is shown in Figs. 4a and 4b, where the particles are colored based on their $\psi_{6}$ values at $\phi\sim 0.72$ and $T_{eff}=1.0$ and $4.44$, respectively. Red indicates crystal-like hexagonal order and blue indicates its absence. The presence of ordered patches is apparent, and the ordering appears to diminish with increasing activity from Fig. 4a to Fig. 4b. 
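Eq. (4) takes only a few lines to evaluate; the perfect-hexagon and randomly perturbed bond angles below are synthetic illustrations, not experimental data (with $N_{f}=6$, the prefactors $1/6$ and $1/N_{f}$ coincide):

```python
import numpy as np

# Local bond-orientational order parameter psi_6 from the bond angles
# theta_ij (radians) to a particle's nearest neighbors.
def psi6(bond_angles):
    angles = np.asarray(bond_angles, dtype=float)
    return np.mean(np.exp(1j * 6.0 * angles))

hexagon = np.arange(6) * np.pi / 3.0                  # perfect sixfold cage: |psi6| = 1
rng = np.random.default_rng(3)
perturbed = hexagon + 0.05 * rng.standard_normal(6)   # thermally rattled cage
disordered = rng.uniform(0.0, 2.0 * np.pi, 6)         # random bond directions
```

Note that a rigid rotation of all bonds only changes the phase of $\psi_{6}$, so $|\psi_{6}|$ measures sixfold order independently of the cage orientation.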
For a better understanding, the distribution of $\psi_{6}$ is shown in Fig. 4c for different values of $T_{eff}$; different symbols distinguish the area fractions. The peak of the distribution $P(\psi_{6})$ shifts to smaller values of $\psi_{6}$ with increasing activity. This effect is also clear from the mean of the distributions as a function of activity ($T_{eff}$) shown in Fig. 4c. Figure 5: Displacement fields and correlations that characterize spatially heterogeneous dynamics in dense systems of active colloids. The displacement vectors of particles are computed over a time scale $\tau_{\alpha}$ for $\phi\sim 0.72$ at $T_{eff}=1$ (a) and $T_{eff}=4.44$ (b). (c) The displacement correlations for $\phi\sim 0.72$ are shown at various effective temperatures $T_{eff}$ in the main panel. The inset shows the correlation lengths $\xi$ extracted from the correlation functions in the main panel. ## VI Spatially heterogeneous dynamics One of the hallmark features of passive super-cooled liquids and passive glasses is spatially heterogeneous dynamics [43]. This means the mobility of particles differs significantly from one region of the system to another, leading to clusters of slow- and fast-moving particles. The increase in the relaxation time of the system with increasing density is attributed to the growth of length scales of cooperative motion in the system. We analyze the effect of activity on the cooperative motion in our system by computing equal time displacement correlations, $C(r)=\frac{\left<u_{i}u_{j}\right>-\left<u_{i}\right>\left<u_{j}\right>}{\left<u^{2}\right>},$ (5) where $u$ is the displacement of a particle, $i$ and $j$ denote particle indices, and the angular brackets denote averaging over all particles and several instances. The correlations $C(r)$ are dynamic: they depend on the time scale of observation used for determining the displacements [44, 45]. 
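Eq. (5) can be estimated by binning the products of displacements by pair separation. The sketch below treats the displacements as vectors and correlates their dot products, one common convention; the function name and binning are ours:

```python
import numpy as np

def displacement_correlation(pos, disp, box, bins):
    """Spatial displacement correlation in the spirit of Eq. (5),
    binned by pair separation r under periodic boundaries."""
    u = disp - disp.mean(axis=0)            # remove drift so that <u> = 0
    norm = (u ** 2).sum(axis=1).mean()      # <u^2>
    num = np.zeros(len(bins) - 1)
    cnt = np.zeros(len(bins) - 1)
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)        # minimum image
        r = np.hypot(d[:, 0], d[:, 1])
        idx = np.digitize(r, bins) - 1
        ok = (idx >= 0) & (idx < len(num))
        np.add.at(num, idx[ok], u[i + 1:][ok] @ u[i])  # u_i . u_j per pair
        np.add.at(cnt, idx[ok], 1)
    return num / np.maximum(cnt, 1) / norm
```

For statistically independent displacements this estimator fluctuates around zero at all separations, while correlated motion shows up as a positive $C(r)$ decaying with distance.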
Simulations and experiments have shown that the dynamic correlations are optimal when the time scale of observation equals the relaxation time $\tau_{\alpha}$. So, the relaxation time $\tau_{\alpha}$ obtained from the intermediate scattering function $F_{s}(k,t)$ (Eq. 1) is used to determine the displacements $u_{i}$. We first discuss the effect of activity on the displacement correlations in the denser system, at $\phi\sim 0.72$. Figs. 5(a) and 5(b) show snapshots of the displacement vectors at $T_{eff}=1.0\textnormal{ and }4.44$, respectively. The displacement vectors suggest that the motion of particles is correlated over longer distances at $T_{eff}=1$ when compared to the displacements at $T_{eff}=4.44$. To quantify these observations, we compute their correlations $C(r)$ and extract a length scale $\xi$ that characterizes the distance over which the motion is correlated. These results are presented in Fig. 5(c) for $\phi\sim 0.72$ and various effective temperatures ranging from $T_{eff}=1-4.44$. The correlations $C(r)$ are long-ranged at $T_{eff}=1$, revealing a power-law decay extending to $20-30\sigma$. This also confirms the onset of slow dynamics in the system at higher densities. As the activity increases, the correlations decay faster, displaying an exponential decay at $T_{eff}=4.44$. We extract a length scale of the correlated motion by fitting the correlations using a function of the form $C(r)\sim(1/r)~{}\textnormal{exp}(-r/\xi)$, where $\xi$ is the correlation length. The best-fit method yields length scales $\xi$ that diminish with increasing activity, as shown in the inset of Fig. 5c. Clearly, activity has a strong effect on the length scales of cooperative motion. These observations are similar to the effect of activity on the relaxation time of the system presented in Fig. 2a. Figure 6: Displacement of particles and their spatial correlations at $\phi\sim 0.57$. 
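Since the fitted form $C(r)\sim(1/r)\,\textnormal{exp}(-r/\xi)$ makes $\ln(rC)$ linear in $r$, the correlation length can be read off a least-squares slope; a minimal sketch (the function name is ours):

```python
import numpy as np

def fit_correlation_length(r, c):
    """Fit C(r) = (A/r) * exp(-r/xi): ln(r*C) is linear in r with
    slope -1/xi, so a least-squares line yields xi."""
    mask = c > 0  # the logarithm requires positive C values
    slope, _ = np.polyfit(r[mask], np.log(r[mask] * c[mask]), 1)
    return -1.0 / slope
```

Applied to synthetic data generated with a known $\xi$, the fit recovers the input value, which is a useful check before extracting $\xi$ from noisy experimental correlations.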
(a) The displacements are computed over a time scale $\tau_{\alpha}$ at $T_{eff}=1$. (b) The displacement correlations $C(r)$ are shown at various effective temperatures $T_{eff}$ in the main panel. The inset shows the correlation lengths $\xi$ extracted from the correlation functions in the main panel. We next focus on the system at a lower area fraction of $\phi\sim 0.57$. A snapshot of the displacement vectors of the particles is shown in Fig. 6a. Note that the displacements are computed over a time window corresponding to $\tau_{\alpha}$ at $T_{eff}=1$. The motion of the particles appears to be correlated over shorter distances when compared to the displacement vectors in Fig. 5a. The correlations $C(r)$ in the main panel of Fig. 6b indeed display an exponential decay of the form $C(r)\sim\textnormal{exp}(-r/\xi)$. We also observe that increasing the activity does not lead to significant changes in the correlation length. This is consistent with the observation made in Fig. 2a, where the activity had a small effect on the relaxation time of the system at $\phi\sim 0.57$. The correlation lengths extracted from the best-fit method are shown in the inset of Fig. 6b. The length scales are considerably smaller than in the high-density system. The analysis presented in this section shows that activity has similar effects on the relaxation time scale and the cooperative length scale: both decrease with increasing activity over the range considered in our study. ## VII Cage size and persistence length of active particles Recent simulations [29, 36] have pointed out the critical roles of the cage length and the persistence length of active particles. The authors report an enhancement of diffusivity due to activity when the persistence length, $l_{p}$, is smaller than the cage size, $l_{c}$, while it is suppressed when $l_{p}>l_{c}$. The central notion is that a particle is efficient in scanning and escaping the cage formed by its neighbors when $l_{p}<l_{c}$. 
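One common way to estimate the cage size is as the $g(r)$-weighted mean neighbor distance; the sketch below restricts the weighting to the first coordination shell (up to the first minimum of $g(r)$ after the main peak), which is our reading of that definition:

```python
import numpy as np

def cage_length(r, g):
    """Cage size l_c: g(r)-weighted mean distance over the first
    coordination shell, i.e. up to the first local minimum of g(r)
    after the main peak (one plausible reading of the definition)."""
    peak = int(np.argmax(g))
    i = peak
    while i + 1 < len(g) and g[i + 1] < g[i]:  # walk down to the first minimum
        i += 1
    shell = slice(0, i + 1)
    return float(np.sum(r[shell] * g[shell]) / np.sum(g[shell]))
```

For a $g(r)$ with a single sharp first peak, the estimate coincides with the peak position, as expected of a nearest-neighbor distance.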
In the other limit ($l_{p}\gg l_{c}$), the particle sticks to the edge of the cage, taking a longer time to escape. We test these ideas in our experiments by determining $l_{p}$ and $l_{c}$. Following the procedure outlined in simulations [29, 36], $l_{c}$ is the average distance of a particle from its nearest neighbors, weighted by the pair correlation function $g(r)$. The other length scale, $l_{p}$, the persistence length of the active motion, is obtained in the dilute limit from the mean square displacement data; see the supplementary information for more details. Fig. 7 shows the ratio $l_{p}/l_{c}$ as the activity of the system is varied at an area fraction $\phi\sim 0.72$. We clearly see that this ratio is of order unity, thus underlining the importance of the cage size and persistence length of active particles. Figure 7: The normalised persistence length ($l_{p}/l_{c}$) over a range of activities represented by the effective temperature at $\phi\sim 0.72$. The dotted line is a guide to the eye. Increasing the effective temperature beyond $T_{eff}=4.44$ leads to destabilisation of the monolayer: the particles, especially the small species in our binary mixture, do not sediment. This prevents us from studying interesting features of active solids that manifest at large activities [46, 47]. This can be overcome using larger colloidal particles, and those measurements are in progress. ## VIII Conclusions In summary, we present a novel experimental setup to study the onset of slow dynamics in dense active suspensions containing photoactive Janus colloids. The variation of the relaxation time scale with increasing density in Fig. 1c and Fig. 2b points to a quasi-hard-particle nature of the active colloids in our system. Moreover, these results reveal a monotonic effect of activity on the relaxation of the system across the range of effective temperatures explored in our experiments. 
Increasing activity leads to faster relaxation at both high and low particle densities. This feature contrasts with the results reported in reference [33], where the ergodic and non-ergodic states of the system displayed different relaxation features. The variation of the relaxation time scale ($\tau_{\alpha}$) with the density of our system is well described by the VFT relation, and the fits indicate that the critical density ($\phi_{c}$) for the glass transition is pushed to higher densities with increasing activity. These results agree with those found in simulations [23]. The active system exhibits several features of passive thermal systems, such as non-exponential relaxation, cooperative motion, and dynamic arrest. The effect of activity on the structural ordering of the system is analyzed using two-point density correlations $g(r)$ and the local hexagonal ordering $\psi_{6}$. The peaks of $g(r)$ move to the right with increasing activity, while the local hexagonal ordering $\psi_{6}$ decreases. Although these changes are small, they point to diminishing structural order due to activity. Furthermore, the displacement correlations reveal a correlation length that decreases with increasing activity. All these observations suggest an enhancement of relaxation due to activity. Recent reports based on simulations and theory [46, 48] suggest a similar activity-driven enhancement of relaxation; however, the dynamic heterogeneity characterized using four-point correlations revealed surprising effects of activity: the relaxation was found to be decoupled from the dynamic heterogeneity. Furthermore, simulations [47] also reveal a non-linear dispersion relation in active systems. Our experimental system motivates further investigations along these directions. ## References * Debenedetti and Stillinger [2001] P. G. Debenedetti and F. H. Stillinger, Supercooled liquids and the glass transition, Nature 410, 259 (2001). * Berthier and Biroli [2011] L. Berthier and G. 
Biroli, Theoretical perspective on the glass transition and amorphous materials, Reviews of modern physics 83, 587 (2011). * Pusey [1991] P. Pusey, Colloidal suspensions, in _Liquids, Freezing and Glass Transition : Les Houches Session LI, 3-28 July, 1989_ , edited by J. Z.-J. J.P. Hansen, D. Levesque (North-Holland, Amsterdam, 1991) pp. 199–269. * Hunter and Weeks [2012] G. L. Hunter and E. R. Weeks, The physics of the colloidal glass transition, Reports on progress in physics 75, 066501 (2012). * Weeks _et al._ [2000] E. R. Weeks, J. C. Crocker, A. C. Levitt, A. Schofield, and D. A. Weitz, Three-dimensional direct imaging of structural relaxation near the colloidal glass transition, Science 287, 627 (2000). * Pusey and Van Megen [1986] P. N. Pusey and W. Van Megen, Phase behaviour of concentrated suspensions of nearly hard colloidal spheres, Nature 320, 340 (1986). * Marchetti _et al._ [2013] M. C. Marchetti, J.-F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Reviews of modern physics 85, 1143 (2013). * Schoetz _et al._ [2013] E.-M. Schoetz, M. Lanio, J. A. Talbot, and M. L. Manning, Glassy dynamics in three-dimensional embryonic tissues, Journal of The Royal Society Interface 10, 20130726 (2013). * Tambe _et al._ [2011] D. T. Tambe, C. Corey Hardin, T. E. Angelini, K. Rajendran, C. Y. Park, X. Serra-Picamal, E. H. Zhou, M. H. Zaman, J. P. Butler, D. A. Weitz, _et al._ , Collective cell guidance by cooperative intercellular forces, Nature materials 10, 469 (2011). * Oswald _et al._ [2017] L. Oswald, S. Grosser, D. M. Smith, and J. A. Käs, Jamming transitions in cancer, Journal of physics D: Applied physics 50, 483001 (2017). * Grosser _et al._ [2021] S. Grosser, J. Lippoldt, L. Oswald, M. Merkel, D. M. Sussman, F. Renner, P. Gottheil, E. W. Morawetz, T. Fuhs, X. Xie, _et al._ , Cell and nucleus shape as an indicator of tissue fluidity in carcinoma, Physical Review X 11, 011033 (2021). 
* Parry _et al._ [2014] B. R. Parry, I. V. Surovtsev, M. T. Cabeen, C. S. O’Hern, E. R. Dufresne, and C. Jacobs-Wagner, The bacterial cytoplasm has glass-like properties and is fluidized by metabolic activity, Cell 156, 183 (2014). * Berthier and Kurchan [2013] L. Berthier and J. Kurchan, Non-equilibrium glass transitions in driven and active matter, Nature Physics 9, 310 (2013). * Buttinoni _et al._ [2012] I. Buttinoni, G. Volpe, F. Kümmel, G. Volpe, and C. Bechinger, Active brownian motion tunable by light, Journal of Physics: Condensed Matter 24, 284129 (2012). * Janssen [2019] L. M. Janssen, Active glasses, Journal of Physics: Condensed Matter 31, 503002 (2019). * Fily and Marchetti [2012] Y. Fily and M. C. Marchetti, Athermal phase separation of self-propelled particles with no alignment, Physical review letters 108, 235702 (2012). * Romanczuk _et al._ [2012] P. Romanczuk, M. Bär, W. Ebeling, B. Lindner, and L. Schimansky-Geier, Active brownian particles: From individual to collective stochastic dynamics, The European Physical Journal Special Topics 202, 1 (2012). * Szamel [2014] G. Szamel, Self-propelled particle in an external potential: Existence of an effective temperature, Physical Review E 90, 012111 (2014). * Maggi _et al._ [2015] C. Maggi, U. M. B. Marconi, N. Gnan, and R. Di Leonardo, Multidimensional stationary probability distribution for interacting active particles, Scientific reports 5, 10742 (2015). * Howse _et al._ [2007] J. R. Howse, R. A. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, and R. Golestanian, Self-motile colloidal particles: from directed propulsion to random walk, Physical review letters 99, 048102 (2007). * Theurkauff _et al._ [2012] I. Theurkauff, C. Cottin-Bizonne, J. Palacci, C. Ybert, and L. Bocquet, Dynamic clustering in active colloidal suspensions with chemical signaling, Phys. Rev. Lett. 108, 268303 (2012). * Bechinger _et al._ [2016] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. 
Volpe, Active particles in complex and crowded environments, Reviews of Modern Physics 88, 045006 (2016). * Ni _et al._ [2013] R. Ni, M. A. C. Stuart, and M. Dijkstra, Pushing the glass transition towards random close packing using self-propelled hard spheres, Nature communications 4, 2704 (2013). * Berthier [2014] L. Berthier, Nonequilibrium glassy dynamics of self-propelled hard disks, Physical review letters 112, 220602 (2014). * Szamel _et al._ [2015] G. Szamel, E. Flenner, and L. Berthier, Glassy dynamics of athermal self-propelled particles: Computer simulations and a nonequilibrium microscopic theory, Physical Review E 91, 062304 (2015). * Mandal _et al._ [2016] R. Mandal, P. J. Bhuyan, M. Rao, and C. Dasgupta, Active fluidization in dense glassy systems, Soft Matter 12, 6268 (2016). * Flenner _et al._ [2016] E. Flenner, G. Szamel, and L. Berthier, The nonequilibrium glassy dynamics of self-propelled particles, Soft matter 12, 7136 (2016). * Berthier _et al._ [2017] L. Berthier, E. Flenner, and G. Szamel, How active forces influence nonequilibrium glass transitions, New Journal of Physics 19, 125006 (2017). * Debets _et al._ [2021] V. E. Debets, X. M. De Wit, and L. M. Janssen, Cage length controls the nonmonotonic dynamics of active glassy matter, Physical Review Letters 127, 278002 (2021). * Mandal and Sollich [2020] R. Mandal and P. Sollich, Multiple types of aging in active glasses, Physical Review Letters 125, 218001 (2020). * Janzen and Janssen [2022] G. Janzen and L. M. Janssen, Aging in thermal active glasses, Physical Review Research 4, L012038 (2022). * Arora _et al._ [2022] P. Arora, A. Sood, and R. Ganapathy, Motile topological defects hinder dynamical arrest in dense liquids of active ellipsoids, Physical Review Letters 128, 178002 (2022). * Klongvessa _et al._ [2019a] N. Klongvessa, F. Ginot, C. Ybert, C. Cottin-Bizonne, and M. 
Leocmach, Active glass: Ergodicity breaking dramatically affects response to self-propulsion, Physical review letters 123, 248004 (2019a). * Klongvessa _et al._ [2019b] N. Klongvessa, F. Ginot, C. Ybert, C. Cottin-Bizonne, and M. Leocmach, Nonmonotonic behavior in dense assemblies of active colloids, Physical Review E 100, 062603 (2019b). * Philippe _et al._ [2018] A.-M. Philippe, D. Truzzolillo, J. Galvan-Myoshi, P. Dieudonné-George, V. Trappe, L. Berthier, and L. Cipelletti, Glass transition of soft colloids, Physical Review E 97, 040601 (2018). * Debets and Janssen [2022] V. E. Debets and L. M. Janssen, Influence of particle softness on active glassy dynamics, Physical Review Research 4, L042033 (2022). * Singh _et al._ [2017] D. P. Singh, U. Choudhury, P. Fischer, and A. G. Mark, Non-equilibrium assembly of light-activated colloidal mixtures, Advanced Materials 29, 1701328 (2017). * Hong _et al._ [2010] Y. Hong, M. Diaz, U. M. Córdova-Figueroa, and A. Sen, Light-driven titanium-dioxide-based reversible microfireworks and micromotor/micropump systems, Advanced Functional Materials 20, 1568 (2010). * Vivek _et al._ [2017] S. Vivek, C. P. Kelleher, P. M. Chaikin, and E. R. Weeks, Long-wavelength fluctuations and the glass transition in two dimensions and three dimensions, Proceedings of the National Academy of Sciences 114, 1850 (2017). * Illing _et al._ [2017] B. Illing, S. Fritschi, H. Kaiser, C. L. Klix, G. Maret, and P. Keim, Mermin–wagner fluctuations in 2d amorphous solids, Proceedings of the National Academy of Sciences 114, 1856 (2017). * Desmond and Weeks [2009] K. W. Desmond and E. R. Weeks, Random close packing of disks and spheres in confined geometries, Phys. Rev. E 80, 051305 (2009). * Nelson and Halperin [1979] D. R. Nelson and B. Halperin, Dislocation-mediated melting in two dimensions, Physical Review B 19, 2457 (1979). * Ediger [2000] M. D. 
Ediger, Spatially heterogeneous dynamics in supercooled liquids, Annual review of physical chemistry 51, 99 (2000). * Dasgupta _et al._ [1991] C. Dasgupta, A. Indrani, S. Ramaswamy, and M. Phani, Is there a growing correlation length near the glass transition?, Europhysics Letters 15, 307 (1991). * Berthier _et al._ [2011] L. Berthier, G. Biroli, J.-P. Bouchaud, and R. L. Jack, _Overview of different characterisations of dynamic heterogeneity_ , Vol. 150 (Oxford University Press Oxford, 2011) p. 68. * Paul _et al._ [2023] K. Paul, A. Mutneja, S. K. Nandi, and S. Karmakar, Dynamical heterogeneity in active glasses is inherently different from its equilibrium behavior, Proceedings of the National Academy of Sciences 120, e2217073120 (2023). * Dey _et al._ [2024] S. Dey, A. Bhattacharya, and S. Karmakar, Enhanced long wavelength mermin-wagner fluctuations in two-dimensional active crystals and glasses, arXiv preprint arXiv:2402.10625 (2024). * Nandi _et al._ [2018] S. K. Nandi, R. Mandal, P. J. Bhuyan, C. Dasgupta, M. Rao, and N. S. Gov, A random first-order transition theory for an active glass, Proceedings of the National Academy of Sciences 115, 7688 (2018).
# Central Limit Theorem of Overlap for the Mean Field Ghatak-Sherrington model Yueqi Sheng School of Engineering & Applied Sciences, Harvard University, Cambridge, Massachusetts, USA. Email: <EMAIL_ADDRESS> Qiang Wu School of Mathematics, University of Minnesota, MN, USA. Email: <EMAIL_ADDRESS> ###### Abstract The Ghatak-Sherrington (GS) spin glass model is a random probability measure defined on the configuration space $\\{0,\pm 1,\pm 2,\ldots,\pm\mathcal{S}\\}^{N}$ with system size $N$ and $\mathcal{S}\geqslant 1$ finite. This generalizes the classical Sherrington-Kirkpatrick (SK) model on the boolean cube $\\{-1,+1\\}^{N}$ in order to capture more complex behaviors, including the spontaneous inverse freezing phenomenon. Although many results on the physics side have been established to understand the GS model, mathematical exploration of the model remains scarce. The overlap, the normalized inner product of two configurations, acts as the system order parameter for understanding the phase transition of mean-field spin glass models. In this paper, we use the moment method combined with the cavity approach to rigorously establish a quantitative joint central limit theorem for the overlap and self-overlap array. The results hold at high temperature under arbitrary crystal and external fields. Compared to the SK model, the main challenge comes from the non-trivial self-overlap terms, which are also correlated with the standard overlap terms. ###### Contents 1. 1 Introduction 1. 1.1 Setting of the model 2. 1.2 Relation to prior works 3. 1.3 Notations 4. 1.4 Main result 5. 1.5 Proof outline 6. 1.6 Organization of the paper 7. 1.7 Acknowledgement 2. 2 Cavity method and second moment estimates 1. 2.1 Preliminaries 2. 2.2 Variance of overlaps and self-overlaps 1. 2.2.1 Variance of $T_{1,2}$ 2. 2.2.2 Variance of $T_{1}$ and $S_{1}$ 3. 2.2.3 Covariance: $S_{1}T_{1}$ term 3. 3 General moments computation 1. 3.1 Induction on $T_{k,l}$ 2. 3.2 Recursive relation for correlated "basis" 1. 
3.2.1 Induction on $T_{k}$ and $S_{k}$ 2. 3.2.2 Proof of Lemma 3.6 3. 3.2.3 Induction on $T$ and $S$ 4. 3.2.4 Proof of Lemma 3.14 3. 3.3 Proof of Lemma 3.1 4. 4 Appendix 1. 4.1 Proof of Lemma 3.3 2. 4.2 Proof of Lemma 3.9, 3.10 3. 4.3 Proof of Lemma 3.18, 3.19 ## 1 Introduction Mean field spin glass theory has undergone a flourishing development in the last 20 years; in particular, it culminated in the celebrated proofs of the Parisi formula by Talagrand [Tal06] and Panchenko [Pan14]. Since then, many rigorous results for mean field spin glass systems have been established. Among those, the most notable models are the Sherrington-Kirkpatrick model and its $p$-spin variants. These models take the standard $N$-dimensional boolean cube $\\{-1,+1\\}^{N}$ as the underlying configuration space. An extensive list of these exciting developments can be found in [CMP+23]. However, the binary spin value configuration is rather restrictive. There are more realistic but also more complicated spin glass models that are much less understood, where the spin values go beyond binary and can even be general vectors in Euclidean space. Some examples include the Ghatak-Sherrington model, the Potts spin glass, the XY-spin glass, etc. In this work, we take one step forward to study the mean-field Ghatak-Sherrington model. In contrast to the classical SK model and its $p$-spin variants, where the spin value is restricted to be binary, the Ghatak-Sherrington model, whose configuration space is a general hypercube, was first introduced in [GS77] to study the so-called inverse freezing phenomenon. More specifically, this inverse freezing phenomenon predicts that at a low enough temperature, there is another replica symmetric regime. This is in sharp contrast to the binary spin-valued models, such as SK and its $p$-spin variants, which in the low-temperature regime are widely believed to exhibit replica symmetry breaking only. 
One major reason for the inverse freezing phenomenon is the crystal field effect (see the Hamiltonian definition in (1.2)). In the SK case, the crystal field term trivially reduces to a constant proportional to the system size $N$. There are many results [DCdA00, dCYS94, KH99, LDA82, MS85, Leu07] due to physicists aimed at understanding this phenomenon. However, to the best of our knowledge, the number of mathematically rigorous results concerning the GS model is quite limited. In [Pan05], Panchenko first proved a variational formula for the limiting free energy by generalizing Talagrand's method to the GS model, and later obtained it in [Pan18] via a different approach. Recently, Auffinger and Chen [AC21] used the cavity method to establish the Thouless-Anderson-Palmer equation for the local magnetization. In this work, we focus on investigating the behavior of the overlap, which acts as the system order parameter. The asymptotics of the overlap contain rich information about the Gibbs measure. In particular, it can be used to understand the inverse freezing phenomenon. The main contribution of this work is proving a joint central limit theorem for the overlap in the high-temperature regime. Our approach is based on moment computations and the cavity idea. Let us first introduce the concrete setting for the mean-field Ghatak-Sherrington model. 
### 1.1 Setting of the model The Ghatak-Sherrington (GS) model is defined as follows: for each configuration $\displaystyle\bm{\sigma}=(\sigma_{1},\sigma_{2},\cdots,\sigma_{N})\in\Sigma_{N,\mathcal{S}}:=\\{0,\pm 1,\cdots,\pm\mathcal{S}\\}^{N},$ (1.1) where $\mathcal{S}\geqslant 1$, the Hamiltonian of the GS model is given by $\displaystyle H_{N}(\bm{\sigma})=\frac{\beta}{\sqrt{N}}\sum_{i<j}g_{i,j}\sigma_{i}\sigma_{j}+D\sum_{i=1}^{N}\sigma_{i}^{2}+h\sum_{i=1}^{N}\sigma_{i},$ (1.2) where $\beta>0$ is the inverse temperature, and $h\geqslant 0$ and $D\in\mathbb{R}$ represent the external and crystal fields respectively. One of the fundamental questions in statistical physics is to compute the limiting free energy $F_{N}(\beta,h,D):=\frac{1}{N}\log Z_{N}(\beta,h,D),$ as the volume size $N\to\infty$, where the partition function of the GS model is given by $Z_{N}(\beta,h,D):=\sum_{\bm{\sigma}\in\Sigma_{N,\mathcal{S}}}\exp(H_{N}(\bm{\sigma})).$ The associated GS Gibbs measure is $\displaystyle dG_{\beta,h,D}(\bm{\sigma})=\frac{\exp(H_{N}(\bm{\sigma}))}{Z_{N}(\beta,h,D)}\cdot d\bm{\sigma},$ (1.3) where $d\bm{\sigma}$ is the uniform reference measure on $\Sigma_{N,\mathcal{S}}$. In the following context, we will suppress the dependence on $\beta,h,D$ for the above objects unless it causes confusion. Furthermore, we will use the angular bracket $\langle\cdot\rangle$ to denote the expectation w.r.t. the Gibbs measure $dG(\bm{\sigma})$. 
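For concreteness, the Hamiltonian (1.2) and, for very small $N$, the partition function $Z_{N}$ can be evaluated directly. The sketch below is illustrative only (function names are ours); the disorder matrix `g`, symmetric with i.i.d. standard Gaussian entries above the diagonal, is an input:

```python
import numpy as np
from itertools import product

def gs_hamiltonian(sigma, g, beta, D, h):
    """Hamiltonian (1.2): pair interactions over i < j plus crystal
    field (D) and external field (h) terms."""
    N = len(sigma)
    iu = np.triu_indices(N, k=1)
    inter = (beta / np.sqrt(N)) * np.sum(g[iu] * sigma[iu[0]] * sigma[iu[1]])
    return inter + D * np.sum(sigma ** 2) + h * np.sum(sigma)

def partition_function(g, beta, D, h, S=1):
    """Brute-force Z_N by enumerating all (2S+1)^N configurations
    (feasible only for tiny N)."""
    N = g.shape[0]
    return sum(np.exp(gs_hamiltonian(np.array(s, dtype=float), g, beta, D, h))
               for s in product(range(-S, S + 1), repeat=N))
```

For $N=1$ and $S=1$ the enumeration reduces to $Z_{1}=1+2e^{D}\cosh h$, a handy closed-form check.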
Another important object in mean field spin glass theory, acting as the order parameter of the system, is the overlap between two configurations $\bm{\sigma}^{1},\bm{\sigma}^{2}\in\Sigma_{N,\mathcal{S}}$, $\displaystyle R_{1,2}=\frac{1}{N}\sum_{i=1}^{N}\sigma^{1}_{i}\sigma^{2}_{i}.$ (1.4) If $\bm{\sigma}^{1}=\bm{\sigma}^{2}$, the overlap becomes the self-overlap, defined as $R_{1,1}=\frac{1}{N}\sum_{i=1}^{N}(\sigma^{1}_{i})^{2}.$ Compared to the Sherrington-Kirkpatrick (SK) model, where the configuration space is $\\{-1,+1\\}^{N}$, the self-overlap no longer trivially reduces to 1. The self-overlap in the GS model not only acts as an extra term but also correlates with the usual overlap. This makes the analysis more challenging. Before presenting our main results, let us briefly discuss some existing works addressing the fluctuation problem. ### 1.2 Relation to prior works In this section, we review existing fluctuation results on the overlap in mean field spin glass theory and related applications to other problems of interest. For the classical SK model, a central limit theorem for the overlap in zero external field was first proved in [CN95] via a stochastic calculus approach. In the presence of a positive external field, the central limit theorem for the array of overlaps was proved in [Tal11, GT02] using the moment method combined with cavity method computations. Proving a CLT for the overlap is not only interesting in itself; since the overlap acts as the order parameter of the system, it also has many further implications. For instance, in the proof of Hanen's theorem [Han07] for the limiting law of spin covariances, the moment estimates for the overlap arrays play an important role. Another application concerns the spin covariance matrix: this question was recently studied in [AG23], where the CLT of the overlap for the SK model was crucially used while deriving a sharp upper bound for the operator norm of the spin covariance matrix. 
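The overlap (1.4) is a one-line computation; the toy check below illustrates the point above: for $\pm 1$ spins the self-overlap is identically 1, while for GS spins it is not (all configurations below are arbitrary examples):

```python
import numpy as np

def overlap(s1, s2):
    """Overlap R_{1,2} of Eq. (1.4); with s1 = s2 this is the self-overlap."""
    return float(np.dot(s1, s2)) / len(s1)

sk = np.array([1, -1, -1, 1])  # SK-type binary spins
gs = np.array([0, 2, -1, 1])   # GS spins with S = 2
assert overlap(sk, sk) == 1.0  # self-overlap is trivially 1 for binary spins
assert overlap(gs, gs) == 1.5  # non-trivial self-overlap in the GS model
```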
Investigating the overlap in the low-temperature regime is a highly challenging open problem for Ising spin glass models. In the spherical SK model, thanks to a nice contour integral representation of the partition function, the fluctuation results for the overlap are well understood in the near-critical temperature [NS19] and low-temperature [LS22] regimes. Moreover, a recent result [CCM23] proved a central limit theorem for the overlap in the Ising SK model on the so-called Nishimori line. On the other hand, establishing fluctuation results for the overlap in other generalized spin glass models is often quite challenging, even in the high-temperature regime. In [DW21], for the multi-species SK model, some second-moment computations were done to obtain the variance-covariance matrix of the overlap array. However, the general moment computation involves many matrix operations and can be highly technical, if not intractable. Besides the classical SK-type models, central limit theorems for the overlap in various regimes of the Hopfield model [Hop82, Tal98] were also established in [GL99, Gen96a, Gen96b] by Gentz et al. In both the Hopfield and SK models, the spin values are restricted to be binary. The goal of this work is to extend the fluctuation results to the non-binary spin setting. ### 1.3 Notations * • We denote $\langle\cdot\rangle$ as the Gibbs average and $\nu(\cdot):=\mathbb{E}[\langle\cdot\rangle]$, where $\mathbb{E}$ denotes the average w.r.t. the disorder. * • Let $n$ be the number of replicas, $N$ the number of spins (or the system size), and $\mathcal{S}:=\|\bm{\sigma}\|_{\infty}$ the largest value a spin can take. * • For $k,l\in[n]$, denote $R_{k,l}$ as the overlap of the configurations $\bm{\sigma}^{k},\bm{\sigma}^{l}\in\Sigma_{N,\mathcal{S}}$ (setting $k=l$ gives the self-overlap $R_{k,k}$). We use $Q_{k,l}$ to denote the value at which the overlap/self-overlap concentrates. 
The specific values of $Q_{k,l}$ are given in Proposition 1.1. * • We use $b:=\langle\sigma_{1}\rangle$ and $\tilde{b}:=\langle\sigma_{1}^{2}\rangle$ to denote the first and second moments of a single spin under the quenched Gibbs measure, i.e., for fixed disorder. * • We use $\varepsilon_{l}$ to denote the last spin of $\bm{\sigma}^{l}$ and $\varepsilon_{k,l}:=\varepsilon_{k}\varepsilon_{l}$. Moreover, $R^{-}_{k,l}=R_{k,l}-\frac{1}{N}\varepsilon_{k,l}$ is the overlap without the contribution from the last spin. * • For a positive integer $n$, denote $[n]=\\{1,2,\cdots,n\\}$ as the set of all positive integers up to $n$. Let $\mathcal{C}_{n}:=\\{(k,l):k,l\in[n],k\leqslant l\\}$ be the set of all replica pairs contained in $[n]$. * • We say a term is of order $H$, and denote it by $O(H)$, if it is asymptotically of order $N^{-H/2}$. ### 1.4 Main result To prepare for the main result, a joint central limit theorem for the array of overlaps and self-overlaps of the GS model, we begin by introducing some necessary notation and existing results. In the high-temperature regime, the GS model is expected to be replica symmetric in the sense that the overlap and self-overlap concentrate at fixed points. These concentration results first appeared in [AC21]; we recall them here. ###### Proposition 1.1 ([AC21, Proposition 2]). 
There exists $\tilde{\beta}>0$ such that for $\beta\in[0,\tilde{\beta})$, $h\geqslant 0$ and $D\in\mathbb{R}$, we have $\mathbb{E}\langle(R_{1,2}-q)^{2}\rangle\leqslant\frac{16\mathcal{S}^{2}}{N},$ $\mathbb{E}\langle(R_{1,1}-p)^{2}\rangle\leqslant\frac{16\mathcal{S}^{4}}{N},$ where $\displaystyle p=\mathbb{E}\left[\frac{\sum_{\gamma=1}^{\mathcal{S}}\gamma^{2}\cdot 2\cosh[\gamma(\sqrt{q}\beta\eta+h)]\exp(\gamma^{2}[D+\frac{\beta^{2}}{2}(p-q)])}{1+\sum_{\gamma=1}^{\mathcal{S}}\gamma^{2}\cdot 2\cosh[\gamma(\sqrt{q}\beta\eta+h)]\exp(\gamma^{2}[D+\frac{\beta^{2}}{2}(p-q)])}\right],$ (1.5) $\displaystyle q=\mathbb{E}\left[\frac{\sum_{\gamma=1}^{\mathcal{S}}\gamma^{2}\cdot 2\sinh[\gamma(\sqrt{q}\beta\eta+h)]\exp(\gamma^{2}[D+\frac{\beta^{2}}{2}(p-q)])}{1+\sum_{\gamma=1}^{\mathcal{S}}\gamma^{2}\cdot 2\cosh[\gamma(\sqrt{q}\beta\eta+h)]\exp(\gamma^{2}[D+\frac{\beta^{2}}{2}(p-q)])}\right]^{2},$ (1.6) where $\eta\sim N(0,1)$. Note that in the high-temperature regime, the solution to the fixed point equations (1.5) and (1.6) was also shown to be unique in [AC21]. In this paper, for convenience, we will use the following notation for the values at which the overlap and self-overlap concentrate. ###### Definition 1.2. Let $Q_{k,l}:=\begin{cases}q,&\text{if }k\neq l,\\\ p,&\text{if }k=l,\end{cases}$ where $q$ and $p$ are the solutions to (1.6) and (1.5), respectively. Our main result is a quantitative joint central limit theorem for the overlap and self-overlap array among a set of replicas $[n]$, i.e., $\\{R_{k,l}\\}_{(k,l)\in\mathcal{C}_{n}}$; see $\mathcal{C}_{n}$ in Section 1.3. 
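At high temperature, the fixed point $(p,q)$ can be approximated by direct iteration of (1.5)-(1.6), with the Gaussian average over $\eta$ computed by Gauss-Hermite quadrature. The sketch below is ours, not from the paper; it places the square in (1.6) inside the Gaussian average (the standard replica-symmetric convention), and convergence is only expected for small $\beta$:

```python
import numpy as np

def solve_pq(beta, h, D, S=1, iters=300):
    """Iterate the fixed-point equations (1.5)-(1.6) for (p, q).
    The square in (1.6) is taken inside the Gaussian average (an
    assumption); 41-point Gauss-Hermite quadrature approximates the
    expectation over eta ~ N(0, 1)."""
    x, w = np.polynomial.hermite_e.hermegauss(41)
    w = w / w.sum()                                   # normalize to a probability
    gam = np.arange(1, S + 1, dtype=float)[:, None]   # spin magnitudes 1..S
    p, q = 0.5, 0.1
    for _ in range(iters):
        boltz = 2.0 * np.exp(gam ** 2 * (D + 0.5 * beta ** 2 * (p - q)))
        arg = gam * (np.sqrt(max(q, 0.0)) * beta * x[None, :] + h)
        denom = 1.0 + (gam ** 2 * boltz * np.cosh(arg)).sum(axis=0)
        m2 = (gam ** 2 * boltz * np.cosh(arg)).sum(axis=0) / denom
        m1 = (gam ** 2 * boltz * np.sinh(arg)).sum(axis=0) / denom
        p, q = float(np.sum(w * m2)), float(np.sum(w * m1 ** 2))
    return p, q
```

Since $|\sinh|\leqslant\cosh$ and the denominator exceeds the numerator, the iterates automatically satisfy $0<q<p<1$, consistent with the roles of $q$ and $p$ as overlap and self-overlap limits.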
We show that the overlap and self-overlap array behaves like a family of correlated Gaussians asymptotically as $N\to\infty$. We achieve this by computing all the moments of the centered (self-)overlaps $\{R_{k,l}-Q_{k,l}:(k,l)\in\mathcal{C}_{n}\}$.

###### Theorem 1.3.

Consider a set of nonnegative integers $\{p(k,l)\in\mathbb{N}:1\leqslant k\leqslant l\leqslant n\}$. Set
$$p=\sum_{1\leqslant k\leqslant l\leqslant n}p(k,l),$$
and let $\{\eta_{k,l}:(k,l)\in\mathcal{C}_{n}\}$ be a family of centered Gaussians with covariances
$$\mathrm{Cov}(\eta_{k,l},\eta_{k^{\prime},l^{\prime}}):=\begin{cases}A_{2}^{2}\delta(|(k,l)\cap(k^{\prime},l^{\prime})|=2)+|(k,l)\cap(k^{\prime},l^{\prime})|A_{1}^{2}+A_{0}^{2},&\text{if }|(k,l)|=|(k^{\prime},l^{\prime})|=2,\\ B_{1}^{2}\delta(|(k,l)\cap(k^{\prime},l^{\prime})|=1)+B_{0}^{2},&\text{if }|(k,l)|=|(k^{\prime},l^{\prime})|=1,\\ C_{1}^{2}\delta(|(k,l)\cap(k^{\prime},l^{\prime})|=1)+C_{0}^{2},&\text{if }|(k,l)|\neq|(k^{\prime},l^{\prime})|,\end{cases}$$
where the constants $A_{2}^{2},A_{1}^{2},B_{1}^{2},C_{1}^{2},A_{0}^{2},B_{0}^{2},C_{0}^{2}$ are defined in Lemma 3.1. There exists $\beta^{\prime}\in(0,\tilde{\beta}]$ such that for $\beta<\beta^{\prime}$, we have
$$N^{\frac{p}{2}}\cdot\nu\left(\prod_{k\leqslant l}(R_{k,l}-Q_{k,l})^{p(k,l)}\right)=\mathbb{E}\left[\prod_{(k,l)\in\mathcal{C}_{n}}\eta_{k,l}^{p(k,l)}\right]+O(N^{-1/2}).$$

Here $(k,l)$ is treated as a multiset, so that $|(k,l)|$ is the number of distinct indices it contains. The structure of the covariance matrix is inherently related to the decomposition of overlaps presented below in (1.7): one first rewrites each centered overlap and self-overlap as a sum over the "basis" $\{T_{k,l},T_{k},S_{k},T,S\}$ as in (1.7) and then expands the product. By Lemma 1.4, the only pairs of basis elements that are not independent are $(T_{k},S_{k})$ for $k\in[n]$ and $(T,S)$. The covariance structure essentially corresponds to the variances and covariances of the "basis" components.
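As an illustration of how this covariance structure emerges, consider two overlaps sharing exactly one replica. The following is a minimal sketch, writing $\eta_{k,l}$ as a sum of basis Gaussians as in Section 1.5, and assuming (as the second-moment computations of Section 2.2 suggest, with the identification made precise in Lemma 3.1) that $A_{2}^{2},A_{1}^{2},A_{0}^{2}$ are the variances of $g_{T_{k,l}},g_{T_{k}},g_{T}$ respectively:

```latex
% Covariance of \eta_{1,2} and \eta_{1,3}: distinct basis Gaussians of
% type T are independent, so only the shared components survive.
\begin{align*}
\mathrm{Cov}(\eta_{1,2},\eta_{1,3})
  &= \mathrm{Cov}\bigl(g_{T_{1,2}}+g_{T_{1}}+g_{T_{2}}+g_{T},\;
                       g_{T_{1,3}}+g_{T_{1}}+g_{T_{3}}+g_{T}\bigr)\\
  &= \mathrm{Var}(g_{T_{1}})+\mathrm{Var}(g_{T})
   = A_{1}^{2}+A_{0}^{2}.
\end{align*}
```

This matches the first case of the covariance formula with $|(1,2)\cap(1,3)|=1$: each shared replica contributes one copy of $A_{1}^{2}$, while $A_{2}^{2}$ appears only when the two index pairs coincide.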
In particular, the case $|(k,l)|=|(k^{\prime},l^{\prime})|=2$, i.e. $k\neq l$ and $k^{\prime}\neq l^{\prime}$, corresponds to the overlap case. Theorem 1.3 says that the moments of the (self-)overlap array are asymptotically equal to the corresponding moments of a family of correlated Gaussians. The key idea of the proof is based on moment computations via the cavity approach, similar to [Tal11, Chapter 1.10] for the SK model. However, in the GS model, the challenging part is to handle the self-overlap and, more importantly, the correlation between overlap and self-overlap. This makes the analysis much more involved than in the SK case.

### 1.5 Proof outline

We briefly sketch the proof idea in this section. The first step is to decompose the (self-)overlaps as sums over a "basis" whose elements are mostly pairwise independent. This allows us to rewrite the moments of the (self-)overlaps as a homogeneous polynomial in the "basis" terms, where each term is a moment of the basis. Our main technical lemma (Lemma 3.1) says that the moments of the basis behave asymptotically like moments of Gaussians. The "correct basis" used to decompose the (self-)overlaps is a generalization of the one for the SK model. For the overlap, with $b:=\langle\sigma_{1}\rangle$, we define the basis components
$$T_{k,l}:=\frac{1}{N}\sum_{i=1}^{N}(\sigma^{k}_{i}-b)(\sigma^{l}_{i}-b),\quad T_{k}:=\frac{1}{N}\sum_{i=1}^{N}(\sigma^{k}_{i}-b)b,\quad T:=b^{2}-q.$$
For the self-overlap, similarly with $\tilde{b}:=\langle\sigma_{1}^{2}\rangle$, the corresponding basis components are
$$S_{l}:=\frac{1}{N}\sum_{i=1}^{N}(\sigma^{l}_{i})^{2}-\tilde{b},\quad\text{and}\quad S:=\tilde{b}-p.$$
It is clear from the definitions that we have the decomposition
$$R_{k,l}-q=T_{k,l}+T_{k}+T_{l}+T,\quad\text{and}\quad R_{l,l}-p=S_{l}+S.\qquad(1.7)$$
The following lemma states that the terms in the above decomposition are nearly pairwise independent under $\nu(\cdot)$.
###### Lemma 1.4.

The random variables $\{\{T_{k,l}\}_{1\leqslant k<l\leqslant n},\{T_{k}\}_{k\leqslant n},T,\{S_{k}\}_{k\leqslant n},S\}$ are pairwise independent under $\nu(\cdot)$, except for the pairs $\{S_{k},T_{k}\}$, $k\leqslant n$, and $\{S,T\}$.

Now, in order to show Theorem 1.3, it suffices to show that the basis $\{T_{k,l},T_{k},S_{k},T,S:k,l\in[n]\}$ is asymptotically Gaussian. This is the statement of our main technical lemma below.

###### Lemma 1.5 (Informal version of Lemma 3.1).

Consider sets of nonnegative integers $\{h(k,l):1\leqslant k<l\leqslant n\}$, $\{h(k):1\leqslant k\leqslant n\}$, $\{h^{\prime}(l):1\leqslant l\leqslant n\}$, and $h,h^{\prime}$. Let
$$H:=\sum_{1\leqslant k<l\leqslant n}h(k,l)+\sum_{1\leqslant k\leqslant n}h(k)+\sum_{1\leqslant l\leqslant n}h^{\prime}(l)+h+h^{\prime}.$$
Then there exists a family of centered Gaussians indexed by all possible "basis" elements such that
$$\nu\left(\prod_{k<l}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}\,T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}\,S^{h^{\prime}}\right)=\mathbb{E}\left[\prod_{k<l}g_{T_{k,l}}^{h(k,l)}\prod_{k}g_{T_{k}}^{h(k)}\prod_{l}g_{S_{l}}^{h^{\prime}(l)}\,g_{T}^{h}\,g_{S}^{h^{\prime}}\right]+O(N^{-\frac{H+1}{2}}).$$
Moreover, for any pair of "basis" elements $(X,Y)$, the corresponding Gaussians $g_{X},g_{Y}$ are independent unless
$$\{X,Y\}\in\{\{S_{k},T_{k}\}:1\leqslant k\leqslant n\}\cup\{\{S,T\}\}.$$

Note that the Gaussians in Lemma 1.5 are independent except for the pairs $\{S_{k},T_{k}\}$ and $\{S,T\}$. It is easy to check that Theorem 1.3 follows from Lemma 1.5 by setting
$$\eta_{k,l}:=\begin{cases}g_{T_{k,l}}+g_{T_{k}}+g_{T_{l}}+g_{T},&\text{if }k\neq l,\\ g_{S_{k}}+g_{S},&\text{if }k=l.\end{cases}$$
The proof of Lemma 1.5 is based on the cavity method, which was used to prove the CLT for the overlap in the classical SK model. The main difference here is that we need to handle the terms corresponding to the self-overlaps, i.e.,
$S_{k},S$, and to characterize the correlation between $T_{k}$ and $S_{k}$. In the rest of this paper, we will focus on the proof of Lemma 1.5.

### 1.6 Organization of the paper

The paper is structured as follows. In Section 2, we first introduce the setup for the cavity method and give some technical preliminaries. The second moment computations for the variance-covariance estimates are carried out in Section 2.2. In Section 3, we generalize the second moment computations of Section 2.2 to general moments of the "basis" $T_{k,l},T_{k},S_{k},T,S$; the results are formally stated in Lemma 3.1. More specifically, the inductive relations for the different "basis" elements are given in Sections 3.1 and 3.2. Some lemmas involving technical but repetitive computations are deferred to the Appendix, Section 4. Finally, as pointed out in Section 1.5, to prove Theorem 1.3 it suffices to prove Lemma 3.1, whose proof is included in Section 3.3.

### 1.7 Acknowledgement

We thank Juspreet Singh Sandhu for discussions at the initial stage of this work.

## 2 Cavity method and second moment estimates

We begin with the idea of the cavity method and show how one can use it to obtain second-moment estimates for the "basis". The cavity method is based on an induction from the model of size $N-1$ to the model of size $N$: we are interested in the effect of this small change on the spin glass system. The cavity idea is formally implemented in the following interpolation scheme.
For $t\in[0,1]$, the interpolated Hamiltonian at time $t$ is given by
\begin{align*}H_{N}^{t}(\bm{\sigma})=H_{N-1}(\bm{\rho})&+\sigma_{N}\left(\sqrt{t}\cdot\frac{\beta}{\sqrt{N}}\sum_{i=1}^{N-1}g_{i,N}\sigma_{i}+\sqrt{1-t}\cdot\beta\eta\sqrt{q}\right)\qquad(2.1)\\&+(1-t)\cdot\frac{\beta^{2}}{2}(p-q)\sigma_{N}^{2}+\mathcal{D}\sigma_{N}^{2}+h\sigma_{N},\qquad(2.2)\end{align*}
where $\bm{\rho}:=(\sigma_{1},\ldots,\sigma_{N-1})$, $\eta\sim N(0,1)$ is independent of the $g_{i,j}$, and
$$H_{N-1}(\bm{\rho}):=\frac{\beta}{\sqrt{N}}\sum_{1\leqslant i<j\leqslant N-1}g_{i,j}\sigma_{i}\sigma_{j}+\mathcal{D}\sum_{i=1}^{N-1}\sigma_{i}^{2}+h\sum_{i=1}^{N-1}\sigma_{i}.$$
At $t=0$, the last spin is decoupled from the original system, which produces the small change heuristically known as the "cavity"; at $t=1$, $H_{N}^{1}(\bm{\sigma})$ is just the original GS Hamiltonian. In the following, we use $\varepsilon_{l}$ to denote the last spin of the $l$-th replica, that is, $\varepsilon_{l}:=\sigma_{N}^{l}$. For a pair of replicas $k,l\in[n]$, we denote the (self-)overlap without the last spin by
$$R^{-}_{k,l}:=R_{k,l}-\frac{1}{N}\varepsilon_{k,l}.$$
In this paper, we use $\langle\cdot\rangle_{t}$ for the Gibbs average at time $t$ and set $\nu_{t}(\cdot):=\mathbb{E}[\langle\cdot\rangle_{t}]$. In particular, at $t=1$, $\nu_{1}(\cdot)=\nu(\cdot)=\mathbb{E}\langle\cdot\rangle$.

### 2.1 Preliminaries

Recall that the goal is to compute the joint moments of the (self-)overlaps; the first step is the decomposition into "basis" terms given in (1.7). We begin by proving some basic properties of the "basis".

##### Properties of "basis"

First, we show that the random variables $\{T_{k,l},T_{k},S_{k},T,S\}$ are mostly pairwise independent, as stated in Lemma 1.4.

###### Proof of Lemma 1.4.

For pairs of random variables that do not involve $S_{k}$, the proof is the same as in the SK model.
We present the proof of the pairwise independence of $S_{l}$ and $\{T_{k,k^{\prime}},T_{k},S_{k}:k\neq l\}$. For $X,Y\in\{T_{k,l},T_{k}\}$, $\nu(XY)=0$ follows directly from the symmetry of the types of (self-)overlaps.

For pairs involving $T_{k,l}$ and $S_{h}$: consider indices $\{k,l,h\}$ with $k\neq l$, and some index $h^{\prime}\notin\{k,l,h\}$. Then
$$\nu(T_{k,l}S_{h})=\nu((R_{h,h}-R_{h^{\prime},h^{\prime}})T_{k,l}).$$
Note that there exists a replica in $\{k,l\}$ that does not appear in $S_{h}$. WLOG assume $h\neq l$; then integrating with respect to $\bm{\sigma}^{l}$ gives
$$\nu(T_{k,l}S_{h})=0.$$
For pairs involving $T_{k}$ and $S_{h}$: if $k\neq h$, then by symmetry
$$\nu(T_{k}S_{h})=\nu((R_{h,h}-R_{h^{\prime},h^{\prime}})T_{k})=0.$$
∎

To continue, we introduce another trick that expresses the "basis" random variables in terms of (self-)overlaps by introducing a new replica for each occurrence of $b,\tilde{b}$. This trick has been used many times in [Tal11, Chapter 1.8], and we record it here for completeness.

###### Claim 2.1.

Fix some integer $n$. For $k,l\notin[n]$ with $k\neq l$,
$$\nu(T_{1,2})=\nu(R_{1,2}-R_{1,l}-R_{k,2}+R_{k,l}),$$
$$\nu(T_{1})=\nu(R_{1,l}-R_{k,l}),\quad\nu(S_{1})=\nu(R_{1,1}-R_{k,k}),$$
$$\nu(T)=\nu(R_{k,l}-q),\quad\nu(S)=\nu(R_{k,k}-p).$$

###### Proof.

The proof follows from the linearity of expectation. We give the proof for $S_{1}$; the other identities can be proved by the same technique.
$$\nu(S_{1})=\mathbb{E}\left[\left\langle\frac{1}{N}\sum_{i}(\sigma_{i}^{1})^{2}-\tilde{b}\right\rangle\right]=\mathbb{E}\left[\left\langle\frac{1}{N}\sum_{i}(\sigma_{i}^{1})^{2}-\langle\varepsilon_{k}^{2}\rangle\right\rangle\right]=\mathbb{E}\left[\left\langle\frac{1}{N}\sum_{i}\left((\sigma_{i}^{1})^{2}-(\sigma_{i}^{k})^{2}\right)\right\rangle\right]=\nu(R_{1,1}-R_{k,k}),$$
where the second equality is the definition of $\tilde{b}$ and the third equality uses the symmetry between sites.
∎

This implies that we can expand moments of the basis as homogeneous polynomials in the (self-)overlaps over a set of replicas.

##### Approximation of moments

We use the following definition to capture the degree of a term.

###### Definition 2.2.

For $f:\Sigma_{N,\mathcal{S}}^{\otimes n}\to\mathbb{R}$, we say $f$ is of order $H$ if $f$ is a product of $H$ centered overlaps or self-overlaps $R_{k,l}-Q_{k,l}$, $k,l\in[n]$.

Estimating the magnitude of order-$H$ functions is a standard application of the concentration of overlaps and Hölder's inequality. The following lemma generalizes the second-moment estimates for centered (self-)overlaps in Proposition 1.1.

###### Lemma 2.3 ([Che22, Proposition 5]).

For $\beta<\tilde{\beta}$, there exists a constant $C>0$ such that for any $k\geqslant 1$ and $l,l^{\prime}\in[n]$, we have
$$\nu\left((R_{l,l^{\prime}}-Q_{l,l^{\prime}})^{2k}\right)\leqslant\left(\frac{Ck}{N}\right)^{k}.$$
This implies that if $f$ is an order-$H$ function, then there exists a constant $C$ that does not depend on $N$ such that
$$\nu(f)\leqslant C\cdot N^{-\frac{H}{2}}.$$
We will see this type of dependence on $N$ many times below. To lighten the notation, we overload the big-$O$ notation and say that a quantity $A=O(H)$ if
$$|A|\leqslant K\cdot N^{-\frac{H}{2}}$$
for some constant $K$ that does not depend on $N$. Note that the constant $K$ may depend on other parameters such as $\beta,n,\mathcal{S}$. One of the main tools in the cavity method is the approximation $\nu_{1}(f)\approx\nu_{0}(f)+\nu^{\prime}_{0}(f)$. Let us first recall the structure of $\nu^{\prime}_{t}(f)$.

###### Lemma 2.4 ([AC21, Lemma 3]).
Let $f:\Sigma_{N,\mathcal{S}}^{\otimes n}\to\mathbb{R}$ be any function of $n$ replicas. For $t\in(0,1)$, we have
\begin{align*}2\frac{d}{dt}\nu_{t}(f)=\;&\beta^{2}\sum_{1\leqslant k,l\leqslant n}\nu_{t}(\varepsilon_{k}\varepsilon_{l}(R^{-}_{k,l}-Q_{k,l})f)-2\beta^{2}\sum_{1\leqslant k\leqslant n;\,n+1\leqslant l\leqslant 2n}\nu_{t}(\varepsilon_{k}\varepsilon_{l}(R^{-}_{k,l}-q)f)\\&+\beta^{2}n(n+1)\nu_{t}(\varepsilon_{n+1,n+2}(R^{-}_{n+1,n+2}-q)f)-\beta^{2}n\nu_{t}(\varepsilon_{n+1}^{2}(R^{-}_{n+1,n+1}-p)f).\end{align*}

###### Remark 2.5.

We present a convenient way of rewriting the above lemma. For $a,b\in[2n]$, let
$$\mathrm{sgn}(a,b):=(-1)^{|\{a,b\}\cap[n]|};\qquad(2.3)$$
then we have
$$\frac{d}{dt}\nu_{t}(f)=\mathcal{R}_{n,f}+\frac{\beta^{2}}{2}\sum_{a,b\in[2n]}\mathrm{sgn}(a,b)\nu_{t}(\varepsilon_{a,b}(R^{-}_{a,b}-Q_{a,b})f),\qquad(2.4)$$
where $\mathcal{R}_{n,f}$ collects the additional terms coming from replicas independent of $f$: for $a\in[2n]$, writing $a^{\prime}=2n+a$,
$$\mathcal{R}_{n,f}:=\frac{\beta^{2}}{2}\sum_{a,b\in[2n]}\mathrm{sgn}(a,b)\nu_{t}(\varepsilon_{a^{\prime},b^{\prime}}(R^{-}_{a^{\prime},b^{\prime}}-Q_{a^{\prime},b^{\prime}})f).\qquad(2.5)$$

To quantify the difference between $\nu_{1}(f)$ and $\nu_{0}(f)$ in terms of the "degree" of $f$, we have:

###### Proposition 2.6.

For $f:\Sigma_{N,\mathcal{S}}^{\otimes n}\to\mathbb{R}$ such that $f$ is a product of $H$ centered overlaps or self-overlaps $R_{a,b}-Q_{a,b}$, $a,b\in[n]$,
$$|\nu_{0}(f)-\nu(f)|=O(H+1),\qquad(2.6)$$
$$|\nu_{0}(f)+\nu_{0}^{\prime}(f)-\nu(f)|=O(H+2).\qquad(2.7)$$

The proof of Proposition 2.6 is based on the concentration of overlaps and Hölder's inequality. For the mean-field GS spin glass model, these types of results were already established in [AC21, Che22]. First, we have an upper bound for $\nu_{t}(f)$.

###### Lemma 2.7 ([AC21, Lemma 4]).
For $f:\Sigma_{N,\mathcal{S}}^{\otimes n}\to[0,\infty)$, we have
$$\nu_{t}(f)\leqslant\exp(6n^{2}\beta^{2}\mathcal{S}^{4})\nu(f).$$
The concentration of the overlap and self-overlap was already stated in Proposition 1.1, and Lemma 2.3 provides the corresponding higher-order moment estimates; Corollary 2.8 below gives analogous results for $R^{-}_{l,l^{\prime}}$.

###### Proof of Proposition 2.6.

To prove (2.6), note that
$$|\varepsilon_{k,l}(R^{-}_{k,l}-Q_{k,l})|\leqslant|\varepsilon_{k,l}(R_{k,l}-Q_{k,l})|+\frac{\mathcal{S}^{4}}{N};$$
we then bound $\nu^{\prime}_{t}(f)$ using Lemmas 2.4 and 2.7 and Hölder's inequality:
\begin{align*}|\nu_{1}(f)-\nu_{0}(f)|&\leqslant\sup_{t}\Bigg\{3n^{2}\mathcal{S}^{2}\beta^{2}\left(\nu_{t}(|f|^{p})^{\frac{1}{p}}\nu_{t}(|R_{1,2}-Q_{1,2}|^{q})^{\frac{1}{q}}+\nu_{t}(|f|^{p})^{\frac{1}{p}}\nu_{t}(|R_{1,1}-Q_{1,1}|^{q})^{\frac{1}{q}}+\nu_{t}(|f|)\frac{\mathcal{S}^{2}}{N}\right)\Bigg\}\\&\leqslant\exp(6n^{2}\beta^{2}\mathcal{S}^{4})\cdot 3n^{2}\mathcal{S}^{2}\beta^{2}\left(\nu_{1}(|f|^{p})^{\frac{1}{p}}\nu_{1}(|R_{1,2}-Q_{1,2}|^{q})^{\frac{1}{q}}+\nu_{1}(|f|^{p})^{\frac{1}{p}}\nu_{1}(|R_{1,1}-Q_{1,1}|^{q})^{\frac{1}{q}}+\nu_{1}(|f|)\frac{\mathcal{S}^{2}}{N}\right).\end{align*}
Since $f$ is of order $H$, applying Hölder's inequality with $p=q=2$ together with Lemma 2.3 gives the desired result. For (2.7), observe that
$$|\nu_{1}(f)-\nu_{0}(f)-\nu^{\prime}_{0}(f)|\leqslant\sup_{0\leqslant t\leqslant 1}|\nu_{t}^{\prime\prime}(f)|.$$
By Lemma 2.4, $\nu^{\prime\prime}_{t}(f)$ carries an additional factor of $R^{-}_{i,j}-Q_{i,j}$. Applying the above argument to $f(R_{1,2}-q)$ and $f(R_{1,1}-p)$ gives the desired result. ∎

Later in the proof, we will need to study terms involving the (self-)overlaps without the last spin, i.e. $R^{-}_{k,l}-Q_{k,l}$. Here we establish the analogues of the above results for $R^{-}_{k,l}-Q_{k,l}$.

###### Corollary 2.8 (of Lemma 2.3).
For $\beta<\tilde{\beta}$, there exists a constant $C^{\prime}>0$ such that for any $k\geqslant 1$ and $l,l^{\prime}\in[n]$, we have
$$\nu\left((R^{-}_{l,l^{\prime}}-Q_{l,l^{\prime}})^{2k}\right)\leqslant\left(\frac{C^{\prime}k}{N}\right)^{k}.$$

###### Proof.

By Minkowski's inequality,
$$\nu\left((R^{-}_{l,l^{\prime}}-Q_{l,l^{\prime}})^{2k}\right)^{\frac{1}{2k}}\leqslant\nu\left((R_{l,l^{\prime}}-Q_{l,l^{\prime}})^{2k}\right)^{\frac{1}{2k}}+\frac{\mathcal{S}^{2}}{N}\leqslant\left(\frac{C^{\prime}k}{N}\right)^{\frac{1}{2}},$$
where the last inequality follows from Lemma 2.3. Raising both sides to the $2k$-th power gives the desired result. ∎

###### Lemma 2.9.

Fix integers $n$ and $H$. For each $1\leqslant v\leqslant H$, consider $v_{1},v_{2}\in[n]$ and $U_{v}\in\{T_{v_{1},v_{2}},T_{v_{1}},T,S_{v_{1}},S\}$. Let $f:\Sigma_{N,\mathcal{S}}^{\otimes n}\to\mathbb{R}$ be given by $f=\prod_{1\leqslant v\leqslant H}U_{v}$, and denote $f^{-}:=\prod_{v}U^{-}_{v}$. We have
$$|\nu(f)-\nu(f^{-})|=O(H+1).$$

###### Proof.

Observe that each factor of $f$ is of one of the forms $T_{1,2},T_{1},S_{1},T,S$ and can be written, under $\nu$, as a linear combination of (self-)overlaps in which each occurrence of $b:=\langle\sigma_{1}\rangle$ and $\tilde{b}:=\langle\sigma_{1}^{2}\rangle$ corresponds to a new replica. For example, it is easy to check that
$$\nu(T_{1,2})=\nu\left(\tfrac{1}{N}(\bm{\sigma}^{1}-\bm{\sigma}^{3})\cdot(\bm{\sigma}^{2}-\bm{\sigma}^{4})\right)=\nu((R_{1,2}-q)-(R_{1,4}-q)-(R_{2,3}-q)+(R_{3,4}-q)).$$
The remaining terms can be rewritten in a similar way. Since $f$ is the product of $H$ such factors, expanding the product shows that it is a sum of functions of order $H$. By Lemma 2.3 and Hölder's inequality, $\nu(f)=O(H)$. Moreover, for $I\subset[H]$, $\nu\left(\prod_{u\notin I}U_{u}\right)=O(H-|I|)$.
Again by Lemma 2.3,
$$|\nu(f)-\nu(f^{-})|\leqslant\Bigg|\sum_{\emptyset\neq I\subset[H]}\nu\left(\prod_{v\in I}\frac{\varepsilon_{U_{v}}}{N}\prod_{u\notin I}U^{-}_{u}\right)\Bigg|=O(H+1),$$
where $\varepsilon_{U_{v}}$ denotes the last-spin part of $U_{v}$, i.e. $U_{v}=U^{-}_{v}+\frac{1}{N}\varepsilon_{U_{v}}$. ∎

The following corollary tells us that the error in approximating $\nu(f)$ by $\nu_{0}(f^{-})$ is small.

###### Corollary 2.10.

Let $f:\Sigma_{N,\mathcal{S}}^{\otimes n}\to\mathbb{R}$ be an order-$H$ function. We have
$$\nu_{0}(f^{-})=\nu(f)+O(H+1).$$

###### Proof.

Applying Corollary 2.8 and equation (2.6) to $f^{-}$, and combining with Lemma 2.9, gives
$$\nu_{0}(f^{-})=\nu(f^{-})+O(H+1)=\nu(f)+O(H+1).$$
∎

### 2.2 Variance of overlaps and self-overlaps

In this section, we compute the variance-covariance structure of a subset of the "basis": $T_{1,2},T_{1},S_{1}$. The variance-covariance computation for $S,T$ follows the same idea as for $T_{1},S_{1}$, and we will treat it as a special case of the general moments in Theorem 3.17. The main goal here is to get a sense of how to handle the additional self-overlap terms. We further note that the following variance results hold at sufficiently high temperature, that is, for $\beta<\beta^{\prime}$ for some $\beta^{\prime}$ as in Theorem 1.3; in the statements below we will not repeatedly spell out this condition. We begin by demonstrating how the cavity method is used to compute the second moments of the basis random variables. With some abuse of notation, for $X\in\{T_{1,2},T_{1},S_{1}\}$, let $X$ also denote the expansion in terms of (self-)overlaps given by Claim 2.1, and let $\varepsilon_{X}$ be the expression obtained by replacing each overlap $R_{k,l}=\frac{1}{N}\sum_{i}\sigma^{k}_{i}\sigma^{l}_{i}$ appearing in $X$ by the last-spin product $\varepsilon_{k}\varepsilon_{l}$. Let $X^{-}:=X-\frac{1}{N}\varepsilon_{X}$ be the part of the basis element that depends only on the first $N-1$ spins.
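For concreteness, take $X=T_{1,2}$: inside $\nu$, the expansion of Claim 2.1 over replicas $1,\ldots,4$ reads $X=R_{1,2}-R_{1,4}-R_{2,3}+R_{3,4}$, and replacing each overlap by the corresponding last-spin product gives

```latex
% last-spin part of X = T_{1,2} after introducing replicas 3,4 for b
\varepsilon_{T_{1,2}}
  = \varepsilon_{1,2}-\varepsilon_{1,4}-\varepsilon_{2,3}+\varepsilon_{3,4}
  = (\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4}),
```

so the factored form is exactly the last-spin expression that appears in Definition 2.11 and in the proof of Lemma 2.13 below.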
Note that for $X\in\{T_{1,2},T_{1},S_{1}\}$, by the symmetry between sites,
$$\nu(X^{2})=\nu(\varepsilon_{X}X).$$
We can further decouple the last spin from the expression to get
\begin{align*}\nu(X^{2})&=\frac{1}{N}\nu(\varepsilon_{X}^{2})+\nu(\varepsilon_{X}X^{-})\qquad(2.8)\\&=\frac{1}{N}\nu(\varepsilon_{X}^{2})+\nu^{\prime}_{0}(\varepsilon_{X}X^{-})+O(3),\qquad(2.9)\end{align*}
where the last equality follows from (2.7) and $\nu_{0}(\varepsilon_{X}X^{-})=\nu_{0}(\varepsilon_{X})\nu_{0}(X^{-})=0$. (Note that in the above expression, each copy of $X$ introduces at least one new replica.) This is the starting point of the variance-covariance calculations. To simplify notation, we record some constants corresponding to expectations of the last spins.

###### Definition 2.11.

We define the following constants:

* From $\nu(\varepsilon_{S_{1}}^{2})$: $D:=\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{1}^{2})$.
* From $\nu(\varepsilon_{S_{1}}\varepsilon_{T_{1}})$: $E:=\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{1,3})$ and $H:=\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon^{2}_{1})=E$.
* From $\nu(\varepsilon_{T_{1}}^{2})$: $F:=\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,3})$ and $G:=\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,4})$.
* From $\nu(\varepsilon_{T_{1,2}}^{2})$: $A:=\nu_{0}\left((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})\varepsilon_{1,2}\right)=F-G$.

The combinations below occur many times in computations involving $S_{1},T_{1},S,T$ as normalizing constants, and we record them here for future reference:

* $M_{1}:=1-\beta^{2}(F-3G)$,
* $M_{2}:=1-\frac{\beta^{2}}{2}D$,
* $M_{3}:=1-\beta^{2}(F-G)$,
* $M:=M_{1}M_{2}+\beta^{4}E^{2}$.

Note that $M_{1},M_{2},M_{3},M$ are independent of $N$.

###### Remark 2.12.
Assume $\beta^{\prime}<\frac{1}{2\mathcal{S}^{2}}$. One can check that $F-G=\mathbb{E}[\langle\varepsilon_{1,1}-\varepsilon_{1,2}\rangle_{0}^{2}]$, $G=\mathbb{E}[b^{2}\left(\langle\varepsilon_{1,1}\rangle_{0}-\langle\varepsilon_{1}\rangle_{0}^{2}\right)]$ and $D=\mathbb{E}[\langle\varepsilon_{1}^{4}\rangle_{0}-\langle\varepsilon_{1}^{2}\rangle_{0}^{2}]$, so that $F-G,F-3G,D\in(0,4\mathcal{S}^{4}]$. Hence for $0\leqslant\beta\leqslant\beta^{\prime}<\frac{1}{2\mathcal{S}^{2}}$, we have $M_{1},M_{2},M_{3}>0$.

#### 2.2.1 Variance of $T_{1,2}$

We begin by computing $\nu(T_{1,2}^{2})$. By Lemma 1.4, we should expect this term to behave the same as in the SK model ([Tal11, Proposition 1.8.7]). This is indeed the case, as we show below.

###### Lemma 2.13.

For $\beta<\beta^{\prime}$, we have
$$\nu(T_{1,2}^{2})=A_{2}^{2}+O(3),\quad\text{where}\quad A_{2}^{2}:=\frac{A}{N(1-\beta^{2}A)}.$$

###### Proof.

Using (2.9) with $X=T_{1,2}$, we have
\begin{align*}\nu(T_{1,2}^{2})&=\nu\left(\frac{(\bm{\sigma}^{1}-\bm{b})\cdot(\bm{\sigma}^{2}-\bm{b})}{N}\cdot\frac{(\bm{\sigma}^{1}-\bm{b})\cdot(\bm{\sigma}^{2}-\bm{b})}{N}\right)\\&=\nu\left(\frac{(\bm{\sigma}^{1}-\bm{\sigma}^{3})\cdot(\bm{\sigma}^{2}-\bm{\sigma}^{4})}{N}\cdot\frac{(\bm{\sigma}^{1}-\bm{\sigma}^{5})\cdot(\bm{\sigma}^{2}-\bm{\sigma}^{6})}{N}\right),\end{align*}
where the last equality follows from replacing each occurrence of $\bm{b}=(\langle\sigma_{1}\rangle,\cdots,\langle\sigma_{N}\rangle)$ by a new replica.
Rewriting the above formula by expanding the inner products and replacing each term by the appropriate overlap, we get
\begin{align*}\nu(T_{1,2}^{2})&=\nu((R_{1,2}-R_{1,4}-R_{2,3}+R_{3,4})(R_{1,2}-R_{1,6}-R_{2,5}+R_{5,6}))\\&=\frac{1}{N}\nu((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(\varepsilon_{1}-\varepsilon_{5})(\varepsilon_{2}-\varepsilon_{6}))+\nu((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(R^{-}_{1,2}-R^{-}_{1,6}-R^{-}_{2,5}+R^{-}_{5,6}))\\&=\frac{1}{N}\nu_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(\varepsilon_{1}-\varepsilon_{5})(\varepsilon_{2}-\varepsilon_{6}))+\nu_{0}^{\prime}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(R^{-}_{1,2}-R^{-}_{1,6}-R^{-}_{2,5}+R^{-}_{5,6}))+O(3),\end{align*}
where the first equality uses the symmetry between sites to isolate the last spin from the overlaps, and the second step is due to Proposition 2.6. For the first term,
$$\nu_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(\varepsilon_{1}-\varepsilon_{5})(\varepsilon_{2}-\varepsilon_{6}))=\nu_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})\varepsilon_{1,2})=A.$$
For the second term,
\begin{align*}&\nu^{\prime}_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(R^{-}_{1,2}-R^{-}_{1,6}-R^{-}_{2,5}+R^{-}_{5,6}))\\=\;&\frac{\beta^{2}}{2}\sum_{a,b}\mathrm{sgn}(a,b)\nu_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})\varepsilon_{a,b})\cdot\nu_{0}((R^{-}_{a,b}-Q_{a,b})(R^{-}_{1,2}-R^{-}_{1,6}-R^{-}_{2,5}+R^{-}_{5,6}))+\mathcal{R}_{4,T_{1,2}^{2}}.\end{align*}
Observe that the last-spin factor, $\nu_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})\varepsilon_{a,b})$, is non-zero only when $\{a,b\}\in\{\{1,2\},\{1,4\},\{2,3\},\{3,4\}\}$.
Summing over all such $a,b$ and using Corollary 2.10, we have
$$\nu^{\prime}_{0}((\varepsilon_{1}-\varepsilon_{3})(\varepsilon_{2}-\varepsilon_{4})(R^{-}_{1,2}-R^{-}_{1,6}-R^{-}_{2,5}+R^{-}_{5,6}))=\beta^{2}A\nu(T_{1,2}^{2})+O(3).$$
Together, the two terms give
$$\nu(T_{1,2}^{2})=\frac{A}{N}+\beta^{2}A\nu(T_{1,2}^{2})+O(3),$$
and solving for $\nu(T_{1,2}^{2})$ gives the claim. ∎

The following relation involving $\nu(T_{1,2}^{2})$ will be useful later, and we record it here for convenience.

###### Claim 2.14.

Since $A=F-G$ by definition,
$$\beta^{2}A_{2}^{2}+\frac{1}{N}=\frac{1}{N(1-\beta^{2}(F-G))}=\frac{1}{NM_{3}}\qquad(2.10)$$
for $A_{2}^{2}$ given in Lemma 2.13.

#### 2.2.2 Variance of $T_{1}$ and $S_{1}$

We now turn to the variances of $S_{1}$ and $T_{1}$. Unlike in the SK model, these basis elements are no longer independent of each other, which suggests that we should handle $S_{1}$ and $T_{1}$ together.

###### Theorem 2.15.

For $\beta<\beta^{\prime}$, the variances of $T_{1},S_{1}$ are given by
$$\nu(T_{1}^{2})=A_{1}^{2}+O(3),\quad\text{where}\quad A_{1}^{2}:=\frac{GM_{2}+\frac{\beta^{2}}{2}E^{2}}{M}\cdot\frac{1}{NM_{3}}=\frac{1}{2\beta^{2}N}\left(\frac{1}{M_{3}}-\frac{M_{2}}{M}\right),$$
and
$$\nu(S_{1}^{2})=B_{1}^{2}+O(3),\quad\text{where}\quad B_{1}^{2}:=\frac{DM_{1}-2\beta^{2}E^{2}}{NM}=\frac{2}{N\beta^{2}}\left(\frac{M_{1}}{M}-1\right).$$
The covariance is
$$\nu(S_{1}T_{1})=C_{1}^{2}+O(3),\quad\text{where}\quad C_{1}^{2}:=\frac{E}{NM}.$$

The above theorem can be viewed as a generalization of the computation of $\nu(T_{1}^{2})$ in the SK model, with the addition of the self-overlap terms coming from $\nu^{\prime}_{0}(f)$ in (2.9). We will prove each part of the theorem in Lemma 2.16, Lemma 2.19, and Lemma 2.21.

###### Lemma 2.16.

For $\beta<\beta^{\prime}$, we have
$$\nu(T_{1}^{2})=A_{1}^{2}+O(3),\quad\text{where}\quad A_{1}^{2}:=\frac{GM_{2}+\frac{\beta^{2}}{2}EH}{M}\cdot\frac{1}{NM_{3}}=\frac{1}{2\beta^{2}N}\left(\frac{1}{M_{3}}-\frac{M_{2}}{M}\right).$$
To prove this, we will use the following lemma, which characterizes the relation between $\nu(T_{1}^{2})$ and $\nu(T_{1}S_{1})$.

###### Lemma 2.17.
We have
$$(1-\beta^{2}(F-3G))\nu(T_{1}^{2})=G\left(\frac{1}{N}+\beta^{2}A_{2}^{2}\right)+\frac{\beta^{2}}{2}H\nu(S_{1}T_{1})+O(3),\qquad(2.11)$$
and
$$\frac{1-\frac{\beta^{2}}{2}D}{E}\nu(S_{1}T_{1})=\left(\frac{1}{N}+\beta^{2}A_{2}^{2}\right)-2\beta^{2}\nu(T_{1}^{2})+O(3).\qquad(2.12)$$

Lemma 2.16 follows immediately from Lemma 2.17.

###### Proof of Lemma 2.16.

Plugging (2.12) into (2.11) and rearranging gives
\begin{align*}\left(1-\beta^{2}(F-3G)+\frac{\beta^{4}EH}{1-\frac{\beta^{2}}{2}D}\right)\nu(T_{1}^{2})&=\left(G+\frac{\beta^{2}EH}{2(1-\frac{\beta^{2}}{2}D)}\right)\left(\frac{1}{N}+\beta^{2}A_{2}^{2}\right)+O(3)\qquad(2.13)\\&\stackrel{(2.10)}{=}\left(G+\frac{\beta^{2}EH}{2(1-\frac{\beta^{2}}{2}D)}\right)\frac{1}{NM_{3}}+O(3).\qquad(2.14)\end{align*}
∎

We now turn to the proof of Lemma 2.17.

###### Proof of Lemma 2.17.

Using (2.9) with $X=T_{1}$, we can rewrite $\nu(T_{1}^{2})$ by introducing a new replica for each occurrence of $b$ and get
\begin{align*}\nu(T_{1}^{2})&=\nu((R_{1,3}-R_{2,3})(R_{1,5}-R_{4,5}))\qquad(2.15)\\&=\frac{1}{N}\nu((\varepsilon_{1,3}-\varepsilon_{2,3})(\varepsilon_{1,5}-\varepsilon_{4,5}))+\nu^{\prime}_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})(R^{-}_{1,5}-R^{-}_{4,5}))+O(3).\qquad(2.16)\end{align*}
For the first term, note that by symmetry $\nu((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{4,5})=0$, so that
$$\nu((\varepsilon_{1,3}-\varepsilon_{2,3})(\varepsilon_{1,5}-\varepsilon_{4,5}))=\nu((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,5}).$$
To expand the second term, we use (2.4) with $n=5$:
$$\nu^{\prime}_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})(R^{-}_{1,5}-R^{-}_{4,5}))=\frac{\beta^{2}}{2}\sum_{a,b\in[10]}\mathrm{sgn}(a,b)\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{a,b})\nu_{0}((R^{-}_{a,b}-Q_{a,b})(R^{-}_{1,5}-R^{-}_{4,5}))+\mathcal{R}_{5,T_{1}^{2}}.$$
Many terms vanish because $\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{a,b})=0$.
We will see that the non-vanishing pairs of replicas $(a,b)$ introduce structures that correspond to either $T_{1}$ or $S_{1}$. To identify the pairs $(a,b)$ with $\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{a,b})\neq 0$, expand the difference into two terms. Observe that the value of $\nu_{0}(\varepsilon_{1,3}\varepsilon_{a,b})$ is characterized by the type of the multiset $\{1,3,a,b\}$, and that replica $2$ in $\nu_{0}(\varepsilon_{2,3}\varepsilon_{a,b})$ plays the same role as replica $1$ in $\nu_{0}(\varepsilon_{1,3}\varepsilon_{a,b})$. Thus we have
$$\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{a,b})\neq 0\iff|\{a,b\}\cap\{1,2\}|=1.$$
What is left is to compute $\nu_{0}((R^{-}_{a,b}-Q_{a,b})(R^{-}_{1,5}-R^{-}_{4,5}))$ for such pairs $(a,b)\in\mathcal{C}_{10}$.

* If $a=b$: in this case $a\in\{1,2\}$. Combining the two cases gives
$$\frac{1}{2}\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,1})\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{1,5}-R^{-}_{4,5}))\stackrel{\text{Cor.~2.10}}{=}\frac{1}{2}H\nu(S_{1}T_{1})+O(3).$$
* For $\{a,b\}\in\{\{1,3\},\{2,3\}\}$, we have
$$\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,3})\nu_{0}((R^{-}_{1,3}-R^{-}_{2,3})(R^{-}_{1,5}-R^{-}_{4,5}))\stackrel{\text{Cor.~2.10}}{=}F\nu(T_{1}^{2})+O(3).$$
* Next we count the cases $a\in\{1,2\}$, $b\notin\{1,2,3\}$; this is where the replicas introduced by the expansion come into play. Recall that for each of the $5$ replicas we introduced a new replica; let us index these by $\{k+5:k\leqslant 5\}$.
Gathering the terms for $b\in\{4,9\}$ (and similarly for $\{5,10\}$), we encounter
$$\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,4})\nu_{0}((R^{-}_{1,4}-R^{-}_{2,4}-R^{-}_{1,9}+R^{-}_{2,9})(R^{-}_{1,5}-R^{-}_{4,5})).$$
Using (2.6) and Lemma 2.9, we can rewrite the second factor in terms of $T_{k,l},T_{k},T_{l}$ involving those new replicas:
$$\nu_{0}((R^{-}_{1,4}-R^{-}_{2,4}-R^{-}_{1,9}+R^{-}_{2,9})(R^{-}_{1,5}-R^{-}_{4,5}))=\nu((T_{1,4}-T_{2,4}-T_{1,9}+T_{2,9})(T_{1,5}-T_{4,5}+T_{1}-T_{4})).$$
No even moment of any $T_{k,l}$ appears here, so this term is $O(3)$ by Lemma 2.13. For $b\in\{5,10\}$,
\begin{align*}&\nu_{0}((R^{-}_{1,5}-R^{-}_{2,5}-R^{-}_{1,10}+R^{-}_{2,10})(R^{-}_{1,5}-R^{-}_{4,5}))\\ \stackrel{\text{Cor.~2.10}}{=}\;&\nu((T_{1,5}-T_{2,5}-T_{1,10}+T_{2,10})(T_{1,5}-T_{4,5}+T_{1}-T_{4}))=\nu(T_{1,5}^{2}).\end{align*}
Thus, using the symmetry between replicas, the total contribution from this case is
$$G\nu(T_{1,2}^{2})+O(3).$$
* We are now left with the cases $a\in\{1,2\}$ and $b\in\{6,7,8\}$, the new replicas corresponding to $\{1,2,3\}$.
For those terms, WLOG, the contribution is $3\nu_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})\varepsilon_{1,4})\nu_{0}((-R^{-}_{1,6}+R^{-}_{2,6})(R^{-}_{1,5}-R^{-}_{4,5})).$ Note that since the new replica is not used by our second copy of $T_{1}$, namely $R^{-}_{1,5}-R^{-}_{4,5}$, this term can be written as $-3G\nu(T_{1}^{2})+O(3).$ Combining all the terms for the second term, $\displaystyle\nu^{\prime}_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})(R^{-}_{1,5}-R^{-}_{4,5}))=$ $\displaystyle\frac{\beta^{2}}{2}H\nu(S_{1}T_{1})+\beta^{2}F\nu(T_{1}^{2})$ $\displaystyle+\beta^{2}G\nu(T_{1,2}^{2})-3\beta^{2}G\nu(T_{1}^{2})+O(3).$ Plugging this back into (2.15), we have $\displaystyle\nu(T_{1}^{2})=$ $\displaystyle\frac{1}{N}\nu((\varepsilon_{1,3}-\varepsilon_{2,3})(\varepsilon_{1,5}-\varepsilon_{4,5}))+\nu((\varepsilon_{1,3}-\varepsilon_{2,3})(R^{-}_{1,5}-R^{-}_{4,5})),$ $\displaystyle=$ $\displaystyle G(\frac{1}{N}+\beta^{2}\nu(T_{1,2}^{2}))+\frac{\beta^{2}}{2}H\nu(S_{1}T_{1})+\beta^{2}\left[F-3G\right]\nu(T_{1}^{2})+O(3).\ $ Rearranging gives $(1-\beta^{2}(F-3G))\nu(T_{1}^{2})=G(\frac{1}{N}+\beta^{2}\nu(T_{1,2}^{2}))+\frac{\beta^{2}}{2}H\nu(S_{1}T_{1})+O(3).$ Plugging in $\nu(T_{1,2}^{2})=A_{2}^{2}+O(3)$ gives (2.11). ###### Remark 2.18. In the SK model, the mixed term $S_{1}T_{1}$ vanishes. If we look at the constant for $T_{1}^{2}$, $F=\nu_{0}((\varepsilon_{13}-\varepsilon_{23})\varepsilon_{13})=b(2)-b(1),$ $G=\nu_{0}((\varepsilon_{13}-\varepsilon_{23})\varepsilon_{14})=b(1)-b(0).$ Combining them, we get back the original constant $1-4q+3\hat{q}$, which is one of the "eigenvalues", and thus we recover the second moment of $T_{1}$ (see equation (1.259) in [Tal11]). 
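As a quick sanity check of this remark: under the SK specializations $F=b(2)-b(1)=1-q$ and $G=b(1)-b(0)=q-\hat{q}$, the coefficient $F-3G$ does recover the classical constant. The following minimal snippet verifies this algebra numerically, with $q,\hat q$ as arbitrary sample values (the specializations themselves are the ones stated in the text, not assumptions of the snippet):

```python
import random

# SK-model specializations stated in the text: F = b(2)-b(1) = 1-q,
# G = b(1)-b(0) = q - qhat.  The coefficient of nu(T_1^2) is F - 3G,
# which should equal the classical SK constant 1 - 4q + 3*qhat.
random.seed(0)
q, qhat = random.random(), random.random()
F = 1 - q
G = q - qhat
assert abs((F - 3 * G) - (1 - 4 * q + 3 * qhat)) < 1e-12
```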
##### A way of writing the covariance of $S_{1},T_{1}$ To handle the occurrence of $\nu(S_{1}T_{1})$ in the final expression, we will use the symmetry of spins to write $\displaystyle\nu(S_{1}T_{1})=\nu((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})(R_{1,4}-R_{3,4}))=\frac{1}{N}\nu((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{1,3})+\nu((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})(R^{-}_{1,4}-R^{-}_{3,4})).$ (2.17) This type of expansion helps reduce the moment of $S_{1}$. As shown above for $\nu(T_{1}^{2})$, to control the second term, it is enough to look at $\nu^{\prime}_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})(R^{-}_{1,4}-R^{-}_{3,4}))$. $\displaystyle\nu^{\prime}_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})(R^{-}_{1,4}-R^{-}_{3,4}))=$ $\displaystyle\frac{\beta^{2}}{2}\sum_{a,b}\text{sgn}(a,b)\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{a,b})\nu_{0}((R^{-}_{a,b}-\mu_{a,b})(R^{-}_{1,4}-R^{-}_{3,4}))$ $\displaystyle+R_{4,S_{1}T_{1}}.$ Observe that $\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{a,b})\neq 0\iff|\\{a,b\\}\cap\\{1,2\\}|=1.$ Let us iterate over those pairs $(a,b)$: * • For $a=b$: either $a=b=1$ or $a=b=2$, $\frac{1}{2}\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{1}^{2})\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{1,4}-R^{-}_{3,4}))\stackrel{{\scriptstyle(\ref{lemma:last spin})}}{{=}}\frac{1}{2}D\nu(S_{1}T_{1})+O(3).$ * • For $|\\{a,b\\}\cap\\{1,2\\}|=1$ with $a\neq b$, assume $a\in\\{1,2\\}$ and $b\in\\{3,4,\cdots,8\\}$. 
As shown above, for $b\in\\{3,7\\}$ or $\\{4,8\\}$, we have $\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{1,3})\nu_{0}((R^{-}_{1,3}-R^{-}_{2,3}+R^{-}_{1,7}-R^{-}_{2,7})(R^{-}_{1,4}-R^{-}_{3,4}))=O(3),$ and $\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{13})\nu_{0}((R^{-}_{1,4}-R^{-}_{2,4}+R^{-}_{1,8}-R^{-}_{2,8})(R^{-}_{1,4}-R^{-}_{3,4}))=E\nu(T_{1,4}^{2})+O(3).$ For $b\in\\{5,6\\}$, we have $-\nu_{0}((\varepsilon_{1}^{2}-\varepsilon_{2}^{2})\varepsilon_{1,3})\nu_{0}((R^{-}_{1,5}-R^{-}_{2,5})(R^{-}_{1,4}-R^{-}_{3,4}))=-E\nu(T_{1}^{2})+O(3).$ Thus $\displaystyle\nu^{\prime}_{0}(S_{1}T_{1})=$ $\displaystyle\frac{\beta^{2}}{2}D\nu(S_{1}T_{1})+\beta^{2}E\nu(T_{1,2}^{2})-2\beta^{2}E\nu(T_{1}^{2})+O(3).$ Plugging this back into equation (2.17) gives $\displaystyle(1-\frac{\beta^{2}}{2}D)\nu(S_{1}T_{1})=$ $\displaystyle(\frac{1}{N}+\beta^{2}\nu(T_{1,2}^{2}))E-2\beta^{2}E\nu(T_{1}^{2})+O(3),$ (2.18) and plugging in $\nu(T_{1,2}^{2})=A_{2}^{2}+O(3)$ gives (2.12). ∎ ###### Lemma 2.19. For $\beta\leqslant\beta^{\prime}$, we have $\nu(S_{1}^{2})=B_{1}^{2}+O(3),$ where $B_{1}^{2}=\frac{DM_{1}-2\beta^{2}E^{2}}{NM}=\frac{2}{N\beta^{2}}\left(\frac{M_{1}}{M}-1\right).$ To prove Lemma 2.19, we need the following two relations. ###### Lemma 2.20. We have $\left(1-\frac{\beta^{2}}{2}D\right)\nu(S_{1}^{2})=\frac{1}{N}D-2\beta^{2}E\cdot\nu(S_{1}T_{1})+O(3),$ $(1-\beta^{2}(F-3G))\nu(S_{1}T_{1})=\frac{1}{N}E+\frac{\beta^{2}}{2}H\cdot\nu(S_{1}^{2})+O(3).$ ###### Proof of Lemma 2.19. As in the $\nu(T_{1}^{2})$ case, Lemma 2.19 follows from combining the above two relations and the definitions of $M_{1},M$. $\left(\frac{(1-\frac{\beta^{2}}{2}D)(1-\beta^{2}(F-3G))+\beta^{4}EH}{(1-\beta^{2}(F-3G))}\right)\nu(S_{1}^{2})=\frac{D(1-\beta^{2}(F-3G))-2\beta^{2}E^{2}}{N(1-\beta^{2}(F-3G))}+O(3).$ Rearranging gives, for $B_{1}^{2}=\frac{2}{N\beta^{2}}(\frac{M_{1}}{M}-1)$, $\nu(S_{1}^{2})=B_{1}^{2}+O(3)$. ∎ ###### Proof of Lemma 2.20. 
The proof is similar to the previous case. Denote $\varepsilon_{k,k}=(\sigma^{k}_{N})^{2}$, $\displaystyle\nu(S_{1}^{2})$ $\displaystyle=\nu((R_{1,1}-R_{2,2})(R_{1,1}-R_{3,3})),$ (2.19) $\displaystyle=\frac{1}{N}\nu((\varepsilon_{1,1}-\varepsilon_{2,2})(\varepsilon_{1,1}-\varepsilon_{3,3}))+\nu((\varepsilon_{1,1}-\varepsilon_{2,2})(R^{-}_{1,1}-R^{-}_{3,3})),$ (2.20) $\displaystyle=\frac{1}{N}\nu((\varepsilon_{1,1}-\varepsilon_{2,2})\varepsilon_{1,1})+\nu((\varepsilon_{1,1}-\varepsilon_{2,2})(R^{-}_{1,1}-R^{-}_{3,3})).$ (2.21) To control the second term, observe that by (2.7) and $\nu_{0}((\varepsilon_{1,1}-\varepsilon_{2,2})(R^{-}_{1,1}-R^{-}_{3,3}))=0$, $\nu((\varepsilon_{1,1}-\varepsilon_{2,2})(R^{-}_{1,1}-R^{-}_{3,3}))=\nu^{\prime}_{0}((\varepsilon_{1,1}-\varepsilon_{2,2})(R^{-}_{1,1}-R^{-}_{3,3}))+O(3).$ By (2.4), $\displaystyle\nu^{\prime}_{0}((\varepsilon_{1,1}-\varepsilon_{2,2})(R^{-}_{1,1}-R^{-}_{3,3}))=$ $\displaystyle\frac{\beta^{2}}{2}\sum_{a,b}\text{sgn}(a,b)\nu_{0}((\varepsilon_{1,1}-\varepsilon_{2,2})\varepsilon_{a,b})\nu_{0}((R^{-}_{a,b}-Q_{a,b})(R^{-}_{1,1}-R^{-}_{3,3}))$ $\displaystyle+R_{3,S_{1}^{2}}.$ Note that $\nu_{0}((\varepsilon_{1,1}-\varepsilon_{2,2})\varepsilon_{a,b})\neq 0\iff|\\{a,b\\}\cap\\{1,2\\}|=1.$ To count the contribution of all such $a,b$: * • For $a=b$, combining the contribution of the two terms gives $\frac{\beta^{2}}{2}D\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{1,1}-R^{-}_{3,3}))\stackrel{{\scriptstyle(\ref{lemma:last spin}),\eqref{eq:1st approx}}}{{=}}\frac{\beta^{2}}{2}D\nu(S_{1}^{2})+O(3).$ * • If $a\neq b$, WLOG, suppose $a\in\\{1,2\\}$ and $b\in\\{3,6\\}$: by Lemma 2.9, $\beta^{2}E\nu_{0}((R^{-}_{1,3}-R^{-}_{2,3}-R^{-}_{1,6}+R^{-}_{2,6})(R^{-}_{1,1}-R^{-}_{3,3}))=\beta^{2}E\nu_{0}((R_{1,3}-R_{2,3}-R_{1,6}+R_{2,6})(R_{1,1}-R_{3,3}))+O(3).$ Rewriting the last part using 1.2, $\displaystyle\beta^{2}E\nu_{0}((R_{1,3}-R_{2,3}-R_{1,6}+R_{2,6})(R_{1,1}-R_{3,3}))+O(3),$ $\displaystyle=$ $\displaystyle\beta^{2}E\nu_{0}((T_{1,3}-T_{2,3}-T_{1,6}+T_{2,6})(S_{1}-S_{3}))+O(3),$ $\displaystyle=$ $\displaystyle O(3).$ * • For $a\in\\{1,2\\}$ and $b\in\\{4,5\\}$, combining the two terms gives $-2\beta^{2}E\nu_{0}((R^{-}_{1,4}-R^{-}_{2,4})(R^{-}_{1,1}-R^{-}_{3,3}))=-2\beta^{2}E\nu(S_{1}T_{1})+O(3).$ Plugging this back into (2.19), $\displaystyle\nu(S_{1}^{2})$ $\displaystyle=\frac{1}{N}\nu((\varepsilon_{1,1}-\varepsilon_{2,2})\varepsilon_{1,1})+\frac{\beta^{2}}{2}D\nu(S_{1}^{2})-2\beta^{2}E\nu(S_{1}T_{1})+O(3),$ (2.22) $\displaystyle=\frac{1}{N}D+\frac{\beta^{2}}{2}D\nu(S_{1}^{2})-2\beta^{2}E\nu(S_{1}T_{1})+O(3).$ (2.23) ##### Alternative way of writing $\nu(S_{1}T_{1})$ We have seen one way of decomposing $S_{1}T_{1}$ in Lemma 2.17, which reduces the moment of $S_{1}$. While we may directly apply (2.12) here, we show another way of decomposing $S_{1}T_{1}$ by reducing the moment of $T_{1}$, as it will be helpful in the general case. The idea is the same: $\displaystyle\nu(S_{1}T_{1})=\nu((R_{1,1}-R_{2,2})(\varepsilon_{1,4}-\varepsilon_{3,4}))=\frac{1}{N}\nu((\varepsilon_{1,1}-\varepsilon_{2,2})(\varepsilon_{1,4}-\varepsilon_{3,4}))+\nu((R^{-}_{1,1}-R^{-}_{2,2})(\varepsilon_{1,4}-\varepsilon_{3,4})).$ We then rewrite the second term as before $\displaystyle\nu((R^{-}_{1,1}-R^{-}_{2,2})(\varepsilon_{1,4}-\varepsilon_{3,4}))=\nu_{0}^{\prime}((R^{-}_{1,1}-R^{-}_{2,2})(\varepsilon_{1,4}-\varepsilon_{3,4}))+O(3)$ $\displaystyle=\frac{\beta^{2}}{2}\sum_{a,b}\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{a,b}-Q_{a,b}))\nu_{0}(\varepsilon_{a,b}(\varepsilon_{1,4}-\varepsilon_{3,4})).$ As shown in Lemma 2.17, $\nu_{0}((\varepsilon_{1,4}-\varepsilon_{3,4})\varepsilon_{a,b})\neq 0\iff|\\{a,b\\}\cap\\{1,3\\}|=1.$ Let us go over all cases of such size-two subsets: * • If $a=b$: this term gives $\frac{\beta^{2}}{2}H\nu_{0}((R^{-}_{1,1}-R^{-}_{3,3})(R^{-}_{1,1}-R^{-}_{2,2}))\stackrel{{\scriptstyle(\ref{lemma:last spin}),\eqref{eq:1st approx}}}{{=}}\frac{\beta^{2}}{2}H\nu(S_{1}^{2})+O(3)$ * 
• For $\\{a,b\\}\in\\{\\{1,4\\},\\{3,4\\}\\}$, we have $\beta^{2}F\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{1,4}-R^{-}_{3,4}))\stackrel{{\scriptstyle(\ref{lemma:last spin}),\eqref{eq:1st approx}}}{{=}}\beta^{2}F\nu(S_{1}T_{1})+O(3)$ * • Now we count the case when $a\in\\{1,3\\}$ and $b\notin\\{1,3,4\\}$. Gathering the terms for $b\in\\{2,6\\}$ and rewriting, $\displaystyle\beta^{2}G\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{1,2}-R^{-}_{3,2}-R^{-}_{1,6}+R^{-}_{3,6}))$ $\displaystyle\stackrel{{\scriptstyle(\ref{lemma:last spin}),\eqref{eq:1st approx}}}{{=}}\beta^{2}G\nu_{0}((S_{1}-S_{2})(T_{1,2}-T_{3,2}-T_{1,6}+T_{3,6}))$ $\displaystyle=O(3)$ * • Now we are left with $a\in\\{1,3\\}$ and $b\in\\{5,7,8\\}$: $-3\beta^{2}G\nu_{0}((R^{-}_{1,1}-R^{-}_{2,2})(R^{-}_{1,5}-R^{-}_{3,5}))\stackrel{{\scriptstyle(\ref{lemma:last spin}),\eqref{eq:1st approx}}}{{=}}-3\beta^{2}G\nu(S_{1}T_{1})+O(3)$ Combining, we get $\nu(S_{1}T_{1})=\frac{1}{N}E+\frac{\beta^{2}}{2}H\nu(S_{1}^{2})+\beta^{2}(F-3G)\nu(S_{1}T_{1})+O(3).$ ∎ #### 2.2.3 Covariance: $S_{1}T_{1}$ term ###### Lemma 2.21. For $\beta\leqslant\beta^{\prime}$, we have $\nu(S_{1}T_{1})=C_{1}^{2}+O(3),$ where $C_{1}^{2}:=\frac{E}{NM}.$ ###### Proof. Note that one can deduce $\nu(S_{1}T_{1})$ from both Lemma 2.17 and Lemma 2.20. From Lemma 2.17, we get $\displaystyle\nu(S_{1}T_{1})$ $\displaystyle=\frac{E}{M_{2}}\left[\frac{1}{NM_{3}}-2\beta^{2}A_{1}^{2}\right]+O(3)$ $\displaystyle=\frac{E}{M_{2}}\left(\frac{1}{NM_{3}}-\frac{1}{N}\left(\frac{1}{M_{3}}-\frac{M_{2}}{M}\right)\right)+O(3)=\frac{E}{MN}+O(3).$ From Lemma 2.20, $\displaystyle\nu(S_{1}T_{1})$ $\displaystyle=\frac{1}{M_{1}}\left(\frac{E}{N}+\frac{\beta^{2}}{2}HB_{1}^{2}\right)+O(3)=\frac{E}{NM}+O(3),$ where the last equality follows from $E=H$. ∎ ## 3 General moments computation In Section 2.2, we obtained the variance and covariance terms $\nu(T_{1,2}^{2}),\nu(T_{1}^{2}),\nu(S_{1}^{2})$, $\nu(S_{1}T_{1})$ by rewriting moments in terms of lower-order ones. 
In this section, we extend this idea to general moments of $T_{1,2},T_{1},S_{1},T,S$. ###### Lemma 3.1 (Formal version of Lemma 1.5). Fix an integer $n$ and consider the sets of integers $\\{h(k,l):1\leqslant k<l\leqslant n\\}$, $\\{h(k):1\leqslant k\leqslant n\\}$, $\\{h^{\prime}(k):1\leqslant k\leqslant n\\}$, together with $h,h^{\prime}$. Let $H:=\sum_{1\leqslant k<l\leqslant n}h(k,l)+\sum_{1\leqslant k\leqslant n}h(k)+\sum_{1\leqslant l\leqslant n}h^{\prime}(l)+h+h^{\prime},$ and let $g_{X}$ be a centered Gaussian vector whose index $X$ belongs to $\\{T_{k,l},T_{k},S_{k},T,S:1\leqslant k<l\leqslant n\\},$ with covariance matrix $\mathrm{Cov}(g_{X},g_{Y})=\begin{cases}A_{2}^{2},&\text{ if }X=Y=T_{k,l},\\\ A_{1}^{2},&\text{ if }X=Y=T_{k},\\\ A_{0}^{2},&\text{ if }X=Y=T,\\\ B_{1}^{2},&\text{ if }X=Y=S_{k},\\\ B_{0}^{2},&\text{ if }X=Y=S,\\\ C_{1}^{2},&\text{ if }\\{X,Y\\}=\\{T_{k},S_{k}\\},\\\ C_{0}^{2},&\text{ if }\\{X,Y\\}=\\{T,S\\}.\\\ \end{cases}$ Then for $\beta<\beta^{\prime}\leqslant\tilde{\beta}$, we have $\displaystyle\nu\left(\prod_{k,l}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}S^{h^{\prime}}\right)$ $\displaystyle=\mathbb{E}\left[\prod_{k,l}g_{T_{k,l}}^{h(k,l)}\prod_{k}g_{T_{k}}^{h(k)}\prod_{l}g_{S_{l}}^{h^{\prime}(l)}g_{T}^{h}g_{S}^{h^{\prime}}\right]+O(H+1).$ As in the proof of the CLT in the SK model, the proof of Lemma 3.1 consists of three parts: first we separate any $T_{k,l}$ terms with $h(k,l)>0$ from the mixed moments, then the $(T_{k},S_{k})$ terms, and then the $(T,S)$ term. This is based on Lemma 1.4, which states that the random variables are pairwise independent apart from the pairs $(T_{k},S_{k})$ for $k\in[n]$ and $(T,S)$. 
Thus, we expect the mixed moments to decompose into $\prod_{k,l}\nu\left(T_{k,l}^{h(k,l)}\right)\cdot\prod_{k}\nu\left(T_{k}^{h(k)}S_{k}^{h^{\prime}(k)}\right)\cdot\nu(T^{h}S^{h^{\prime}}).$ It is then enough to characterize the moments of the form $T_{k,l}^{h(k,l)},T_{k}^{h(k)}S_{k}^{h^{\prime}(k)},T^{h}S^{h^{\prime}}$. The formal statements can be found in Theorems 3.2, 3.5 and 3.11. Before we start the proofs, we introduce the notation needed to index each term within the mixed moments. Let us first rewrite each term using (self-)overlaps by the expansion given in Claim 2.1. For $v\in[H]$, denote by $V_{v}=\\{v_{1},v_{2},\cdots\\}$ the set of replicas appearing in the corresponding term $U_{v}$. Define $U_{v}$ as $U_{v}:=\begin{cases}R_{v_{1},v_{2}}-R_{v_{1},v_{4}}-R_{v_{3},v_{2}}+R_{v_{3},v_{4}},&\text{ if the }v\text{-th term corresponds to }T_{k,l},\\\ R_{v_{1},v_{3}}-R_{v_{2},v_{3}},&\text{ if the }v\text{-th term corresponds to }T_{k},\\\ R_{v_{1},v_{1}}-R_{v_{2},v_{2}},&\text{ if the }v\text{-th term corresponds to }S_{l},\\\ R_{v_{1},v_{2}}-p,&\text{ if the }v\text{-th term corresponds to }T,\\\ R_{v_{1},v_{1}}-q,&\text{ if the }v\text{-th term corresponds to }S.\end{cases}$ Then the general moments can be rewritten as $\displaystyle\nu\left(\prod_{k,l}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}S^{h^{\prime}}\right)=\nu\left(\prod_{v\geqslant 1}U_{v}\right).$ (3.1) By the symmetry of spins, we can replace one of the $U_{v}$ by the same expression on the last spin. 
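This last-spin replacement rests on the elementary identity $U_{v}=U^{-}_{v}+\frac{1}{N}\varepsilon(v)$, where $U^{-}_{v}$ and $\varepsilon(v)$ are defined next. A minimal numerical sanity check, assuming the standard conventions $R_{k,l}=\frac{1}{N}\sum_{i\leqslant N}\sigma_{i}^{k}\sigma_{i}^{l}$, $\varepsilon_{k,l}=\sigma_{N}^{k}\sigma_{N}^{l}$ and $R^{-}_{k,l}=\frac{1}{N}\sum_{i<N}\sigma_{i}^{k}\sigma_{i}^{l}$ (the toy sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 8, 4                                 # N spins, n replicas (toy sizes)
sigma = rng.choice([-1, 1], size=(n, N))    # sigma[k] = spin configuration of replica k+1

R = sigma @ sigma.T / N                     # overlaps R_{k,l} (self-overlaps on the diagonal)
eps = np.outer(sigma[:, -1], sigma[:, -1])  # eps_{k,l} = sigma_N^k sigma_N^l (last spin)
Rminus = R - eps / N                        # R^-_{k,l}: the part not involving the last spin

# U_v for a term of type T_1 built from replicas (1,2,3): U = R_{1,3} - R_{2,3}
U = R[0, 2] - R[1, 2]
eps_v = eps[0, 2] - eps[1, 2]
Uminus = Rminus[0, 2] - Rminus[1, 2]
assert abs(U - (Uminus + eps_v / N)) < 1e-12   # U_v = U^-_v + eps(v)/N
```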
To do this, let us define the following notation: for $v\in[H]$, let $\varepsilon(v):=\begin{cases}\varepsilon_{v_{1},v_{2}}-\varepsilon_{v_{1},v_{4}}-\varepsilon_{v_{3},v_{2}}+\varepsilon_{v_{3},v_{4}},&\text{ if the }v\text{-th term corresponds to }T_{k,l},\\\ \varepsilon_{v_{1},v_{3}}-\varepsilon_{v_{2},v_{3}},&\text{ if the }v\text{-th term corresponds to }T_{k},\\\ \varepsilon_{v_{1},v_{1}}-\varepsilon_{v_{2},v_{2}},&\text{ if the }v\text{-th term corresponds to }S_{l},\\\ \varepsilon_{v_{1},v_{2}}-p,&\text{ if the }v\text{-th term corresponds to }T,\\\ \varepsilon_{v_{1},v_{1}}-q,&\text{ if the }v\text{-th term corresponds to }S.\end{cases}$ Then define $U^{-}_{v}$ as the part of $U_{v}$ that does not depend on the last spin, $U^{-}_{v}:=U_{v}-\frac{1}{N}\varepsilon(v).$ Finally, following the cavity method, one should try to separate as many parts of the expression that depend on the last spin as possible. To this end, let us further decompose (3.1) as $\displaystyle\nu\left(\prod_{k,l}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}S^{h^{\prime}}\right)$ $\displaystyle=\nu\left(\prod_{v\geqslant 1}U_{v}\right)$ (3.2) $\displaystyle=\nu\left(\varepsilon(1)\prod_{v>1}U^{-}_{v}\right)+\frac{1}{N}\sum_{u\geqslant 2}\nu\left(\varepsilon(1)\varepsilon(u)\prod_{v\neq 1,u}U^{-}_{v}\right)+O(H+1).$ (3.3) ### 3.1 Induction on $T_{k,l}$ We first generalize the result in Lemma 2.13 to show that $T_{1,2}$ behaves like an independent Gaussian w.r.t. the other basis terms. ###### Theorem 3.2. For $\beta<\beta^{\prime}$, we have $\nu\left(\prod_{(k,l)}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}T^{h}\prod_{k}S_{k}^{h^{\prime}(k)}S^{h^{\prime}}\right)=\prod_{(k,l)}a(h(k,l))A_{2}^{h(k,l)}\nu\left(\prod_{k}T_{k}^{h(k)}T^{h}\prod_{k}S_{k}^{h^{\prime}(k)}S^{h^{\prime}}\right)+O(H+1),$ where $a(x)=\mathbb{E}[g^{x}]$ with $g\sim N(0,1)$. The proof of this theorem is the same as that of its analog in the SK model. We include the proof for completeness. ###### Proof. The proof goes by induction on $\sum_{k,l}h(k,l)$. 
WLOG, we assume that $h(1,2)\geqslant 1$ and reduce the moment of $T_{1,2}$. For simplicity, let us define a function $g_{1,2}(x)$ that tracks the moment of $T_{1,2}$, s.t. $g_{1,2}(x)=\begin{cases}T_{1,2}^{x}\prod_{(k,l)\neq(1,2)}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}S^{h^{\prime}},&\text{if }x\geqslant 0,\\\ 0,&\text{ otherwise}.\end{cases}$ Assume that $U_{1}$ corresponds to a copy of $T_{1,2}$. Using (3.3), we have $\displaystyle\nu(g_{1,2}(h(1,2)))=\nu\left(\varepsilon(1)\prod_{v>1}U^{-}_{v}\right)+\frac{1}{N}\sum_{u\geqslant 2}\nu\left(\varepsilon(1)\varepsilon(u)\prod_{v\neq 1,u}U^{-}_{v}\right)+O(H+1),$ (3.4) where $\varepsilon(1)=\varepsilon_{1_{1},1_{2}}-\varepsilon_{1_{1},1_{4}}-\varepsilon_{1_{3},1_{2}}+\varepsilon_{1_{3},1_{4}}.$ The first term is approximated by $\nu^{\prime}_{0}(\cdot)$ using (2.7). ###### Lemma 3.3. For $\beta<\beta^{\prime}$, suppose $h(1,2)\geqslant 1$ and $U_{1}$ corresponds to a copy of $T_{1,2}$; we have $\nu^{\prime}_{0}\left(\varepsilon(1)\prod_{v>1}U^{-}_{v}\right)=\beta^{2}A\cdot\nu(g_{1,2}(h(1,2)))+O(H+1).$ The proof of the above lemma is essentially the same as in the SK model; we include it in the appendix for completeness. For the second term, by (2.6), $\displaystyle\frac{1}{N}\sum_{u\geqslant 2}\nu\left(\varepsilon(1)\varepsilon(u)\prod_{v\neq 1,u}U^{-}_{v}\right)$ $\displaystyle=\frac{1}{N}\sum_{u\geqslant 2}\nu_{0}\left(\varepsilon(1)\varepsilon(u)\prod_{v\neq 1,u}U^{-}_{v}\right)+O(H+1),$ $\displaystyle=\frac{1}{N}\sum_{u\geqslant 2}\nu_{0}(\varepsilon(1)\varepsilon(u))\nu_{0}\left(\prod_{v\neq 1,u}U^{-}_{v}\right)+O(H+1).$ Note that, following a similar argument as in Lemma 1.4, $\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\neq 0$ only when $a,b$ appear in the expression of $\varepsilon(1)$. However, by construction, $1_{2},1_{3}$ do not appear in any term besides $U_{1}$, so the only pairs of replicas appearing in $U_{1}$ that can also appear in another term are those with $(u_{1},u_{2})\equiv(1,2)$. 
$\displaystyle\nu_{0}(\varepsilon(1)\varepsilon(u))=\begin{cases}A,&\text{ if }U_{u}\text{ corresponds to a copy of }T_{1,2},\\\ 0,&\text{otherwise}.\end{cases}$ (3.5) Summing up all non-zero terms and applying Corollary 2.10, we have $\frac{1}{N}\sum_{u\geqslant 2}\nu\left(\varepsilon(1)\varepsilon(u)\prod_{v\neq 1,u}U^{-}_{v}\right)=(h(1,2)-1)\cdot\frac{A}{N}\cdot\nu(g_{1,2}(h(1,2)-2))+O(H+1).$ Combining this with Lemma 3.3 and rearranging gives $\displaystyle\nu(g_{1,2}(h(1,2)))$ $\displaystyle=(h(1,2)-1)\cdot\frac{A}{N(1-\beta^{2}A)}\cdot\nu(g_{1,2}(h(1,2)-2))+O(H+1),$ (3.6) $\displaystyle=(h(1,2)-1)\cdot A_{2}^{2}\cdot\nu(g_{1,2}(h(1,2)-2))+O(H+1).$ (3.7) Now we are ready to perform the induction. The case $h(1,2)=1$ holds since $a(1)=0$. For higher moments, we apply the inductive hypothesis to $\nu(g_{1,2}(h(1,2)-2))$. Plugging this back into (3.6), and denoting by $h^{\prime}(k,l)$ the moment of $T_{k,l}$ in $g_{1,2}(h(1,2)-2)$, $\displaystyle\nu(g_{1,2}(h(1,2)))$ $\displaystyle=(h(1,2)-1)\cdot A_{2}^{2}\cdot\nu(g_{1,2}(h(1,2)-2))+O(H+1),$ $\displaystyle=(h(1,2)-1)\cdot A_{2}^{2}\cdot\prod_{(k,l)}a(h^{\prime}(k,l))A_{2}^{h^{\prime}(k,l)}\nu\left(\prod_{k}T_{k}^{h(k)}T^{h}\prod_{k}S_{k}^{h^{\prime}(k)}S^{h^{\prime}}\right)+O(H+1),$ $\displaystyle=\prod_{(k,l)}a(h(k,l))A_{2}^{h(k,l)}\nu\left(\prod_{k}T_{k}^{h(k)}T^{h}\prod_{k}S_{k}^{h^{\prime}(k)}S^{h^{\prime}}\right)+O(H+1),$ where the last equality follows from $a(h(1,2))=(h(1,2)-1)a(h(1,2)-2)$. ∎ ### 3.2 Recursive relation for correlated "basis" As we mentioned in Section 1.5, our goal is to obtain a recursive relation for the moments of the basis as in [Tal11, Chapter 1.10]. We need to do a little more work for $T_{1},S_{1}$ and $T,S$ because we expect them to be correlated. We describe the additional step here before delving into the moment computations. By the Gaussian integration by parts formula (see e.g. 
[Tal11] A.4), suppose $[g_{1},g_{2}]\sim\operatorname{\mathcal{N}}(0,\Sigma)$ and let $a,b\geqslant 2$ be integers; the two ways of expanding $\mathbb{E}[g_{1}^{a}g_{2}^{b}]$ are $\displaystyle\mathbb{E}[g_{1}^{a}g_{2}^{b}]$ $\displaystyle=(a-1)\Sigma_{1,1}\mathbb{E}[g_{1}^{a-2}g_{2}^{b}]+b\Sigma_{1,2}\mathbb{E}[g_{1}^{a-1}g_{2}^{b-1}],$ (3.8) $\displaystyle=a\Sigma_{1,2}\mathbb{E}[g_{1}^{a-1}g_{2}^{b-1}]+(b-1)\Sigma_{2,2}\mathbb{E}[g_{1}^{a}g_{2}^{b-2}].$ (3.9) As we saw in Section 2.2, the cavity method almost gives the above type of relations. The cavity method allows us to decouple the last spin at time $0$; thus, if we use the symmetry of spins to rewrite one of the terms using only the last spin, as in e.g. (2.15), we almost reduce the moment by $1$. The problem is that even though the last spin is independent of the rest at time $0$, our function still depends on the corresponding replicas. By Lemma 2.4, those replicas are treated the same as every other replica and can introduce terms that increase the moment on the RHS of the recursive relation. To get some intuition, let us consider the case of $T_{1}^{3}$: recall that we can rewrite $\nu(T_{1}^{3})$ by applying the symmetry of spins to one of the copies of $T_{1}$, $\nu(T_{1}^{3})=\nu_{1}((\varepsilon_{1,3}-\varepsilon_{2,3})T_{1}^{2}).$ Because $T_{1}^{2}$ is an order $2$ function, we need to invoke (2.7) and compute $\nu^{\prime}_{0}((\varepsilon_{1,3}-\varepsilon_{2,3})T_{1}^{2})$ to get a good enough approximation. By Lemma 2.4, even though $\sigma_{2}$ is only used by the first term, i.e. $(\varepsilon_{1,3}-\varepsilon_{2,3})$, we still need to consider its contribution in $\nu^{\prime}_{0}(T_{1}^{3})$. Gathering the terms corresponding to $(a,b)\in\\{(1,1),(2,2)\\}$ gives $\nu_{0}((R_{1,1}-R_{2,2})T_{1})\equiv\nu_{0}(T_{1}S_{1}).$ Even if $S_{1}$ does not appear in the initial expression, taking the derivative at time $0$ introduces a term where the moment of $S_{1}$ is increased by $1$. 
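The two expansions (3.8)–(3.9) can be checked numerically: implement each as a recursion (with the convention that moments with a negative exponent vanish and $\mathbb{E}[1]=1$) and compare the results against each other and against Isserlis-type closed forms. The covariance entries below are arbitrary sample values:

```python
from functools import lru_cache

S11, S12, S22 = 2.0, 1.0, 3.0   # arbitrary sample covariance entries (Sigma must be PSD)

@lru_cache(maxsize=None)
def m38(a: int, b: int) -> float:
    """E[g1^a g2^b] computed by reducing the power of g1, as in (3.8)."""
    if a < 0 or b < 0:
        return 0.0
    if a == 0 and b == 0:
        return 1.0
    if a >= 1:
        return (a - 1) * S11 * m38(a - 2, b) + b * S12 * m38(a - 1, b - 1)
    # a == 0: fall back to reducing the power of g2, i.e. (3.9) with a = 0
    return (b - 1) * S22 * m38(a, b - 2)

@lru_cache(maxsize=None)
def m39(a: int, b: int) -> float:
    """E[g1^a g2^b] computed by reducing the power of g2, as in (3.9)."""
    if a < 0 or b < 0:
        return 0.0
    if a == 0 and b == 0:
        return 1.0
    if b >= 1:
        return a * S12 * m39(a - 1, b - 1) + (b - 1) * S22 * m39(a, b - 2)
    return (a - 1) * S11 * m39(a - 2, b)

# The two expansions agree, and match Isserlis' theorem on low moments:
assert all(abs(m38(a, b) - m39(a, b)) < 1e-9 for a in range(6) for b in range(6))
assert abs(m38(2, 2) - (S11 * S22 + 2 * S12**2)) < 1e-9   # E[g1^2 g2^2]
assert abs(m38(3, 1) - 3 * S11 * S12) < 1e-9              # E[g1^3 g2]
```

Taking $S12 = 0$ recovers the univariate recursion $a(h)=(h-1)a(h-2)$ behind (3.6)–(3.7).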
Still, if we restrict our attention to some fixed replica $k$, we can expand the mixed moments of $(T_{k},S_{k})$ or $(T,S)$ in two different ways, similarly to (3.8). Including moments of terms associated with the other replicas does not change the picture here. Intuitively, this follows from the pair $(T_{k},S_{k})$ (and $(T,S)$) being independent of all other basis terms that do not depend on replica $k$, as indicated in Lemma 1.4. We prove this formally in Lemmas 3.6 and 3.14 below. To avoid repetition, let us first characterize the condition under which the relations given by the cavity method imply the desired recursive relation for proving the CLT. ###### Lemma 3.4. Consider two sets of constants $\alpha_{2},\alpha_{1},\alpha_{0}$ and $\beta_{2},\beta_{1},\beta_{0}$, and suppose there exist $H\geqslant 0$ and $C\in\mathbb{R}$. Suppose a function $f:\mathbb{Z}\times\mathbb{Z}\to\mathbb{R}$ with $f(h,h^{\prime})=0$ if $h<0$ or $h^{\prime}<0$ and $f(0,0)=C$ satisfies the following relations: for $h,h^{\prime}>0$ and $(h,h^{\prime})\neq(0,0)$, $\displaystyle f(h,h^{\prime})$ $\displaystyle=\alpha_{2}(h-1)f(h-2,h^{\prime})+\alpha_{1}h^{\prime}f(h-1,h^{\prime}-1)+\alpha_{0}f(h-1,h^{\prime}+1)+O(h+h^{\prime}+H+1),$ (3.10) $\displaystyle=\beta_{2}(h^{\prime}-1)f(h,h^{\prime}-2)+\beta_{1}hf(h-1,h^{\prime}-1)+\beta_{0}f(h+1,h^{\prime}-1)+O(h+h^{\prime}+H+1).$ (3.11) If the sets of constants satisfy $\displaystyle\alpha_{1}+\alpha_{0}\beta_{2}=\beta_{1}+\beta_{0}\alpha_{2}:=\gamma,$ (3.12) then we can find a set of constants $C(2,0),C(0,2),C(1,1)$ s.t. 
$f$ satisfies the following recursive relations $\displaystyle f(h,h^{\prime})=$ $\displaystyle(h-1)C(2,0)f(h-2,h^{\prime})+h^{\prime}C(1,1)f(h-1,h^{\prime}-1)+O(h+h^{\prime}+H+1),$ (3.13) $\displaystyle=$ $\displaystyle(h^{\prime}-1)C(0,2)f(h,h^{\prime}-2)+hC(1,1)f(h-1,h^{\prime}-1)+O(h+h^{\prime}+H+1),$ (3.14) with $C(2,0)=\frac{\alpha_{2}+\alpha_{0}\beta_{1}}{1-\alpha_{0}\beta_{0}},\quad C(0,2)=\frac{\beta_{2}+\beta_{0}\alpha_{1}}{1-\alpha_{0}\beta_{0}},\quad C(1,1)=\frac{\gamma}{1-\alpha_{0}\beta_{0}}.$ ###### Proof. The idea is to use (3.11) and (3.10) to rewrite $f(h-1,h^{\prime}+1)$. One thing we need to check is that the resulting constants in front of $f(h-1,h^{\prime}+1)$ are the same in both equations. ##### Base case: Note that $f(1,0)=f(0,1)=O(h+h^{\prime}+H+1)$. We first handle the case $(h,h^{\prime})\in\\{(2,0),(0,2),(1,1)\\}$. Plugging the corresponding values of $h,h^{\prime}$ into (3.10) and (3.11) gives $f(2,0)=\alpha_{2}f(0,0)+\alpha_{0}f(1,1)+O(3+H),$ $f(0,2)=\beta_{2}f(0,0)+\beta_{0}f(1,1)+O(3+H),$ $f(1,1)=\alpha_{1}f(0,0)+\alpha_{0}f(0,2)+O(3+H)=\beta_{1}f(0,0)+\beta_{0}f(2,0)+O(3+H).$ Solving the above system of linear equations gives $\displaystyle(1-\alpha_{0}\beta_{0})f(2,0)=(\alpha_{2}+\alpha_{0}\beta_{1})f(0,0)+O(3+H),$ $\displaystyle(1-\alpha_{0}\beta_{0})f(0,2)=(\beta_{2}+\beta_{0}\alpha_{1})f(0,0)+O(3+H).$ By (3.12) and the expressions for $f(2,0)$, $f(0,2)$, we have $\displaystyle f(1,1)$ $\displaystyle=\left(\alpha_{1}+\alpha_{0}\frac{\beta_{2}+\beta_{0}\alpha_{1}}{1-\alpha_{0}\beta_{0}}\right)f(0,0)+O(3+H)=\left(\frac{\alpha_{1}+\alpha_{0}\beta_{2}}{1-\alpha_{0}\beta_{0}}\right)f(0,0)+O(3+H),$ $\displaystyle=\left(\frac{\beta_{1}+\beta_{0}\alpha_{2}}{1-\alpha_{0}\beta_{0}}\right)f(0,0)+O(3+H)=\left(\beta_{1}+\beta_{0}\frac{\alpha_{2}+\alpha_{0}\beta_{1}}{1-\alpha_{0}\beta_{0}}\right)f(0,0)+O(3+H).$ Rearranging the above equations gives $f(2,0)=C(2,0)f(0,0)+O(3+H),\ \ f(0,2)=C(0,2)f(0,0)+O(3+H),\ \text{and}\ f(1,1)=C(1,1)f(0,0)+O(3+H).$ For the case $h^{\prime}=0$ and $h\geqslant 0$, equation (3.10) becomes $\displaystyle f(h,0)$ $\displaystyle=\alpha_{2}(h-1)f(h-2,0)+\alpha_{0}f(h-1,1)+O(h+1+H),$ $\displaystyle=\alpha_{2}(h-1)f(h-2,0)+\alpha_{0}\left[\beta_{1}(h-1)f(h-2,0)+\beta_{0}f(h,0)\right]+O(h+1+H).$ Rearranging and plugging in the value of $C(2,0)$ gives $f(h,0)=(h-1)C(2,0)f(h-2,0)+O(h+1+H).$ For $h^{\prime}\geqslant 0$ and $h=0$, the same argument applies starting from (3.11) with $h=0$: $f(0,h^{\prime})=(h^{\prime}-1)C(0,2)f(0,h^{\prime}-2)+O(h^{\prime}+1+H).$ ##### General case: Assume $h,h^{\prime}\geqslant 1$. Starting from (3.10) and expanding $f(h-1,h^{\prime}+1)$ using (3.11) gives $\displaystyle f(h,h^{\prime})=$ $\displaystyle\alpha_{2}(h-1)f(h-2,h^{\prime})+\alpha_{1}h^{\prime}f(h-1,h^{\prime}-1)+\alpha_{0}f(h-1,h^{\prime}+1)+O(h+h^{\prime}+H+1),$ $\displaystyle=$ $\displaystyle\alpha_{2}(h-1)f(h-2,h^{\prime})+\alpha_{1}h^{\prime}f(h-1,h^{\prime}-1)$ $\displaystyle+\alpha_{0}\left(\beta_{2}h^{\prime}f(h-1,h^{\prime}-1)+\beta_{1}(h-1)f(h-2,h^{\prime})+\beta_{0}f(h,h^{\prime})\right)+O(h+h^{\prime}+H+1).$ Rearranging, we have $f(h,h^{\prime})=(h-1)C(2,0)f(h-2,h^{\prime})+h^{\prime}C(1,1)f(h-1,h^{\prime}-1)+O(h+h^{\prime}+H+1).$ Similarly, starting from (3.11) instead and repeating the above argument gives $f(h,h^{\prime})=(h^{\prime}-1)C(0,2)f(h,h^{\prime}-2)+hC(1,1)f(h-1,h^{\prime}-1)+O(h+h^{\prime}+H+1).$ ∎ #### 3.2.1 Induction on $T_{k}$ and $S_{k}$ In this section, we examine the mixed moments of $T_{k}$ and $S_{k}$. Assume that there are $n$ replicas in total and that the moments of $T_{k,l}$ satisfy $h(k,l)=0$ for all $1\leqslant k<l\leqslant n$. Denote the total moments of $T_{k}$ and $S_{l}$ by $h_{T}=\sum_{k}h(k),\quad h_{S}=\sum_{l}h^{\prime}(l),\quad H_{1}=h_{T}+h_{S}+h+h^{\prime}.$ ###### Theorem 3.5. 
Let $\\{(g_{T_{k}},g_{S_{k}}):k\in[n]\\}$ be i.i.d. Gaussian vectors with mean $[0,0]$ and covariance matrix $\Sigma_{1}:=\begin{bmatrix}A_{1}^{2}&C_{1}^{2}\\\ C_{1}^{2}&B_{1}^{2}\end{bmatrix}.$ We have $\nu\left(\prod_{k}T_{k}^{h(k)}\prod_{l}S_{l}^{h^{\prime}(l)}T^{h}S^{h^{\prime}}\right)=\left(\prod_{k}\mathbb{E}[g_{T_{k}}^{h(k)}g_{S_{k}}^{h^{\prime}(k)}]\right)\nu(T^{h}S^{h^{\prime}})+O(H_{1}+1).$ Following the symmetry of replicas and the idea from Lemma 3.4, we will try to expand higher-order mixed moments by reducing the moment of $T_{k}$ or $S_{k}$ for some fixed replica $k$. WLOG, suppose $h(1)+h^{\prime}(1)>0$. Let $g_{1}:\mathbb{Z}^{2}\to\mathbb{R}$ be the function that tracks the moments of $T_{1}$ and $S_{1}$ only, $\displaystyle g_{1}(x,y):=\begin{cases}T_{1}^{x}S_{1}^{y}\prod_{k>1}T_{k}^{h(k)}\prod_{l>1}S_{l}^{h^{\prime}(l)}T^{h}S^{h^{\prime}},&\text{ if }x,y\geqslant 0,\\\ 0,&\text{ otherwise}.\end{cases}$ (3.15) The lemma below is a generalization of Lemmas 2.17 and 2.20. ###### Lemma 3.6. For $h(1)>1$, $h^{\prime}(1)\geqslant 0$, $\displaystyle M_{1}\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=\frac{\beta^{2}}{2}H\nu(g_{1}(h(1)-1,h^{\prime}(1)+1))$ (3.16) $\displaystyle+(h(1)-1)\left(\beta^{2}GA_{2}^{2}+\frac{G}{N}\right)\nu(g_{1}(h(1)-2,h^{\prime}(1)))$ (3.17) $\displaystyle+h^{\prime}(1)\frac{H}{N}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ (3.18) $\displaystyle+O(H_{1}+1).$ (3.19) For $h(1)\geqslant 0$, $h^{\prime}(1)>1$, $\displaystyle M_{2}\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=-2\beta^{2}E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))$ (3.20) $\displaystyle+h(1)\left(\beta^{2}EA_{2}^{2}+\frac{E}{N}\right)\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ (3.21) $\displaystyle+(h^{\prime}(1)-1)\frac{D}{N}\nu(g_{1}(h(1),h^{\prime}(1)-2))$ (3.22) $\displaystyle+O(H_{1}+1).$ (3.23) ###### Remark 3.7. Observe that (3.16) is again a generalization of the recursive relation in the SK model; see (1.320) and (1.323) in [Tal11]. 
To compare our result to the SK model, recall that there $F\equiv b(2)-b(1)=1-q$, $G\equiv b(1)-b(0)=q-\hat{q}$ and $H=0$, so (3.16) becomes $\displaystyle(1-\beta^{2}(1-4q+3\hat{q}))\nu(g_{1}(h(1),0))=$ $\displaystyle(h(1)-1)(q-\hat{q})(\beta^{2}A^{2}_{2}+\frac{1}{N})\nu(g_{1}(h(1)-2,0))+O(H+1).$ The proof of Lemma 3.6 can be found in the following section. Let us first see how one can deduce Theorem 3.5 from Lemma 3.6. Following the intuition from the beginning of this section, we apply Lemma 3.4 to get recursive relations that are of the same form as Gaussian moments. ###### Proof of Theorem 3.5. We apply Lemma 3.4 to the recursive relations in Lemma 3.6 with the following constants: $\alpha_{2}=\frac{G}{M_{1}}\left(\beta^{2}A_{2}^{2}+\frac{1}{N}\right),\quad\alpha_{1}=\frac{H}{NM_{1}},\quad\text{and}\ \ \alpha_{0}=\frac{\beta^{2}}{2M_{1}}H,$ $\beta_{2}=\frac{D}{M_{2}N},\quad\beta_{1}=\frac{E}{M_{2}}\left(\beta^{2}A_{2}^{2}+\frac{1}{N}\right),\quad\text{and}\ \ \beta_{0}=-\frac{2\beta^{2}E}{M_{2}}.$ To apply Lemma 3.4, we need to check the consistency condition, and then compute $C(2,0),C(0,2)$ and $C(1,1)$ to get the final result. The consistency condition is verified by Lemma 2.21. If $g_{1}(x,y)=T_{1}^{x}S_{1}^{y}$, then we recover the results from Section 2.2. We include the computation for the general case for completeness. 
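Before turning to the computation, here is a quick exact-arithmetic sanity check of the consistency condition (3.12) for these constants. The snippet assumes the relations $M_{1}=1-\beta^{2}(F-3G)$, $M_{2}=1-\frac{\beta^{2}}{2}D$, $M_{3}=M_{1}-2\beta^{2}G$ and $\beta^{2}A_{2}^{2}+\frac{1}{N}=\frac{1}{NM_{3}}$ (as suggested by Claim 2.14 and the surrounding computations), together with $E=H$; the specific numeric values are arbitrary and hypothetical:

```python
from fractions import Fraction as Fr

# Hypothetical sample values; the only structural requirement used is E = H.
b2 = Fr(1, 4)                       # beta^2
D, F, G, N = Fr(1, 3), Fr(1, 4), Fr(1, 7), 10
E = H = Fr(1, 5)

M1 = 1 - b2 * (F - 3 * G)           # assumed: M_1 = 1 - beta^2 (F - 3G)
M2 = 1 - b2 * D / 2                 # assumed: M_2 = 1 - (beta^2/2) D
M3 = M1 - 2 * b2 * G                # assumed: M_3 = M_1 - 2 beta^2 G
A2sq = (1 / (N * M3) - Fr(1, N)) / b2   # so that beta^2 A_2^2 + 1/N = 1/(N M_3)

alpha2 = (G / M1) * (b2 * A2sq + Fr(1, N))
alpha1 = H / (N * M1)
alpha0 = b2 * H / (2 * M1)
beta2_ = D / (M2 * N)
beta1_ = (E / M2) * (b2 * A2sq + Fr(1, N))
beta0_ = -2 * b2 * E / M2

# Consistency condition (3.12): both sides reduce to H / (N M_1 M_2).
assert alpha1 + alpha0 * beta2_ == beta1_ + beta0_ * alpha2 == H / (N * M1 * M2)
```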
##### To check the consistency condition Note that by Claim 2.14, we have $\alpha_{2}=\frac{G}{M_{1}}\frac{1}{NM_{3}},\quad\beta_{1}=\frac{E}{M_{2}}\frac{1}{NM_{3}}.$ To verify (3.12) holds for the current set of constants, check that $\displaystyle\alpha_{1}+\alpha_{0}\beta_{2}$ $\displaystyle=\frac{H}{NM_{1}}(1+\frac{\beta^{2}}{2}\frac{D}{M_{2}})=\frac{H}{NM_{1}M_{2}},$ $\displaystyle\beta_{1}+\beta_{0}\alpha_{2}$ $\displaystyle=\frac{E}{NM_{2}M_{3}}(1-\frac{2\beta^{4}G}{M_{1}})$ $\displaystyle=\frac{E}{NM_{2}M_{3}}\frac{M_{1}-2\beta^{2}G}{M_{1}},$ $\displaystyle=\frac{E}{NM_{2}M_{3}}\frac{M_{3}}{M_{1}}=\alpha_{1}+\alpha_{0}\beta_{2}.$ The only things left are to compute $C(2,0)$, $C(0,2)$ and $C(1,1)$. First check that the common denominator for $C(2,0)$, $C(0,2)$ and $C(1,1)$ is $1-\alpha_{0}\beta_{0}=1+\frac{\beta^{4}E^{2}}{M_{1}M_{2}}=\frac{M}{M_{1}M_{2}}.$ The three constants are then given by $\displaystyle C(2,0)$ $\displaystyle=\frac{\alpha_{2}+\alpha_{0}\beta_{1}}{1-\alpha_{0}\beta_{0}}=\frac{M_{1}M_{2}}{M}\left(\frac{G}{M_{1}}\frac{1}{NM_{3}}+\frac{\beta^{2}H}{2M_{1}}\frac{E}{M_{2}}\frac{1}{NM_{3}}\right),$ $\displaystyle=\frac{M_{1}M_{2}}{M}\cdot\frac{GM_{2}+\frac{\beta^{2}}{2}E^{2}}{NM_{1}M_{2}M_{3}}=\frac{GM_{2}+\frac{\beta^{2}}{2}E^{2}}{MN},$ $\displaystyle=A_{1}^{2}.$ $\displaystyle C(0,2)$ $\displaystyle=\frac{\beta_{2}+\beta_{0}\alpha_{1}}{1-\alpha_{0}\beta_{0}}=\frac{M_{1}M_{2}}{M}\left(\frac{D}{M_{2}N}-\frac{2\beta^{2}E}{M_{2}}\frac{H}{NM_{1}}\right),$ $\displaystyle=\frac{M_{1}M_{2}}{M}\frac{DM_{1}-2\beta^{2}E^{2}}{M_{1}M_{2}N}=\frac{DM_{1}-2\beta^{2}E^{2}}{NM},$ $\displaystyle=B_{1}^{2}.$ $\displaystyle C(1,1)=\frac{\gamma}{1-\alpha_{0}\beta_{0}}=\frac{M_{1}M_{2}}{M}\cdot\frac{H}{NM_{1}M_{2}}=\frac{H}{MN}=C_{1}^{2}.$ By Lemma 3.4, we have $\displaystyle\nu(g_{1}(h(1),h^{\prime}(1)))=$ $\displaystyle(h(1)-1)A_{1}^{2}\nu(g_{1}(h(1)-2,h^{\prime}(1)))+h^{\prime}(1)C_{1}^{2}\nu(g(h(1)-1,h^{\prime}(1)-1))+O(H_{1}+1),$ (3.24) $\displaystyle=$ 
$\displaystyle(h^{\prime}(1)-1)B_{1}^{2}\nu(g_{1}(h(1),h^{\prime}(1)-2))+h(1)C_{1}^{2}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))+O(H_{1}+1).$ (3.25) The proof is then completed by induction on $H_{1}$. The statement holds if $H_{1}=1$, since $\mathbb{E}[g_{T_{k}}]=\mathbb{E}[g_{S_{k}}]=0$. For $H_{1}\geqslant 2$: suppose $h(1)+h^{\prime}(1)\geqslant 2$. The terms on the right-hand side of (3.24) have total moment $H_{1}-2$. Applying the inductive hypothesis to the right-hand side gives $\displaystyle\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=\left[(h(1)-1)A_{1}^{2}\mathbb{E}[g_{T_{1}}^{h(1)-2}g_{S_{1}}^{h^{\prime}(1)}]+h^{\prime}(1)C_{1}^{2}\mathbb{E}[g_{T_{1}}^{h(1)-1}g_{S_{1}}^{h^{\prime}(1)-1}]\right]\mathcal{C}+O(H_{1}+1),$ where $\mathcal{C}=\left(\prod_{k>1}\mathbb{E}[g_{T_{k}}^{h(k)}g_{S_{k}}^{h^{\prime}(k)}]\right)\nu(T^{h}S^{h^{\prime}})$. Similarly, from the second recursive relation (3.25) we get $\displaystyle\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=\left[(h^{\prime}(1)-1)B_{1}^{2}\mathbb{E}[g_{T_{1}}^{h(1)}g_{S_{1}}^{h^{\prime}(1)-2}]+h(1)C_{1}^{2}\mathbb{E}[g_{T_{1}}^{h(1)-1}g_{S_{1}}^{h^{\prime}(1)-1}]\right]\mathcal{C}+O(H_{1}+1).$ Note that the mixed moments of $g_{T_{1}},g_{S_{1}}$ satisfy (3.8) with $a=h(1)$ and $b=h^{\prime}(1)$. This completes the induction. ∎ ###### Remark 3.8. Note that if $g_{1}(x,y)=T_{1}^{x}S_{1}^{y}$, then we have $\nu(g_{1}(2,0))=\nu(T_{1}^{2})$, $\nu(g_{1}(0,2))=\nu(S_{1}^{2})$ and $\nu(g_{1}(1,1))=\nu(T_{1}S_{1})$. In this case, $\nu(g_{1}(0,0))=1$. Lemma 3.4 says that the same relation holds for a more general initial expression $g_{1}(0,0)$. In the proof above, we recovered from (3.16) and (3.20) the same set of constants as in the variance calculation in Section 2.2. 
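The computations above are pure rational-function algebra in the model constants, so they can be spot-checked symbolically. The sketch below is not part of the argument; it takes as assumptions the relations stated in the surrounding text, namely $M_{2}=1-\frac{\beta^{2}}{2}D$ (the prefactor of (3.20)), $M_{3}=M_{1}-2\beta^{2}G$ and the closed forms of $A_{1}^{2},B_{1}^{2}$ from Claim 3.16, $M=M_{1}M_{2}+\beta^{4}E^{2}$, $\beta^{2}A_{2}^{2}+\frac{1}{N}=\frac{1}{NM_{3}}$ from Claim 2.14, and $H=E$.

```python
import sympy as sp

# Model constants as free symbols; M2, M3, M are expressed through the
# relations stated in the text (assumed here, not derived).
beta, N, D, E, G, M1 = sp.symbols('beta N D E G M1', positive=True)
M2 = 1 - beta**2*D/2        # prefactor of the S_1-recursion (3.20)
M3 = M1 - 2*beta**2*G       # Claim 3.16
M = M1*M2 + beta**4*E**2    # from 1 - alpha0*beta0 = M/(M1*M2)
H = E                       # used implicitly in the verification above

# Coefficients of the two recursions, using beta^2*A_2^2 + 1/N = 1/(N*M3)
alpha2, alpha1, alpha0 = G/(M1*N*M3), H/(N*M1), beta**2*H/(2*M1)
beta2, beta1, beta0 = D/(M2*N), E/(M2*N*M3), -2*beta**2*E/M2

# Consistency condition (3.12)
assert sp.simplify(alpha1 + alpha0*beta2 - (beta1 + beta0*alpha2)) == 0

# Limiting constants from Lemma 3.4 against the closed forms in the text
C20 = sp.cancel((alpha2 + alpha0*beta1)/(1 - alpha0*beta0))
C02 = sp.cancel((beta2 + beta0*alpha1)/(1 - alpha0*beta0))
C11 = sp.cancel((alpha1 + alpha0*beta2)/(1 - alpha0*beta0))
A1sq = (G*M2 + beta**2*E**2/2)/(M*N*M3)   # A_1^2
B1sq = (D*M1 - 2*beta**2*E**2)/(N*M)      # B_1^2
C1sq = E/(M*N)                            # C_1^2 (= H/(MN) since H = E)
assert sp.simplify(C20 - A1sq) == 0
assert sp.simplify(C02 - B1sq) == 0
assert sp.simplify(C11 - C1sq) == 0
print("consistency condition and C(2,0), C(0,2), C(1,1) verified")
```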
#### 3.2.2 Proof of Lemma 3.6

Recall the definition of $U_{v},\varepsilon(v),U^{-}_{v}$ from the beginning of this section, and that we denote by $V_{v}=\{v_{1},v_{2},\cdots\}$ the set of replicas that appear in the term $U_{v}$. The first step is to decompose $g_{1}(h(1),h^{\prime}(1))$ using (3.2): $\displaystyle\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=\nu\Big(\prod_{1\leqslant v\leqslant H_{1}}U_{v}\Big),$ (3.26) $\displaystyle=\nu\Big(\varepsilon(1)\prod_{1<v\leqslant H_{1}}U^{-}_{v}\Big)+\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big).$ (3.27) The idea is to apply the cavity method when $U_{1}$ corresponds to $T_{1}$ and to $S_{1}$, respectively. ##### To reduce the moment of $T_{1}$ Suppose $U_{1}$ corresponds to $T_{1}$; then $\varepsilon(1)=\varepsilon_{1_{1},1_{3}}-\varepsilon_{1_{2},1_{3}}.$ As usual, the first term in (3.27) is an order $H_{1}-1$ function and thus needs to be approximated using (2.7), as shown in Lemma 3.9. The proof is deferred to the appendix. ###### Lemma 3.9 (First order derivative structure for $T_{k}$). For $h(1)\geqslant 1$ and $h^{\prime}(1)\geqslant 0$, suppose $U_{1}$ corresponds to a copy of $T_{1}$. Then $\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=$ $\displaystyle\beta^{2}(F-3G)\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle+\frac{\beta^{2}}{2}H\nu(g_{1}(h(1)-1,h^{\prime}(1)+1))$ $\displaystyle+\beta^{2}(h(1)-1)GA_{2}^{2}\nu(g_{1}(h(1)-2,h^{\prime}(1)))$ $\displaystyle+O(H_{1}+1).$ The second term is approximated using (2.6): $\displaystyle\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big)$ $\displaystyle=\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu_{0}(\varepsilon(1)\varepsilon(v))\nu_{0}\Big(\prod_{u\neq 1,v}U^{-}_{u}\Big)+O(H_{1}+1).$ By Lemma 1.4, $\nu_{0}(\varepsilon(1)\varepsilon(v))=0$ if $U_{v}$ does not correspond to $T_{1}$ or $S_{1}$. 
Moreover, $\nu_{0}(\varepsilon(1)\varepsilon(v))=\begin{cases}G,&\text{ if }U_{v}\text{ corresponds to }T_{1},\\ H,&\text{ if }U_{v}\text{ corresponds to }S_{1}.\end{cases}$ There are $(h(1)-1)$ terms $T_{1}$ and $h^{\prime}(1)$ terms $S_{1}$ in $\prod_{v>1}U^{-}_{v}$. Summing up all terms of the same type and applying Corollary 2.10 to all terms, $\displaystyle\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big)=$ $\displaystyle(h(1)-1)\frac{G}{N}\nu(g_{1}(h(1)-2,h^{\prime}(1)))$ $\displaystyle+h^{\prime}(1)\frac{H}{N}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))+O(H_{1}+1).$ Combining the results for the first and second terms of (3.27) and rearranging gives (3.16): $\displaystyle(1-\beta^{2}(F-3G))\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=\frac{\beta^{2}}{2}H\nu(g_{1}(h(1)-1,h^{\prime}(1)+1))$ $\displaystyle+(h(1)-1)\left(\beta^{2}GA_{2}^{2}+\frac{G}{N}\right)\nu(g_{1}(h(1)-2,h^{\prime}(1)))$ $\displaystyle+h^{\prime}(1)\frac{H}{N}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ $\displaystyle+O(H_{1}+1).$ ##### To reduce the moment of $S_{1}$ Suppose, in this case, that $U_{1}$ corresponds to an $S_{1}$ term; then $\varepsilon(1)=\varepsilon_{1_{1},1_{1}}-\varepsilon_{1_{2},1_{2}}.$ The first term in (3.27) is characterized by the following lemma. ###### Lemma 3.10 (First order derivative structure for $S_{k}$). 
If $h^{\prime}(1)\geqslant 1$ and $h(1)\geqslant 0$, suppose $U_{1}$ corresponds to a copy of $S_{1}$. Then $\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=$ $\displaystyle\frac{\beta^{2}}{2}D\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle+\beta^{2}h(1)EA_{2}^{2}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ $\displaystyle-2\beta^{2}E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))$ $\displaystyle+O(H_{1}+1).$ For the second term, again, we have $\displaystyle\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big)$ $\displaystyle=\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu_{0}(\varepsilon(1)\varepsilon(v))\nu_{0}\Big(\prod_{u\neq 1,v}U^{-}_{u}\Big)+O(H_{1}+1).$ Check that $\nu_{0}(\varepsilon(1)\varepsilon(v))=\begin{cases}D,&\text{ if }U_{v}\text{ corresponds to a copy of }S_{1},\\ E,&\text{ if }U_{v}\text{ corresponds to a copy of }T_{1},\\ 0,&\text{ otherwise. }\end{cases}$ Plugging in the above gives $\displaystyle\frac{1}{N}\sum_{1<v\leqslant H_{1}}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big)$ $\displaystyle=(h^{\prime}(1)-1)\frac{D}{N}\nu(g_{1}(h(1),h^{\prime}(1)-2))$ $\displaystyle+h(1)\frac{E}{N}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))+O(H_{1}+1).$ Combining the estimates of the two terms gives (3.20): $\displaystyle(1-\frac{\beta^{2}}{2}D)\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle=-2\beta^{2}E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))$ $\displaystyle+h(1)\left(\beta^{2}EA_{2}^{2}+\frac{E}{N}\right)\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ $\displaystyle+(h^{\prime}(1)-1)\frac{D}{N}\nu(g_{1}(h(1),h^{\prime}(1)-2))$ $\displaystyle+O(H_{1}+1).$ #### 3.2.3 Induction on $T$ and $S$ In this section, we consider functions of the form $T^{h}S^{h^{\prime}}$ for $h,h^{\prime}\in\mathbb{Z}_{\geqslant 0}$. As in previous sections, the idea is to write $T^{h}S^{h^{\prime}}$ in terms of $T^{h-1}S^{h^{\prime}-1}$ and $\{T^{h-2}S^{h^{\prime}},T^{h}S^{h^{\prime}-2}\}$. 
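The recursion we are after is exactly the one satisfied by the mixed moments of a centered bivariate Gaussian: Gaussian integration by parts gives $\mathbb{E}[X^{h}Y^{h^{\prime}}]=(h-1)\operatorname{Var}(X)\,\mathbb{E}[X^{h-2}Y^{h^{\prime}}]+h^{\prime}\operatorname{Cov}(X,Y)\,\mathbb{E}[X^{h-1}Y^{h^{\prime}-1}]$. A small symbolic sketch of this standard fact (an illustration only, not part of the proof; the generic parameters $A,B,C2$ stand for the variance and covariance entries):

```python
import sympy as sp
from sympy.stats import Normal, E as Ev

# Centered bivariate Gaussian (X, Y) with Var(X)=A^2, Var(Y)=B^2,
# Cov(X,Y)=C2, built from independent standard normals Z1, Z2.
A, B, C2 = sp.symbols('A B C2', positive=True)
Z1, Z2 = Normal('Z1', 0, 1), Normal('Z2', 0, 1)
X = A*Z1
Y = (C2/A)*Z1 + sp.sqrt(B**2 - C2**2/A**2)*Z2

def m(h, hp):
    """Mixed moment E[X^h Y^hp]; zero for negative indices."""
    if h < 0 or hp < 0:
        return sp.Integer(0)
    return sp.expand(Ev(sp.expand(X**h * Y**hp)))

# Check E[X^h Y^hp] = (h-1)*A^2*E[X^(h-2) Y^hp] + hp*C2*E[X^(h-1) Y^(hp-1)]
for h in range(1, 4):
    for hp in range(0, 3):
        rhs = (h - 1)*A**2*m(h - 2, hp) + hp*C2*m(h - 1, hp - 1)
        assert sp.simplify(m(h, hp) - rhs) == 0
print("Gaussian moment recursion verified for h <= 3, hp <= 2")
```

Lemma 3.4 asserts that, up to the $O(h+h^{\prime}+1)$ error, $\nu(g(h,h^{\prime}))$ obeys the same recursion, which is why its solution is a mixed Gaussian moment.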
To this end, let us define $g(h,h^{\prime})=\begin{cases}T^{h}S^{h^{\prime}},&\text{ if }h,h^{\prime}\geqslant 0,\\ 0,&\text{ otherwise. }\end{cases}$ ###### Theorem 3.11. Let $(g_{T},g_{S})$ be a Gaussian vector with mean $[0,0]$ and covariance matrix $\begin{bmatrix}A_{0}^{2}&C_{0}^{2}\\ C_{0}^{2}&B_{0}^{2}\end{bmatrix},$ where $A_{0}^{2},B_{0}^{2},C_{0}^{2}$ are given in Theorem 3.17. Then we have $\nu(g(h,h^{\prime}))=\mathbb{E}[g_{T}^{h}g_{S}^{h^{\prime}}]+O(h+h^{\prime}+1).$ The proof of Theorem 3.11 uses the same idea as that of Theorem 3.5: we first use the cavity method to obtain a recursive relation, then apply Lemma 3.4 to see that the moments of $S,T$ are the moments of a correlated Gaussian vector. The only difference lies in the structure of the overlaps in the cavity computation. Because of this difference, we first introduce a more refined set of constants that will appear in the cavity computation, and thus also in the recursive relations of the moments. ##### Constants To motivate the set of constants needed to compute the moments of $T,S$, recall that for the variance computation we started from (2.9): $\displaystyle\nu(X^{2})$ $\displaystyle=\frac{1}{N}\nu(\varepsilon_{X}^{2})+\nu(\varepsilon_{X}X^{-}),$ (3.28) $\displaystyle=\frac{1}{N}\nu(\varepsilon_{X}^{2})+\nu^{\prime}_{0}(\varepsilon_{X}X^{-})+O(3).$ (3.29) By setting $X=T$ or $X=S$, we record the following constants corresponding to the expectation of the last spin. They mainly appear in $\nu^{\prime}_{0}(\varepsilon_{X}X^{-})$ as a result of the formula in (2.4). ###### Definition 3.12. 
From $\nu(\varepsilon_{T}\varepsilon_{X})$: * • $I_{1}=\nu_{0}((\varepsilon_{12}-q)\varepsilon_{12})=\nu_{0}(\varepsilon_{12}\varepsilon_{12}-q^{2})$, * • $I_{2}=\nu_{0}((\varepsilon_{12}-q)\varepsilon_{13})=\nu_{0}(\varepsilon_{12}\varepsilon_{13}-q^{2})$, * • $I_{3}=\nu_{0}((\varepsilon_{12}-q)\varepsilon_{34})=\nu_{0}(\varepsilon_{12}\varepsilon_{34}-q^{2})$, * • $I_{4}=\nu_{0}((\varepsilon_{12}-q)\varepsilon_{11})=\nu_{0}(\varepsilon_{12}\varepsilon_{11}-pq)$, * • $I_{5}=\nu_{0}((\varepsilon_{12}-q)\varepsilon_{33})=\nu_{0}(\varepsilon_{12}\varepsilon_{33}-pq)$. From $\nu(\varepsilon_{S}\varepsilon_{X})$: * • $K_{1}=\nu_{0}((\varepsilon_{11}-p)\varepsilon_{12})=\nu_{0}(\varepsilon_{11}\varepsilon_{12})-pq=I_{4}$, * • $K_{2}=\nu_{0}((\varepsilon_{11}-p)\varepsilon_{11})=\nu_{0}(\varepsilon_{11}\varepsilon_{11})-p^{2}$, * • $K_{3}=\nu_{0}((\varepsilon_{11}-p)\varepsilon_{23})=\nu_{0}(\varepsilon_{11}\varepsilon_{23})-pq=I_{5}$, * • $K_{4}=\nu_{0}((\varepsilon_{11}-p)\varepsilon_{22})=\nu_{0}(\varepsilon_{11}\varepsilon_{22})-p^{2}$. The constants defined in Definition 2.11 are linear combinations of the ones defined above. ###### Claim 3.13. $I_{1}-I_{2}=F,\quad I_{2}-I_{3}=G,\quad I_{4}-I_{5}=E,$ $K_{1}-K_{3}=E,\quad K_{2}-K_{4}=D.$ ###### Proof. The claim follows by checking the definitions in Definition 2.11 and comparing them with the ones in Definition 3.12. ∎ ##### Proof of Theorem 3.11 In this section, we prove Theorem 3.11 and record the variances of $T,S$ as a special case. As in the proof of Theorem 3.5, we first give two recursive formulas for the mixed moments of $T,S$ using the cavity method. The proof of Theorem 3.11 then follows from rewriting the relations using Lemma 3.4. The proof of Lemma 3.14 is given in the next subsection. ###### Lemma 3.14. 
For $h\geqslant 1$ and $h^{\prime}\geqslant 0$, we have $\displaystyle M_{1}\nu(g(h,h^{\prime}))=$ $\displaystyle\beta^{2}E\nu(g(h-1,h^{\prime}+1))$ (3.30) $\displaystyle+\beta^{2}(h-1)\left[I_{5}C_{1}^{2}+2(2G-I_{3})A_{1}^{2}+I_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}I_{3}\right]\nu(g(h-2,h^{\prime}))$ (3.31) $\displaystyle+\beta^{2}h^{\prime}\left[\frac{1}{2}I_{5}B_{1}^{2}+(2G-I_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}I_{5}\right]\nu(g(h-1,h^{\prime}-1))$ (3.32) $\displaystyle+O(h+h^{\prime}+1).$ (3.33) For $h\geqslant 0$ and $h^{\prime}\geqslant 1$, $\displaystyle M_{2}\nu(g(h,h^{\prime}))=$ $\displaystyle-\beta^{2}E\nu(g(h+1,h^{\prime}-1))$ (3.34) $\displaystyle+\beta^{2}h\left[K_{4}C_{1}^{2}+2(E-K_{3})A_{1}^{2}+K_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}K_{3}\right]\nu(g(h-1,h^{\prime}-1))$ (3.35) $\displaystyle+\beta^{2}(h^{\prime}-1)\left[\frac{1}{2}K_{4}B_{1}^{2}+(E-K_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}K_{4}\right]\nu(g(h,h^{\prime}-2))$ (3.36) $\displaystyle+O(h+h^{\prime}+1).$ (3.37) ###### Remark 3.15. First check that (3.32) reduces to equation (1.262) of [Tal11] in the SK model: for the SK model, there are no self-overlap terms and $C_{1}^{2}=0$. To check the other constants, $I_{3}=\hat{q}-q^{2}$, $2G-I_{3}=2I_{2}-3I_{3}=2(q-q^{2})-3(\hat{q}-q^{2})=2q+q^{2}-3\hat{q}.$ Combined with Remark 2.18, we have $\beta^{2}\left[I_{5}C_{1}^{2}+2(2G-I_{3})A_{1}^{2}+I_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}I_{3}\right]\equiv\frac{\hat{q}-q^{2}}{N}+\beta^{2}(\hat{q}-q^{2})A^{2}+2\beta^{2}(2q+q^{2}-3\hat{q})B^{2}+O(3),$ where $A,B,C$ are defined in [Tal11, Chapter 1.8]. To apply Lemma 3.4 to $f(h,h^{\prime})=\nu(g(h,h^{\prime}))$, we first need to check that (3.32) and (3.36) satisfy the consistency condition. 
That is, the goal is to verify (3.12) with $\alpha_{2}=\frac{\beta^{2}}{M_{1}}\left[I_{5}C_{1}^{2}+2(2G-I_{3})A_{1}^{2}+I_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}I_{3}\right],$ $\alpha_{1}=\frac{\beta^{2}}{M_{1}}\left[\frac{1}{2}I_{5}B_{1}^{2}+(2G-I_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}I_{5}\right],$ $\alpha_{0}=\frac{\beta^{2}E}{M_{1}},$ and $\beta_{2}=\frac{\beta^{2}}{M_{2}}\left[\frac{1}{2}K_{4}B_{1}^{2}+(E-K_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}K_{4}\right],$ $\beta_{1}=\frac{\beta^{2}}{M_{2}}\left[K_{4}C_{1}^{2}+2(E-K_{3})A_{1}^{2}+K_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}K_{3}\right],$ $\beta_{0}=-\frac{\beta^{2}E}{M_{2}}.$ We begin by recording some useful expressions for the constants that will simplify the proof. ###### Claim 3.16. $2A_{1}^{2}-(A_{2}^{2}+\frac{1}{\beta^{2}N})=-\frac{M_{2}}{\beta^{2}NM},$ $\frac{B_{1}^{2}}{2}+\frac{1}{\beta^{2}N}=\frac{M_{1}}{\beta^{2}MN},$ $M_{1}-M_{3}=2\beta^{2}G.$ ###### Proof. The third equation follows from the definitions of $M_{1}$ and $M_{3}$. For the first and second equations: by Claim 2.14, $\beta^{2}A_{2}^{2}+\frac{1}{N}=\frac{1}{NM_{3}}.$ Recall the definitions of $A_{1}^{2},B_{1}^{2}$: $A_{1}^{2}:=\frac{GM_{2}+\frac{\beta^{2}}{2}EH}{M}\frac{1}{NM_{3}}=\frac{1}{2\beta^{2}N}\left(\frac{1}{M_{3}}-\frac{M_{2}}{M}\right),$ $B_{1}^{2}=\frac{DM_{1}-2\beta^{2}E^{2}}{NM}=\frac{2}{N\beta^{2}}(\frac{M_{1}}{M}-1).$ Combining and rearranging gives the desired result. ∎ To simplify notation, denote LHS and RHS as $\text{LHS}:=\alpha_{1}+\alpha_{0}\beta_{2},\quad\text{and}\ \ \beta_{1}+\beta_{0}\alpha_{2}=:\text{RHS}.$ We now verify (3.12) by comparing the coefficients in front of $K_{4},I_{5},I_{3}$ in LHS and RHS. 
##### For $K_{4}$ $\displaystyle\text{LHS}=\frac{\beta^{4}E}{M_{1}M_{2}}\left(\frac{1}{2}B_{1}^{2}+\frac{1}{\beta^{2}N}\right)\stackrel{\text{Claim 3.16}}{=}\frac{\beta^{4}E}{M_{1}M_{2}}\frac{M_{1}}{\beta^{2}MN}=\frac{\beta^{2}}{M_{2}}\frac{E}{MN}=\frac{\beta^{2}C_{1}^{2}}{M_{2}}=\text{RHS},$ where the second-to-last equality uses $C_{1}^{2}=E/(MN)$. ##### For $I_{5}$ Recall that by definition, $I_{5}=K_{3}$. On one hand, LHS $\displaystyle=\frac{\beta^{2}}{M_{1}}\left(\frac{1}{2}B_{1}^{2}+\frac{1}{\beta^{2}N}\right)-\frac{\beta^{2}E}{M_{1}}\frac{\beta^{2}}{M_{2}}C_{1}^{2}\stackrel{\text{Claim 3.16}}{=}\frac{\beta^{2}}{M_{1}}\frac{M_{1}}{\beta^{2}MN}-\frac{\beta^{4}E}{M_{1}M_{2}}\frac{E}{MN},$ $\displaystyle=\frac{1}{MN}\left(1-\frac{\beta^{4}E^{2}}{M_{1}M_{2}}\right).$ RHS $\displaystyle=\frac{\beta^{2}}{M_{2}}(-2A_{1}^{2}+A_{2}^{2}+\frac{1}{\beta^{2}N})-\frac{\beta^{2}E}{M_{2}}\frac{\beta^{2}}{M_{1}}C_{1}^{2}\stackrel{\text{Claim 3.16}}{=}\frac{\beta^{2}}{M_{2}}\frac{M_{2}}{\beta^{2}MN}-\frac{\beta^{4}E^{2}}{M_{2}M_{1}MN},$ $\displaystyle=\frac{1}{MN}(1-\frac{\beta^{4}E^{2}}{M_{1}M_{2}})=\text{LHS}.$ ##### For $I_{3}$ RHS $\displaystyle=-\frac{\beta^{2}E}{M_{2}}\frac{\beta^{2}}{M_{1}}(-2A_{1}^{2}+A_{2}^{2}+\frac{1}{\beta^{2}N})\stackrel{\text{Claim 3.16}}{=}-\frac{\beta^{4}E}{M_{2}M_{1}}\frac{M_{2}}{\beta^{2}MN},$ $\displaystyle=-\frac{\beta^{2}E}{M_{1}MN}=-\frac{\beta^{2}}{M_{1}}C_{1}^{2}=\text{LHS}.$ ##### The remaining terms With some abuse of notation, check that LHS $\displaystyle=\frac{\beta^{2}}{M_{1}}2GC_{1}^{2}+\frac{\beta^{2}E}{M_{1}}\frac{\beta^{2}}{M_{2}}EC_{1}^{2}=\frac{C_{1}^{2}\left(2\beta^{2}GM_{2}+\beta^{4}E^{2}\right)}{M_{1}M_{2}}\stackrel{\text{def. of }A_{1}^{2}}{=}\frac{2E\beta^{2}A_{1}^{2}M_{3}}{M_{1}M_{2}},$ RHS $\displaystyle=\frac{\beta^{2}}{M_{2}}2EA_{1}^{2}-\frac{\beta^{2}E}{M_{2}}\frac{\beta^{2}}{M_{1}}4GA_{1}^{2}=2\beta^{2}EA_{1}^{2}\frac{(M_{1}-2\beta^{2}G)}{M_{1}M_{2}},$ 
$\displaystyle\stackrel{\text{Claim 3.16}}{=}2\beta^{2}EA_{1}^{2}\frac{M_{3}}{M_{1}M_{2}}=\text{LHS}.$ This allows us to apply Lemma 3.4 to obtain a recursive relation for $\nu(T^{h}S^{h^{\prime}})$. We start by computing the variances of $T$ and $S$. ###### Theorem 3.17. For $\beta<\beta^{\prime}$, we have $\nu(T^{2})=A_{0}^{2}+O(3),$ where $\displaystyle A_{0}^{2}$ $\displaystyle=\frac{\beta^{2}}{M}\left(\beta^{2}EK_{4}+M_{2}I_{5}\right)C_{1}^{2}+2\frac{\beta^{2}}{M}\left(\beta^{2}E(E-K_{3})+M_{2}(2G-I_{3})\right)A_{1}^{2}$ $\displaystyle+\frac{\beta^{2}}{M}\left(\beta^{2}EK_{3}+M_{2}I_{3}\right)A_{2}^{2}+\frac{1}{MN}\left(\beta^{2}EK_{3}+M_{2}I_{3}\right).$ We further have $\nu(S^{2})=B_{0}^{2}+O(3),$ where $B_{0}^{2}=\frac{\beta^{2}}{2M}\left(M_{1}K_{4}-\beta^{2}EI_{5}\right)B_{1}^{2}+\frac{\beta^{2}}{M}\left(M_{1}(E-K_{3})-E\beta^{2}(2G-I_{3})\right)C_{1}^{2}+\frac{1}{MN}\left(M_{1}K_{4}-\beta^{2}EI_{5}\right).$ Finally, we also have $\nu(ST)=C_{0}^{2}+O(3),$ where $C_{0}^{2}=\frac{\beta^{2}}{2M}\left(M_{2}I_{5}+\beta^{2}EK_{4}\right)B_{1}^{2}+\frac{\beta^{2}}{M}\left(M_{2}(2G-I_{3})+\beta^{2}E(E-K_{3})\right)C_{1}^{2}+\frac{1}{MN}\left(M_{2}I_{5}+\beta^{2}EK_{4}\right).$ ###### Proof. Since the coefficients in (3.32) and (3.36) satisfy the condition (3.12), we can apply Lemma 3.4 with $h,h^{\prime}\in\{0,2\}$ to obtain the desired result. 
First, the common denominator of $C(2,0),C(0,2),C(1,1)$ is $1-\alpha_{0}\beta_{0}=1+\frac{\beta^{4}E^{2}}{M_{1}M_{2}}=\frac{M}{M_{1}M_{2}}.$ For the variance of $T$: $\displaystyle(1-\alpha_{0}\beta_{0})\nu(T^{2})$ $\displaystyle=(1-\alpha_{0}\beta_{0})\nu(g(2,0))=\alpha_{2}+\alpha_{0}\beta_{1}$ $\displaystyle=\frac{\beta^{2}}{M_{1}}\left[I_{5}C_{1}^{2}+2(2G-I_{3})A_{1}^{2}+I_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}I_{3}\right]$ $\displaystyle+\frac{\beta^{4}E}{M_{1}M_{2}}\left[K_{4}C_{1}^{2}+2(E-K_{3})A_{1}^{2}+K_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}K_{3}\right]+O(3).$ Rearranging gives $\displaystyle\nu(T^{2})$ $\displaystyle=\frac{\beta^{2}}{M}\left(\beta^{2}EK_{4}+M_{2}I_{5}\right)C_{1}^{2}+2\frac{\beta^{2}}{M}\left(\beta^{2}E(E-K_{3})+M_{2}(2G-I_{3})\right)A_{1}^{2}$ $\displaystyle+\frac{\beta^{2}}{M}\left(\beta^{2}EK_{3}+M_{2}I_{3}\right)A_{2}^{2}+\frac{1}{MN}\left(\beta^{2}EK_{3}+M_{2}I_{3}\right)+O(3).$ Next, we compute the variance of $S$: $\displaystyle(1-\alpha_{0}\beta_{0})\nu(S^{2})$ $\displaystyle=(1-\alpha_{0}\beta_{0})\nu(g(0,2))=\beta_{2}+\beta_{0}\alpha_{1}$ $\displaystyle=\frac{\beta^{2}}{M_{2}}\left[\frac{1}{2}K_{4}B_{1}^{2}+(E-K_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}K_{4}\right]-\frac{\beta^{4}E}{M_{1}M_{2}}\left[\frac{1}{2}I_{5}B_{1}^{2}+(2G-I_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}I_{5}\right]+O(3).$ Rearranging gives $\displaystyle\nu(S^{2})$ $\displaystyle=\frac{\beta^{2}}{2M}\left(M_{1}K_{4}-\beta^{2}EI_{5}\right)B_{1}^{2}+\frac{\beta^{2}}{M}\left(M_{1}(E-K_{3})-E\beta^{2}(2G-I_{3})\right)C_{1}^{2}+\frac{1}{MN}\left(M_{1}K_{4}-\beta^{2}EI_{5}\right)+O(3).$ For the covariance $\nu(TS)$: $\displaystyle(1-\alpha_{0}\beta_{0})\nu(TS)$ $\displaystyle=(1-\alpha_{0}\beta_{0})\nu(g(1,1))=\alpha_{1}+\alpha_{0}\beta_{2},$ $\displaystyle=\frac{\beta^{2}}{M_{1}}\left[\frac{1}{2}I_{5}B_{1}^{2}+(2G-I_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}I_{5}\right]+\frac{\beta^{4}E}{M_{1}M_{2}}\left[\frac{1}{2}K_{4}B_{1}^{2}+(E-K_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}K_{4}\right]+O(3).$ Rearranging gives 
$\displaystyle\nu(TS)=\frac{\beta^{2}}{2M}\left(M_{2}I_{5}+\beta^{2}EK_{4}\right)B_{1}^{2}+\frac{\beta^{2}}{M}\left(M_{2}(2G-I_{3})+\beta^{2}E(E-K_{3})\right)C_{1}^{2}+\frac{1}{MN}\left(M_{2}I_{5}+\beta^{2}EK_{4}\right)+O(3).$ ∎ Now we turn to the proof of the general moments $T^{h}S^{h^{\prime}}$. ###### Proof of Theorem 3.11. By Lemma 3.4, we have the following recursive relations for the moments of $S,T$: $\displaystyle\nu(T^{h}S^{h^{\prime}})$ $\displaystyle=(h-1)A_{0}^{2}\nu(T^{h-2}S^{h^{\prime}})+h^{\prime}C_{0}^{2}\nu(T^{h-1}S^{h^{\prime}-1})+O(h+h^{\prime}+1),$ (3.38) $\displaystyle=hC_{0}^{2}\nu(T^{h-1}S^{h^{\prime}-1})+(h^{\prime}-1)B_{0}^{2}\nu(T^{h}S^{h^{\prime}-2})+O(h+h^{\prime}+1).$ (3.39) The proof then proceeds by induction on $h+h^{\prime}$. If $h+h^{\prime}=1$, the expression holds since odd moments of a Gaussian are $0$. For $h+h^{\prime}\geqslant 2$, applying the inductive hypothesis to the two terms on the right-hand side gives $\displaystyle\nu(T^{h}S^{h^{\prime}})$ $\displaystyle=(h-1)A_{0}^{2}\mathbb{E}[g_{T}^{h-2}g_{S}^{h^{\prime}}]+h^{\prime}C_{0}^{2}\mathbb{E}[g_{T}^{h-1}g_{S}^{h^{\prime}-1}]+O(h+h^{\prime}+1),$ (3.40) $\displaystyle=hC_{0}^{2}\mathbb{E}[g_{T}^{h-1}g_{S}^{h^{\prime}-1}]+(h^{\prime}-1)B_{0}^{2}\mathbb{E}[g_{T}^{h}g_{S}^{h^{\prime}-2}]+O(h+h^{\prime}+1).$ (3.41) Using Gaussian integration by parts (3.8) to rewrite the right-hand side completes the proof. ∎ #### 3.2.4 Proof of Lemma 3.14 In this section, we derive Lemma 3.14 using the cavity method. Recall the definition of $U_{v},\varepsilon(v),U^{-}_{v}$ from the beginning of this section, and that we denote by $V_{v}=\{v_{1},v_{2},\cdots\}$ the set of replicas that appear in the term $U_{v}$. Here $|V_{v}|=2$ if $U_{v}$ corresponds to $T$ and $|V_{v}|=1$ if $U_{v}$ corresponds to $S$. ##### To reduce the moment of $T$ We start by proving (3.32). As usual, the first term in (3.2) is approximated using (2.7). 
We record the result in Lemma 3.18. The proof is technical but straightforward, and is thus deferred to the appendix (Lemma 4.7). ###### Lemma 3.18 (First order derivative structure for $T$). If $|V_{1}|=2$, then $\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=$ $\displaystyle\beta^{2}(F-3G)\nu(g(h,h^{\prime}))+\beta^{2}E\nu(g(h-1,h^{\prime}+1))$ $\displaystyle+\beta^{2}(h-1)\left[I_{5}C_{1}^{2}+2(2G-I_{3})A_{1}^{2}+I_{3}A_{2}^{2}\right]\nu(g(h-2,h^{\prime}))$ $\displaystyle+\beta^{2}h^{\prime}\left[\frac{1}{2}I_{5}B_{1}^{2}+(2G-I_{3})C_{1}^{2}\right]\nu(g(h-1,h^{\prime}-1))$ $\displaystyle+O(h+h^{\prime}+1).$ ##### For the second term in (3.2) $\displaystyle\frac{1}{N}\sum_{v=2}^{h^{\prime}+h}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big)$ $\displaystyle=\frac{1}{N}\sum_{v=2}^{h^{\prime}+h}\nu\Big((\varepsilon_{1_{1},1_{2}}-Q_{1_{1},1_{2}})(\varepsilon_{v_{1},v_{2}}-Q_{v_{1},v_{2}})\prod_{u\neq 1,v}U^{-}_{u}\Big),$ (3.42) $\displaystyle\stackrel{\text{Def. 3.12}}{=}\frac{h-1}{N}I_{3}\nu(T^{h-2}S^{h^{\prime}})+\frac{h^{\prime}}{N}I_{5}\nu(T^{h-1}S^{h^{\prime}-1})+O(h^{\prime}+h+1),$ (3.43) $\displaystyle=\frac{h-1}{N}I_{3}\nu(g(h-2,h^{\prime}))+\frac{h^{\prime}}{N}I_{5}\nu(g(h-1,h^{\prime}-1))+O(h^{\prime}+h+1).$ (3.44) Combining Lemma 3.18 and (3.42) gives (3.32): $\displaystyle\left(1-\beta^{2}(F-3G)\right)\nu(g(h,h^{\prime}))=$ $\displaystyle\beta^{2}E\nu(g(h-1,h^{\prime}+1))$ $\displaystyle+\beta^{2}(h-1)\left[I_{5}C_{1}^{2}+2(2G-I_{3})A_{1}^{2}+I_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}I_{3}\right]\nu(g(h-2,h^{\prime}))$ $\displaystyle+\beta^{2}h^{\prime}\left[\frac{1}{2}I_{5}B_{1}^{2}+(2G-I_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}I_{5}\right]\nu(g(h-1,h^{\prime}-1))$ $\displaystyle+O(h+h^{\prime}+1).$ ##### To reduce the moment of $S$ Similarly, we approximate the first term in (3.2) using (2.7) to get Lemma 3.19. The proof can be found in the appendix (Lemma 4.8). ###### Lemma 3.19 (First order derivative structure for $S$). 
Suppose $|V_{1}|=1$. Then $\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=$ $\displaystyle\frac{\beta^{2}}{2}D\nu(g(h,h^{\prime}))$ $\displaystyle+\beta^{2}h\left[K_{4}C_{1}^{2}+2(E-K_{3})A_{1}^{2}+K_{3}A_{2}^{2}\right]\nu(g(h-1,h^{\prime}-1))$ $\displaystyle+\beta^{2}(h^{\prime}-1)\left[\frac{1}{2}K_{4}B_{1}^{2}+(E-K_{3})C_{1}^{2}\right]\nu(g(h,h^{\prime}-2))$ $\displaystyle-\beta^{2}E\nu(g(h+1,h^{\prime}-1))$ $\displaystyle+O(h+h^{\prime}+1).$ For the second term, again, we have $\displaystyle\frac{1}{N}\sum_{v=2}^{h^{\prime}+h}\nu\Big(\varepsilon(1)\varepsilon(v)\prod_{u\neq 1,v}U^{-}_{u}\Big)$ $\displaystyle=\frac{1}{N}\sum_{v=2}^{h^{\prime}+h}\nu\Big((\varepsilon_{1_{1},1_{1}}-Q_{1_{1},1_{1}})(\varepsilon_{v_{1},v_{2}}-Q_{v_{1},v_{2}})\prod_{u\neq 1,v}U^{-}_{u}\Big),$ (3.45) $\displaystyle\stackrel{\text{Def. 3.12}}{=}\frac{h}{N}K_{3}\nu(g(h-1,h^{\prime}-1))+\frac{h^{\prime}-1}{N}K_{4}\nu(g(h,h^{\prime}-2))+O(h^{\prime}+h+1).$ (3.46) Combining the results from Lemma 3.19 and (3.45) gives the desired result: $\displaystyle\left(1-\frac{\beta^{2}}{2}D\right)\nu(g(h,h^{\prime}))=$ $\displaystyle\beta^{2}h\left[K_{4}C_{1}^{2}+2(E-K_{3})A_{1}^{2}+K_{3}A_{2}^{2}+\frac{1}{\beta^{2}N}K_{3}\right]\nu(g(h-1,h^{\prime}-1))$ $\displaystyle+\beta^{2}(h^{\prime}-1)\left[\frac{1}{2}K_{4}B_{1}^{2}+(E-K_{3})C_{1}^{2}+\frac{1}{\beta^{2}N}K_{4}\right]\nu(g(h,h^{\prime}-2))$ $\displaystyle-\beta^{2}E\nu(g(h+1,h^{\prime}-1))$ $\displaystyle+O(h+h^{\prime}+1).$ ### 3.3 Proof of Lemma 3.1 In this section, we put all the pieces together to compute the general mixed moments of $T_{k,l},T_{k},S_{k},S,T$. ###### Proof. For $k,l\in[n]$, $k\neq l$, let $\{g_{T_{k,l}}\}$ be the family of independent centered Gaussian random variables with $\mathbb{E}[g^{2}_{T_{k,l}}]=A^{2}_{2}$ as in Theorem 3.2. 
For $k\in[n]$, let $\{(g_{T_{k}},g_{S_{k}})\}$ be the family of independent centered Gaussian vectors with the covariance matrix given in Theorem 3.5, independent from $\{g_{T_{k,l}}\}$. Similarly, let $(g_{T},g_{S})$ be the Gaussian random vector with the covariance matrix given in Theorem 3.11, independent from $\{g_{T_{k,l}}\}$ and $\{(g_{T_{k}},g_{S_{k}})\}$. Applying Theorem 3.2, then Theorem 3.5, and finally Theorem 3.11 gives the desired result. ∎ ## References * [AC21] Antonio Auffinger and Cathy Xi Chen. Thouless-Anderson-Palmer equations for the Ghatak-Sherrington mean field spin glass model. J. Stat. Phys., 184(2):Paper No. 22, 25, 2021. * [AG23] Ahmed El Alaoui and Jason Gaitonde. Bounds on the covariance matrix of the Sherrington-Kirkpatrick model, 2023. * [CCM23] Francesco Camilli, Pierluigi Contucci, and Emanuele Mingione. Central limit theorem for the overlaps on the Nishimori line, 2023. * [Che22] Xi Chen. Thouless-Anderson-Palmer Equations for the Ghatak-Sherrington Mean Field Spin Glass Model. PhD thesis, 2022. * [CMP+23] Patrick Charbonneau, Enzo Marinari, Giorgio Parisi, Federico Ricci-Tersenghi, Gabriele Sicuro, Francesco Zamponi, and Marc Mézard. Spin Glass Theory and Far Beyond: Replica Symmetry Breaking after 40 Years. World Scientific, 2023. * [CN95] F. Comets and J. Neveu. The Sherrington-Kirkpatrick model of spin glasses and stochastic calculus: the high temperature case. Comm. Math. Phys., 166(3):549–564, 1995. * [DCdA00] FA Da Costa and JM de Araújo. Zero-temperature TAP equations for the Ghatak-Sherrington model. The European Physical Journal B-Condensed Matter and Complex Systems, 15:313–316, 2000. * [dCYS94] Francisco A da Costa, Carlos SO Yokoi, and Silvio RA Salinas. First-order transition in a spin-glass model. Journal of Physics A: Mathematical and General, 27(10):3365, 1994. * [DW21] Partha S. 
Dey and Qiang Wu. Fluctuation results for multi-species Sherrington-Kirkpatrick model in the replica symmetric regime. J. Stat. Phys., 185(3):Paper No. 22, 40, 2021. * [Gen96a] Barbara Gentz. An almost sure central limit theorem for the overlap parameters in the Hopfield model. Stochastic Process. Appl., 62(2):243–262, 1996. * [Gen96b] Barbara Gentz. A central limit theorem for the overlap in the Hopfield model. Ann. Probab., 24(4):1809–1841, 1996. * [GL99] Barbara Gentz and Matthias Löwe. The fluctuations of the overlap in the Hopfield model with finitely many patterns at the critical temperature. Probab. Theory Related Fields, 115(3):357–381, 1999. * [GS77] SK Ghatak and D Sherrington. Crystal field effects in a general S Ising spin glass. Journal of Physics C: Solid State Physics, 10(16):3149, 1977. * [GT02] Francesco Guerra and Fabio Lucio Toninelli. Central limit theorem for fluctuations in the high temperature region of the Sherrington-Kirkpatrick spin glass model. J. Math. Phys., 43(12):6224–6237, 2002. * [Han07] Albert Hanen. Un théorème limite pour les covariances des spins dans le modèle de Sherrington-Kirkpatrick avec champ externe. Ann. Probab., 35(1):141–179, 2007. * [Hop82] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. U.S.A., 79(8):2554–2558, 1982. * [KH99] Katsuki Katayama and Tsuyoshi Horiguchi. Ghatak-Sherrington model with spin S. Journal of the Physical Society of Japan, 68(12):3901–3910, 1999. * [LDA82] EJS Lage and JRL De Almeida. Stability conditions of generalised Ising spin glass models. Journal of Physics C: Solid State Physics, 15(33):L1187, 1982. * [Leu07] Luca Leuzzi. Spin-glass model for inverse freezing. Philosophical Magazine, 87(3-5):543–551, 2007. * [LS22] Benjamin Landon and Philippe Sosoe. Fluctuations of the overlap at low temperature in the 2-spin spherical SK model. Ann. Inst. Henri Poincaré Probab. Stat., 58(3):1426–1459, 2022. 
* [MS85] PJ Mottishaw and D Sherrington. Stability of a crystal-field split spin glass. Journal of Physics C: Solid State Physics, 18(26):5201, 1985. * [NS19] Vu Lan Nguyen and Philippe Sosoe. Central limit theorem near the critical temperature for the overlap in the 2-spin spherical SK model. J. Math. Phys., 60(10):103302, 13, 2019. * [Pan05] Dmitry Panchenko. Free energy in the generalized Sherrington-Kirkpatrick mean field model. Rev. Math. Phys., 17(7):793–857, 2005. * [Pan14] Dmitry Panchenko. The Parisi formula for mixed $p$-spin models. Ann. Probab., 42(3):946–958, 2014. * [Pan18] Dmitry Panchenko. Free energy in the mixed $p$-spin models with vector spins. Ann. Probab., 46(2):865–896, 2018. * [Tal98] Michel Talagrand. Rigorous results for the Hopfield model with many patterns. Probab. Theory Related Fields, 110(2):177–276, 1998. * [Tal06] Michel Talagrand. The Parisi formula. Ann. of Math. (2), 163(1):221–263, 2006. * [Tal11] Michel Talagrand. Mean field models for spin glasses. Volume I, volume 54 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 2011. Basic examples. ## 4 Appendix In this section, we prove the technical lemmas (Lemmas 3.3, 3.6 and 3.14) that characterize the recursive relations of general moments using the cavity method. Recall the decomposition of general moments given in (3.2): $\displaystyle\nu\left(\prod_{k,l}T_{k,l}^{h(k,l)}\prod_{k}T_{k}^{h(k)}T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}S^{h^{\prime}}\right)$ $\displaystyle=\nu\Big(\prod_{v\geqslant 1}U_{v}\Big)$ (4.1) $\displaystyle=\nu\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)+\frac{1}{N}\sum_{u\geqslant 2}\nu\Big(\varepsilon(1)\varepsilon(u)\prod_{v\neq 1,u}U^{-}_{v}\Big)+O(H+1).$ (4.2) Since the first term is of order $H-1$, we apply the second-order approximation (2.7) and compute its first-order derivative at time $0$. 
With some abuse of notation, we will always assume that the first term $U_{1}$ corresponds to the type of basis element, $T_{1},S_{1},T,S$, that we wish to "peel off" from the expression. Note that regardless of the type of $U_{1}$, $\nu_{0}(\varepsilon(1))=0$ by symmetry, so $\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=\nu^{\prime}_{0}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)+O(H+1).$ (4.3) This section is dedicated to characterizing the structure of such terms. ### 4.1 Proof of Lemma 3.3 ###### Lemma 4.1. Suppose $h(1,2)\geqslant 1$ and $U_{1}$ corresponds to a copy of $T_{1,2}$. Then $\nu^{\prime}_{0}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=\beta^{2}A\nu(g_{1,2}(h(1,2)))+O(H+1).$ ###### Proof of Lemma 3.3. Let $U_{1}$ be of type $T_{1,2}$; by Claim 2.1, $\varepsilon(1)=\varepsilon_{1_{1},1_{2}}-\varepsilon_{1_{1},1_{4}}-\varepsilon_{1_{3},1_{2}}+\varepsilon_{1_{3},1_{4}}.$ Denote by $m$ the total number of replicas used by $\varepsilon(1)\prod_{v>1}U^{-}_{v}$. By (2.4), $\displaystyle\nu^{\prime}_{0}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=\frac{\beta^{2}}{2}\sum_{1\leqslant a,b\leqslant 2m}\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-\mu_{a,b})\prod_{v>1}U^{-}_{v}\Big)-\mathcal{R}_{m,\varepsilon(1)\prod_{v>1}U^{-}_{v}}.$ As we noted in (3.5), $\nu_{0}(\varepsilon(1)\varepsilon_{a,b})=0$ unless $\varepsilon_{a,b}$ is a monomial in $\varepsilon(1)$. Since $1_{3},1_{4}$ cannot appear in any of the other terms $\{U_{v}:v>1\}$, $\nu_{0}(\varepsilon(1)\varepsilon_{a,b})=A>0$ only when $\{a,b\}\subset V_{1}$ and $a\neq b$. Summing over all such pairs of replicas gives the desired result. 
$\displaystyle\nu^{\prime}_{0}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=A\beta^{2}\nu(g_{1,2}(h(1,2)))+O(H+1).$ (4.4)

∎

### 4.2 Proof of Lemma 3.9, 3.10

In this section, we derive the structure of (4.3) with

$\nu\Big(\prod_{v\geqslant 1}U_{v}\Big)=\nu\left(\prod_{k}T_{k}^{h(k)}T^{h}\prod_{l}S_{l}^{h^{\prime}(l)}S^{h^{\prime}}\right).$

Recall that the total moments of each type are

$h_{T}=\sum_{k}h(k),\quad h_{S}=\sum_{l}h^{\prime}(l),\quad H_{1}=h_{T}+h_{S}+h+h^{\prime},$

and $g_{1}$ is the function indexed by the moments of $T_{1},S_{1}$ such that

$\nu(g_{1}(h(1),h^{\prime}(1)))=\nu\Big(\prod_{1\leqslant v\leqslant H_{1}}U_{v}\Big).$

We begin by introducing the following notation for referencing different terms in $\nu^{\prime}_{t}(\cdot)$ given in (2.4). Denote by $m$ the total number of replicas used by $g(h(1),h^{\prime}(1))$:

$m:=n+2h_{T}+h_{S}.$

For $a\in[2m]$, denote by $a^{\prime\prime}$ the new replicas that first appear in (2.4). For $a,b\in[2m]$, let $\text{sgn}(a,b):=(-1)^{|\{a,b\}\cap[m]|}$. Our goal is to compute the following derivative, with $U_{1}$ corresponding to a copy of $T_{1}$ or $S_{1}$:

$\displaystyle\nu_{0}^{\prime}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=\frac{\beta^{2}}{2}\sum_{1\leqslant a,b\leqslant 2m}\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-Q_{a,b})\prod_{v>1}U^{-}_{v}\Big)-\mathcal{R}_{m,\varepsilon(1)\prod_{v>1}U^{-}_{v}}.$ (4.5)

In both cases, we need to consider contributions from terms

$\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-\mu_{a,b})\prod_{v>1}U^{-}_{v}\Big)-\mathcal{R}_{m,\varepsilon(1)\prod_{v>1}U^{-}_{v}}.$

Before proceeding, let us exploit the symmetry of $T_{1}$ and $S_{1}$ to rule out certain types of replica pairs $(a,b)$.

###### Lemma 4.2.

Suppose $\varepsilon(v)=\varepsilon_{v_{1},v_{3}}-\varepsilon_{v_{2},v_{3}}$ or $\varepsilon(v)=\varepsilon_{v_{1},v_{1}}-\varepsilon_{v_{2},v_{2}}$. 
If $|\{a,b\}\cap\{v_{1},v_{2}\}|\neq 1$, then

$\nu_{0}(\varepsilon(v)\varepsilon_{a,b})=0.$

Moreover, for any replica $k\in[2m]\backslash\{v_{1},v_{2}\}$ and $(a,b)\in\{(v_{1},k),(v_{2},k)\}$, we have

$\displaystyle\nu_{0}(\varepsilon(v)\varepsilon_{v_{1},k})=-\nu_{0}(\varepsilon(v)\varepsilon_{v_{2},k}).$ (4.6)

###### Proof.

The value of $\nu(\varepsilon_{a,b}\varepsilon_{c,d})$ depends only on the sizes of the union and intersection of $\{a,b\}$ and $\{c,d\}$. One checks that if $|\{a,b\}\cap\{v_{1},v_{2}\}|\neq 1$, the two terms in $\varepsilon(v)\varepsilon_{a,b}$ are equivalent: If $|\{a,b\}\cap\{v_{1},v_{2}\}|=0$, we have

$\nu_{0}(\varepsilon_{v_{1},v_{3}}\varepsilon_{a,b})=\nu_{0}(\varepsilon_{1,2}\varepsilon_{3,4})=\nu_{0}(\varepsilon_{v_{2},v_{3}}\varepsilon_{a,b}),$

and

$\nu_{0}(\varepsilon_{v_{1},v_{1}}\varepsilon_{a,b})=\nu_{0}(\varepsilon_{1,1}\varepsilon_{2,3})=\nu_{0}(\varepsilon_{v_{2},v_{2}}\varepsilon_{a,b}).$

If $|\{a,b\}\cap\{v_{1},v_{2}\}|=2$, we have

$\nu_{0}(\varepsilon_{v_{1},v_{3}}\varepsilon_{v_{1},v_{2}})=\nu_{0}(\varepsilon_{1,2}\varepsilon_{1,3})=\nu_{0}(\varepsilon_{v_{2},v_{3}}\varepsilon_{v_{1},v_{2}}),$

and

$\nu_{0}(\varepsilon_{v_{1},v_{1}}\varepsilon_{v_{1},v_{2}})=\nu_{0}(\varepsilon_{1,1}\varepsilon_{1,2})=\nu_{0}(\varepsilon_{v_{2},v_{2}}\varepsilon_{v_{1},v_{2}}).$

Now suppose $(a,b)\in\{\{v_{1},k\},\{v_{2},k\}\}$. 
To check (4.6): If $U_{v}$ corresponds to $T_{1}$,

$\displaystyle\nu_{0}(\left(\varepsilon_{v_{1},v_{3}}-\varepsilon_{v_{2},v_{3}}\right)\varepsilon_{v_{1},k})$ $\displaystyle=\nu_{0}(\varepsilon_{v_{1},v_{1}}\varepsilon_{v_{3},k}-\varepsilon_{v_{1},v_{2}}\varepsilon_{v_{3},k})$ $\displaystyle=\nu_{0}(\varepsilon_{v_{2},v_{2}}\varepsilon_{v_{3},k}-\varepsilon_{v_{1},v_{2}}\varepsilon_{v_{3},k})$ $\displaystyle=-\nu_{0}(\left(\varepsilon_{v_{1},v_{3}}-\varepsilon_{v_{2},v_{3}}\right)\varepsilon_{v_{2},k}).$

If $U_{v}$ corresponds to $S_{1}$,

$\displaystyle\nu_{0}(\left(\varepsilon_{v_{1},v_{1}}-\varepsilon_{v_{2},v_{2}}\right)\varepsilon_{v_{1},k})$ $\displaystyle=\nu_{0}(\varepsilon_{v_{1},v_{1}}\varepsilon_{v_{1},k}-\varepsilon_{v_{2},v_{2}}\varepsilon_{v_{1},k})$ $\displaystyle=\nu_{0}(\varepsilon_{v_{2},v_{2}}\varepsilon_{v_{2},k}-\varepsilon_{v_{1},v_{1}}\varepsilon_{v_{2},k})$ $\displaystyle=-\nu_{0}(\left(\varepsilon_{v_{1},v_{1}}-\varepsilon_{v_{2},v_{2}}\right)\varepsilon_{v_{2},k}),$

where the second equality follows because $k\notin\{v_{1},v_{2}\}$: by the linearity of expectation, exchanging the indices of replicas does not affect the expectation under $\nu_{0}$. ∎

By Lemma 4.2, the set of replica pairs $(a,b)$ such that $\nu_{0}(\varepsilon(1)(\varepsilon_{a,b}-q))\neq 0$ is given by

$\operatorname{\mathcal{P}}_{1}=\{(a,b):|\{a,b\}\cap\{1_{1},1_{2}\}|=1;\ 1\leqslant a,b\leqslant 2m\}.$

Summing all non-trivial terms in (4.5), the goal simplifies to

$\nu_{0}^{\prime}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=\frac{\beta^{2}}{2}\sum_{(a,b)\in\operatorname{\mathcal{P}}_{1}}\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-\mu_{a,b})\prod_{v>1}U^{-}_{v}\Big).$

We now proceed to study the case when $U_{1}$ corresponds to a copy of $T_{1}$.

###### Lemma 4.3 (restatement of Lemma 3.9). 
For $h(1)\geqslant 1$ and $h^{\prime}(1)\geqslant 0$, suppose $U_{1}$ corresponds to a copy of $T_{1}$. Then

$\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U_{v}\Big)=$ $\displaystyle\ \beta^{2}(F-3G)\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle+\frac{\beta^{2}}{2}H\nu(g_{1}(h(1)-1,h^{\prime}(1)+1))$ $\displaystyle+\beta^{2}(h(1)-1)GA_{2}^{2}\nu(g_{1}(h(1)-2,h^{\prime}(1)))$ $\displaystyle+O(H_{1}+1).$

###### Proof of Lemma 3.9.

The proof follows the same idea as in Section 2.2. Assume that $h(1)\geqslant 1$ and $U_{1}$ corresponds to a copy of $T_{1}$. (By definition, this corresponds to $1_{1}=1$, but we will use the notation $1_{1}$ for the sake of consistency.) Then

$\varepsilon(1)=\varepsilon_{1_{1},1_{3}}-\varepsilon_{1_{2},1_{3}}.$

To compute $\nu_{0}^{\prime}\big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\big)$, we will count the contribution of terms from $(a,b)\in\operatorname{\mathcal{P}}_{1}$. By Lemma 4.2, it makes sense to group $(a,b)$ based on $\{a,b\}\cap[2m]\backslash\{v_{1},v_{2}\}$. For each of these subsets, we first apply (4.6) to compute the $\nu_{0}(\varepsilon(1)\varepsilon_{a,b})$ part, then characterize the structure of $\nu_{0}\big((R^{-}_{a,b}-\mu_{a,b})\prod_{v>1}U^{-}_{v}\big)$.

* • If $a=b$: In this case, we have $(a,b)\in\{(v_{1},v_{1}),(v_{2},v_{2})\}$. By a similar argument as in the proof of (4.6),

$\nu_{0}(\varepsilon(1)\varepsilon_{1_{1},1_{1}})=-\nu_{0}(\varepsilon(1)\varepsilon_{1_{2},1_{2}})=H.$

The sum of the two terms is

$H\nu_{0}\Big((R^{-}_{1_{1},1_{1}}-R^{-}_{1_{2},1_{2}})\prod_{v>1}U^{-}_{v}\Big)=H\nu(g_{1}(h(1)-1,h^{\prime}(1)+1))+O(H_{1}+1).$

* • If $\{a,b\}\in\{\{1_{1},1_{3}\},\{1_{2},1_{3}\}\}$: 
For those terms,

$\nu_{0}(\varepsilon(1)\varepsilon_{1_{1},1_{3}})=F.$

Summing over the two terms gives

$2F\nu_{0}\Big((R^{-}_{1_{1},1_{3}}-R^{-}_{1_{2},1_{3}})\prod_{v>1}U^{-}_{v}\Big)=2F\nu(g_{1}(h(1),h^{\prime}(1)))+O(H_{1}+1).$

* • Summing over the remaining $(a,b)\in\operatorname{\mathcal{P}}_{1}$ corresponds to iterating over all replicas $k\in[2m]\backslash V_{1}$ and summing over the pairs $\{1_{1},k\},\{1_{2},k\}$. It is easier if we account for $k\in[m]\backslash V_{1}$ and the corresponding new replica $k^{\prime}\in[2m]\backslash[m]$ introduced by Lemma 2.4.

* – For $k\in[m]\backslash V_{1}$, let $k^{\prime}:=m+k$ be the corresponding new replica from Lemma 2.4. Summing over all four terms gives

$\displaystyle G\nu_{0}\Big((R^{-}_{1_{1},k}-R^{-}_{1_{2},k}-R^{-}_{1_{1},k^{\prime}}+R^{-}_{1_{2},k^{\prime}})\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=G\nu\Big((T_{1_{1},k}-T_{1_{2},k}-T_{1_{1},k^{\prime}}+T_{1_{2},k^{\prime}})\prod_{v>1}U_{v}\Big)+O(H_{1}+1).$

We now explore the structure of this term by viewing it as a general moment of $T_{1,2},T_{1},S_{1},T,S$. By Theorem 3.2, only even moments of $T_{k,l}$ give a non-trivial contribution to the sum. By construction, the replicas $k^{\prime}$ and $1_{2}$ do not appear in any other term $U_{v}$. Thus the non-trivial portion of $(T_{1_{1},k}-T_{1_{2},k}-T_{1_{1},k^{\prime}}+T_{1_{2},k^{\prime}})\prod_{v>1}U_{v}$ can only come from

$\nu\Big(T_{1_{1},k}\prod_{v>1}U_{v}\Big).$

Now the goal becomes checking whether $T_{1_{1},k}$ occurs in $\prod_{v>1}U_{v}$. Because $g_{1}(h(1),h^{\prime}(1))$ contains terms of the form $T_{k},S_{k},T,S$, the only terms where $\{1_{1},k\}$ appear together are terms corresponding to $T_{1}$ or $S_{1}$. We will show that only the $T_{1}$ terms are non-trivial. Let us first rewrite $U_{v}$ using the "basis". 
For $U_{v}$ corresponding to a copy of $T_{1_{1}}$,

$U_{v}=T_{v_{1},v_{3}}-T_{v_{2},v_{3}}+T_{v_{1}}-T_{v_{2}}.$

Thus if $k=u_{3}$ for some $u>1$, then

$\nu\Big(T_{1_{1},k}\prod_{v>1}U_{v}\Big)=\nu\Big(T_{1_{1},k}^{2}\prod_{v\neq 1,u}U_{v}\Big)=A_{2}^{2}\nu(g_{1}(h(1)-2,h^{\prime}(1)))+O(H_{1}+1).$

If $k$ appears in a copy of $S_{1_{1}}$, say $U_{u}$, then the corresponding term becomes

$\nu\Big(T_{1_{1},k}(S_{1_{1}}-S_{k})\prod_{v\neq 1,u}U_{v}\Big)=O(H_{1}+1).$

Thus we only need to count the contribution of $\{1_{1},k\}$ where $\{1_{1},k\}$ appear together in some term $U_{v}$ with $V_{v}=\{v_{1},v_{2},v_{3}\}$ and, by definition, $v_{1}=1_{1}$, $v_{2}=k$. Summing over all such terms gives

$\displaystyle 2\times(h(1)-1)GA_{2}^{2}\nu(g_{1}(h(1)-2,h^{\prime}(1))).$

* – The only $k$ that are not yet counted are those that correspond to replicas in $V_{1}$. For each such $k$, since such $k$ as well as $1_{2}$ do not appear in any other terms in $\prod_{v>1}U^{-}_{v}$,

$-G\nu_{0}\Big((R^{-}_{1_{1},k}-R^{-}_{1_{2},k})\prod_{v>1}U^{-}_{v}\Big)=-G\nu\Big(T_{1_{1}}\prod_{v>1}U^{-}_{v}\Big)+O(H_{1}+1).$

Summing over the terms from all $6$ pairs gives

$-2\times 3G\nu_{0}(g_{1}(h(1),h^{\prime}(1))).$

Combining all the terms gives

$\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U_{v}\Big)$ $\displaystyle=\frac{\beta^{2}}{2}H\nu(g_{1}(h(1)-1,h^{\prime}(1)+1))$ $\displaystyle+\beta^{2}F\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle+\beta^{2}(h(1)-1)GA_{2}^{2}\nu(g_{1}(h(1)-2,h^{\prime}(1)))$ $\displaystyle-3\beta^{2}G\nu_{0}(g_{1}(h(1),h^{\prime}(1))).$

Rearranging gives the desired result. ∎

###### Lemma 4.4 (restatement of Lemma 3.10).

If $h^{\prime}(1)\geqslant 1$ and $h(1)\geqslant 0$, suppose $U_{1}$ corresponds to a copy of $S_{1}$. Then

$\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U_{v}\Big)=$ $\displaystyle\ \frac{\beta^{2}}{2}D\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle+\beta^{2}h(1)EA_{2}^{2}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ $\displaystyle-2\beta^{2}E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))$ $\displaystyle+O(H_{1}+1).$

###### Proof of Lemma 3.10. 
The proof is similar to that of Lemma 3.9, but with

$\varepsilon(1)=\varepsilon_{1_{1},1_{1}}-\varepsilon_{1_{2},1_{2}}.$

We include it here for completeness. Let us count the contribution from each pair $(a,b)$ in $\operatorname{\mathcal{P}}_{1}$.

* • $a=b\in V_{1}$: In this case, we have $(a,b)\in\{(v_{1},v_{1}),(v_{2},v_{2})\}$. By a similar argument as in the proof of (4.6),

$\nu_{0}(\varepsilon(1)\varepsilon_{1_{1},1_{1}})=-\nu_{0}(\varepsilon(1)\varepsilon_{1_{2},1_{2}})=D.$

Combining the two terms gives

$D\nu_{0}\Big((R^{-}_{1_{1},1_{1}}-R^{-}_{1_{2},1_{2}})\prod_{v>1}U^{-}_{v}\Big)=D\nu(g_{1}(h(1),h^{\prime}(1)))+O(H_{1}+1).$

* • $a\neq b$:

* – For each replica $k\in[m]\backslash V_{1}$, let $k^{\prime}=m+k$ be the corresponding new replica introduced by the derivative formula. WLOG, first fix $a\in V_{1}$. Combining the terms corresponding to $b\in\{k,k^{\prime}\}$, we have

$\displaystyle E\nu_{0}\Big((R^{-}_{1_{1},k}-R^{-}_{1_{2},k}-R^{-}_{1_{1},k^{\prime}}+R^{-}_{1_{2},k^{\prime}})\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=E\nu_{0}\Big((T_{1_{1},k}-T_{1_{2},k}-T_{1_{1},k^{\prime}}+T_{1_{2},k^{\prime}})\prod_{v>1}U_{v}\Big).$

Following the same argument as for the corresponding type of pair in Lemma 3.9, the only non-trivial contributions arise when $\{1_{1},k\}$ appears in some $V_{u}$ where $U_{u}$ is a copy of $T_{1_{1}}$ and $1_{1}=v_{1}$, $k=v_{2}$. The contribution from such terms is

$E\nu_{0}\Big(T_{1_{1},k}^{2}\prod_{v\neq 1,u}U_{v}\Big)=EA_{2}^{2}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))+O(H_{1}+1).$

Summing up all contributions from such terms gives

$2\times h(1)EA_{2}^{2}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))+O(H_{1}+1).$

* – The only $k\in[2m]\backslash V_{1}$ that are not counted are the two new replicas corresponding to $V_{1}$. 
For each such replica, the contribution is

$-E\nu_{0}\Big((R^{-}_{1_{1},k}-R^{-}_{1_{2},k})\prod_{v>1}U^{-}_{v}\Big)=-E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))+O(H_{1}+1).$

The total contribution is

$2\times\big(-2E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))\big)+O(H_{1}+1).$

Summing over the contributions from all pairs in $\operatorname{\mathcal{P}}_{1}$,

$\displaystyle\nu\Big(\varepsilon(1)\prod_{v>1}U_{v}\Big)=$ $\displaystyle\ \frac{\beta^{2}}{2}D\nu(g_{1}(h(1),h^{\prime}(1)))$ $\displaystyle+\beta^{2}h(1)EA_{2}^{2}\nu(g_{1}(h(1)-1,h^{\prime}(1)-1))$ $\displaystyle-2\beta^{2}E\nu(g_{1}(h(1)+1,h^{\prime}(1)-1))$ $\displaystyle+O(H_{1}+1).$

∎

### 4.3 Proof of Lemma 3.18, 3.19

Recall that

$T^{h}S^{h^{\prime}}=g(h,h^{\prime})=\prod_{v\geqslant 1}U_{v},\quad\text{and}\quad\varepsilon(v)=\varepsilon_{v_{1},v_{2}}-Q_{v_{1},v_{2}}.$

For $T^{h}S^{h^{\prime}}$, we have the additional property that if $v\neq v^{\prime}$, then

$\displaystyle V_{v}\cap V_{v^{\prime}}=\emptyset.$ (4.7)

As in the previous section, we first introduce some notation to characterize the formula in (2.4). Denote the total number of replicas appearing in $g(h,h^{\prime})$ by

$m:=2h+h^{\prime}.$

For each $a\in[2m]$, denote by $a^{\prime\prime}$ the corresponding new replica in $\nu^{\prime}_{0}(g(h,h^{\prime}))$, and let $\text{sgn}(a,b):=(-1)^{|\{a,b\}\cap[m]|}$. 
Our goal is to compute the following derivative of $\nu(g(h,h^{\prime}))$:

$\displaystyle\nu_{0}^{\prime}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)$ $\displaystyle=\frac{\beta^{2}}{2}\sum_{1\leqslant a,b\leqslant 2m}\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-\mu_{a,b})\prod_{v>1}U^{-}_{v}\Big)-\mathcal{R}_{m,g}$ (4.8) $\displaystyle=\frac{\beta^{2}}{2}\sum_{1\leqslant a,b\leqslant 2m}\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-Q_{a,b})\prod_{v>1}U^{-}_{v}\Big)$ (4.9) $\displaystyle\quad-\frac{\beta^{2}}{2}\sum_{1\leqslant a,b\leqslant 2m}\text{sgn}(a,b)\nu_{0}(\varepsilon(1)\varepsilon_{a^{\prime\prime},b^{\prime\prime}})\nu_{0}\Big((R^{-}_{a^{\prime\prime},b^{\prime\prime}}-Q_{a^{\prime\prime},b^{\prime\prime}})\prod_{v>1}U^{-}_{v}\Big).$ (4.10)

Unlike in the case of $T_{1},S_{1}$, here $\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\neq 0$ for all pairs of replicas $(a,b)$. To simplify the computation, for each $(a,b)\in[2m]\times[2m]$, consider the corresponding term

$\displaystyle D_{a,b}:=\nu_{0}(\varepsilon(1)\varepsilon_{a,b})\nu_{0}\Big((R^{-}_{a,b}-Q_{a,b})\prod_{v>1}U^{-}_{v}\Big)-\nu_{0}(\varepsilon(1)\varepsilon_{a^{\prime\prime},b^{\prime\prime}})\nu_{0}\Big((R^{-}_{a^{\prime\prime},b^{\prime\prime}}-Q_{a^{\prime\prime},b^{\prime\prime}})\prod_{v>1}U^{-}_{v}\Big).$ (4.11)

Then we can rewrite (2.4) as a sum over all such pairs $(a,b)$:

$\nu_{0}^{\prime}\Big(\varepsilon(1)\prod_{v>1}U^{-}_{v}\Big)=\frac{\beta^{2}}{2}\sum_{(a,b)\in[2m]\times[2m]}D_{a,b}.$

We first characterize the pairs $(a,b)$ such that $D_{a,b}=0$. Observe that if $\{a,b\}\cap[m]=\emptyset$, neither $(a,b)$ nor $(a^{\prime\prime},b^{\prime\prime})$ appears in any of the terms $U^{-}_{v}$.
# Sparse HP Filter: Finding Kinks in the COVID-19 Contact Rate

Acknowledgments: We would like to thank the editor and an anonymous referee for helpful comments. This work is in part supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2018S1A5A2A01033487), the McMaster COVID-19 Research Fund (Stream 2), the European Research Council (ERC-2014-CoG-646917-ROMIA) and a UK Economic and Social Research Council research grant (ES/P008909/1) to the CeMMAP.

Sokbae Lee, Department of Economics, Columbia University, 420 West 118th Street, New York, NY 10027, USA; Centre for Microdata Methods and Practice, Institute for Fiscal Studies, 7 Ridgmount Street, London WC1E 7AE, UK. E-mail: <EMAIL_ADDRESS>

Yuan Liao, Department of Economics, Rutgers University, 75 Hamilton St., New Brunswick, NJ 08901, USA. E-mail: <EMAIL_ADDRESS>

Myung Hwan Seo (Correspondence), Department of Economics, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea. E-mail: <EMAIL_ADDRESS>

Youngki Shin, Department of Economics, McMaster University, 1280 Main St. W., Hamilton, ON L8S 4L8, Canada. E-mail: <EMAIL_ADDRESS>

###### Abstract

In this paper, we estimate the time-varying COVID-19 contact rate of a Susceptible-Infected-Recovered (SIR) model. Our measurement of the contact rate is constructed using data on actively infected, recovered and deceased cases. We propose a new trend filtering method that is a variant of the Hodrick-Prescott (HP) filter, constrained by the number of possible kinks. We term it the _sparse HP filter_ and apply it to daily data from five countries: Canada, China, South Korea, the UK and the US. Our new method yields kinks that are well aligned with actual events in each country. We find that the sparse HP filter provides fewer kinks than the $\ell_{1}$ trend filter, while both methods fit the data equally well. Theoretically, we establish risk consistency of both the sparse HP and $\ell_{1}$ trend filters. 
Ultimately, we propose to use time-varying _contact growth rates_ to document and monitor outbreaks of COVID-19.

Keywords: COVID-19, trend filtering, knots, piecewise linear fitting, Hodrick-Prescott filter

JEL codes: C51, C52, C22

## 1 Introduction

Since March 2020, there has been a meteoric rise in economic research on COVID-19. New research outputs have been appearing on a daily and weekly basis at an unprecedented level.1 To sample a few, Ludvigson et al. (2020) quantified the macroeconomic impact of COVID-19 by using data on costly and deadly disasters in recent US history; Manski and Molinari (2020) and Manski (2020) applied the principle of partial identification to the infection rate and antibody tests, respectively; Chernozhukov et al. (2020) used US state-level data to study determinants of social distancing behavior.

1 The major outlets for economists are: arXiv working papers, NBER working papers, and CEPR’s new working paper series called “Covid Economics: Vetted and Real-Time Papers”, among others.

Across a wide spectrum of research, there is a rapidly emerging strand of literature based on a Susceptible-Infected-Recovered (SIR) model and its variants (see, e.g., Hethcote, 2000, for a review of the SIR and related models). Many economists have embraced the SIR-type models as new tools to study the COVID-19 pandemic. Avery et al. (2020) provided a review of the SIR models for economists, calling for new research in economics. A variety of economic models and policy simulations have been built on the SIR-type models. See Acemoglu et al. (2020), Alvarez et al. (2020), Atkeson (2020), Eichenbaum et al. (2020), Pindyck (2020), Stock (2020), Kim et al. (2020), and Toda (2020) among many others.

One of the central parameters in the SIR-type models is the contact rate, typically denoted by $\beta$.2

2 It is also called the transmission rate by Stock (2020). 
It measures “the average number of adequate contacts (i.e., contacts sufficient for transmission) of a person per unit time” (Hethcote, 2000). The contact number $\beta/\gamma$ is the product of $\beta$ and the average infectious period, denoted by $1/\gamma$; the contact number is interpreted as “the average number of adequate contacts of a typical infective during the infectious period” (Hethcote, 2000).

The goal of this paper is to estimate the time-varying COVID-19 contact rate, say $\beta_{t}$. In canonical SIR models, $\beta$ is a time-constant parameter. However, it may vary over time due to multiple factors. For example, as pointed out by Stock (2020), self-isolation, social distancing and lockdown may reduce $\beta$. To estimate a SIR-type model, Fernández-Villaverde and Jones (2020) allowed for a time-varying contact rate to reflect behavioral and policy-induced changes associated with social distancing. In particular, they estimated $\beta_{t}$ using data on deaths at city, state and country levels. Their main focus was to simulate future outcomes for many cities, states and countries.

Researchers have also adopted nonlinear time-series models from the econometric toolbox. For example, Li and Linton (2020) analyzed the daily data on the number of new cases and the number of new deaths with a quadratic time trend model in logs. Their main purpose was to estimate the peak of the pandemic. Liu et al. (2020) studied the density forecasts of the daily number of active infections for a panel of countries/regions. They modeled the growth rate of active infections as autoregressive fluctuations around a piecewise linear trend with a single break. Hartl et al. (2020) used a linear trend model in logs with a trend break to fit German confirmed cases. Harvey and Kattuman (2020) used a Gompertz model with a time-varying trend to fit and forecast German and UK new cases and deaths. 
In this paper, we aim to synthesize the time-varying contact rate with nonparametric time series modeling. Specifically, we build a new nonparametric regression model for $\beta_{t}$ that allows for a piecewise linear trend with multiple kinks at unknown dates. We analyze daily data from the Johns Hopkins University Center for Systems Science and Engineering (Dong et al., 2020, JHU CSSE) and suggest a particular transformation of the data that can be regarded as a noisy measurement of the time-varying $\beta_{t}$. Our measurement of $\beta_{t}$, which is constructed from daily data on confirmed, recovered and deceased cases, is different from that of Fernández-Villaverde and Jones (2020), who used only death data. We believe the two measurements complement each other.

However, the SIR model is at best a first-order approximation to the real world; a raw series of $\beta_{t}$ would be too noisy to draw inferences about the underlying contact rate. In fact, the raw series exhibits high degrees of skewness and time-varying volatility even after the log transformation. To extract the time-varying signal from the noisy measurements, we consider nonparametric trend filters that produce possibly multiple kinks in $\beta_{t}$, where the kinks are induced by government policies and changes in individual behavior. A natural candidate method that yields kinks is $\ell_{1}$ trend filtering (e.g., Kim et al., 2009). However, $\ell_{1}$ trend filtering is akin to the LASSO; hence, it may produce too many kinks, just as the LASSO tends to select too many covariates. In view of this concern, we propose a novel filtering method by adding a constraint on the maximum number of kinks to the popular Hodrick and Prescott (1997) (HP) filter. It turns out that this method produces a smaller number of kink points than $\ell_{1}$ trend filtering when both methods fit the data equally well. In view of that, we call our new method the _sparse HP filter_. 
We find that the estimated kinks are well aligned with actual events in each country. To document and monitor outbreaks of COVID-19, we propose to use piecewise constant _contact growth rates_ based on the piecewise linear trend estimates from the sparse HP filter. They provide not only an informative summary of past outbreaks but also a useful surveillance measure.

The remainder of the paper is organized as follows. In Section 2, we describe a simple time series model of the time-varying contact rate. In Section 3, we introduce two classes of filtering methods. In Section 4, we take a first look at the US data, as a benchmark country. In Section 5, we present empirical results for five countries: Canada, China, South Korea, the UK and the US. In Section 6, we establish risk consistency of both the sparse HP and $\ell_{1}$ trend filters. Section 7 concludes, and the appendices include additional materials. The replication R codes for the empirical results are available at https://github.com/yshin12/sparseHP.

Finally, we add the caveat that the empirical analysis in the paper was carried out in mid-June using daily observations up to June 8th. As a result, some remarks and analysis might be out of sync with the COVID-19 pandemic in real time.

## 2 A Time Series Model of the COVID-19 Contact Rate

In this section, we develop a time-series model of the contact rate. Our model specification is inspired by the classical SIR model, which has been adopted by many economists in the current coronavirus pandemic. 
We start with a discrete version of the SIR model, augmented with deaths, adopted from Pindyck (2020):

$\displaystyle\begin{split}\Delta I_{t}&=\beta S_{t-1}I_{t-1}-\gamma I_{t-1},\\ \Delta D_{t}&=\gamma_{d}I_{t-1},\\ \Delta R_{t}&=\gamma_{r}I_{t-1},\\ 1&=S_{t}+I_{t}+D_{t}+R_{t},\\ \gamma&=\gamma_{r}+\gamma_{d},\end{split}$ (2.1)

where the (initial) population size is normalized to 1, $S_{t}$ is the proportion of the population that is susceptible, $I_{t}$ the fraction infected, $D_{t}$ the proportion that have died, and $R_{t}$ the fraction that have recovered. The parameter $\gamma=\gamma_{r}+\gamma_{d}$ governs the rate at which infectives transfer to the state of being deceased or recovered. In the emerging economics literature on COVID-19, the contact rate $\beta$ is viewed as the parameter that can be affected by changes in individual behavior and government policies through social distancing and lockdown. We follow this literature and let $\beta=\beta_{t}$ be time-varying.

Let $C_{t}$ be the proportion of confirmed cases, that is, $C_{t}=I_{t}+R_{t}+D_{t}$. In words, the confirmed cases consist of actively infected, recovered and deceased cases. Summing the equations in (2.1) gives $\Delta C_{t}=\Delta I_{t}+\Delta R_{t}+\Delta D_{t}=\beta_{t}S_{t-1}I_{t-1}$, so that

$\displaystyle\beta_{t}=Y_{t}:=\frac{\Delta C_{t}}{I_{t-1}S_{t-1}}.$ (2.2)

Assume that we have daily data on $\Delta C_{t}$, $\Delta R_{t}$ and $\Delta D_{t}$. From these, we can construct the cumulative series $C_{t}$, $R_{t}$ and $D_{t}$. Then $S_{t}=1-C_{t}$ and $I_{t}=C_{t}-R_{t}-D_{t}$. This means that we can obtain the time series of $\beta_{t}$ from $Y_{t}$. We formalize this in the following assumption.

###### Assumption 1 (Data).

For each $t$, we observe $(C_{t},R_{t},D_{t})$.

By Assumption 1, we can construct $Y_{t}=\Delta C_{t}/(I_{t-1}S_{t-1})$. Assumption 1 is a key assumption of the paper. We use daily data from JHU CSSE, and they are subject to measurement errors, which could bias our estimates. 
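The mapping from observed daily increments to the raw contact-rate series in (2.2) and (2.3) can be sketched as follows. This is a minimal numpy illustration with made-up increments; the variable names are ours, not from the paper's replication code:

```python
import numpy as np

# Hypothetical daily increments of confirmed, recovered, and deceased
# shares (population normalized to 1, as in the SIR model (2.1)).
dC = np.array([0.001, 0.002, 0.003, 0.002])
dR = np.array([0.000, 0.000, 0.001, 0.001])
dD = np.array([0.000, 0.000, 0.000, 0.001])

C = np.cumsum(dC)   # cumulative confirmed:  C_t = I_t + R_t + D_t
R = np.cumsum(dR)   # cumulative recovered
D = np.cumsum(dD)   # cumulative deceased
S = 1.0 - C         # susceptible share:     S_t = 1 - C_t
I = C - R - D       # actively infected:     I_t = C_t - R_t - D_t

# Raw contact-rate measurement (2.2): Y_t = dC_t / (I_{t-1} S_{t-1}),
# and its log transform y_t from (2.3), defined for t >= 2.
Y = dC[1:] / (I[:-1] * S[:-1])
y = np.log(dC[1:]) - np.log(I[:-1]) - np.log(S[:-1])
```

By construction `y` equals `np.log(Y)` elementwise, which is the form used by the multiplicative error structure assumed in (2.4).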
In Appendix A, we show that the time series model given in this section is robust to some degree of under-reporting of confirmed cases. However, our estimates are likely to be biased if the under-reporting is time-varying. For example, this could happen because testing capacity in many countries has expanded over the sample period. Nonetheless, we believe that our measurement of $Y_{t}$ primarily captures the genuine underlying trend of $\beta_{t}$.

Moreover, because the SIR model in (2.1) is at best a first-order approximation, a raw series of $Y_{t}$ would be too noisy to be used as the actual series of the underlying contact rate $\beta_{t}$. In other words, $\beta_{t}\neq Y_{t}$ in actual data, and it is natural to include an error term in $Y_{t}$. Because $\beta_{t}$ has to be positive, we adopt a multiplicative error structure and make the following assumption.

###### Assumption 2 (Time-Varying Signal plus Regression Error).

For each $t$, the unobserved random variable $\beta_{t}$ satisfies

$\displaystyle\log Y_{t}=\log\beta_{t}+u_{t},$

where the error term $u_{t}$ has the following properties:

1. 1. $\mathbb{E}[u_{t}|\mathcal{F}_{t-1}]=0$, where $\mathcal{F}_{t-1}$ is the natural filtration at time $t-1$;

2. 2. $\mathbb{E}[u_{t}^{2}|\mathcal{F}_{t-1}]=\sigma_{t}^{2}>0$ for some time-varying conditional variance $\sigma_{t}^{2}$.

Define

$\displaystyle y_{t}:=\log(\Delta C_{t})-\log(I_{t-1})-\log S_{t-1}.$ (2.3)

Under Assumption 2, (2.2) can be rewritten as

$\displaystyle y_{t}=\log\beta_{t}+u_{t}.$ (2.4)

The time-varying parameter $\log\beta_{t}$ would not be identified without further restrictions. Because it is likely to be affected by government policies and cannot change too rapidly, we will assume that it follows a piecewise linear trend:

###### Assumption 3 (Piecewise Trend). 
The time-varying parameter $f_{0,t}:=\log\beta_{t}$ follows a piecewise linear trend with at most $\kappa$ kinks, where the set of kinks is defined by $\{t=1,...,T:f_{0,t}-f_{0,t-1}\neq f_{0,t+1}-f_{0,t}\}$ and the locations of the kinks are unknown.

The main goal of this paper is to estimate $\log\beta_{t}$ and its kinks under Assumptions 1, 2 and 3.

## 3 Filtering the COVID-19 Contact Rate

We consider two different classes of trend filtering methods to produce piecewise linear estimators of $f_{0,t}:=\log\beta_{t}$. The first class is based on $\ell_{1}$ trend filtering, which has become popular recently; see, e.g., Kim et al. (2009), Tibshirani (2014), and Wang et al. (2016) among others. The starting point of the second class is the HP filter, which has been popular in macroeconomics and has frequently been used to separate trend from cycle. The standard convention in the literature is to set $\lambda=1600$ for quarterly time series. For example, Ravn and Uhlig (2002) suggested a method for adjusting the HP filter to the frequency of observations; de Jong and Sakarya (2016) and Cornea-Madeira (2017) established some representation results; Hamilton (2018) provided criticism of the HP filter; Phillips and Shi (2019) advocated a boosted version of the HP filter via $L_{2}$-boosting (Bühlmann and Yu, 2003) that can detect multiple structural breaks.

We view kinks as more suitable than breaks for modelling $\beta_{t}$ using daily data: it is unlikely that, within a few days, social distancing and lockdown would diminish the degree of contagion of COVID-19 with an abrupt jump. The original HP filter cannot produce any kink, just as ridge regression does not select any variable. We build the sparse HP filter by drawing on the recent literature that uses an $\ell_{0}$-constraint or -penalty (see, e.g., Bertsimas et al., 2016; Chen and Lee, 2018; Chen and Lee, 2020; Huang et al., 2018). 
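As background for the second class, recall that the classical HP filter solves a ridge-type problem in the second differences of the trend and therefore has a closed-form solution. The following numpy sketch is our own illustration, not the paper's replication code:

```python
import numpy as np

def hp_filter(y, lam):
    """Hodrick-Prescott filter: minimize ||y - f||^2 + lam * ||D f||^2,
    where D is the (T-2) x T second-difference matrix. The first-order
    condition gives the closed form f = (I + lam * D'D)^{-1} y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.diff(np.eye(T), n=2, axis=0)  # rows compute f_t - 2 f_{t+1} + f_{t+2}
    return np.linalg.solve(np.eye(T) + lam * D.T @ D, y)

# A noisy linear trend: the filter smooths the noise, but, being a ridge
# penalty, it shrinks second differences toward zero on noisy data
# without setting them exactly to zero, so it produces no exact kinks.
rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)
y = 0.1 * t + rng.normal(scale=0.2, size=60)
f = hp_filter(y, lam=1600.0)
```

An exactly linear input is a fixed point of the filter (both the fit term and the penalty are then zero at $f=y$), which is why the HP filter alone cannot locate kinks and motivates the sparse variant below.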
### 3.1 $\ell_{1}$ Trend Filtering

In $\ell_{1}$ trend filtering, the trend estimate $f_{t}$ is a minimizer of

$\displaystyle\sum_{t=1}^{T}(y_{t}-f_{t})^{2}+\lambda\sum_{t=2}^{T-1}|f_{t-1}-2f_{t}+f_{t+1}|,$ (3.1)

which is related to Hodrick and Prescott (1997) filtering; the latter is the minimizer of

$\displaystyle\sum_{t=1}^{T}(y_{t}-f_{t})^{2}+\lambda\sum_{t=2}^{T-1}(f_{t-1}-2f_{t}+f_{t+1})^{2}.$ (3.2)

In this paper, the main interest is to find the kinks in the trend. For that purpose, $\ell_{1}$ trend filtering is more suitable than HP filtering. The main difficulty in using (3.1) is the choice of $\lambda$. This is especially challenging since the time series behavior of $y_{t}$ is largely unknown. The $\ell_{1}$ trend filter is akin to the LASSO. In view of an analogy to square-root LASSO (Belloni et al., 2011), it might be useful to consider a square-root variant of (3.1):

$\displaystyle\left(\sum_{t=1}^{T}(y_{t}-f_{t})^{2}\right)^{1/2}+\lambda\sum_{t=2}^{T-1}|f_{t-1}-2f_{t}+f_{t+1}|.$ (3.3)

We will call (3.3) _square-root $\ell_{1}$ trend filtering_. Both (3.1) and (3.3) can be solved via convex optimization software, e.g., CVXR (Fu et al., 2017).

### 3.2 Sparse Hodrick-Prescott Trend Filtering

As an alternative to $\ell_{1}$ trend filtering, we may exploit Assumption 3 and consider an $\ell_{0}$-constrained version of trend filtering:

$\displaystyle\begin{split}&\sum_{t=1}^{T}(y_{t}-f_{t})^{2}\\ &\text{subject to}\\ &\sum_{t=2}^{T-1}1\{f_{t}-f_{t-1}\neq f_{t+1}-f_{t}\}\leq\kappa.\end{split}$ (3.4)

The formulation in (3.4) is related to the method called best subset selection (see, e.g., Bertsimas et al., 2016; Chen and Lee, 2018). It requires only the input of $\kappa$. However, because of the nature of the $\ell_{0}$-(pseudo)norm, it would not work well if the signal-to-noise ratio (SNR) is low (Hastie et al., 2017; Mazumder et al., 2017). This is likely to be a concern for our measurement of the log contact rate. 
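For intuition, with a short series the kink-constrained problem (3.4) can be solved exactly by enumeration: any $f$ with kinks only at a set $K$ is a linear combination of $1$, $t$, and the hinge functions $(t-k)_{+}$ for $k\in K$, so each candidate kink set reduces to a small least-squares problem (an optional HP-style ridge penalty on second differences anticipates the sparse HP filter defined next). The brute-force sketch below is ours and purely illustrative; its cost is exponential in $T$, which is why the paper instead uses the MIQP formulation of Section 3.2:

```python
import numpy as np
from itertools import combinations

def sparse_trend_bruteforce(y, kappa, lam=0.0):
    """Exact solution of the kink-constrained problem (3.4) for small T
    by enumerating kink sets of size <= kappa; lam > 0 adds an HP-style
    ridge penalty on second differences. Illustrative only: the cost is
    exponential in T."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    t = np.arange(T, dtype=float)
    D = np.diff(np.eye(T), n=2, axis=0)             # second differences
    best = (np.inf, None, None)
    for size in range(kappa + 1):
        for K in combinations(range(1, T - 1), size):   # interior kinks
            # Basis of piecewise-linear functions with kinks only in K.
            B = np.column_stack([np.ones(T), t] +
                                [np.maximum(t - t[j], 0.0) for j in K])
            A = B.T @ B + lam * (D @ B).T @ (D @ B)
            f = B @ np.linalg.solve(A, B.T @ y)
            val = np.sum((y - f) ** 2) + lam * np.sum((D @ f) ** 2)
            if val < best[0]:
                best = (val, f, K)
    return best[1], best[2]   # fitted trend and estimated kink set
```

On a noise-free piecewise-linear input with a single kink, the enumeration recovers the kink location exactly; on noisy data the $\lambda$ term regularizes the low-SNR case flagged above.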
To regularize the best subset selection procedure, it has been suggested in the literature that (3.4) be combined with $\ell_{1}$ or $\ell_{2}$ penalization (Bertsimas and Van Parys, 2020; Mazumder et al., 2017). We adopt Bertsimas and Van Parys (2020) and propose an $\ell_{0}$-constrained version of the Hodrick-Prescott filter: $\displaystyle\begin{split}&\sum_{t=1}^{T}(y_{t}-f_{t})^{2}+\lambda\sum_{t=2}^{T-1}(f_{t-1}-2f_{t}+f_{t+1})^{2}\\\ &\text{subject to }\\\ &\sum_{t=2}^{T-1}1\\{f_{t}-f_{t-1}\neq f_{t+1}-f_{t}\\}\leq\kappa.\end{split}$ (3.5) As in (3.4), the tuning parameter $\kappa$ controls how many kinks are allowed for; thus, we have direct control over the resulting segments of different slopes. The $\ell_{2}$ penalty term is useful for dealing with the low SNR problem in the COVID-19 data. We will call (3.5) _sparse HP trend filtering_.

Problem (3.5) can be solved by mixed integer quadratic programming (MIQP). Rewrite the objective function in (3.5) as $\displaystyle\sum_{t=1}^{T}(y_{t}-f_{t})^{2}+\lambda\sum_{t=2}^{T-1}(f_{t-1}-2f_{t}+f_{t+1})^{2}$ subject to $z_{t}\in\\{0,1\\},t=2,\ldots,T-1$, $\underline{f}\leq f_{t}\leq\overline{f}$, $\sum_{t=2}^{T-1}z_{t}\leq\kappa$, and $\displaystyle-Mz_{t}\leq f_{t-1}-2f_{t}+f_{t+1}\leq Mz_{t},\;t=2,\ldots,T-1.$ This is called a big-M formulation; it requires that $\max_{t}|f_{t-1}-2f_{t}+f_{t+1}|\leq M.$ We need to choose the auxiliary parameters $\underline{f}$, $\overline{f}$ and $M$. We set $\underline{f}=\min y_{t}$ and $\overline{f}=\max y_{t}$. One simple practical method for choosing $M$ is to set $\displaystyle M=\max_{t=2,\ldots,T-1}|y_{t-1}-2y_{t}+y_{t+1}|.$ (3.6) To implement the proposed method, it is simpler to write the MIQP problem above in matrix notation. Let $\bm{y}$ denote the $(T\times 1)$ vector of $y_{t}$’s and $\bm{1}$ a vector of 1’s whose dimension may vary.
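The big-M pieces of the formulation can be sketched in pure Python (helper names are our own): the data-driven choice of $M$ in (3.6), a check of the constraint $-Mz_{t}\leq f_{t-1}-2f_{t}+f_{t+1}\leq Mz_{t}$ that forces $z_{t}=1$ at any kink, and the equivalence between the quadratic form $\bm{f}^{\top}\bm{D}^{\top}\bm{D}\bm{f}$ of the matrix notation and the sum of squared second differences:

```python
def big_M(y):
    """Data-driven choice (3.6): largest absolute second difference of y."""
    return max(abs(y[t-1] - 2*y[t] + y[t+1]) for t in range(1, len(y) - 1))

def feasible_z(f, z, M):
    """Check -M z_t <= f_{t-1} - 2 f_t + f_{t+1} <= M z_t for interior t
    (z[i] corresponds to interior index i + 1). A nonzero second
    difference forces the matching z_t to be 1."""
    return all(abs(f[t-1] - 2*f[t] + f[t+1]) <= M * z[t-1]
               for t in range(1, len(f) - 1))

def D_matrix(T):
    """(T-2) x T second-order difference matrix with rows [... 1 -2 1 ...]."""
    return [[{0: 1, 1: -2, 2: 1}.get(j - i, 0) for j in range(T)]
            for i in range(T - 2)]

def quad_form_DtD(f):
    """f' D' D f, computed via D f = the vector of second differences."""
    D = D_matrix(len(f))
    Df = [sum(r * v for r, v in zip(row, f)) for row in D]
    return sum(v * v for v in Df)
```

In the MIQP, $\sum_{t}z_{t}\leq\kappa$ then caps the number of indicators that can be switched on, which is exactly the kink budget.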
We solve $\displaystyle\min_{\bm{f},\bm{z}}\left[(\bm{y}-\bm{f})^{\top}(\bm{y}-\bm{f})+\lambda\bm{f}^{\top}\bm{D}^{\top}\bm{D}\bm{f}\right]$ (3.7) subject to $\bm{z}\in\\{0,1\\}^{T-2}$, $\underline{f}\bm{1}\leq\bm{f}\leq\overline{f}\bm{1}$, $\bm{1}^{\top}\bm{z}\leq\kappa$, $-M\bm{z}\leq\bm{D}\bm{f}\leq M\bm{z}$, where $\bm{D}$ is the $(T-2)\times T$ second-order difference matrix such that $\bm{D}=\left[\begin{matrix}1&-2&1&&&&\\\ &1&-2&1&&&\\\ &&\ddots&\ddots&\ddots&&\\\ &&&1&-2&1&\\\ &&&&1&-2&1\\\ \end{matrix}\right]$ with entries not shown above being zero. Let $\widehat{\bm{f}}$ and $\widehat{\bm{z}}$ denote the resulting minimizers. It is straightforward to see that $\widehat{\bm{f}}$ also solves (3.5). Therefore, $\widehat{\bm{f}}$ is the $(T\times 1)$ vector of trend estimates and $\widehat{K}:=\\{t=2,\ldots,T-1:\widehat{z}_{t}=1\\}$ is the index set of estimated kinks. The MIQP problem can be solved via modern mixed integer programming software, e.g., Gurobi. Because the sample size for $y_{t}$ is typically less than 100, the computational speed of MIQP is fast enough to carry out cross-validation to select tuning parameters. We summarize the equivalence between the original and MIQP formulations in the following proposition.

###### Proposition 3.1.

Define $\displaystyle\begin{split}\mathbb{F}(\kappa):=\\{\bm{f}=(f_{1},\ldots,f_{T}):&\min_{t}f_{t}\geq\underline{f},\max_{t}f_{t}\leq\overline{f},\max_{t}|f_{t-1}-2f_{t}+f_{t+1}|\leq M,\\\ &\sum_{t=2}^{T-1}1\\{f_{t}-f_{t-1}\neq f_{t+1}-f_{t}\\}\leq\kappa\\}.\end{split}$ Let $\widehat{\bm{f}}_{\textrm{SHP}}:=\\{\widehat{f}_{t}:t=1,\ldots,T\\}$ denote a solution to $\displaystyle\min_{\bm{f}\in\mathbb{F}(\kappa)}S_{T}(\bm{f},\lambda):=(\bm{y}-\bm{f})^{\top}(\bm{y}-\bm{f})+\lambda\bm{f}^{\top}\bm{D}^{\top}\bm{D}\bm{f}.$ Let $\bm{\widehat{f}}_{\textrm{MIQP}}$ denote a solution to (3.7).
Then, both $\widehat{\bm{f}}_{\textrm{SHP}}$ and $\bm{\widehat{f}}_{\textrm{MIQP}}$ are equivalent in the sense that $\widehat{\bm{f}}_{\textrm{SHP}}\in\mathbb{F}(\kappa),$ $\bm{\widehat{f}}_{\textrm{MIQP}}\in\mathbb{F}(\kappa),$ and $S_{T}(\widehat{\bm{f}}_{\textrm{SHP}},\lambda)=S_{T}(\bm{\widehat{f}}_{\textrm{MIQP}},\lambda).$

### 3.3 Selection of Tuning Parameters

We first consider the sparse HP filter. There are two tuning parameters: $\lambda$ and $\kappa$. It is likely that there will be an initial stage of coronavirus spread, followed by lockdown or social distancing. Even without any policy intervention, the contact rate will eventually come down, since many people will voluntarily self-isolate and there is a chance of herd immunity. Hence, $\kappa$ should be at least 1. If $\kappa$ is too large, it is difficult to interpret the resulting kinks. In view of these considerations, we set the possible values to $\kappa\in\mathcal{K}=\\{2,3,4\\}$. For each pair $(\kappa,\lambda)$, let $\widehat{\bm{f}}_{-s}(\kappa,\lambda)$ denote the leave-one-out estimator of $\bm{f}_{s}$. That is, it is the sparse HP filter estimate obtained by solving: $\displaystyle\begin{split}&\sum_{t=1,t\neq s}^{T}(y_{t}-f_{t})^{2}+\lambda\sum_{t=2}^{T-1}(f_{t-1}-2f_{t}+f_{t+1})^{2}\\\ &\text{subject to }\\\ &\sum_{t=2}^{T-1}1\\{f_{t}-f_{t-1}\neq f_{t+1}-f_{t}\\}\leq\kappa.\end{split}$ (3.8) The only departure from (3.5) is that we replace the fidelity term $\sum_{t=1}^{T}(y_{t}-f_{t})^{2}$ with $\sum_{t=1,t\neq s}^{T}(y_{t}-f_{t})^{2}$. We choose the optimal $(\kappa,\lambda)$ by $\displaystyle\min_{(\kappa,\lambda)\in\mathcal{K}\times\mathcal{L}}\sum_{t=1}^{T}\left\\{y_{t}-\widehat{\bm{f}}_{-t}(\kappa,\lambda)\right\\}^{2},$ (3.9) where $\mathcal{L}$ is the set of possible values of $\lambda$. We view $\lambda$ as an auxiliary tuning parameter that mitigates the low SNR problem. Hence, we take $\mathcal{L}$ to be in a range of relatively smaller values than the typical values used for the HP filter.
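The leave-one-out selection rule (3.8)-(3.9) can be sketched generically (a pure-Python sketch; `fit` is a hypothetical stand-in for refitting the filter without observation $s$ and returning its trend value at $s$):

```python
def loocv_score(y, fit, params):
    """Criterion (3.9): sum over t of (y_t - fhat_{-t})^2, where fhat_{-t}
    is fitted with observation t held out; fit(y_minus, t, params) stands
    in for solving (3.8)."""
    score = 0.0
    for t in range(len(y)):
        y_minus = y[:t] + y[t + 1:]
        score += (y[t] - fit(y_minus, t, params)) ** 2
    return score

def select_params(y, fit, grid):
    """Pick the (kappa, lambda) pair on the grid minimizing (3.9)."""
    return min(grid, key=lambda p: loocv_score(y, fit, p))

# Toy stand-in filter: ignores params and predicts the mean of the rest.
mean_fit = lambda y_minus, t, params: sum(y_minus) / len(y_minus)
```

With the actual sparse HP filter, `fit` would solve the MIQP of Section 3.2 once per held-out observation; the small sample sizes involved keep this feasible, as noted above.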
In the numerical work, we let $\mathcal{L}$ be a grid of equally spaced points on the $\log_{2}$ scale. We now turn to the HP, $\ell_{1}$ and square-root $\ell_{1}$ trend filters. For each filter, we choose $\lambda$ such that the fidelity term $\sum_{t=1}^{T}(y_{t}-f_{t})^{2}$ is the same as that of the sparse HP filter. In this way, we can compare the different methods while holding fixed how well they fit the data. Alternatively, we could choose $\lambda$ by leave-one-out cross validation for each filtering method; however, in that case, it would be more difficult to make comparisons across methods. Since our main focus is to find the kinks in the contact rate, we fine-tune all the filters to attain the same level of $\sum_{t=1}^{T}(y_{t}-f_{t})^{2}$, based on the sparse HP filter’s cross validation result.

## 4 A First Look at the Time-Varying Contact Rate

As a benchmark, we have a first look at the US data. The dataset is obtained via the R package coronavirus (Krispin, 2020), which provides a daily summary of COVID-19 cases from the Johns Hopkins University Center for Systems Science and Engineering (Dong et al., 2020). Following Liu et al. (2020), we set the first date of the analysis to begin when the number of cumulative cases reaches 100 (that is, March 4 for the US). To smooth the data minimally, we take $Y_{t}$ in (2.2) to be a three-day simple moving average: that is, $Y_{t}=(\breve{Y}_{t}+\breve{Y}_{t-1}+\breve{Y}_{t-2})/3$, where $\breve{Y}_{t}$ is the daily observation of $Y_{t}$ constructed from the dataset. (Liu et al. (2020) used one-sided three-day rolling averages; Fernández-Villaverde and Jones (2020) took 5-day centered moving averages.) We then take the log to obtain $y_{t}=\log Y_{t}$.

Figure 1: US Data. Note: The orange vertical line denotes the lockdown date, March 30. The population size is normalized to be 1.

Figure 1 has four panels.
The top-left panel shows the fraction of daily positives, the top-right panel the fraction of lagged cumulative infectives, the bottom-left panel the fraction of lagged cumulative susceptibles, and the bottom-right $Y_{t}=\Delta C_{t}/(I_{t-1}S_{t-1})$. In the US, statewide stay-at-home orders started in California on March 20 and had extended to 30 states by March 30 (The New York Times, 2020d). The vertical line inserted in the figure corresponds to March 30, which we will call the “lockdown” date for simplicity, although there was no lockdown at the national level. As a noisy measurement of $\beta_{t}$, $Y_{t}$ shows enormous skewness and fluctuations, especially at the beginning of the study period. This indicates that the signal-to-noise ratio is low and time-varying as well. This pattern of the data has motivated Assumption 2. Because $S_{t-1}$ is virtually one throughout the analysis period (0.994 on June 8, the last date of the sample), $Y_{t}\approx\Delta C_{t}/I_{t-1}$, which is daily positives divided by the lagged infectives.

Figure 2: $\log Y_{t}$ as a raw time series of $\log\beta_{t}$ and parametric fitting. Note: The orange vertical line denotes the lockdown date, March 30.

Figure 2 shows the raw data along with parametric fitting. The top-left panel shows the logarithm of $Y_{t}$, which still exhibits some degree of skewness and time-varying variance. The fitted regression line is based on the following parametric regression model: $\displaystyle y_{t}=\alpha_{0}+\alpha_{1}(t-t_{0})1(t>t_{0})+\varepsilon_{t},$ (4.1) where $t_{0}$ is March 30. The simple idea behind (4.1) is that an initial, time-constant contact rate began to diminish over time after a majority of US states imposed stay-at-home orders.
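The data construction and the parametric mean just described can be sketched as follows (a minimal pure-Python sketch; function names are our own):

```python
def three_day_ma(x):
    """Three-day simple moving average (x_t + x_{t-1} + x_{t-2}) / 3,
    defined from the third observation on."""
    return [(x[t] + x[t-1] + x[t-2]) / 3 for t in range(2, len(x))]

def contact_rate_proxy(dC, I_lag, S_lag):
    """Y_t = Delta C_t / (I_{t-1} S_{t-1}); with S_{t-1} close to one this
    is approximately daily positives over lagged infectives."""
    return [d / (i * s) for d, i, s in zip(dC, I_lag, S_lag)]

def piecewise_mean(t, a0, a1, t0):
    """The regression mean in (4.1): a0 + a1 (t - t0) 1{t > t0}."""
    return a0 + (a1 * (t - t0) if t > t0 else 0.0)
```

Taking the log of the smoothed proxy gives the series $y_{t}=\log Y_{t}$ on which all the filters operate.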
In simple SIR models, the contact number $\beta/\gamma$ is identical to the basic reproduction number, denoted by $R_{0}$, which is viewed as a key threshold quantity in the sense that “an infection can get started in a fully susceptible population if and only if $R_{0}>1$ for many deterministic epidemiology models” (Hethcote, 2000). Since $\beta_{t}$ is time-varying in our framework, we may define a time-varying basic reproduction number by $R_{0}(t):=\beta_{t}/\gamma$. The top-right panel shows the estimates of the time-varying $R_{0}(t)$: $\displaystyle\widehat{R}_{0}(t):=\exp[\widehat{\alpha}_{0}+\widehat{\alpha}_{1}(t-t_{0})1(t>t_{0})]/\gamma,$ (4.2) where $\gamma=1/18$ is taken from Acemoglu et al. (2020); this corresponds to an average infectious period of 18 days. (The formula given in (4.2) is valid if the errors are homoskedastic, which is unlikely to be true in actual data. However, we present (4.2) here because it is simpler. Our main analysis focuses on estimation of the kinks based on $y_{t}$, not on estimating $R_{0}(t)$; we use the latter mainly to appreciate the magnitude of the kinks.) The parametric estimates of $R_{0}(t)$ started above 4 and reached $0.15$ at the end of the sample period. The bottom-left panel shows the residual plot on the $y_{t}$ scale and the bottom-right panel the residual plot on the $R_{0}(t)$ scale. In both panels, the estimated residuals appear biased and autocorrelated. In particular, the positive residuals at the end of the sample period are worrisome because the resulting predictions would be too optimistic.

## 5 Estimation Results

In this section, we present estimation results for five countries: Canada, China, South Korea, the UK and the US. These countries are not meant to be a random sample of the world; they are selected based on our familiarity with them, so that we can interpret the estimated kinks with narratives. We look at the US as a benchmark country and provide a detailed analysis in Section 5.1.
A condensed version of the estimation results for the other countries is provided in Section 5.2.

### 5.1 Benchmark: the US

Figure 3 summarizes the results of leave-one-out cross validation (LOOCV) as described in Section 3.3. The ranges of the tuning parameters were $\kappa\in\\{2,3,4\\}$ and $\lambda\in\\{2^{0},2^{1},\ldots,2^{5}\\}$. We can see that the choice of $\kappa$ seems to matter more than that of $\lambda$. Clearly, $\kappa=2$ provides the worst result, while $\kappa=3$ and $\kappa=4$ are relatively similar. The LOOCV criterion function was minimized at $(\widehat{\kappa},\widehat{\lambda})=(4,1)$.

Figure 3: Sparse HP Filtering: Leave-One-Out Cross Validation. Note: The red vertical line denotes the minimizer $(\widehat{\kappa},\widehat{\lambda})=(4,1)$ of the cross-validation objective function. The x-axis is on the $\log_{2}$ scale.

Figure 4: Sparse HP Filtering. Note: The grey curve in each panel represents the data $y_{t}$. Sparse HP filtering solves (3.5) and the parametric fit uses the linear regression (4.1). The estimated kinks, denoted by blue vertical lines, are: March 16, March 20, April 14, and May 13. The orange vertical line denotes the lockdown date, March 30.

Based on the tuning parameter selection in Figure 3, we show estimation results for the sparse HP filter in Figure 4. The structure of Figure 4 is similar to that of Figure 2. The top-left panel shows estimates of the sparse HP filter along with the raw series of $y_{t}$ and the parametric estimates shown in Figure 2. The top-right panel displays the counterparts in terms of $R_{0}(t)$. The bottom panels exhibit residual plots on the $\log\beta_{t}$ and $R_{0}(t)$ scales. The trend estimates from the sparse HP filter fit the data much better than the simple parametric estimates. The estimated kink dates are: March 16, March 20, April 14, and May 13. There are five periods based on them.

1. March 4 - March 16: this period corresponds to the initial epidemic stage;
2. March 16 - March 20: the contact rate peaked at the end of this period;
3. March 20 - April 14: a sharp decrease of the contact rate is striking;
4. April 14 - May 13: the contact rate decreased but less steeply;
5. May 13 - June 8: it continued to go down but its slope flattened further.

To provide narratives for these dates: President Trump declared a national emergency on March 13; the Centers for Disease Control and Prevention (CDC) recommended no gatherings of 50 or more people on March 15; New York City’s public school system announced that it would close on March 16; and California started stay-at-home orders on March 20 (The New York Times, 2020b, d). These events indicate that the second period was indeed the peak of the COVID-19 epidemic in the US. The impact of social distancing and stay-at-home orders across a majority of states is clearly visible in the third period. The fourth and fifth periods include state reopenings: for example, stay-at-home orders expired in Georgia and Texas on April 30, in Florida on May 4, in Massachusetts on May 18, and in New York on May 28 (The New York Times, 2020c). In short, unlike the parametric model with a single kink, the nonparametric trend estimates detect multiple changes in the slopes and provide kink dates that are well aligned with the actual events.

We now turn to the other filtering methods. In Figure 5, we show the selection of $\lambda$ for the HP, $\ell_{1}$ and square-root $\ell_{1}$ filters. As explained in Section 3.3, the penalization parameter $\lambda$ is chosen to ensure that all the methods fit the data equally well. Figure 6 shows the estimation results for the HP filter. The HP trend estimates trace the data quite well after late March, as is clear from the residual plots. However, there is no kink in the estimates due to the nature of the $\ell_{2}$ penalty term in the HP filter.
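The fidelity-matching rule from Section 3.3 used to select $\lambda$ for these filters can be sketched as follows (a pure-Python sketch; `fidelity_at` is a hypothetical stand-in for refitting the filter at a given $\lambda$ and returning $\sum_{t}(y_{t}-f_{t})^{2}$):

```python
def match_lambda(grid, fidelity_at, target):
    """Pick the lambda on the grid whose fidelity is closest to the
    target fidelity attained by the sparse HP filter."""
    return min(grid, key=lambda lam: abs(fidelity_at(lam) - target))

# Toy stand-in: a heavier penalty weight yields a worse (larger) fidelity.
toy_fidelity = lambda lam: 2.0 * lam
```

In the paper this rule is what produces $\lambda=30$, $0.9$, and $0.5$ for the HP, $\ell_{1}$, and square-root $\ell_{1}$ filters, respectively, each anchored to the sparse HP filter's cross-validated fit.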
The tuning parameter was $\lambda=30$, which is 30 times as large as the one used in the sparse HP filter. This is because for the HP filter, $\lambda$ is the main tuning parameter, whereas for the sparse HP filter, $\lambda$ plays the minor role of regularizing the $\ell_{0}$-constrained method.

Figure 5: Selection of $\lambda$. Note: The tuning parameter $\lambda$ is chosen by minimizing the distance between the two fidelities as described in Section 3.3. The selected tuning parameters for the HP, $\ell_{1}$, and square-root $\ell_{1}$ filters are 30, 0.9, and 0.5, respectively.

Figure 6: HP Filtering. Note: HP filtering solves (3.2). The orange vertical line denotes the lockdown date, March 30.

Figure 7: $\ell_{1}$ Filtering. Note: $\ell_{1}$ filtering solves (3.1). The estimated kinks, denoted by blue vertical lines, are: March 7, March 15, March 16, March 20, March 21, March 30, April 14, April 21, May 12, and May 27. The orange vertical line denotes the lockdown date, March 30. The $\ell_{1}$-filtering kink dates are calculated as any $t$ such that $|\Delta^{2}\log\hat{\beta}_{t}|>\eta$, where $\eta=10^{-6}$ is an effective zero.

Figure 8: Square-root $\ell_{1}$ Filtering. Note: Square-root $\ell_{1}$ filtering solves (3.3). The estimated kinks, denoted by blue vertical lines, are: March 7, March 15, March 16, March 20, March 21, March 30, April 14, April 21, May 12, and May 27. The orange vertical line denotes the lockdown date, March 30.

Figure 9: Sparse HP and $\ell_{1}$ Filtering for the US. Note: The sparse HP kinks (blue) are: March 16, March 20, April 14, and May 13. The $\ell_{1}$ kinks (red) are: March 7, March 15, March 16, March 20, March 21, March 30, April 14, April 21, May 12, and May 27. The orange vertical line denotes the lockdown date, March 30.

In Figure 7, we plot estimation results using $\ell_{1}$ trend filtering.
The results look similar to those in Figure 6, but there are now 10 kink points: March 7, March 15, March 16, March 20, March 21, March 30, April 14, April 21, May 12, and May 27. These are the dates $t$ such that $|\Delta^{2}\log\widehat{\beta}_{t}|>\eta$, where $\Delta^{2}$ is the double difference operator and $\eta=10^{-6}$ is an effective zero. (The results are robust to the size of the effective zero and do not change even if we set $\eta=10^{-3}$. Gurobi, used for the sparse HP filtering, also imposes some effective zeros in various constraints; we use their default values. For example, the integer tolerance level and the general feasibility tolerance level are $10^{-5}$ and $10^{-6}$, respectively.) The tuning parameter $\lambda=0.9$ was chosen by minimizing the distance between the fidelity of the $\ell_{1}$ filter and that of the sparse HP filter. Recall that the sparse HP filter produces kinks on March 16, March 20, April 14, and May 13. In other words, the $\ell_{1}$ filter estimates 6 more kinks than the sparse HP filter when both fit the data equally well. It is unlikely that two adjacent dates (March 15-16 and March 20-21) correspond to two different regimes in the time-varying contact rate. This suggests that the $\ell_{1}$ filter may over-estimate the number of kinks. Figure 8 shows estimation results for the square-root $\ell_{1}$ trend filter. The chosen $\lambda=0.5$ was smaller than that of the $\ell_{1}$ trend filter due to the change in the scale of the fidelity term; however, the trend estimates look very similar, and the estimated kinks are identical between the $\ell_{1}$ and square-root $\ell_{1}$ trend filters. In Figure 9, we plot the sparse HP filter estimates along with the $\ell_{1}$ filter estimates. Both methods produce very similar trend estimates, but the number of kinks is substantially different: only 4 kinks for the sparse HP filter but 10 kinks for the $\ell_{1}$ filter.
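The effective-zero rule for reading kinks off the $\ell_{1}$ estimates can be sketched as (pure Python, with a function name of our own choosing):

```python
def kink_dates(f, eta=1e-6):
    """Interior indices t with |f_{t-1} - 2 f_t + f_{t+1}| > eta, the
    effective-zero rule applied to the l1 trend estimates."""
    return [t for t in range(1, len(f) - 1)
            if abs(f[t-1] - 2*f[t] + f[t+1]) > eta]

# Tiny numerical wiggles below eta on an otherwise linear trend are
# treated as zero second differences, hence no kinks:
noisy_line = [0.5 * t + 1e-9 * (-1) ** t for t in range(10)]
```

The robustness check in the text amounts to re-running this with $\eta=10^{-3}$ and observing that the detected dates do not change.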
### 5.2 Other Countries: Canada, China, South Korea and the UK

In this section, we provide condensed estimation results for the other countries. We focus on the sparse HP and $\ell_{1}$ filters, whose tuning parameters are chosen as in the previous section. Appendices B and C contain the details of the selection of tuning parameters.

Figure 10 shows the empirical results for Canada. The estimated kink dates are March 18 and April 11. Based on them, we can classify the observations into three periods:

1. March 6 - March 18: This is the initial period of the epidemic in Canada. The contact rate peaked at the end of this period. Several lockdown measures started to be imposed.
2. March 18 - April 11: We observe a sharp decrease in the contact rate in this period. Additional measures were imposed.
3. April 11 - June 8: The contact rate decreased but less steeply.

Quebec and Ontario are the two provinces hardest hit by COVID-19. In Quebec, daycares, public schools, and universities were closed on March 13, followed by non-essential businesses and public gathering places on March 15. Montreal declared a state of emergency on March 27 (CTV News, 2020). Similarly, all public schools in Ontario were closed on March 12. A state of emergency was announced in Ontario on March 17, and all non-essential businesses were ordered to close on March 23 (Global News, 2020). We set the lockdown date for Canada to March 13, as the other provincial governments as well as the federal government started to strongly recommend social distancing measures, along with the cancellation of various events, on that date (CBC News, 2020). These tight lockdown and social distancing measures seemed to contribute to the sharp decline of the contact rate in the second period. Both governments started to announce plans to lift the lockdown measures at the end of April, which corresponds to the third period. Lockdown fatigue may also have caused the slower decrease of the contact rate. In sum, a series of social distancing measures have been effective in decreasing the contact rate, but with some lag. The sparse HP filtering separates these periods reasonably well. However, the $\ell_{1}$ filtering overfits the model with 5 kinks.

Figure 10: Sparse HP and $\ell_{1}$ Filtering for Canada. Note: The sparse HP kinks (blue) are: March 18 and April 11. The $\ell_{1}$ kinks (red) are: March 17, March 18, March 24, April 11, and May 24. The orange vertical line denotes the lockdown date, March 13.

Figure 11 shows the results for China. Since the pandemic is almost over in China, we use the data censored at April 26, when the three-day average of newly confirmed cases falls below 10. The estimated kink dates are: January 28, March 14, March 24, and April 18. Based on them, we can classify the observations into five periods:

1. January 23 - January 28: This is the initial period of the epidemic in China. Since the official confirmation of the novel coronavirus on December 31, 2019, the confirmed cases had increased rapidly. President Xi presided over and issued instructions on the epidemic control on January 20. The travel ban on Wuhan was imposed on January 23, 2020, during the Lunar New Year holiday period (The New York Times, 2020b). We set this date as the lockdown date.
2. January 28 - March 14: The contact rate shows a sharp decrease during this period. The Lunar New Year holiday was extended to February 2 across the country. China’s National Health Commission (NHC) imposed social distancing measures on January 26. By January 29, all 31 provinces in China had upgraded their public health emergency response to the most serious level. By early February, nationwide strict social distancing policies were in place.
3. March 14 - March 24: This period shows a V-turn of the contact rate on the $\log\beta_{t}$ scale. It also shows an upward trend on the $R_{0}(t)$ scale, but the level is lower than that in early February. The mass quarantine of Wuhan was partially lifted on March 19 (Bloomberg, 2020). Most provinces downgraded their public health emergency response level, and factories and stores started to reopen in this period.
4. March 24 - April 18: The contact rate still increased but at a lower rate. It started to decrease again at the end of this period. We can see a slight increase in $R_{0}(t)$. The mass quarantine of Wuhan was lifted further, and travel to other provinces was allowed on April 8 (Bloomberg, 2020).
5. April 18 - April 26: The contact rate went down quickly and flattened at a low level. The last hospitalized COVID-19 patient in Wuhan was discharged on April 26 (Xinhua, 2020).

Figure 11: Sparse HP and $\ell_{1}$ Filtering for China. Note: The sparse HP kinks (blue) are: January 28, March 14, March 24, and April 18. The $\ell_{1}$ kinks (red) are: January 29, February 14, February 22, March 13, March 14, March 26, March 27, and April 17. The orange vertical line denotes the lockdown date, January 23.

Figure 12 shows the results for South Korea. For the same reason as in China, we use the data censored at April 29. The estimated kink dates are: March 3, March 15, April 2, and April 21. Based on them, we can classify the observations into five periods:

1. February 21 - March 3: This period is the beginning of the coronavirus spread in South Korea. On February 21, Shincheonji Church of Jesus, a secretive church in South Korea, was linked to a surge of infections in the country (The New York Times, 2020b). The sharp decline of $\log\beta_{t}$ could be due to the fact that the number of active infections is relatively small in this period, so that $Y_{t}=\Delta C_{t}/(I_{t-1}S_{t-1})$ might not be properly measured.
2. March 3 - March 15: The sharp decrease in $\log\beta_{t}$ in this period corresponds to the Korean government’s swift reactions to the outbreak through active testing and contact tracing (The New York Times, 2020a; Aum et al., 2020; Kim et al., 2020), highlighted by the prompt containment of an outbreak that started on March 8 at a call center in Seoul (Park et al., 2020).
3. March 15 - April 2: This period shows a modest V-turn of the contact rate on the $\log\beta_{t}$ scale, but it is much less visible on the $R_{0}(t)$ scale.
4. April 2 - April 21: This period displays a further reduction of the contact rate. A remarkable event was the parliamentary elections on April 15, when 30 million people voted without triggering a new outbreak.
5. April 21 - April 29: The contact rate flattened at a low level.

Figure 12: Sparse HP and $\ell_{1}$ Filtering for South Korea. Note: The sparse HP kinks (blue) are: March 3, March 15, April 2, and April 21. The $\ell_{1}$ kinks (red) are: March 3, March 12, March 15, March 16, April 2, April 3, and April 21. South Korea has not imposed any nation-wide lockdown measure.

Figure 13 shows the empirical results for the UK. The estimated kink dates are March 12 and March 14. Based on them, we can classify the observations into three periods:

1. March 6 - March 12: This is the initial period of the epidemic in the UK. The downward trend might be due to the fact that the cumulative number of confirmed cases is relatively small and therefore its growth rate can easily be over-estimated.
2. March 12 - March 14: This is still an early stage of the epidemic. The steep increase in the contact rate is again possibly due to the small number of confirmed cases.
3. March 14 - June 8: This period shows a steady and constant decrease in the contact rate. The lockdown measures began in the UK on March 23 (BBC News, 2020b). On May 10, the British prime minister Boris Johnson relaxed certain restrictions and announced a plan for reopening (BBC News, 2020a), but the downward trend continued.

Overall, the trend of the contact rate is quite similar to those of the US and Canada. The kinks are concentrated in the initial periods, and there is a steady downward trend after the prime minister’s lockdown announcement. This results in a smooth curve on the $R_{0}(t)$ scale. The trend estimates of the $\ell_{1}$ filter are almost identical to those of the sparse HP filter; however, it indicates 10 kinks, which seems excessive.

Figure 13: Sparse HP and $\ell_{1}$ Filtering for the UK. Note: The sparse HP kinks (blue) are: March 12 and March 14. The $\ell_{1}$ kinks (red) are: March 11, March 20, March 28, April 3, April 22, April 23, May 8, May 20, May 21, and May 27. The orange vertical line denotes the lockdown date, March 24.

### 5.3 A Measure of Surveillance and Policy Implications

The sparse HP filter produces the kinks where the slope changes on the $\log\beta_{t}$ scale, thereby providing a good surveillance measure for monitoring the ongoing epidemic situation. Policy responses are based on various scenarios, and the contact rate is one of the most important quantities determining how those scenarios unfold. As a summary statistic of the time-varying contact rate, we propose the time-varying growth rate of the contact rate, which we call the _contact growth rate_: $\displaystyle\xi(t):=\frac{\beta_{t}-\beta_{t-1}}{\beta_{t-1}}\times 100.$ Recall that we have defined the time-varying basic reproduction number by $R_{0}(t)=\beta_{t}/\gamma$.
Because $\gamma$ is fixed over time, we have $\displaystyle\xi(t)=\frac{R_{0}(t)-R_{0}(t-1)}{R_{0}(t-1)}\times 100.$ Therefore, $\xi(t)$ can be interpreted as the _time-varying growth rate of the basic reproduction number_; it does not require knowledge of $\gamma$ and depends solely on $\beta_{t}$. Furthermore, by simple algebra, $\displaystyle\xi(t)=\left[\exp(\log\beta_{t}-\log\beta_{t-1})-1\right]\times 100,$ (5.1) which implies that $\xi(t)$ will be piecewise constant if $\log\beta_{t}$ is piecewise linear. This simple algebraic relationship shows that a change in the slope at a kink on the $\log\beta_{t}$ scale translates into a break in the time-varying contact growth rate, and therefore in the growth rate of the time-varying basic reproduction number. When $\xi(t)$ is a large positive number, that is a warning signal for policymakers. Conversely, if $\xi(t)$ is a large negative number, that may suggest that the policy measures imposed earlier are effective in reducing contagion.

Table 1: Time-Varying Contact Growth Rates

| | US | Canada | China | South Korea | UK |
|---|---|---|---|---|---|
| Period 1 | -1.55 | 7.08 | 15.04 | -15.23 | -10.96 |
| Period 2 | 7.48 | -5.02 | -12.27 | -20.34 | 31.10 |
| Period 3 | -7.67 | -2.82 | 30.23 | 4.47 | -4.70 |
| Period 4 | -3.39 | NA | 4.41 | -7.88 | NA |
| Period 5 | -1.04 | NA | -22.95 | 1.57 | NA |

Note: The growth rates, expressed as percentages, are obtained by (5.1) using the sparse HP trend estimates. The contact growth rates are also the growth rates of $R_{0}(t)$. The kink dates separating distinct periods are different for each country; they are reported in Sections 5.1 and 5.2.

Table 1 reports the time-varying contact growth rates for the five countries that we investigate, using the sparse HP trend estimates. For the US, the explosive growth rate of 7.5% in the second period is followed by negative growth rates of $-7.7$%, $-3.4$%, and $-1$%, albeit at diminishing magnitudes.
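Formula (5.1) can be sketched directly (pure Python; the function name is our own), which also makes the piecewise-constant property visible: a piecewise linear $\log\beta_{t}$ yields growth rates that are constant within each segment:

```python
import math

def contact_growth_rates(log_beta):
    """xi(t) in (5.1): [exp(log beta_t - log beta_{t-1}) - 1] * 100."""
    return [(math.exp(log_beta[t] - log_beta[t-1]) - 1.0) * 100
            for t in range(1, len(log_beta))]

# Piecewise linear log contact rate: slope +0.1 for two days, then -0.1.
log_beta = [0.0, 0.1, 0.2, 0.1, 0.0]
rates = contact_growth_rates(log_beta)   # constant within each segment
```

Because $\gamma$ cancels, the same numbers are the growth rates of $R_{0}(t)$, which is why the table above can be read on either scale.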
The trajectory of Canada is similar to that of the US. The growth rates for China fluctuated up and down: the series started with a high positive 15% followed by $-12$%; a sharp V-turn at the end of the second period (March 14) produced an explosive growth rate of 30%, followed by a moderate 4% and an impressive $-23$%. It might be the case that the up-and-down pattern observed in China is in part due to data quality issues, since China was the first country to experience the pandemic. For South Korea, we can see a stunning drop in the growth rates, culminating on March 15 (the end of the second period). A modest positive growth rate during period 3 is offset by a negative growth rate of larger magnitude in period 4. The UK has experienced steady, but not spectacular, negative growth rates over the sample period, following a sharp fluctuation in mid-March. This hints at the effectiveness of the UK lockdown policy. As early pandemic epicenters, China and South Korea experienced V-turns in the time-varying growth rates of the basic reproduction number. Canada, the UK and the US may face similar trajectories as they reopen their countries. Our surveillance statistic can be a useful indicator for monitoring a new outbreak of COVID-19. However, it will mainly be useful for short-term projection of the contact growth rate, because it is not designed to make long-term trend predictions.

## 6 Theory

In this section, we examine theoretical properties of the sparse HP and $\ell_{1}$ filters in terms of risk consistency. Let $\|\cdot\|_{0}$ denote the usual $\ell_{0}$-(pseudo)norm, that is, the number of nonzero elements, and let $\|\cdot\|_{r}$ and $\|\cdot\|_{\infty}$, respectively, denote the $\ell_{r}$ norm for $r=1,2$ and the sup-norm.

### 6.1 Risk Consistency of the Sparse HP Filter

Define $\displaystyle\mathcal{F}=\mathcal{F}(\kappa,M):=\left\\{\bm{f}:\|\bm{D}\bm{f}\|_{0}\leq\kappa,\|\bm{D}\bm{f}\|_{\infty}\leq M\right\\},$ (6.1) where $M$ is defined in (3.6).
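Membership in the class $\mathcal{F}(\kappa,M)$ of (6.1) amounts to two checks on the second differences $\bm{D}\bm{f}$, which can be sketched as (pure Python; the function name is our own):

```python
def in_F(f, kappa, M, eta=0.0):
    """Check membership in F(kappa, M) of (6.1): at most kappa nonzero
    second differences (the l0 pseudo-norm of D f), each of magnitude
    at most M (the sup-norm bound)."""
    d = [f[t-1] - 2*f[t] + f[t+1] for t in range(1, len(f) - 1)]
    return sum(abs(v) > eta for v in d) <= kappa and max(abs(v) for v in d) <= M
```

For a piecewise linear trend, the nonzero second differences sit exactly at the kinks, so the $\ell_{0}$ part of the constraint counts kinks, as in Proposition 3.1.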
For each $\bm{f}\in\mathcal{F}$, define $\displaystyle S(\bm{f}):=\mathbb{E}_{\bm{y}}\left[\frac{1}{T}(\bm{y}-\bm{f})^{\top}(\bm{y}-\bm{f})\right].$ Let $\bm{f^{*}}$ denote the ideal sparse filter in the sense that $\displaystyle\bm{f^{*}}\in\text{argmin}_{\bm{f}\in\mathcal{F}}S(\bm{f}).$ Let $\bm{\widehat{f}}$ denote the sparse HP filter defined in Section 3.2. Then, $\displaystyle R(\bm{\widehat{f}},\bm{f^{*}}):=S(\bm{\widehat{f}})-S(\bm{f^{*}})$ (6.2) is always nonnegative. Following the literature on empirical risk minimization, we bound the excess risk $R$ in (6.2) and establish conditions under which it converges to zero. Recall that the sparse HP filter minimizes $\displaystyle Q_{n}(\bm{f}):=\frac{1}{T}(\bm{y}-\bm{f})^{\top}(\bm{y}-\bm{f})+\frac{\lambda}{T}\bm{f}^{\top}\bm{D}^{\top}\bm{D}\bm{f}$ subject to $\bm{f}\in\mathcal{F}$. Let $S_{n}(\bm{f}):=T^{-1}(\bm{y}-\bm{f})^{\top}(\bm{y}-\bm{f})$. Write $\displaystyle R(\bm{\widehat{f}},\bm{f^{*}})$ $\displaystyle=S(\bm{\widehat{f}})-Q_{n}(\bm{f^{*}})+Q_{n}(\bm{f^{*}})-S(\bm{f^{*}})$ $\displaystyle\leq S(\bm{\widehat{f}})-Q_{n}(\bm{\widehat{f}})+Q_{n}(\bm{f^{*}})-S(\bm{f^{*}})$ $\displaystyle=S(\bm{\widehat{f}})-S_{n}(\bm{\widehat{f}})-\frac{\lambda}{T}\bm{\widehat{f}}^{\top}\bm{D}^{\top}\bm{D}\bm{\widehat{f}}+S_{n}(\bm{f^{*}})+\frac{\lambda}{T}{\bm{f^{*}}}^{\top}\bm{D}^{\top}\bm{D}\bm{f^{*}}-S(\bm{f^{*}})$ $\displaystyle\leq 2\sup_{\bm{f}\in\mathcal{F}}\left|S_{n}(\bm{f})-S(\bm{f})\right|+2\frac{\lambda}{T}\sup_{\bm{f}\in\mathcal{F}}{\bm{f}}^{\top}\bm{D}^{\top}\bm{D}\bm{f},$ where the first inequality uses $Q_{n}(\bm{\widehat{f}})\leq Q_{n}(\bm{f^{*}})$. Therefore, it suffices to bound the two terms above. For the second term, we can use (3.6) and (6.1) to bound $\displaystyle 2\frac{\lambda}{T}\sup_{\bm{f}\in\mathcal{F}}{\bm{f}}^{\top}\bm{D}^{\top}\bm{D}\bm{f}$ $\displaystyle\leq\frac{2\lambda M^{2}\kappa}{T}.$ We summarize the discussion above in the following lemma.

###### Lemma 6.1.

Let $\bm{\widehat{f}}$ denote the sparse HP filter.
Then, $\displaystyle R(\bm{\widehat{f}},\bm{f^{*}})\leq 2\sup_{\bm{f}\in\mathcal{F}}\left|S_{n}(\bm{f})-S(\bm{f})\right|+\frac{2\lambda\kappa}{T}\max_{t=2,\ldots,T-1}|y_{t-1}-2y_{t}+y_{t+1}|^{2}.$ To derive an asymptotic result, we introduce subscripts indexed by the sample size $T$ when necessary for clarification. Let $\mathcal{G}_{\kappa}$ denote the set of all continuous, piecewise linear functions whose slopes and values are bounded by $C_{1}$ and $C_{2}$, respectively, and whose number of kinks is bounded by $\kappa$.

###### Assumption 4.

Assume that $\mathcal{F}$ in (6.1) satisfies $\displaystyle\mathcal{F}(\kappa,M)\subseteq\mathcal{F}_{T}:=\left\\{\bm{f_{T}}=(f_{T,1},...,f_{T,T}):f_{T,t}=f(t/T),f\in\mathcal{G}_{\kappa}\right\\}.$ (6.3) Moreover, $y_{t}=f_{T,t}^{*}+u_{t}$, where $\log\beta_{t}=f_{T,t}^{*}$, $\bm{f^{*}_{T}}\in\mathcal{F}_{T}$ and $u_{t}$ satisfies $\sup_{t=1,2,\ldots}\mathbb{E}|u_{t}|^{p}<\infty$ for some $p\geq 2$ and Assumption 2. Finally, $\lambda\kappa T^{-(1-1/p)}\rightarrow 0$ as $T\rightarrow\infty$. Then, we have the following proposition.

###### Proposition 6.1.

Let Assumption 4 hold. Then, we have that as $T\rightarrow\infty$, $\displaystyle\sup_{\bm{f}\in\mathcal{F}}\left|\frac{1}{T}\sum_{t=1}^{T}\left\\{(y_{t}-f_{t})^{2}-\mathbb{E}(y_{t}-f_{t})^{2}\right\\}\right|\rightarrow_{p}0$ (6.4) and $\displaystyle\frac{\lambda\kappa}{T}\max_{t=2,\ldots,T-1}|y_{t-1}-2y_{t}+y_{t+1}|^{2}\rightarrow_{p}0.$ (6.5) Therefore, $R(\bm{\widehat{f}},\bm{f^{*}})\rightarrow_{P}0$.

Proposition 6.1 establishes the consistency in terms of the excess risk $R$. Assumption 4 provides sufficient conditions for (6.4) and (6.5). Condition (6.4) is a uniform law of large numbers for the class $\mathcal{F}$ and condition (6.5) imposes a weak condition on $\lambda$. Proposition 6.1 follows immediately from Lemma 6.1 once (6.4) and (6.5) are established.

###### Proof of Proposition 6.1.
Note that the summand in (6.4) can be rewritten as $(y_{t}^{2}-\mathbb{E}y_{t}^{2})-2f_{t}(y_{t}-\mathbb{E}y_{t})$. Then, $y_{t}^{2}-\mathbb{E}y_{t}^{2}=u_{t}^{2}-\sigma_{u}^{2}+2f_{T,t}^{*}u_{t}$ and $2f_{t}(y_{t}-\mathbb{E}y_{t})=2f_{t}u_{t}$, so the summand equals $u_{t}^{2}-\sigma_{u}^{2}+2(f_{T,t}^{*}-f_{t})u_{t}$. Furthermore, $T^{-1}\sum_{t=1}^{T}u_{t}^{2}-\sigma_{u}^{2}=o_{p}(1)$ due to the law of large numbers (LLN) for a martingale difference sequence (mds). We now turn to $\sup_{\bm{f}\in\mathcal{F}}\left(T^{-1}\sum_{t=1}^{T}f_{t}u_{t}\right)$. The marginal convergence is straightforward since $f_{t}u_{t}$ is an mds with bounded second moments, again by the LLN for mds. Next, note that for a constant $\eta>0$, $\sup_{|\bm{f}-\bm{f^{\prime}}|_{\infty}<\eta}\left|T^{-1}\sum_{t=1}^{T}(f_{t}-f_{t}^{\prime})u_{t}\right|\leq\eta\left(T^{-1}\sum_{t=1}^{T}|u_{t}|\right),$ which implies the stochastic equicontinuity of the process indexed by $\bm{f}\in\mathcal{F}_{T}$. Finally, recall the Arzelà–Ascoli theorem (see, e.g., Van Der Vaart and Wellner, 1996) to conclude that $\mathcal{G}_{\kappa}$ is totally bounded with respect to $|\cdot|_{\infty}$. Therefore, $\sup_{\bm{f}\in\mathcal{F}_{T}}\left(T^{-1}\sum_{t=1}^{T}f_{t}u_{t}\right)=o_{p}(1)$ by a generic uniform convergence theorem, e.g., Andrews (1992). To show condition (6.5), note that the left-hand side is bounded by $16\frac{\lambda\kappa}{T}\max_{t=1,\ldots,T}y_{t}^{2}$, which is in turn $O_{p}(\lambda\kappa T^{-(1-1/p)})$ due to the moment condition on $u_{t}$. ∎

### 6.2 Risk Consistency of the $\ell_{1}$ Filter

The $\ell_{1}$ trend filtering (3.1) can be expressed as $\widetilde{\bm{f}}:=\arg\min_{\bm{f}\in\mathbb{R}^{T}}\|\bm{y}-\bm{f}\|_{2}^{2}+\lambda\|\bm{D}\bm{f}\|_{1}.$ We now derive the deviation bound for $\|\widetilde{\bm{f}}-\bm{f}^{*}\|_{2}.$ First, the problem is equivalent to a regular LASSO problem, as stated in Lemma 6.2 below. Write $\bm{D}=(\bm{D}_{3},\bm{D}_{2})$ where $\bm{D}_{2}$ has two columns.
Additionally, write $\bm{G}_{2}:=\begin{pmatrix}\bm{D}_{3}^{-1}\\\ \mathbf{0}\end{pmatrix},\quad\bm{g}_{1}:=\begin{pmatrix}-\bm{D}_{3}^{-1}\bm{D}_{2}\\\ \bm{I}_{2}\end{pmatrix},$ where $\bm{0}$ is $2\times(T-2)$, $\bm{g}_{1}$ is $T\times 2$ and $\bm{G}_{2}$ is $T\times(T-2).$ Let $\bm{P}_{\bm{g}_{1}}=\bm{g}_{1}(\bm{g}_{1}^{\top}\bm{g}_{1})^{-1}\bm{g}_{1}^{\top}.$

###### Lemma 6.2.

We have $\widetilde{\bm{f}}=\bm{y}-\widetilde{\bm{y}}+\widetilde{\bm{X}}\widehat{\theta}$, where $\widetilde{\bm{y}}:=(\bm{I}-\bm{P}_{\bm{g}_{1}})\bm{y}$, $\widetilde{\bm{X}}:=(\bm{I}-\bm{P}_{\bm{g}_{1}})\bm{G}_{2}$ and $\widehat{\theta}:=\arg\min_{\theta}\|\widetilde{\bm{y}}-\widetilde{\bm{X}}\theta\|_{2}^{2}+\lambda\|\theta\|_{1}.$

###### Proof of Lemma 6.2.

Let $\bm{D}_{1}=(\mathbf{0}:\bm{I}_{2})$ be a $2\times T$ matrix, so that $\bar{\bm{D}}:=\begin{pmatrix}\bm{D}\\\ \bm{D}_{1}\end{pmatrix}$ is upper triangular and invertible. Then, $\bm{G}:=\bar{\bm{D}}^{-1}=(\bm{G}_{2},\bm{g}_{1}).$ For a generic $\bm{f}\in\mathbb{R}^{T}$, we can define $\bm{\alpha}:=\bar{\bm{D}}\bm{f}=\begin{pmatrix}\bm{D}\bm{f}\\\ \bm{D}_{1}\bm{f}\end{pmatrix}:=\begin{pmatrix}\theta\\\ \bm{a}\end{pmatrix}.$ So $(\theta,\bm{a})$ also depend on $\bm{f}$ and $\bm{G}\bm{\alpha}=\bm{g}_{1}\bm{a}+\bm{G}_{2}\theta.$ Then the problem is equivalent to $\widetilde{\bm{f}}=\bm{g}_{1}\widehat{\bm{a}}+\bm{G}_{2}\widehat{\theta}$, where $(\widehat{\bm{a}},\widehat{\theta}):=\arg\min_{\bm{a},\theta}\|\bm{y}-(\bm{g}_{1}\bm{a}+\bm{G}_{2}\theta)\|_{2}^{2}+\lambda\|\theta\|_{1}.$ To solve the problem, we concentrate out $\bm{a}$: given $\theta$, the optimal $\bm{a}$ is $(\bm{g}_{1}^{\top}\bm{g}_{1})^{-1}\bm{g}_{1}^{\top}(\bm{y}-\bm{G}_{2}\theta)$ and the optimal $\bm{g}_{1}\bm{a}$ is $\bm{P}_{\bm{g}_{1}}(\bm{y}-\bm{G}_{2}\theta)$.
Substituting, the problem becomes a regular LASSO problem: $\min_{\theta}\|\widetilde{\bm{y}}-\widetilde{\bm{X}}\theta\|_{2}^{2}+\lambda\|\theta\|_{1}.$ Finally, $\widetilde{\bm{f}}=\bm{P}_{\bm{g}_{1}}(\bm{y}-\bm{G}_{2}\widehat{\theta})+\bm{G}_{2}\widehat{\theta}=\bm{y}-\widetilde{\bm{y}}+\widetilde{\bm{X}}\widehat{\theta}.$ ∎

Next, let $J$ denote the set of indices $t$ such that $f_{0,t-1}-f_{0,t}\neq f_{0,t}-f_{0,t+1}$, and let $J^{c}$ denote the set of indices $t$ such that $f_{0,t-1}-f_{0,t}=f_{0,t}-f_{0,t+1}$. Here, $\\{f_{0,t}:t=1,\ldots,T\\}$ denote the true elements of $\bm{f}$. For a generic vector $\theta\in\mathbb{R}^{T-2}$, let $\theta_{J}$ and $\theta_{J^{c}}$ respectively be its subvectors whose elements are in $J$ and $J^{c}.$ Now we define the restricted eigenvalue constant $\zeta:=\inf_{\|\theta_{J^{c}}\|_{1}\leq 9\|\theta_{J}\|_{1}}\frac{\|\frac{1}{\sqrt{T}}\widetilde{\bm{X}}\theta\|_{2}^{2}}{\|\theta\|_{2}^{2}}.$

###### Proposition 6.2.

Let $\bm{f}^{*}$ denote the true value of $\bm{f}$ and $\bm{u}:=\bm{y}-\bm{f}^{*}$. Suppose the event $2.5\|{\bm{u}}^{\top}\widetilde{\bm{X}}\|_{\infty}<\lambda$ holds. Then on this event $\displaystyle R(\widetilde{\bm{f}},\bm{f}^{*})$ $\displaystyle\leq\frac{2}{T}\bm{u}^{\top}\bm{P}_{\bm{g}_{1}}\bm{u}+2\|\frac{1}{T}\widetilde{\bm{X}}^{\top}\widetilde{\bm{X}}\|_{\infty}\left(\frac{18\lambda}{\zeta T}\|J\|_{0}\right)^{2}.$ (6.6)

###### Proof of Proposition 6.2.

Let $\theta^{*}=\bm{D}\bm{f}^{*}$. Consider the vector form of the model $\bm{y}=\bm{f}^{*}+\bm{u}$. Then $\widetilde{\bm{y}}=\widetilde{\bm{X}}\theta^{*}+\widetilde{\bm{u}}$ where $\widetilde{\bm{u}}=(\bm{I}-\bm{P}_{\bm{g}_{1}})\bm{u}$.
By Lemma 6.2, $\widetilde{\bm{f}}=\bm{y}-\widetilde{\bm{y}}+\widetilde{\bm{X}}\widehat{\theta}$, where $\widehat{\theta}:=\arg\min_{\theta}\|\widetilde{\bm{y}}-\widetilde{\bm{X}}\theta\|_{2}^{2}+\lambda\|\theta\|_{1}.$ The standard argument for the LASSO deviation bound implies, on the event $2.5\|{\bm{u}}^{\top}\widetilde{\bm{X}}\|_{\infty}<\lambda$, $\|\widehat{\theta}-\theta^{*}\|_{1}\leq\frac{18\lambda}{\zeta T}\|J\|_{0}.$ Finally, $\widetilde{\bm{f}}-\bm{f}^{*}=\bm{P}_{\bm{g}_{1}}\bm{u}+\widetilde{\bm{X}}(\widehat{\theta}-\theta^{*})$ implies $R(\widetilde{\bm{f}},\bm{f}^{*})=\frac{1}{T}\|\widetilde{\bm{f}}-\bm{f}^{*}\|_{2}^{2}\leq\frac{2}{T}\bm{u}^{\top}\bm{P}_{\bm{g}_{1}}\bm{u}+2\|\frac{1}{T}\widetilde{\bm{X}}^{\top}\widetilde{\bm{X}}\|_{\infty}\|\widehat{\theta}-\theta^{*}\|_{1}^{2}.$ ∎

To achieve risk consistency, $\lambda$ has to be chosen to make the second term on the right-hand side of (6.6) asymptotically small and to ensure that the event $2.5\|{\bm{u}}^{\top}\widetilde{\bm{X}}\|_{\infty}<\lambda$ holds with high probability. The first term on the right-hand side of (6.6) will converge to zero under mild conditions on $\bm{u}$. It is reassuring that the $\ell_{1}$ trend filter fits COVID-19 data well in our empirical results.

### 6.3 Risk Consistency of $\exp(\widehat{f}_{t})$ and $\exp(\widetilde{f}_{t})$

In this subsection, we obtain risk consistency of $\exp(\widehat{f}_{t})$ and $\exp(\widetilde{f}_{t})$. To do so, we first rewrite the excess risk in (6.2) as $R(\bm{f},\bm{f^{*}})=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}(f_{t}-f_{t}^{*})^{2},$ for $\bm{f}=(f_{1},...,f_{T})^{\top}$ and $\bm{f^{*}}=(f_{1}^{*},...,f_{T}^{*})^{\top}$. We have proved in the previous sections that $R(\bm{\widehat{f}},\bm{f^{*}})\rightarrow_{P}0$ and $R(\bm{\widetilde{f}},\bm{f^{*}})\rightarrow_{P}0$.
Then, under the assumption that there exists a constant $C<\infty$ such that $\max_{t}|f_{t}|+\max_{t}|f_{t}^{*}|<C$ for all $\\{f_{t}\\}$ on the parameter space, we get, uniformly for all $f$ on the parameter space, $\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}(\exp(f_{t})-\exp(f_{t}^{*}))^{2}=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\exp(2\tilde{f}_{t})(f_{t}-f_{t}^{*})^{2}\leq\exp(2C)\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}(f_{t}-f_{t}^{*})^{2},$ where in the first equality we used the mean value theorem for some $\tilde{f}_{t}\in(f_{t},f_{t}^{*}).$ Therefore, $R(\exp(\bm{\widehat{f}}),\exp({\bm{f^{*}}}))\leq\exp(2C)R(\bm{\widehat{f}},\bm{f^{*}})=o_{P}(1)$ and the analogous result holds for $\exp(\bm{\widetilde{f}})$.

## 7 Conclusions

We have developed a novel method to estimate the time-varying COVID-19 contact rate using data on actively infected, recovered and deceased cases. Our preferred method, the sparse HP filter, has produced kinks that are well aligned with actual events in each of the five countries we have examined. We have also proposed contact growth rates to document and monitor outbreaks. Theoretically, we have outlined the basic properties of the sparse HP and $\ell_{1}$ filters in terms of risk consistency. The next step might be to establish a theoretical result that distinguishes between the two methods by looking at kink selection consistency. It would also be important to develop a test for the presence of kinks as well as an inference method on the location and magnitude of kinks and on contact growth rates. In the context of the nonparametric kernel regression of the trend function, Delgado and Hidalgo (2000) explored the distribution theory for the jump estimates but did not offer testing for the presence of a jump. Compared to the kernel smoothing approach, it is easier to determine the number of kinks using our approach, as we have demonstrated.
Furthermore, the linear trend specification is more suitable for forecasting immediate future outcomes, at least until the next kink arises. Long-term prediction is more challenging and beyond the scope of this paper. Finally, it would be useful to develop a panel regression model for the contact rate at the level of city, state or country. These are interesting topics for future research.

## Appendices

## Appendix A Under-Reporting of Positive Cases

In Section 2, it is assumed that we observe $(C_{t},R_{t},D_{t})$. In this appendix, we show that our time series model in Section 2 is robust to some degree of under-reporting of positive cases. Assume that we observe only a fraction of the changes in $C_{t}$. This assumption reflects the reality that the daily reported number of newly positive COVID-19 cases is likely to be underreported. Suppose that we observe $\Delta c_{t}$ in period $t$ such that $\Delta c_{t}:=\rho\Delta C_{t},$ where $0<\rho<1$ is unknown. Then, $c_{t}=\sum_{s=1}^{t}\Delta c_{s}=\rho\sum_{s=1}^{t}\Delta C_{s}=\rho C_{t},$ assuming that $c_{0}=C_{0}=0$. In words, $\rho$ is the constant ratio between reported and true cases. Formally, we make the following assumption.

###### Assumption 5 (Fraction Reporting).

For each $t$, we observe $(c_{t},r_{t},d_{t})$ such that $\displaystyle c_{t}:=\rho C_{t},\ \ r_{t}:=\rho R_{t}\ \ \ \ \text{ and }\ \ d_{t}:=\rho D_{t},$ where $0<\rho<1$.

The two simplifying conditions in Assumption 5 are that (i) $\rho$ is identical among the three time series and (ii) $\rho$ is constant over time. In reality, the fraction of reported deaths might be higher than that of reported cases, and $\rho$ might be time-varying, especially in the beginning of the pandemic due to capacity constraints in testing. However, we believe that $\rho$ is unlikely to vary over time as much as $\beta_{t}$ changes over time; thus, we take a simple approach to minimize complexity.
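The key consequence of Assumption 5 is that the observed growth rate $g_{t}=\Delta c_{t}/i_{t-1}$ is invariant to $\rho$, while the observed susceptible share $s_{t}$ is not. A minimal numerical sketch illustrates this; all series and the value of $\rho$ below are made up for illustration, not taken from the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# True (unobserved) cumulative fractions: ever-infected C, recovered R, deceased Dd.
T = 50
dC = rng.uniform(1e-4, 5e-4, size=T)   # hypothetical daily new infections
C = np.cumsum(dC)
R = 0.6 * C                            # stylized recovery/death shares
Dd = 0.05 * C
I = C - R - Dd                         # actively infected fraction

rho = 0.32                             # constant reporting fraction (Assumption 5)
c, r, d = rho * C, rho * R, rho * Dd   # observed (reported) series
i = c - r - d

# Observed growth rate g_t = Delta c_t / i_{t-1} equals the true Delta C_t / I_{t-1}:
g_obs = np.diff(c) / i[:-1]
g_true = np.diff(C) / I[:-1]
print(np.allclose(g_obs, g_true))      # rho cancels in the ratio

# The susceptible share, by contrast, is distorted: s_{t-1} = 1 - rho*C_{t-1} != 1 - C_{t-1}.
s_obs, S_true = 1 - c, 1 - C
print(np.allclose(s_obs, S_true))
```

This is exactly the measurement-error asymmetry exploited in the derivations that follow: $g_{t}$ is measured correctly while $s_{t-1}$ is not.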
The common $\rho$ can be thought of as a broad measure of detecting COVID-19 in a community. Define $i_{t}:=c_{t}-r_{t}-d_{t}$ and $s_{t}:=1-c_{t}$. Under Assumption 5, the reported fraction infected at time $t$ ($i_{t}$) is underestimated, but the reported susceptible proportion at time $t$ ($s_{t}$) is overestimated. Note that $\displaystyle g_{t}:=\frac{\Delta c_{t}}{i_{t-1}}=\frac{\rho\Delta C_{t}}{\rho I_{t-1}}=\frac{\Delta C_{t}}{I_{t-1}}.$ However, $\displaystyle s_{t-1}=1-\rho C_{t-1}\neq S_{t-1}.$ In words, we have a measurement error problem in $s_{t-1}$ but not in $g_{t}$. It follows from (2.2) that the observed $g_{t}$ and $s_{t-1}$ are related by $\displaystyle g_{t}=\beta_{t}s_{t-1}+v_{t},$ (A.1) where $\displaystyle v_{t}=\beta_{t}(S_{t-1}-s_{t-1})=\beta_{t}(\rho-1)C_{t-1}.$ (A.2) The right-hand side of (A.2) is likely to exhibit an increasing trend since $C_{t-1}$ is the cumulative fraction ever infected. To alleviate this problem, we now divide both sides of (A.1) by $c_{t-1}$, which is positive, to obtain $\displaystyle\frac{g_{t}}{c_{t-1}}=\beta_{t}\left[\frac{s_{t-1}}{c_{t-1}}+\frac{\rho-1}{\rho}\right].$ (A.3) On the one hand, if $\rho=1$, (A.3) is identical to (2.2). On the other hand, if $\rho\rightarrow 0$, the term inside the brackets on the right-hand side of (A.3) diverges. In the intermediate case, it depends on the relative size of ${s_{t-1}}/{c_{t-1}}$ and ${(\rho-1)}/{\rho}$. We now use the UK data to argue that the latter is negligible relative to the former. According to the estimate by Office for National Statistics (2020), “an average of 0.25% of the community population had COVID-19 in England at any given time between 4 May and 17 May 2020 (95% confidence interval: 0.16% to 0.38%).” In the UK data used for estimation, the change in the number of cumulative positives between 4 May and 17 May 2020 is 0.08% of the UK population.
Then, an estimate of $\rho$ is $0.08/0.25=0.32$, resulting in $(\rho-1)/\rho=-2.12$. However, the sample maximum, median, and minimum values of ${s_{t-1}}/{c_{t-1}}$ are 572412, 804, and 264, respectively. Therefore, the correction term $(\rho-1)/\rho$ is negligible, and (A.3) reduces to $\displaystyle g_{t}\approx\beta_{t}s_{t-1},$ (A.4) which is virtually the same as (2.2).

## Appendix B Sparse HP Filtering: Leave-One-Out Cross-Validation for Canada, China, South Korea and the UK

Figure 14: Sparse HP Filtering: LOOCV for Other Countries

Note: The red dashed line denotes the minimizer of the cross-validation objective function: $(\widehat{\kappa},\widehat{\lambda})=(2,16)$ for Canada; $(\widehat{\kappa},\widehat{\lambda})=(4,2)$ for China; $(\widehat{\kappa},\widehat{\lambda})=(4,4)$ for South Korea; and $(\widehat{\kappa},\widehat{\lambda})=(2,1)$ for the UK. The analysis period ends when the number of newly confirmed cases averaged over 3 days is smaller than 10: April 26 (China) and April 29 (South Korea). The grid points are $\kappa\in\\{2,3,4\\}$ and $\lambda\in\\{2^{0},2^{1},\ldots,2^{5}\\}$. The x-axis is on the $\log_{2}$ scale.

## Appendix C $\ell_{1}$ Trend Filtering: Selection of $\lambda$

Figure 15: $\ell_{1}$ Trend Filtering: Selection of $\lambda$

Note: The red dashed line denotes the equalizer between the fidelity of the sparse HP filter and that of the $\ell_{1}$ filter: $\widehat{\lambda}=4.9$ for Canada; $\widehat{\lambda}=8.9$ for China; $\widehat{\lambda}=3.0$ for South Korea; and $\widehat{\lambda}=2.7$ for the UK. The analysis period ends when the number of newly confirmed cases averaged over 3 days is smaller than 10: April 26 (China) and April 29 (South Korea).

## References

* Acemoglu et al. (2020) Acemoglu, D., V. Chernozhukov, I. Werning, and M. D. Whinston (2020). Optimal targeted lockdowns in a multi-group SIR model. Working Paper 27102, National Bureau of Economic Research.
* Alvarez et al.
(2020) Alvarez, F. E., D. Argente, and F. Lippi (2020). A simple planning problem for COVID-19 lockdown. Working Paper 26981, National Bureau of Economic Research.
* Andrews (1992) Andrews, D. W. (1992). Generic uniform convergence. Econometric Theory 8(2), 241–257.
* Atkeson (2020) Atkeson, A. (2020). What will be the economic impact of COVID-19 in the US? Rough estimates of disease scenarios. Working Paper 26867, National Bureau of Economic Research.
* Aum et al. (2020) Aum, S., S. Y. T. Lee, and Y. Shin (2020). COVID-19 doesn’t need lockdowns to destroy jobs: The effect of local outbreaks in Korea. Working Paper 27264, National Bureau of Economic Research.
* Avery et al. (2020) Avery, C., W. Bossert, A. Clark, G. Ellison, and S. F. Ellison (2020). Policy implications of models of the spread of coronavirus: Perspectives and opportunities for economists. Working Paper 27007, National Bureau of Economic Research.
* BBC News (2020a) BBC News (2020a). Boris Johnson speech: PM unveils ‘conditional plan’ to reopen society. https://www.bbc.co.uk/news/uk-52609952. Updated: 2020-05-10. Accessed: 2020-06-14.
* BBC News (2020b) BBC News (2020b). Coronavirus: Boris Johnson’s address to the nation in full. https://www.bbc.co.uk/news/uk-52011928. Updated: 2020-03-23. Accessed: 2020-06-14.
* Belloni et al. (2011) Belloni, A., V. Chernozhukov, and L. Wang (2011). Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika 98(4), 791–806.
* Bertsimas et al. (2016) Bertsimas, D., A. King, and R. Mazumder (2016). Best subset selection via a modern optimization lens. Annals of Statistics 44(2), 813–852.
* Bertsimas and Van Parys (2020) Bertsimas, D. and B. Van Parys (2020). Sparse high-dimensional regression: Exact scalable algorithms and phase transitions. Annals of Statistics 48(1), 300–323.
* Bloomberg (2020) Bloomberg (2020). China to lift lockdown over virus epicenter Wuhan on April 8.
https://www.bloomberg.com/news/articles/2020-03-24/china-to-lift-lockdown-over-virus-epicenter-wuhan-on-april-8. Updated: 2020-03-24. Accessed: 2020-06-15.
* Bühlmann and Yu (2003) Bühlmann, P. and B. Yu (2003). Boosting with the $l_{2}$ loss. Journal of the American Statistical Association 98(462), 324–339.
* CBC News (2020) CBC News (2020). Coronavirus: Here’s what’s happening in Canada and around the world on March 13. https://www.cbc.ca/news/canada/coronavirus-updates-1.5496334. Updated: 2020-03-13. Accessed: 2020-06-15.
* Chen and Lee (2018) Chen, L.-Y. and S. Lee (2018). Best subset binary prediction. Journal of Econometrics 206(1), 39–56.
* Chen and Lee (2020) Chen, L.-Y. and S. Lee (2020). Binary classification with covariate selection through $\ell_{0}$-penalized empirical risk minimization. Econometrics Journal. forthcoming, https://doi.org/10.1093/ectj/utaa017.
* Chernozhukov et al. (2020) Chernozhukov, V., H. Kasahara, and P. Schrimpf (2020). Causal impact of masks, policies, behavior on early Covid-19 pandemic in the U.S. arXiv:2005.14168. https://arxiv.org/abs/2005.14168.
* Cornea-Madeira (2017) Cornea-Madeira, A. (2017). The explicit formula for the Hodrick-Prescott filter in a finite sample. Review of Economics and Statistics 99(2), 314–318.
* CTV News (2020) CTV News (2020). Covid-19 in Quebec: A timeline of key dates and events. https://montreal.ctvnews.ca/covid-19-in-quebec-a-timeline-of-key-dates-and-events-1.4892912. Updated: 2020-05-23. Accessed: 2020-06-14.
* de Jong and Sakarya (2016) de Jong, R. M. and N. Sakarya (2016). The econometrics of the Hodrick-Prescott filter. Review of Economics and Statistics 98(2), 310–317.
* Delgado and Hidalgo (2000) Delgado, M. A. and J. Hidalgo (2000). Nonparametric inference on structural breaks. Journal of Econometrics 96(1), 113–144.
* Dong et al. (2020) Dong, E., H. Du, and L. Gardner (2020). An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases 20(5), 533–534.
* Eichenbaum et al. (2020) Eichenbaum, M. S., S. Rebelo, and M. Trabandt (2020). The macroeconomics of epidemics. Working Paper 26882, National Bureau of Economic Research.
* Fernández-Villaverde and Jones (2020) Fernández-Villaverde, J. and C. I. Jones (2020). Estimating and simulating a SIRD model of COVID-19 for many countries, states, and cities. https://web.stanford.edu/~chadj/. Updated: 2020-05-29 (Version 2.01). Accessed: 2020-06-10.
* Fu et al. (2017) Fu, A., B. Narasimhan, and S. Boyd (2017). CVXR: An R package for disciplined convex optimization. arXiv:1711.07582. https://arxiv.org/abs/1711.07582.
* Global News (2020) Global News (2020). A timeline of the novel coronavirus in Ontario. https://globalnews.ca/news/6859636/ontario-coronavirus-timeline/. Updated: 2020-06-08. Accessed: 2020-06-14.
* Hamilton (2018) Hamilton, J. D. (2018). Why you should never use the Hodrick-Prescott filter. Review of Economics and Statistics 100(5), 831–843.
* Hartl et al. (2020) Hartl, T., K. Wälde, and E. Weber (2020). Measuring the impact of the German public shutdown on the spread of Covid-19. Covid Economics, Vetted and Real-Time Papers. Issue 1, 3 April 2020.
* Harvey and Kattuman (2020) Harvey, A. and P. Kattuman (2020). Time series models based on growth curves with applications to forecasting coronavirus. Covid Economics, Vetted and Real-Time Papers. Issue 24, 1 June 2020.
* Hastie et al. (2017) Hastie, T., R. Tibshirani, and R. J. Tibshirani (2017). Extended comparisons of best subset selection, forward stepwise selection, and the lasso. arXiv:1707.08692. https://arxiv.org/abs/1707.08692.
* Hethcote (2000) Hethcote, H. W. (2000). The mathematics of infectious diseases. SIAM Review 42(4), 599–653.
* Hodrick and Prescott (1997) Hodrick, R. J. and E. C. Prescott (1997). Postwar U.S. business cycles: An empirical investigation. Journal of Money, Credit and Banking 29(1), 1–16.
* Huang et al. (2018) Huang, J., Y. Jiao, Y. Liu, and X. Lu (2018).
A constructive approach to $l_{0}$ penalized regression. Journal of Machine Learning Research 19(10), 1–37.
* Kim et al. (2009) Kim, S.-J., K. Koh, S. Boyd, and D. Gorinevsky (2009). $\ell_{1}$ trend filtering. SIAM Review 51(2), 339–360.
* Kim et al. (2020) Kim, Y.-J., M. H. Seo, and H.-E. Yeom (2020). Estimating a breakpoint in the pattern of spread of COVID-19 in South Korea. International Journal of Infectious Diseases 97, 360–364.
* Krispin (2020) Krispin, R. (2020). coronavirus: The 2019 Novel Coronavirus COVID-19 (2019-nCoV) Dataset. R package version 0.2.0 https://github.com/RamiKrispin/coronavirus.
* Li and Linton (2020) Li, S. and O. Linton (2020). When will the Covid-19 pandemic peak? Working Paper 2020/11, University of Cambridge. https://www.inet.econ.cam.ac.uk/research-papers/wp-abstracts?wp=2011.
* Liu et al. (2020) Liu, L., H. R. Moon, and F. Schorfheide (2020). Panel forecasts of country-level Covid-19 infections. Working Paper 27248, National Bureau of Economic Research.
* Ludvigson et al. (2020) Ludvigson, S. C., S. Ma, and S. Ng (2020). Covid19 and the macroeconomic effects of costly disasters. Working Paper 26987, National Bureau of Economic Research.
* Manski (2020) Manski, C. F. (2020). Bounding the predictive values of COVID-19 antibody tests. Working Paper 27226, National Bureau of Economic Research.
* Manski and Molinari (2020) Manski, C. F. and F. Molinari (2020). Estimating the COVID-19 infection rate: Anatomy of an inference problem. Journal of Econometrics. forthcoming.
* Mazumder et al. (2017) Mazumder, R., P. Radchenko, and A. Dedieu (2017). Subset selection with shrinkage: Sparse linear modeling when the SNR is low. arXiv:1708.03288. https://arxiv.org/abs/1708.03288.
* Office for National Statistics (2020) Office for National Statistics (2020). Coronavirus (COVID-19) Infection Survey pilot: England.
https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/coronaviruscovid19infectionsurveypilot/england21may2020. Release date: 2020-05-21. Accessed: 2020-06-02.
* Park et al. (2020) Park, S., Y. Kim, S. Yi, S. Lee, B. Na, C. Kim, et al. (2020). Coronavirus disease outbreak in call center, South Korea. Emerging Infectious Diseases 26(8). An early release version is available at: https://doi.org/10.3201/eid2608.201274.
* Phillips and Shi (2019) Phillips, P. C. B. and Z. Shi (2019). Boosting: Why you can use the HP filter. arXiv:1905.00175. https://arxiv.org/abs/1905.00175.
* Pindyck (2020) Pindyck, R. S. (2020). COVID-19 and the welfare effects of reducing contagion. Working Paper 27121, National Bureau of Economic Research.
* Ravn and Uhlig (2002) Ravn, M. O. and H. Uhlig (2002). On adjusting the Hodrick-Prescott filter for the frequency of observations. The Review of Economics and Statistics 84(2), 371–376.
* Stock (2020) Stock, J. H. (2020). Data gaps and the policy response to the novel coronavirus. Working Paper 26902, National Bureau of Economic Research.
* The New York Times (2020a) The New York Times (2020a). How South Korea flattened the curve. https://www.nytimes.com/2020/03/23/world/asia/coronavirus-south-korea-flatten-curve.html. Updated: 2020-04-10. Accessed: 2020-06-15.
* The New York Times (2020b) The New York Times (2020b). How the coronavirus pandemic unfolded: a timeline. https://www.nytimes.com/article/coronavirus-timeline.html. Updated: 2020-06-09. Accessed: 2020-06-13.
* The New York Times (2020c) The New York Times (2020c). See how all 50 states are reopening. https://www.nytimes.com/interactive/2020/us/states-reopen-map-coronavirus.html. Updated: 2020-06-12. Accessed: 2020-06-13.
* The New York Times (2020d) The New York Times (2020d). See which states and cities have told residents to stay at home. https://www.nytimes.com/interactive/2020/us/coronavirus-stay-at-home-order.html.
Updated: 2020-04-20. Accessed: 2020-06-06.
* Tibshirani (2014) Tibshirani, R. J. (2014). Adaptive piecewise polynomial estimation via trend filtering. Annals of Statistics 42(1), 285–323.
* Toda (2020) Toda, A. A. (2020). Susceptible-infected-recovered (SIR) dynamics of Covid-19 and economic impact. Covid Economics, Vetted and Real-Time Papers. Issue 1, 3 April 2020.
* Van Der Vaart and Wellner (1996) Van Der Vaart, A. W. and J. A. Wellner (1996). Weak Convergence and Empirical Processes. Springer, New York, NY.
* Wang et al. (2016) Wang, Y.-X., J. Sharpnack, A. J. Smola, and R. J. Tibshirani (2016). Trend filtering on graphs. Journal of Machine Learning Research 17(1), 3651–3691.
* Xinhua (2020) Xinhua (2020). Fighting Covid-19, China in action. http://www.xinhuanet.com/english/2020-06/07/c_139120424.htm. Updated: 2020-03-24. Accessed: 2020-06-15.
Abbreviations: CT = computational thinking; CT-cube = computational thinking cube; CTP = computational thinking problem; CTt = Computational Thinking Test; R2T2 = Remote Rescue with Thymio II; STEM = Science, Technology, Engineering and Math.

G. Adorni, F. Mangili, L. Gambardella: Dalle Molle Institute for Artificial Intelligence (IDSIA), Università della Svizzera Italiana and University of Applied Sciences and Arts of Southern Switzerland (USI-SUPSI), Lugano, Switzerland
A. Piatti, L. Negrini: Department of Education and Learning (DFA), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Locarno, Switzerland
E. Bumbacher: Haute école pédagogique du canton de Vaud (HEP-VD), Lausanne, Switzerland
F. Mondada: Mobile Robotic Systems Group (MOBOTS), Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
D. Assaf: School of Education, University of Applied Sciences and Arts Northwestern Switzerland (FHNW), Windisch, Switzerland

# A theoretical framework for the design and analysis of computational thinking problems in education

Giorgia Adorni $\cdot$ Alberto Piatti $\cdot$ Engin Bumbacher $\cdot$ Lucio Negrini $\cdot$ Francesco Mondada $\cdot$ Dorit Assaf $\cdot$ Francesca Mangili $\cdot$ Luca Gambardella

###### Abstract

The field of computational thinking education has grown in recent years as researchers and educators have sought to develop and assess students’ computational thinking abilities. While much of the research in this area has focused on defining computational thinking, the competencies it involves and how to assess them in teaching and learning contexts, this work takes a different approach.
We provide a more situated perspective on computational thinking, focusing on the types of problems that require computational thinking skills to be solved and the features that support these processes. We develop a framework for analysing existing computational thinking problems in an educational context. We conduct a comprehensive literature review to identify prototypical activities from areas where computational thinking is typically pursued in education. We identify the main components and characteristics of these activities, along with their influence on activating computational thinking competencies. The framework provides a catalogue of computational thinking skills that can be used to understand the relationship between problem features and the competencies they activate. This study contributes to the field of computational thinking education by offering a tool for evaluating and revising existing problems to activate specific skills and for assisting in designing new problems that target the development of particular competencies. The results of this study may be of interest to researchers and educators working in computational thinking education.

###### keywords: Computational thinking $\cdot$ Competence development $\cdot$ Educational framework $\cdot$ Learning contexts $\cdot$ Situated cognition

## 1 Introduction

CT has emerged as a crucial skill for students to acquire in the 21st century. As a result, there has been an increased effort to integrate computer science education into K-12 classrooms (Weintrop et al., 2021). This initiative was triggered by Jeannette Wing’s introduction of the term CT in Wing (2006). Despite the impressive growth in tools, activities and curricula for teaching CT, significant challenges remain in successfully integrating this new concept into schools. One critical challenge is the lack of a precise, universally accepted definition of CT, a relatively ambiguous concept (Weintrop et al., 2021).
Instead, different definitions with varying purposes, each focusing on different aspects of CT, have been proposed (Lafuente Martínez et al., 2022). This lack of consensus has made it difficult for the field to advance beyond the exploratory stage. To address this issue and systematically develop and evaluate different approaches to teaching, developing, and assessing CT, it is necessary to have a precise and comprehensive formalisation of the concept and to identify widespread best practices. Therefore, efforts are needed to establish a clear and standardised definition of CT that reflects the diverse perspectives and purposes of the field. However, there is no easy way to define CT (Shute et al., 2017). Prevailing approaches have focused on decomposing it into sub-dimensions and explicitly specifying each, e.g., Brennan and Resnick (2012); Grover and Pea (2017). These dimension-based approaches have been used to categorise existing assessment tasks based on analysing the underlying skills associated with a task, e.g., Lafuente Martínez et al. (2021). However, creating tasks for developing and assessing these sub-dimensions has been difficult. A recent study developed a reliable and validated CT assessment for adults by combining existing items (Lafuente Martínez et al., 2022). While experts identified several CT sub-dimensions to be addressed by the items, statistical analyses suggested a one-dimensional model as the best solution. A core problem is that sub-dimensions such as decomposition, generalisation, or pattern recognition are closely interwoven and hard to separate (Lafuente Martínez et al., 2021). Similar issues are observed with other complex constructs, such as scientific inquiry or practices, as noted by previous studies (Osborne, 2014; Ford, 2015). In this article, we propose an alternative, more situated approach by focusing on the types of problems that require CT to be solved, which we call CTP. 
This approach is based on the idea that CT can be better understood by examining the problems that elicit it and the features that support CT processes rather than attempting to define CT itself. The idea behind incorporating a situated approach in this context is that CT is not just a collection of skills: it is also closely tied to the specific context in which it is applied, and attending to that context allows us to understand the complexity of CT. The theories of situated learning proposed by Roth and Jornet (2013) and Heersmink (2013) stress that learning is most effective within authentic and meaningful contexts. They argue that knowledge is constructed through interactions with the environment and the community rather than abstract instruction. Our framework for analysing, evaluating and revising existing CTP and designing new CTP is first based on the theory proposed by Piatti et al. (2022). They emphasise that CT should be considered a situated activity and contextualised and embedded in real-world problem-solving situations. They also highlight that CT should be seen as a dynamic and adaptive problem-solving process rather than a fixed set of competencies or skills. Their view combines the original definition of CT from Wing (2006) – “Computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that an information-processing agent can effectively carry out.” – which has been widely cited and considered the foundation of the field of CT, with the situated theories of learning (Roth and Jornet, 2013; Heersmink, 2013). To avoid creating a novel CT competence model, we have adopted state-of-the-art frameworks (Brennan and Resnick, 2012; Weintrop et al., 2016; Shute et al., 2017) and reviewed relevant literature (Tikva and Tambouris, 2021; Bocconi et al., 2016, 2022).
Our approach resulted in a catalogue of commonly recognised CT competencies widely used to develop and assess student abilities in CT. These competencies, also called dimensions or skills, represent the essential capacities students need to develop to solve CTP effectively. We established a direct link between our catalogue of CT competencies and our framework by showing how specific skills are more likely to be activated when solving CTP with specific characteristics. Finally, the framework is applied to a variety of prototypical problems from standard CT domains, demonstrating that it allows CTP to be systematically analysed and evaluated based on the competencies they are intended to elicit, and student abilities to be assessed in a targeted and efficient manner. Answering research questions such as “What are the characteristics of problems requiring CT to be solved?”, “Which characteristics should a CTP have to activate one (or more) CT competencies?” and “Which CT dimensions are activated in a CTP with certain characteristics?” can provide a better understanding of how to design and assess CTP that effectively develop CT skills. Our proposed framework offers a systematic approach for identifying the characteristics of CTP and the corresponding CT competencies required to solve them. This approach enables the selection or design of tasks effective in developing and assessing the desired skills, ultimately improving the quality of the tasks and assessment process. This article is divided into four main sections. Section 2 presents a theoretical framework for analysing, evaluating, revising and designing CTP. This approach is based on a catalogue of CT skills, which are commonly used to develop and evaluate student abilities. To determine how the characteristics of a CTP impact the development of CT skills, we defined a mapping between skills and characteristics, which is discussed in detail in Appendix 0.A.
Section 3 validates the framework through a qualitative analysis of various CTP found in the literature, focusing on unplugged, robotic, and virtual activities. It is important to note that readers are not required to read the entire section. They can choose to read one example per category or skip this section altogether and move directly to Sections 4 and 5. Additionally, Appendix 0.B provides graphical templates that offer a visual representation of how each CTP discussed in the article aligns with the CT competencies outlined in the study. Section 4 includes an overview of the relationship between CT competencies and different activity domains. It outlines the strengths, weaknesses and areas for improvement of each category analysed. Section 5 summarises the study’s key contributions, limitations and implications for future research and practice in the field of CT competencies development.

## 2 Methods

This section presents a theoretical framework for analysing, evaluating, revising and designing CTP in educational contexts. Through a thorough analysis of prototypical activities from various classic areas of the literature where CT is typically pursued, we identify the main characteristics and components of these problems. Relying on existing frameworks and literature on CT competencies, we present a catalogue of skills typically used to assess student abilities in this area. We then define a mapping between the identified characteristics of the CTP and these competencies to determine how these features influence the development of CT abilities.

### 2.1 Computational thinking problems (CTPs)

The increasing emphasis on the development and assessment of students’ CT skills has led to a need for a structured approach to analyse, evaluate, revise, and design CTP. Although several initiatives have been launched to introduce computer science education in classrooms, there is currently no universally accepted definition of CT.
This lack of a standardised definition presents significant challenges in creating appropriate assessment tasks and identifying the various sub-dimensions of CT. Therefore, it is crucial to establish a clear and precise definition of CT that reflects the diverse perspectives and purposes of the field to facilitate the development and assessment of effective CT pedagogical interventions. To address this issue, we followed the approach in Piatti et al. (2022), which is more closely aligned with the complex, multi-faceted settings in which CT is typically activated, such as educational environments or situations involving multiple people and rich physical environments. They combine the original view of Wing (Wing, 2006) with the situated theories of learning (Roth and Jornet, 2013; Heersmink, 2013). According to Piatti et al. (2022), CT activities are shaped by the physical and social context in which they occur and involve external cognitive artefacts.

Figure 1: Visualisation of the CT-cube (From Piatti et al. (2022)). This model considers the type of activity (problem setting, algorithm, assessment), the artefactual environment (embodied, symbolic, formal), and the autonomy (inactive role, non-autonomous active role, or autonomous active role).

Piatti et al. (2022) proposed a framework called CT-cube, illustrated in Fig. 1, for the design of CTP and the assessment of CT. This model considers the type of CT activity being performed or required (problem setting, algorithm, assessment), the artefactual environment in which the activities occur, represented by the tools used (embodied, symbolic, formal), and the social interactions and individual’s level of autonomy a priori and/or during the task (inactive role, non-autonomous active role, or autonomous active role). It is important to note that the steps in the activity dimension may be iterated as needed to arrive at a satisfactory solution. Based on the formulation of CT from Piatti et al.
(2022), we define CTP by considering the context in which the activity is being performed. According to our framework, every CTP consists of several components: the system, comprising the environment and the agent, the problem solver, and the task. The environment is a physical and/or a virtual space, characterised by one or more variables, called “descriptors”, which may change over time according to the dynamics of this space. The agent is a human, robotic or virtual being capable of performing “actions” on the environment to change the value of its descriptors and therefore alter its state. An algorithm is a finite set of instructions that an agent should follow to perform actions in the environment to solve the task. Algorithms for different types of agents can take various forms, such as code for a virtual agent, behaviour for a robot, or a verbal or written set of instructions for a human agent. The problem solver is a human or group of people who can solve tasks that require the use of algorithms, such as designing, implementing, or communicating them to an agent to change the state of an environment. They have access to reasoning tools, which are cognitive artefacts used to think about the task, for example, whiteboards employed to organise ideas and understand the logic of a problem or solution. Some of these tools, known as interaction tools, also allow the problem solver to interface with the system. An example is a programming platform used to write a program that controls a robotic arm. In this case, the tool serves as both a reasoning tool, enabling the problem solver to plan and design the code, and as an interaction tool, allowing the execution of the algorithm and the observation of its effect on the system. The set of all tools is collectively known as artefactual environment and is described in the works of Heersmink (2013) and Piatti et al. (2022). The task is the activity that the problem solver performs to find one or more solutions to a CTP. 
A solution is a combination of initial states, algorithms, and final states that meets the system’s requirements for a particular environment, with its set of states, and a given agent, with its set of algorithms. The initial state is the starting configuration of the environment, while the final state is the state of the environment after the algorithm is performed. For a solution to be valid, the execution of the algorithm on the initial state must produce the final state. Fig. 2 conveys the framework’s components and their interactions. Each component is depicted in a distinct colour consistently throughout the article.

Figure 2: Visualisation of the components of a CTP. According to the framework, a CTP includes: (1) the problem solver (in green) characterised by the artefactual environment, i.e., the set of reasoning and interaction tools, (2) the system, which consists of an environment with its descriptors (in blue) and an agent with its actions (in violet), and (3) the task (in yellow) characterised by the set of initial states, algorithms and final states.

The proposed framework, in addition to defining CTP and their components, allows classifying them according to their characteristics, which are relevant for eliciting and assessing CT skills in educational contexts.

###### Definition 1 (Artefactual environment)
Tools and resources used by a problem solver to reason, understand a problem or interact with a system.

The first characteristic we defined in our framework is the artefactual environment, in line with the definition from Piatti et al. (2022) and the model of the three worlds of mathematics by Tall (2006, 2013, 2020). In particular, tools can be classified as “embodied” or ecological and iconic representational cognitive artefacts based on embodiment and perception, “symbolic” cognitive artefacts used to conceive and apply procedures and rules, and “formal” cognitive artefacts used to create, generalise and represent structures.
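The components and the solution criterion described above can be summarised in a minimal sketch. All names here (`Agent`, `is_valid_solution`, the toy actions) are hypothetical illustrations, not part of the framework itself: an environment is a set of descriptors, an agent executes a finite list of instructions, and a solution is valid exactly when executing the algorithm on the initial state produces the final state.

```python
# A minimal, illustrative model of CTP components: environment descriptors,
# an agent with named actions, and the validity criterion for a solution.
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, object]          # environment descriptors and their values
Action = Callable[[State], State]  # an agent action transforms the state

@dataclass
class Agent:
    actions: Dict[str, Action]     # the agent's repertoire of actions

    def execute(self, algorithm: List[str], state: State) -> State:
        """Run a finite instruction list (the algorithm) on a state."""
        for instruction in algorithm:
            state = self.actions[instruction](dict(state))
        return state

def is_valid_solution(agent: Agent, initial: State,
                      algorithm: List[str], final: State) -> bool:
    """A solution is valid iff the algorithm maps the initial state to the final state."""
    return agent.execute(algorithm, initial) == final

# Toy example: an agent that moves along a line (descriptor "x").
agent = Agent(actions={
    "forward": lambda s: {**s, "x": s["x"] + 1},
    "back":    lambda s: {**s, "x": s["x"] - 1},
})
print(is_valid_solution(agent, {"x": 0}, ["forward", "forward"], {"x": 2}))
```

The same skeleton would carry over to a robotic or unplugged setting by changing what the descriptors and actions denote, which is the point of the framework's agent/environment separation.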
###### Definition 2 (Tools functionalities)
Specific features and functions provided by a tool or resource that enable a problem solver to express a wide range of instructions and operations to the agent.

Another aspect of CTP that is closely related to the previous one is the set of tools functionalities. These functionalities can include defining and manipulating “variables”, using different types of “operators”, creating “sequences” of actions, “repeating” actions, using “conditional” statements, defining “functions”, executing tasks in “parallel” and triggering “events”. For example, a symbolic artefact, such as a block-based programming platform, may have many functionalities, such as sequences, repetitions, conditionals, etc. In contrast, the programming interface may have limited functionalities during a robotic activity; for example, it might only permit the use of operators (like moving forward) or events.

###### Definition 3 (Problem domain)
The category of an activity depending on the nature of the agent and of the environment in which the task is performed.

The domain classification of a CTP is useful for gaining insights into the task’s context and identifying its specific characteristics, challenges, and considerations. Three main categories of domains are commonly recognised in cognitive task paradigms: “unplugged” activities, which involve a human agent and a physical environment, “robotic” activities, in which the agent is a robot and the environment is physical, and “virtual” activities, where both agent and environment are virtual, such as a simulated entity. It is worth noting that the agent may be “embedded” in the environment, so its descriptors may likewise be used to describe it. Furthermore, in some cases, the problem solver and agent may be “overlapped”, meaning they are the same entity.

###### Definition 4 (System resettability)
The property of a system to be restored to its initial state.
It can be achieved either through the direct intervention of the problem solver on the system (agents and environment) or indirectly through the reversibility of actions within the system. Resettability is another characteristic of CTP. An action is reversible if it is possible to undo its effects, allowing the system to be restored to a previous state. A system is considered non-resettable if it cannot be returned to its initial state. This characteristic is crucial in educational contexts as it enables the problem solver to experiment, make mistakes and try different solutions without being constrained by previous actions. Imagine a task where the problem solver must draw a picture on a piece of paper using a pencil. If he makes a mistake, he can easily erase it and start over. The system is directly resettable because the problem solver can directly intervene and reset the system to its initial state. Alternatively, the system is not resettable if the problem solver can only use a pen. If he makes a mistake, it cannot be erased, and the problem solver must continue with the picture in its current state. An example of indirect resettability would be if the problem solver is drawing a picture on a digital tablet. He can use the undo button to wipe out the previous action and return to a previous state.

###### Definition 5 (System observability)
The property of a system that pertains to the ability of the problem solver to observe the effects of actions taken within the system and their impact on the system’s state.

Observability is another aspect that can vary between CTP. Systems can be classified as partially observable, in which only the aggregate effects of a limited number of actions can be perceived, totally observable, in which every single action and its consequences are visible, or not observable, in which the problem solver is unable to directly see the results of the agent’s actions and the system’s state and must infer it from other information.
For example, in a chess game, the problem solver can see the state of the board and the pieces all the time, making the system fully observable. In a game of poker, conversely, the problem solver can only see his own cards and the community cards, but not those held by the other players, making the system partially observable. In a scenario where the problem solver remotely controls a robot to explore an underground cave, the person cannot directly see the environment and the robot’s actions and must rely on sensor data to determine its location and progress. As a result, the system is non-observable, and decisions are based on limited information.

###### Definition 6 (Task type)
The category of an activity influenced by both the number and type of objectives that need to be achieved to solve the task.

The last important set of CTP features are those related to the task. Each element that composes the task (initial state, algorithm, final states) can be “given” or “to be found”. The tasks are divided into six categories, which differ depending on the number of objectives. Tasks with a single objective are classified into the following types: (1) _find the initial state_: given the final state and the algorithm that produced it, the problem solver must infer the initial state on which the algorithm was applied; (2) _find the algorithm_: given the initial and the final states, the problem solver must devise and describe an algorithm, or a part of it, that the agent can execute to transform the system from the initial to the final state; (3) _find the final state_: given the initial state and an algorithm, the problem solver must derive the final state.
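Two of the single-objective task types above have a natural computational reading, sketched below with hypothetical helper names and a toy one-descriptor environment: "find the final state" is direct execution of the algorithm, while "find the algorithm" can be framed as a search over instruction sequences.

```python
# Illustrative sketch of task types (2) and (3) on a toy environment whose
# state is a single integer and whose agent knows two actions.
from itertools import product
from typing import Callable, Dict, List, Optional

State = int
ACTIONS: Dict[str, Callable[[State], State]] = {
    "inc": lambda s: s + 1,
    "double": lambda s: s * 2,
}

def find_final_state(initial: State, algorithm: List[str]) -> State:
    """Task type (3): derive the final state from an initial state and an algorithm."""
    for step in algorithm:
        initial = ACTIONS[step](initial)
    return initial

def find_algorithm(initial: State, final: State,
                   max_len: int = 4) -> Optional[List[str]]:
    """Task type (2): devise an algorithm that transforms initial into final
    (here by exhaustive search over sequences of up to max_len actions)."""
    for length in range(max_len + 1):
        for seq in product(ACTIONS, repeat=length):
            if find_final_state(initial, list(seq)) == final:
                return list(seq)
    return None

print(find_final_state(1, ["inc", "double"]))  # 4
print(find_algorithm(1, 6))                    # ['inc', 'inc', 'double']
```

Task type (1), "find the initial state", would be the inverse search; the point of the sketch is only that the three single-objective types fix two of the triple (initial state, algorithm, final state) and ask for the third.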
Tasks with multiple objectives fall into the following types: (4) _creation act_: given an initial state, the problem solver must determine a desired final state and an algorithm that the agent can use to transform the system from the initial to the final state; (5) _application act_: given an algorithm, the problem solver must identify one or more pairs of initial and final states on which the algorithm can be applied successfully; (6) _project act_: given a desired final state, the problem solver must define an initial state and an algorithm that the agent should use to transform the system from the initial to the final state.

###### Definition 7 (Task cardinality)
The proportion between the number of given elements and the number of elements to be found to solve the task.

In a task, each of the given elements and each of the elements to be found may be singular or multiple. The relationship between the number of given elements and the number of elements to be found can be “one-to-one”, “many-to-one” or “many-to-many”. For example, a task with a one-to-one cardinality can be one where the problem solver is provided with a single initial and final state and is expected to find a single algorithm that transforms the initial state into the final state. In contrast, a task with a many-to-one cardinality can be one where multiple initial states are given, and the problem solver is expected to find a single algorithm that transforms all the initial states into a single final state. Finally, a task with a many-to-many cardinality can be one where the problem solver is provided with multiple initial states and a single final state and is expected to find multiple algorithms that transform all possible initial states into the desired final state. This type of task can be traced back to many tasks with a many-to-one cardinality.

Figure 3: Graphical template for the analysis of CTP.
Template suitable for graphically analysing the components of any CTP according to our framework. Colours represent the CTP components and characteristics following the same colour scheme of Fig. 2.

###### Definition 8 (Task explicitness)
The level of explicitness or implicitness in the presentation of the given task’s elements.

The way the elements of the task are stated can be used to distinguish a CTP by its level of explicitness. A task with “explicitly” specified elements is directly usable in the problem-solving process, while a task whose elements are “implicitly” expressed through constraints requires additional interpretation to be understood. For example, in a task where a robot must turn on its lights as soon as it finds a ball in a playground, the given elements are the initial state (the robot lights switched off, the robot and the ball positions) and the final state (the robot lights turned on and the robot positioned in front of the ball), while the element to be found is the algorithm (the set of actions the robot should perform to find the ball). The position of the ball can be described either explicitly using coordinates, or implicitly by stating that it is located in the playground.

###### Definition 9 (Task constraints)
The limitations or specific requirements that must be adhered to on the elements of a task to be found for the solution to be considered valid.

A CTP can be distinguished by the type of constraints on the elements of the task that need to be found. In particular, the elements that have to be found are distinguished into “unconstrained”, meaning they can be freely chosen from all possible states and algorithms, without any limitations or specific requirements that need to be met to consider the solution valid, and “constrained”, meaning they must belong to a specified subset of the respective universe set of states or algorithms.
Referring to the previous example, the algorithm to be found can be unconstrained if the robot can perform any action to find the ball (moving in a random direction, using sensors to detect the ball, or following a predefined path) or constrained if the programming platform limits the commands the robot can execute (moving only in specific directions, using only specific sensors or following a particular set of predefined paths to find the ball).

###### Definition 10 (Algorithm representation)
The means by which an algorithm is conveyed.

Finally, how the algorithm is represented is a significant task characteristic. An algorithm can be “manifest” if it is directly expressed or “latent” if it is not stated but is tacit or inferred by the problem solver. Manifest algorithms can be further classified as “written” if they are given an external and persistent representation, such as through a programming language, or “not written” if they are communicated verbally or through other non-permanent means. When the problem solver and the agent are the same entity, the algorithm used to solve the task is often not explicitly expressed or written down, as the problem solver is carrying out the steps of the algorithm directly through their actions as the agent. Fig. 3 graphically represents the components and characteristics of a CTP according to our framework.

### 2.2 A catalogue of computational thinking (CT) skills

In this study, we have identified a set of CT competencies commonly used to assess student abilities in CT. These competencies, also called dimensions or skills, represent the fundamental abilities students need to develop to solve CT problems effectively. We decided to draw from various state-of-the-art competency models to select and define our taxonomy of CT competencies rather than relying on a single model. Our selection process was inspired by the literature reviews of Tikva and Tambouris (2021) and Bocconi et al.
(2016, 2022), which provide a comprehensive overview of CT skills and their potential for compulsory education. Of great importance is the framework of Brennan and Resnick (2012). They proposed a list of CT skills divided into three dimensions: computational concepts, practices, and perspectives. This characterisation is often used in the literature, but it was limited for our purposes because it only considered virtual activities, while we also investigated robotic and unplugged ones. Therefore, we extended this list by partially following the taxonomy of CT in STEM courses proposed by Weintrop et al. (2016). This classification consists of four main categories: data practices, modelling and simulation practices, computational problem-solving, and systems thinking practices. Another work we built on is that of Shute et al. (2017). They developed a competence model based on a review of the relevant definitions of CT in the literature, including those of Brennan and Resnick (2012) and Weintrop et al. (2016).

Figure 4: Visualisation of our taxonomy of CT competencies. The overall structure is based on the CT-cube (Piatti et al., 2022). The sub-skills are derived from validated CT models (Brennan and Resnick, 2012; Weintrop et al., 2016; Shute et al., 2017). The three skills groups are represented with the same colour scheme used for the CT-cube dimensions in Fig. 1.

To facilitate the assessment of competencies and ensure that it is focused and efficient, we have organised CT skills into a hierarchy with layers of dimensions and sub-dimensions, depicted in Fig. 4. The skills taxonomy is based on the activity dimension of the CT-cube framework by Piatti et al. (2022), introduced in the previous section, representing the individual’s role in the cognitive system at each moment of the task. The activity sub-dimensions have been developed mainly using the frameworks of Brennan and Resnick (2012); Shute et al. (2017); Weintrop et al. (2016).
Competencies related to the activity dimension involve a wide range of operations and cognitive processes. For example, the “problem setting” competence may require recognising and understanding various components of the framework within the given CTP, as well as modelling the problem. The “algorithm” competence may involve the comprehension and exploitation of different instructions with varying difficulty levels, often influenced by the type of artefactual environment involved. The “assessment” competence may consider determining whether a solution is correct or its quality is satisfactory. Table 1 summarises the competencies of the first level of the hierarchy, defining all the possible values that the activity dimension can assume.

Table 1: Main competencies of the framework and their definition. The skills listed are based on the values of the activity dimension of the CT-cube framework (Piatti et al., 2022).

| Competence | Definition |
| --- | --- |
| Problem setting | Recognise, understand, reformulate or model a CTP and its components so that its solution can be computed. (See Table 2 for sub-competencies.) |
| Algorithm | Conceive and represent a set of agent’s actions that should be executed by a human, artificial or virtual agent to solve the task. (See Table 3 for sub-competencies.) |
| Assessment | Evaluate the quality and validity of the solution in relation to the original task. (See Table 4 for sub-competencies.) |

Table 2: Problem setting sub-competencies and their definition. The skills listed are based on leading-edge competence models (Brennan and Resnick, 2012; Shute et al., 2017; Thalheim, 2000; Weintrop et al., 2016; Wing, 2011; Bocconi et al., 2016; Selby and Woollard, 2013; Angeli et al., 2016; Csizmadia et al., 2015; Selby, 2014; Barr and Stephenson, 2011).

| Competence | Definition |
| --- | --- |
| Analysing | Collect, examine and interpret data about the system: environment descriptors and agent actions. |
| – Data collection | Gather details about the system. |
| – Pattern recognition | Identify similarities, trends, ideas and structures within the system. |
| Modelling | Restructure, clean and update knowledge about the system. |
| – Decomposition | Divide the original task into sub-tasks that are easier to solve. |
| – Abstraction | Simplify the original task, focus on key concepts and omit unimportant ones. |
| Representing | Illustrate or communicate information about the system and the task. |

Tables 2, 3 and 4 provide a detailed breakdown of the sub-dimensions for the three possible values of the CT-cube activity class (problem setting, algorithm, assessment). In each table, every row represents a specific skill; sub-competencies are listed beneath their parent skill and prefixed with a dash, to differentiate the skill levels and make their relationships easier to understand.

Table 3: Algorithm sub-competencies and their definition. The skills listed are based on leading-edge competence models (Brennan and Resnick, 2012; Cui and Ng, 2021; Rodríguez-Martínez et al., 2020; Bocconi et al., 2016, 2022; Shute et al., 2017). The following definitions concern the algorithmic concepts related to each skill, since the skill definition varies based on the artefactual environment: in embodied contexts, it involves recognising or describing concepts through the senses; in symbolic contexts, it involves applying them; and in formal contexts, it involves understanding their properties to structure complex algorithms.

| Competence | Definition |
| --- | --- |
| Variables | Entity that stores values about the system or intermediate data. |
| Operators | Mathematical operators (such as addition (+), subtraction (−), etc.), logical symbols (such as and (&), or (\|), and not (!)), comparison symbols (such as equal to (==), greater than (>), and less than (<)), or even specific commands or actions (such as “turn left” or “go straight”). |
| Control structures | Statements that define the direction of the agent actions’ flow, such as sequential, repetitive, or conditional. |
| – Sequences | Linear succession of agent actions. |
| – Repetitions | Iterative agent actions. |
| – Conditionals | Agent actions dependent on conditions. |
| Functions | Set of reusable agent actions which produce a result for a specific sub-task. |
| Parallelism | Simultaneous agent actions. |
| Events | Variations in the environment descriptors that trigger the execution of agent actions. |

Table 4: Assessment sub-competencies and their definition. The skills listed are based on leading-edge competence models (Brennan and Resnick, 2012; Shute et al., 2017; Weintrop et al., 2016; Bocconi et al., 2016).

| Competence | Definition |
| --- | --- |
| Correctness | Assess whether the task solution is correct. |
| – Algorithm debugging | Evaluate whether the algorithm is correct, identifying errors and fixing bugs that prevent it from functioning correctly. |
| – System states verification | Evaluate whether the system is in the expected state, detecting and solving potential issues. |
| – Constraints validation | Evaluate whether the solution satisfies the constraints established for the system and the algorithm, looking for and correcting eventual problems. |
| Effectiveness | Assess how effective the task solution is. |
| – Optimisations | Evaluate whether the solution meets the standards in a timely and resource-efficient manner, and eventually identify ways to optimise the performance. |
| Generalisation | Formulate the task solution in such a way that it can be reused or applied to different situations. |
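The hierarchical catalogue of Tables 1, 2, 3 and 4 can also be sketched as a nested data structure, which makes the two-level parent/sub-skill organisation explicit. The skill names are transcribed from the tables; the nesting shown (and the placement of leaf skills such as “generalisation” at the top level of assessment) is our illustrative reading of the tables, not an additional claim of the framework.

```python
# Illustrative encoding of the CT competence catalogue: top-level activity
# dimensions map to parent skills, which map to lists of sub-skills (empty
# list = the parent skill is itself a leaf).
CT_SKILLS = {
    "problem setting": {
        "analysing": ["data collection", "pattern recognition"],
        "modelling": ["decomposition", "abstraction"],
        "representing": [],
    },
    "algorithm": {
        "variables": [],
        "operators": [],
        "control structures": ["sequences", "repetitions", "conditionals"],
        "functions": [],
        "parallelism": [],
        "events": [],
    },
    "assessment": {
        "correctness": ["algorithm debugging", "system states verification",
                        "constraints validation"],
        "effectiveness": ["optimisations"],
        "generalisation": [],
    },
}

def leaf_skills(taxonomy) -> list:
    """List the lowest-level competencies in the hierarchy."""
    leaves = []
    for subskills in taxonomy.values():
        for name, children in subskills.items():
            leaves.extend(children if children else [name])
    return leaves

print(leaf_skills(CT_SKILLS))
```

An encoding like this is convenient when tallying, per CTP, which leaf competencies a task is expected to activate.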
It is important to note that the activation of the activity sub-dimensions, especially the algorithmic one, is closely related to the type of artefactual environment being considered. The task-related artefactual environment combines the tools given and the type of problem involved. The activation of a competence may vary depending on the context in which it is applied and on whether the task is performed with embodied, symbolic, or formal artefacts. For the algorithmic sub-competencies, for example, activating these skills requires considering the tool used to represent the algorithm and the level of abstraction of the reasoning required. In embodied environments, knowledge is represented through sensory experiences, such as seeing, hearing, or touching. In this context, these skills may be activated simply by recognising and describing algorithmic concepts through physical interactions. In symbolic environments, knowledge is represented using symbols, such as words and numbers, or languages, such as natural or formal languages. Common types of formal language include those used to code, such as block-based and textual programming languages. In this context, reasoning requires the problem solver to be able to apply these competencies to solve problems and accomplish tasks. In formal environments, knowledge is represented with abstractions, such as mathematical models, logical systems and proofs. In this context, a deeper understanding of how these skills work and what their properties are is necessary to structure and apply them effectively, creating a complex system.

### 2.3 Map CTP characteristics to CT skills

To formalise our framework, clarifying the role of CTP characteristics in CT skills assessment, we use the previously established definition of CTP to identify the components of these problems and their characteristics, and to understand how different features may influence the assessment of CT competencies.
We establish a direct link between our catalogue of CT competencies and the proposed framework, demonstrating how particular CT dimensions are more likely to be activated when solving CTP with specific characteristics. The link between the two is discussed in detail in Appendix 0.A, which explains how the various characteristics of CTP can impact the development of CT competencies and how certain skills are more likely to be used when solving CTP with specific characteristics. This approach enables the assessment of student abilities in a targeted and efficient manner. Our analysis of the relationships between characteristics and competencies revealed four different types of connections. From the perspective of the features, a given characteristic can be required to activate a certain competence, prevent its activation, trigger its activation, or be irrelevant to its activation. Conversely, from the perspective of the competencies, a skill is activated if the problem presents all of its required features and none of the preventing ones, can be encouraged by triggering features, and is inhibited if the problem presents a preventing feature or only irrelevant features. The relationship between features and skills can vary based on the type of environment in which the problem is presented. Table 5 provides a comprehensive explanation of the symbols used to represent the relationship between the characteristics of a CTP and CT competencies.

Table 5: Overview of the notation for representing the relationship between characteristics and skills.
Symbol | Meaning
---|---
✓ | The characteristic is required for the competence activation
✗ | The characteristic prevents the competence activation
$+$ | The characteristic promotes the competence activation
none | The characteristic is irrelevant for the competence activation
✓∗ | The characteristic is one of the possible characteristics required for the competence activation
✓${}^{\text{{SF}}}$ / ✗${}^{\text{{SF}}}$ | The characteristic is required for / prevents the competence activation in the symbolic and formal environments
✓${}^{\text{{F}}}$ / ✗${}^{\text{{F}}}$ | The characteristic is required for / prevents the competence activation in the formal environment

Table 6 offers a practical understanding of the relationship between the different CTP features we identified and our catalogue of CT competencies, enabling us to assess student abilities in a targeted and efficient manner. Each column is a characteristic of the CTP and each row represents a skill. Overall, this table provides a clear visual representation of how different features of the CTP can influence the assessment of CT competencies and how the different dimensions of CT skills are more likely to be activated when solving CTP with specific characteristics.

Table 6: Comprehensive overview of the relationship between different CTP characteristics and CT competencies. The table shows the relationship between the characteristics of CTP (columns) and CT competencies (rows). The CTP features considered include the tools’ functionalities, the system’s properties, and the task traits. The meaning of the symbols used is provided in Table 5. The same colour schemes used for the CT-cube dimensions in Fig. 1 and for the CTP components in Fig. 2 are employed to present the skills and features, respectively.
| Tool functionalities | System | Task
---|---|---|---
| Variables | Operators | Sequences | Repetitions | Conditionals | Functions | Parallelism | Events | System resettable | System not resettable | System (partially) observable | System not observable | Initial or final state to be found | Algorithm to be found | One-to-one cardinality | Many-to-one cardinality | Explicit elements | Implicit elements | Unconstrained elements | Constrained elements | Algorithm manifest | Algorithm latent | Algorithm written | Algorithm not written
Data collection | ✓ | | | | | | | $+$ | $+$ | $+$ | $+$ | $+$ | | | $+$ | $+$ | | $+$ | | $+$ | $+$ | $+$ | $+$ | $+$
Pattern recognition | $+$ | | $+$ | ✓∗ | $+$ | ✓∗ | | | $+$ | $+$ | $+$ | $+$ | | | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$
Decomposition | $+$ | $+$ | ✓∗ | $+$ | $+$ | ✓∗ | $+$ | | $+$ | $+$ | | $+$ | | | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$
Abstraction | ✓ | | $+$ | $+$ | $+$ | ✓ | | | | $+$ | | $+$ | | | | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$
Data representation | ✓ | | $+$ | $+$ | $+$ | $+$ | | | | $+$ | | $+$ | | | | $+$ | | $+$ | | $+$ | $+$ | $+$ | $+$ | $+$
Variables | ✓ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | | | $+$ | $+$ | | ✓${}^{\text{\tiny{F}}}$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | | $+$ | $+$
Operators | $+$ | ✓ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | | | $+$ | $+$ | | ✓${}^{\text{\tiny{F}}}$ | $+$ | | $+$ | $+$ | $+$ | $+$ | $+$ | | $+$ | $+$
Sequences | $+$ | $+$ | ✓ | $+$ | | $+$ | | | | | $+$ | $+$ | | ✓${}^{\text{\tiny{F}}}$ | | | $+$ | $+$ | $+$ | $+$ | $+$ | | $+$ | $+$
Repetitions | $+$ | $+$ | $+$ | ✓ | | $+$ | | | | | $+$ | | | ✓${}^{\text{\tiny{F}}}$ | | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | | $+$ | $+$
Conditionals | $+$ | $+$ | | | ✓ | | | $+$ | | | $+$ | $+$ | | ✓${}^{\text{\tiny{F}}}$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | | $+$ | $+$
Functions | $+$ | $+$ | $+$ | $+$ | | ✓ | | | | | $+$ | | | ✓${}^{\text{\tiny{F}}}$ | | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | | $+$ | $+$
Parallelism | $+$ | $+$ | | | | | ✓ | | | | $+$ | | | ✓${}^{\text{\tiny{F}}}$ | | $+$ | $+$ | | $+$ | | $+$ | | $+$ | $+$
Events | $+$ | $+$ | | | $+$ | | | ✓ | | | $+$ | | | ✓${}^{\text{\tiny{F}}}$ | | | $+$ | | $+$ | | $+$ | | $+$ | $+$
Algorithm debugging | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | ✓ | ✗ | $+$ | | | ✓ | | | | $+$ | | | ✓ | ✗ | ✓${}^{\text{\tiny{F}}}$ | ✗${}^{\text{\tiny{F}}}$
System state verification | | | | | | | | | ✓ | ✗ | $+$ | | ✓ | | | | | $+$ | | | ✓${}^{\text{\tiny{SF}}}$ | ✗${}^{\text{\tiny{SF}}}$ | ✓${}^{\text{\tiny{F}}}$ | ✗${}^{\text{\tiny{F}}}$
Constraints validation | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | ✓ | ✗ | $+$ | | | | | | | | ✗ | ✓ | | | |
Optimisation | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | $+$ | ✓ | ✗ | $+$ | | | | | | | | | | | | |
Generalisation | ✓ | | $+$ | $+$ | $+$ | ✓ | | $+$ | ✓ | ✗ | $+$ | | | | | $+$ | | $+$ | | $+$ | | | |

This framework, supported by Table 6, is intended as a tool to analyse existing CTP, understand which CT competencies can be assessed given the available features, evaluate the problem's effectiveness for the intended educational purposes and, eventually, revise it if it is unsuitable. To use the framework, the first step is to identify the CTP profile and list all of its features. If the CT competencies to be measured have not been specified in the activity, the framework can be used to determine which skills can be assessed from the available features. On the other hand, if the CT competencies are outlined, the framework can be used to compare the problem's characteristics with those associated with the specified competencies and determine the CTP's effectiveness for the intended educational purposes. The effectiveness of a CTP can be classified into five categories based on the relationship between its features and the CT competencies.
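Before turning to those five categories, the activation rule just described — a competence is activated when a problem presents all of its required features and none of the preventing ones, and is further encouraged by triggering features — can be sketched programmatically. The profile below is hypothetical (it is not copied from Table 6, and the ✓∗ one-of-several case is omitted for brevity):

```python
# Hypothetical encoding of the activation rule behind Table 5's notation:
# a competence is activated when the problem presents all of its required
# features and none of the preventing ones; triggering features further
# encourage activation.

REQUIRED, PREVENTING, TRIGGERING = "required", "preventing", "triggering"

def activation(profile, features):
    """profile: dict feature -> role; features: set of features the CTP presents."""
    required = {f for f, role in profile.items() if role == REQUIRED}
    preventing = {f for f, role in profile.items() if role == PREVENTING}
    triggering = {f for f, role in profile.items() if role == TRIGGERING}
    if not required <= features or preventing & features:
        return "inhibited"
    return "encouraged" if triggering & features else "activated"

# A made-up competence profile, for illustration only (not taken from Table 6):
profile = {"repetitions": REQUIRED, "system not resettable": PREVENTING,
           "variables": TRIGGERING}
print(activation(profile, {"repetitions", "variables"}))  # prints: encouraged
```

Running the same predicate against every row of a table like Table 6 is precisely the "compare the problem's characteristics with those associated with the specified competencies" step described above.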
###### Definition 11 (Minimal CTP)
A complete and optimised CTP whose characteristics perfectly match the features essential to activate the CT competence it is intended to elicit. In other words, the problem is designed to activate the intended CT skill with the minimum required features, without any irrelevant or distracting elements.
###### Definition 12 (Extensive CTP)
A complete and rich CTP that, in addition to having all of the necessary characteristics to activate a certain CT competence, also has features that act as a stimulus to activate that skill or others. These additional features make the problem more engaging and provide additional opportunities for students to demonstrate their CT abilities. The problem is considered complete and rich, as the additional features enhance the overall learning experience and provide a more comprehensive evaluation of the student’s skills.
###### Definition 13 (Unfocused CTP)
A CTP in which some of the features are irrelevant to activating the skill it is intended to elicit. These irrelevant features can create confusion and distract the students from the essential aspects of the problem, hindering the problem's ability to activate the desired competencies and affecting the accuracy of the assessment. An unfocused CTP should be revised to remove these unnecessary features and improve its effectiveness.
###### Definition 14 (Adjustable CTP)
A CTP where some essential features required to activate a certain competence are missing. This type of problem can be modified to include the missing features, optimising it to activate the relevant CT skills. This can be done by modifying the problem statement, the system, or the artefactual environment.
###### Definition 15 (Unsuitable CTP)
A CTP that is not appropriate for its intended purpose, which lacks some essential features but has some unwanted ones.
This type of problem is difficult or impossible to use effectively to assess certain CT competencies; it would require too many changes to be useful, and it may be better to design or choose another problem instead. Additionally, Table 6 can be used when creating or choosing a new CTP to assess specific CT competencies. The first step, in this case, would be to define the skills to be evaluated and then use the table to list the necessary features and those that are not needed. The table can also be used to find existing problems that match the list of characteristics or to design a new activity from scratch.

## 3 Results

To examine and validate our method in depth, we applied the proposed framework to a range of CT activities that are widely recognised as representative in educational settings. We focused on three categories of CTP, unplugged, robotic, and virtual activities, to provide a more nuanced understanding of each type of activity's unique features and challenges. The selection of activities serves as a means to demonstrate the effectiveness and practicality of the framework and is not exhaustive. Through this analysis, we aim to provide a comprehensive account of the framework's applicability and identify best practices and areas for improvement in the design and assessment of CT activities, with the ultimate goal of contributing to the field of CT education. For each CTP presented in this section, we provide in Appendix 0.B the graphical template used for its analysis, describing the components and the characteristics of the CTP, and the table that summarises the relationship between the characteristics of the CTP and the CT competencies that can or cannot be activated.

### 3.1 Unplugged activities

In the context of CT, an unplugged activity is an activity that does not involve the use of a computer or technology (Brackmann et al., 2017; Del Olmo-Muñoz et al., 2020).
As per our framework, unplugged activities refer to CTP where the agent is a human rather than a virtual or robotic agent. These activities have a common feature of involving physical and non-technological artefactual environments, as they require manipulating physical objects rather than using technology for manipulation. Unplugged activities are designed to help students develop CT skills often related to problem setting, such as pattern recognition and fundamental computer science concepts through hands-on and tangible activities before they are introduced to more abstract, technology-based activities (Bell et al., 2009). Examples of unplugged CTP include puzzles, games, and other activities that involve manipulating physical materials, such as blocks or cards. #### 3.1.1 Cross Array Task The Cross Array Task is an unplugged activity designed by Piatti et al. (2022) using the CT-cube to assess the development of algorithmic skills in compulsory schools. Figure 5: The Cross Array Task activity adapted from Piatti et al. (2022). The task requires the problem solver to instruct the agent to reproduce a reference schema solely through verbal communication, with the option of supplementing instructions via gesturing on a support schema if deemed necessary. A removable screen separates the participant and researcher to regulate potential visual cues. ##### Components The Cross Array Task, illustrated in Fig. 5, involves a student and a researcher in a classroom setting, seated at a table and separated by a removable screen. The student has to communicate an algorithm to colour a white cross array to match a reference schema. * • Problem solver: the student who has to communicate an algorithm corresponding to the sequence of instructions to reproduce the colouring of the reference schema. The artefactual environment comprises cognitive tools such as the support and the colouring schemes, available to the problem solver to reason about the task. 
Additionally, the problem solver can interact with the system to communicate the algorithm. This can be achieved using natural language such as the voice (symbolic) or gestures (embodied) on the empty cross array. Moreover, by removing the screen that separates the problem solver from the agent, they can receive visual feedback (embodied) of the cross array being coloured.
* • Agent: the researcher, executor of the problem solver’s instructions, responsible for filling the colouring schema according to the problem solver’s algorithm. The agent’s actions are not resettable.
* • Environment: the cross array to be coloured, whose state is described by the colour of each dot (white, yellow, blue, green, or red).
* • Task: find the algorithm. The system’s state is defined by the status of the colouring cross, initially white and, at the end, the same as the reference schema. The algorithm is the set of agent instructions to achieve this transformation.

##### Characteristics

The characteristics of this activity have been analysed using the graphical templates shown in LABEL:fig:CAT-features in Appendix 0.B.
* • Tool functionalities: voice and gestures provide various functionalities associated with algorithmic concepts, suitable for designing the algorithm, including (i) variables, which can represent the different colours of the cross array dots; (ii) operators, used to change the colour of the dots by performing actions such as colouring a dot, a row, a square and so on; (iii) sequences, which determine the order in which the actions should be executed to achieve the desired outcome; (iv) repetitions, which allow for repeating specific sequences of operations, such as colouring the first column in red and repeating it every two columns; (v) functions, which consist of operations that perform a specific task and can be applied to different inputs, for example, creating a pattern of alternating red and yellow dots in a square and applying it to different positions of the cross array; (vi) parallelism, which involves executing multiple actions simultaneously and can be associated with using symmetries to describe the pattern.
* • System resettability: the system is not resettable since it is impossible to reverse the agent’s actions.
* • System observability: the system is partially observable since, by default, the cross array being coloured is not seen until the end of the task unless the problem solver demands otherwise.
* • Task cardinality: the task has a one-to-one mapping, with one initial state and one final state given, and an algorithm to be found.
* • Task explicitness: all elements are given explicitly.
* • Task constraints: the algorithm is unconstrained.
* • Algorithm representation: the algorithm is represented through voice commands or gestures. It is considered manifest, because it can be seen, but not written, since it is not stored in a permanent format.

##### Enabling features for competencies development

The relationship between features and skills is summarised in Appendix 0.B in LABEL:tab:CAT-mapping.
This paragraph explores the enabling characteristics that support the development of competencies within the task.
* • Problem setting: all competencies can be activated thanks to the presence of variables, sequences, repetitions and functions in the tool functionalities. The presence of many tool functionalities, the non-resettability of the system and the algorithm representation positively affect and boost problem setting skills. The system observability supports data collection and pattern recognition. The one-to-one cardinality, in addition, stimulates decomposition. The explicit and unconstrained definition of the task elements also promotes pattern recognition, decomposition and abstraction.
* • Algorithm: all competencies associated with the algorithmic concepts enabled by the tool functionalities, namely variables, operators, sequences, repetitions and functions, can be activated in all three artefactual environments and promote one another. The form of representation of the algorithm, the system observability, and the explicit and unconstrained definition of the task elements further enhance these. The one-to-one cardinality helps to enhance some of these skills as well.
* • Assessment: since the system is not resettable, no assessment skills can be developed.

##### Inhibiting features for competencies development

* • Conditionals and events: non-activable, as these functionalities are unavailable in the platform. A way to make conditionals available in the tool functionalities would be to allow the problem solver to change a dot colour, for example by communicating instructions such as: “if the dot is red, then colour it yellow”. By doing this, the problem solver engages with the concept of conditionals and can develop their algorithmic skills. The completion of each row in the cross array can be considered an event.
The problem solver can specify that they want to fill the cross line by line, and once a line is complete, the researcher will move on to the next line. This allows the problem solver to list only the sequence of colours without repeating the instructions for where to go. The change in the environment (completing a row) triggers the researcher to move to the next row. Using conditionals and events can greatly enhance the complexity of the solutions that can be generated and help develop advanced CT skills. * • Assessment skills: the inability to reset the system impairs the development of the student’s skills. One possible solution to this issue is enabling the student to reset the colouring scheme using a voice command. This would return the schema to its initial blank state, allowing the student to start the task from the beginning and practice their assessment skills. To develop system state verification, it is also essential to not reveal the initial or final states. Moreover, constraints should be imposed on the algorithm to develop constraint validation skills, for example, limiting the use of specific operators or the number of times they can be used, allowing the problem solver to develop the ability to think about the constraints and limitations in their algorithms. #### 3.1.2 Graph Paper Programming Graph Paper Programming is an unplugged activity from Code.org (2015), a nonprofit organisation that aims to provide students with the opportunity to learn computer science as part of their education, offering various activities designed to increase diversity in computer science and reach students at their skill level and in ways that inspire them to continue learning. The Graph Paper Programming activity can be divided into two parts, each with a different task. 
In the first part, the student is given a $4\times 4$ grid of white and black squares and asked to write explicit instructions for a classmate to reproduce the image without letting the other person see the original drawing. In the second part, the same student follows the instructions they previously wrote to reproduce the image. By dividing the activity into these two parts, we can gain a deeper understanding of the cognitive processes and skills involved in each task and understand the potential of this activity to support the development of CT abilities.

Figure 6: The first part of the Graph Paper Programming (GPP) activity, adapted from Code.org (2015). The task requires the problem solver to instruct the agent to reproduce the reference schema with instructions written on a steps array using a predefined set of arrow symbols.

##### Components (part 1)

The first part of the activity is illustrated in Fig. 6.
* • Problem solver: the student who writes the set of instructions for the agent to follow. The artefactual environment comprises cognitive tools such as the reference schema (embodied) and the support and the set of arrow symbols (embodied) available to the problem solver to reason about the task. Additionally, the problem solver can interact with the system to communicate the algorithm, writing the arrow symbols in the steps array (symbolic). These can be considered a programming language and its programming platform.
* • Agent: the other student, who executes the problem solver’s instructions by filling the colouring schema accordingly. Its actions are not resettable.
* • Environment: the schema to be coloured, described by the colour of each square (white or black).
* • Task: find the algorithm. The system’s state is defined by the status of the colouring schema, initially white and, at the end, the same as the reference schema. The algorithm is the set of agent instructions to achieve this transformation.
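The arrow programs written in the first part of the activity can be executed mechanically in the second part. A minimal interpreter for such programs might look as follows (the R/L/U/D/F instruction encoding is our own illustrative assumption, not the notation used by Code.org):

```python
# Hypothetical interpreter for a Graph Paper Programming instruction list.
# Encoding assumed for illustration only: R/L/U/D move the agent's cursor one
# square right/left/up/down; F fills (blackens) the current square.

def run_program(program, size=4):
    grid = [["white"] * size for _ in range(size)]
    row, col = 0, 0                      # the agent starts at the top-left square
    for step in program:
        if step == "R":
            col += 1
        elif step == "L":
            col -= 1
        elif step == "D":
            row += 1
        elif step == "U":
            row -= 1
        elif step == "F":
            grid[row][col] = "black"     # the only action; it cannot be undone
    return grid

# Colour the first two squares of the top row, then one square on the second row.
result = run_program(["F", "R", "F", "D", "F"])
```

Note that the interpreter only ever adds black squares, mirroring the non-resettability of the paper-based system discussed below.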
Figure 7: The second part of the Graph Paper Programming (GPP) activity, adapted from Code.org (2015). The task requires the problem solver to fill the empty colouring schema following the program provided. The figure illustrates the expected final state.

##### Components (part 2)

The second part of the activity is illustrated in Fig. 7.
* • Problem solver and Agent: they overlap and correspond to the student who follows the instructions to recreate the image. The only action the agent can perform is to paint the colouring schema, without the possibility of undoing it. The artefactual environment comprises cognitive tools such as the colouring schema (embodied) available to the problem solver to reason about the task. As before, the arrow symbols on the steps array (symbolic) are also used to interact with the system. Moreover, since the agent and the problem solver overlap, visual feedback (embodied) is always given.
* • Environment: the schema to be coloured, described by the colour of each square (white or black).
* • Task: find the final state. The system’s state is defined by the status of the colouring schema, initially white. However, the final state is not given and has to be found by executing the provided algorithm.

##### Characteristics

The characteristics of this activity have been analysed using the graphical template presented in Appendix 0.B. LABEL:fig:GPPfa-features refers to the first part of the activity, while LABEL:fig:GPPffs-features refers to the second.
* • Tool functionalities: in both parts of the activity, the tools provide various functionalities associated with algorithmic concepts, suitable for designing the algorithm, including (i) variables, which can represent the different colours of the schema squares; (ii) operators, corresponding to the arrow symbols used to instruct the agent to move from one square to another and determining whether a square is coloured black or white; (iii) sequences, which determine the order in which the actions should be executed to achieve the desired outcome; (iv) repetitions, which allow for repeating specific sequences of operations; (v) functions, which can be represented by a group of instructions that perform a specific task, such as colouring a particular shape on the grid, and that can be used multiple times.
* • System resettability: the system is not resettable since it is impossible to reverse the agent’s actions.
* • System observability: in the first part of the activity, the system is not observable, as there is no visual feedback about the agent’s actions; in the second part, the visual feedback permitted by the overlap of problem solver and agent makes the system totally observable.
* • Task cardinality: both tasks have a one-to-one mapping, with one initial state, one final state and one algorithm.
* • Task explicitness: the task elements are explicit.
* • Task constraints: the final state is unconstrained.
* • Algorithm representation: the algorithm is manifest and written, represented through the arrow symbols written in the steps array.

##### Enabling features for competencies development

This paragraph explores the enabling characteristics that support the development of competencies within this CTP. The relationship between features and skills in the two parts of the activity is summarised in Appendix 0.B in LABEL:tab:GPPfa-mapping and LABEL:tab:GPPffs-mapping.
* • Problem setting: all competencies can be activated thanks to the presence of variables, sequences, repetitions and functions in the tool functionalities. The presence of many tool functionalities, the non-resettability of the system and the algorithm representation positively affect and boost problem setting skills. In the first part of the activity, the inability to observe the system further supports the development of all these competencies, while in the second part the system observability sustains only data collection and pattern recognition. The one-to-one cardinality additionally stimulates data collection, pattern recognition and decomposition. The explicit and unconstrained definition of the task elements also promotes pattern recognition, decomposition and abstraction.
* • Algorithm: the competencies associated with algorithmic concepts, including variables, operators, sequences, repetitions, and functions, can be developed in the first part of the activity through the use of the tool functionalities in all artefactual environments. However, in the second part of the activity, where the algorithm is given, these competencies cannot be developed in the formal environment but only in the embodied and symbolic environments. The algorithm representation, the system observability, and the explicit and unconstrained definition of the task elements further enhance all these skills.
* • Assessment: since the system is not resettable, no assessment skills can be developed.

##### Inhibiting features for competencies development

* • Conditionals, parallelism and events: non-activable, as these functionalities are unavailable in the platform. Conditionals could be enabled by providing a new arrow symbol that determines the colour of a square based on certain conditions, for example, the colour of the square above. Parallelism could be enabled by introducing a new arrow symbol instructing the agent to colour two squares simultaneously.
An event that could be taken into account is that every time a cell is filled with black, the instructions in the steps array move to the next row rather than continuing from that specific cell. * • Assessment skills: the inability to reset the system impairs the development of the student’s skills. In the first part of the activity, where the problem solver is writing the set of instructions for the agent, to solve this issue the system can be reset by simply starting over with a new blank graph array and writing a new set of instructions, or simply offering the possibility to use pencil and eraser. In the second part of the activity, where the problem solver and the agent are the same people, the system can be reset by either using a new colouring schema or erasing the previously produced image and starting over. #### 3.1.3 Triangular Peg Solitaire Triangular Peg Solitaire is a strategy game that can be played in two variants: the classic, on a triangular board with 15 holes and pegs, and the paper and pencil modality. The board is initially filled with pegs, except for one hole, which is left empty (see the top of Fig. 8). This game is often used to teach problem-solving, logic and strategy skills, requiring the player to determine the most efficient sequence of moves to remove all the pegs on the board except one. Research has shown that the second activity variant can effectively promote problem-solving skills even in older students (Barbero, 2020). The game is played by following the rules, which dictate that a peg can only be moved by jumping over a neighbouring peg on the diagonal or horizontal lines (see the bottom of Fig. 8). By analysing the two versions of this game, we aim to understand their impact on promoting problem-solving and critical thinking skills, evaluating advantages and limitations and providing insights into how they can be used for CT development. Figure 8: The Triangular Peg Solitaire. 
The game is played on a board containing 15 spots, with 14 pegs placed on it at the start of the game (top). The task requires the problem solver to strategically move one peg at a time to eliminate all other pegs on the board until only one remains (bottom), jumping a peg over a neighbouring peg on the diagonal or horizontal lines, with the constraint that there must be a free landing spot for the jumping peg (adapted from Berlekamp et al. (2004)). ##### Components (board variant) The first variant of the activity, illustrated in Fig. 8, requires solving the game on a physical board. * • Problem solver and Agent: they overlap and correspond to the player who must determine the most efficient sequence of moves to remove all the pegs on the board. The only action the agent can perform is to move the pegs on the board without the possibility of undoing it. The artefactual environment comprises tools for reasoning and interacting with the system. Being the agent and the problem solver overlapped, visual feedback (embodied) of the state of the board is always given. Moreover, the problem solver can physically interact with the system by moving the pegs on the board (embodied). * • Environment: the wooden board, described by the number of pegs on it. * • Task: find the algorithm. The system’s initial state is the board with 14 pegs. The final state is the board with one peg. The algorithm to be found specifies the sequence of moves to remove all the pegs. Figure 9: A Triangular Peg Solitaire solution adapted from Bell (2007, 2008) and Barbero and Gómez-Chacón (2018). The task requires the problem solver to solve the game using paper and pencil by meticulously documenting their entire thought process. The solution can be presented in multiple ways, such as graphically using a Cartesian notation (top) or by numbering the boxes progressively and expressing the movements used (bottom). 
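The jump rule described above (a peg jumps over an adjacent peg into a free hole, and the jumped peg is removed) can be sketched as follows; the (row, col) coordinate scheme is an illustrative assumption of ours, not notation from the cited sources:

```python
# Hypothetical model of the Triangular Peg Solitaire jump rule: a peg jumps
# over an adjacent peg into a free landing hole, removing the jumped peg.
# Holes are addressed as (row, col) with 0 <= col <= row on a 5-row board.

STEPS = [(0, 1), (0, -1), (1, 0), (1, 1), (-1, 0), (-1, -1)]  # the three board axes

def on_board(r, c, rows=5):
    return 0 <= r < rows and 0 <= c <= r

def legal_moves(pegs):
    """pegs: set of occupied (row, col) holes; yields (src, jumped, dst) triples."""
    for (r, c) in pegs:
        for dr, dc in STEPS:
            jumped = (r + dr, c + dc)
            dst = (r + 2 * dr, c + 2 * dc)
            if on_board(*dst) and jumped in pegs and dst not in pegs:
                yield (r, c), jumped, dst

def apply_move(pegs, move):
    src, jumped, dst = move
    return (pegs - {src, jumped}) | {dst}

# Standard start: every hole filled except the apex.
start = {(r, c) for r in range(5) for c in range(r + 1)} - {(0, 0)}
print(len(list(legal_moves(start))))  # prints: 2 (the two jumps into the empty apex)
```

A solver for the game would simply search over `legal_moves` until a single peg remains, which is the "most efficient sequence of moves" the player is asked to find.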
##### Components (paper & pencil variant)

The second variant of the activity, illustrated in Fig. 9, requires solving the game with paper and pencil by documenting the thought process and devising a winning strategy.

* • Problem solver and Agent: they overlap and correspond to the player, who must determine the most efficient sequence of moves to remove all the pegs on the board. The only action the agent can perform is to write down the thinking process and the strategy to remove the pegs from the board, without the possibility of undoing it. The artefactual environment comprises tools for reasoning and interacting with the system. Since the agent and the problem solver overlap, visual feedback (embodied) of the state is always given. Moreover, the problem solver can physically interact with the system by writing the thinking process (symbolic).

* • Environment: the board drawn in the thinking process, described by the number of pegs on it.

* • Task: find the algorithm. The system’s initial state is the board with 14 pegs. The final state is the board with one peg. The algorithm to be found specifies the sequence of moves to remove all the pegs.

##### Characteristics

The characteristics of this activity have been analysed using the graphical template presented in Appendix 0.B. In particular, LABEL:fig:TPSboard-features refers to the board version of the activity, while LABEL:fig:TPSpp-features refers to the paper and pencil variant.
* • Tool functionalities: in both variants of the activity, the tools provide various functionalities associated with algorithmic concepts suitable for designing the algorithm, including (i) variables, which can represent the state of the board, and in particular the number of pegs on it; (ii) operators, which correspond to the moves made by the player to change the state of the board by removing pegs or moving them from one hole to another; (iii) sequences, which determine the order of moves made by the player; (iv) repetitions, which allow for repeating certain moves or sequences of moves; (v) conditionals, which refer to the decisions that the player may need to make, such as whether to jump in one direction or another; (vi) functions, which can be represented by a group of instructions that perform a specific task, for example, a function to delete pegs in a row, which can be applied to several rows.

* • System resettability: in the board variant, the system is not resettable, meaning that once the player has made a move, it cannot be undone. However, the system is resettable in the paper and pencil version, even though the player’s actions are not reversible. This is because the informal setting, in which the player documents their thought process, allows for experimentation and exploration without fear of judgement or negative consequences. In other words, the player can freely make mistakes, express uncertainty, and experiment with different strategies without permanently impacting the game.

* • System observability: in both variants, the system is observable because the agent and problem solver, who overlap, can see the state of the board at any time.

* • Task cardinality: both activity variants have a one-to-one mapping, with one initial state, one final state and one algorithm.

* • Task explicitness: the initial state of the system is given explicitly, while the final one is given implicitly, since the task instruction does not specify the position of the last remaining peg.
* • Task constraints: the algorithm is constrained by the game rules, which dictate that a peg can only be moved by jumping over a neighbouring peg along the diagonal or horizontal lines, and that there must be a free landing spot for the jumping peg.

* • Algorithm representation: in the board variant of the game, the algorithm is latent: since it is performed physically through the player moving the pegs, it is not permanently recorded and cannot be revisited. On the other hand, in the paper and pencil modality, the player writes down the algorithm, and it becomes a permanent record that can be reviewed and used as a reference. This allows the player to experiment freely and change their approach without having to start over each time. The written representation of the algorithm in the paper and pencil modality provides a clear and concrete way to represent the player’s thought process and strategy.

##### Enabling features for competencies development

This paragraph explores the enabling characteristics that support the development of competencies within this CTP. The relationship between features and skills in the two activity variants is summarised in Appendix 0.B in LABEL:tab:TPSboard-mapping and LABEL:tab:TPSpp-mapping.

* • Problem setting: all competencies can be activated in both variants of the game thanks to the presence of variables, sequences, repetitions and functions in the tool functionalities. The presence of many tool functionalities, the non-resettability of the system (in the board version of the game), the implicit and constrained definition of the task elements and the algorithm representation positively affect and boost problem setting skills. The system’s observability sustains data collection and pattern recognition skills, while the one-to-one cardinality and the resettability of the system (in the paper and pencil variant of the game) also stimulate decomposition.
* • Algorithm: the competencies associated with algorithmic concepts, including variables, operators, sequences, repetitions, conditionals and functions, can be developed in all artefactual environments. The system observability, the implicit and constrained definition of the task elements, and the manifest written representation of the algorithm (in the paper and pencil variant of the game) can further enhance these skills.

* • Assessment: no assessment skills can be developed in the board variant of the activity, since the system is not resettable. In the paper and pencil version of the activity, algorithm debugging can be activated in all artefactual environments due to the resettability of the system and the manifest, written representation of the algorithm; constraint validation can be developed because there are constraints on the algorithm that can be verified, since the system is resettable; optimisation can be activated as it only requires the resettability of the system; generalisation can be activated through the system’s resettability and the presence of variables and functions. The tool functionalities as well as the system observability help develop these competencies.

##### Inhibiting features for competencies development

* • Parallelism and events: non-activable, as these functionalities are unavailable in the activity. It is possible to develop parallelism and events in the game by incorporating a variant where multiple pegs can be moved simultaneously, or by introducing multiple players who can make moves simultaneously. A way to incorporate events would be to use technology, such as a computer program or app to play the game, creating programmed events that the player’s actions could trigger. For example, if the player removes a certain peg, the computer could trigger an event that changes the board’s appearance or displays a message.
However, it is important to note that these changes would result in a different game, with a different set of objectives and challenges, and might not necessarily have the same educational benefits as the original Triangular Peg Solitaire.

* • Assessment skills: the inability to reset the system impairs the development of the student’s skills in the first variant of the activity. In the paper and pencil version, only system state verification is non-activable, because both the initial and final states are provided. The task can be adjusted by modifying the game such that only the initial state is provided and the final state is unknown, for example, letting the player determine the specific position of the last peg. This would make the task more challenging and require the player to develop their skills in system state verification.

#### 3.1.4 Computational Thinking Test (CTt)

The CTt is an assessment tool designed to evaluate the CT skills of students between the ages of 12 and 13 (Román-González, 2015; Román-González et al., 2017b, a; Román-González et al., 2018). It aligns with the work of researchers such as Brennan and Resnick (2012); Kalelioğlu (2015), who have identified key computational concepts related to algorithmic skills. Additionally, the CTt is designed to align with the standard interfaces used by universally recognised organisations in this context, such as Code.org, which utilise visual blocks to teach coding. The test consists of 28 multiple-choice questions; however, we will analyse only two.

Figure 10: Item 7 of the Computational Thinking Test (CTt) adapted from Román-González et al. (2017b). The task requires the problem solver to correct a set of instructions that should make the agent draw a rectangle.

##### Components (item 7)

The first CTP analysed, called “item 7”, is depicted in Fig.
10 and has been designed to evaluate the student’s ability to identify and fix errors in code, in a script that does not involve computational nesting concepts but only the concept of repetitions.

* • Problem solver: the student taking the test, who is presented with an incorrect code script that must be fixed. The artefactual environment comprises only cognitive tools, thus it is impossible to interact with the system, which is considered static. The reasoning tools provided include the sketch in the problem description (embodied) and the visual blocks interface (symbolic), which allows the student to think about the instructions and imagine testing different solutions.

* • Agent: the artist responsible for drawing the rectangle according to the instructions provided. It is an abstract representation, not a physical entity that can be observed or interacted with. The actions it can perform are moving and turning, considered reversible since they must be corrected.

* • Environment: the place where the imaginary rectangle should be drawn, described by the number of drawn rectangle segments, their length and orientation.

* • Task: find the algorithm. The system’s initial state is the imaginary rectangle not yet drawn, while the final state is the $50\times 100$ pixels rectangle drawn. The given algorithm is not valid; for this reason, its final and correct version has to be found.

Figure 11: Item 14 of the Computational Thinking Test (CTt) adapted from Román-González et al. (2017b). The task requires the problem solver to select the correct set of instructions to make the agent cross a predefined path to reach a desired position.

##### Components (item 14)

The second CTP analysed, called “item 14”, is depicted in Fig. 11 and has been designed to evaluate the student’s ability to organise a set of commands in a logical and orderly manner, in a script that does not involve computational nesting concepts but only two specific computational concepts: repetitions and conditionals.
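The control structure item 14 targets — a repetition that advances the agent along the marked path, plus a conditional that governs each step — can be sketched in ordinary code. This is a hedged illustration only: the `follow_path` helper, the cell coordinates, and the toy maze are invented stand-ins for the visual blocks and maze in the original figure.

```python
def follow_path(maze, start, goal):
    """Walk a marked path one cell at a time until the goal is reached.

    `maze` maps each cell to the next cell on the marked path; the
    while loop plays the role of the repetition block and the `if`
    plays the role of the conditional block.
    """
    pos = start
    steps = [pos]
    while pos != goal:            # repetition: "repeat until at the Ghost"
        if pos not in maze:       # conditional: no marked path ahead
            raise ValueError("the agent left the predefined path")
        pos = maze[pos]           # operator: move one cell forward
        steps.append(pos)
    return steps

# A toy 4-cell corridor standing in for the figure's maze.
toy_maze = {(0, 0): (0, 1), (0, 1): (0, 2), (0, 2): (0, 3)}
```

Selecting the correct script among the four options amounts to choosing the variant in which the repetition and conditional are composed as above, so that the loop terminates exactly when the goal cell is reached.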
* • Problem solver: the student taking the test, who is given four code scripts from which they must select the appropriate set of moving instructions. The artefactual environment comprises only cognitive tools, including the sketch of the maze in the problem description (embodied) and the four sets of instructions in the form of visual blocks (symbolic), thus it is impossible to interact with the system.

* • Agent: Pac-Man, a representation of an abstract entity that can move in the maze to reach the Ghost by following the predefined pattern marked out. Its actions are reversible since they must be corrected.

* • Environment: the maze, described by the Pac-Man and the Ghost positions, and the path to be followed.

* • Task: find the algorithm. The system’s initial state corresponds to Pac-Man and the Ghost in their starting positions, while in the final state, Pac-Man is in front of the Ghost and has crossed the predefined path. The algorithm is not given: since four code scripts are provided, the correct one has to be found to reach the desired outcome.

##### Characteristics

The characteristics of this activity have been analysed using the graphical templates presented in LABEL:fig:CTt7-features and LABEL:fig:CTt14-features in Appendix 0.B.

* • Tool functionalities: in both variants of the activity, the tools provide various functionalities associated with the visual blocks interface, including (i) variables; (ii) operators, which correspond to the agent actions contained in the turquoise blocks; (iii) sequences; (iv) repetitions, represented by the loop in the pink blocks; (v) conditionals, described by the if statements in the blue blocks (only in the second activity); (vi) functions.

* • System resettability: even if the problem solver cannot interact with the system, the system is resettable in both activities, since the algorithm has to be corrected or selected from a set.
* • System observability: in both tests, the system is not observable, because the agents in question, the artist and Pac-Man, are imaginary entities and their actions, such as drawing or moving, are not physically visible. The problem solver must rely on the instructions provided to understand the actions taken by the agent, and cannot observe their actual outcome.

* • Task cardinality: both CTPs have a one-to-one mapping, with one initial state, one final state and one algorithm.

* • Task explicitness: all elements are given explicitly.

* • Task constraints: the algorithms in both tasks are constrained, since the computational concepts addressed are already determined and limited to the specific notions presented.

* • Algorithm representation: the algorithm is manifest and written in both activities.

##### Enabling features for competencies development

This paragraph explores the enabling characteristics that support the development of competencies within this CTP. The relationship between features and skills in the two activity variants is summarised in Appendix 0.B in LABEL:tab:CTt7-mapping and LABEL:tab:CTt14-mapping.

* • Problem setting: all competencies can be activated in both activities thanks to the presence of variables, sequences, repetitions and functions in the tool functionalities. The presence of many tool functionalities, the system’s non-observability, the algorithm’s constrained definition, and its written representation promote the development of all problem setting skills. The system resettability, the one-to-one cardinality and the explicit representation of elements support other competencies.

* • Algorithm: the competencies associated with the algorithmic concepts provided by the tool functionalities can be developed in all artefactual environments. The non-observability of the system, the explicit and constrained definition of the task elements, and the manifest written representation of the algorithm can further enhance these skills.
* • Assessment: algorithm debugging can be activated in all artefactual environments due to the resettability of the system and the written algorithm; since the system can be reset, the constraints on the algorithm can also be checked and corrected, allowing for the development of constraint validation; similarly, optimisation can be activated, since the resetting capability is sufficient; generalisation can be developed thanks to the system’s resettability and the presence of variables and functions. The available tool functionalities further encourage the development of these competencies.

##### Inhibiting features for competencies development

* • Conditionals: non-activable in the first activity, since this functionality is not present in the visual blocks provided to the students.

* • Parallelism and events: non-activable in both activities because, as before, they are not available in the visual blocks provided to the students. To activate these skills, the visual blocks must include the tools for creating parallelism and events in the algorithm.

* • System state verification: non-activable because the initial and final states are provided.

### 3.2 Robotics activities

In robotics, educational robotics and physical computing activities involve using physical robotic hardware equipped with controllers, sensors, and actuators. These robotic agents are programmed to perform specific behaviours in response to the environment. To achieve this, problem solvers are typically provided with a programming platform allowing interaction with the robot. There are numerous commercially available programming platforms for educational robots, each offering its own hardware and programming environment (Bravo et al., 2017). The agents can be programmed using formal textual programming languages, such as Python (Noone and Mooney, 2018), or symbolic visual programming languages, which are often based on blocks, like Blockly or Scratch (Shin et al., 2014).
Some platforms also allow for embodied physical interactions, enabling users to manipulate the robot through touch buttons or tangible symbols that are scanned and executed (Bers and Horn, 2010; Mussati et al., 2019). In this section, we aim to analyse various activities based on different types of agents, including the Thymio II, the Ozobot, and the Micro:bit.

#### 3.2.1 Thymio Lawnmower Mission

Thymio Lawnmower Mission is an educational robotics activity designed by Chevalier et al. (2020) to promote the development of students’ CT skills through the Thymio II robot. The Thymio II is a widely used educational robot equipped with various sensors, including proximity sensors, an accelerometer, a remote control receiver, motors, a speaker, and LEDs, distributed throughout its body (Riedo et al., 2013; Shin et al., 2014). The authors aimed to address the issue of students spending excessive time programming and not enough time problem-solving, referred to as the trial-and-error loop, by conducting an instructional intervention on two groups of primary school students.

Figure 12: The Thymio Lawnmower Mission adapted from Chevalier et al. (2020). A group of pupils must program the Thymio II robot to pass over all eight green lawn squares and avoid the fence (left). A special visual programming language platform, with graphical icons that are directly interpretable, is used for this task (right).

##### Components

In the Thymio Lawnmower Mission, illustrated in Fig. 12, the Thymio II robot must systematically traverse all green lawn squares, much like a lawnmower would mow a lawn.

* • Problem solver: the group of students performing the task, who must program the agent’s lawnmower behaviour.
The artefactual environment comprises tools designed for reasoning and interaction, including a graphical programming environment called VPL (symbolic), which allows for the creation of sensor-action relationships to determine the robot’s behaviour, the agent (embodied) and the visual feedback (embodied).

* • Agent: the Thymio II, whose actions consist in moving around the playground, by changing velocity and orientation, and using sensors to detect obstacles. All actions are considered irreversible.

* • Environment: the playground, i.e., a lawn area surrounded by a fence, divided into eight green squares and one grey square, the garage. Its state is defined by the grass condition, or the number of squares passed over by the agent.

* • Task: find the algorithm. The initial state is the lawn with tall grass, meaning the robot has not passed over any of the squares that compose it. The final state is the same lawn with the grass mowed, meaning the robot has passed over all green squares. The algorithm is the set of moving actions to reach the system’s final state from the initial one.

##### Characteristics

In the Thymio Lawnmower Mission, the students who participated in the activity were divided into two groups, the control group and the test group, on which different conditions were imposed. The control group was allowed to complete the task without any constraints. In contrast, the test group was subjected to an instructional intervention that blocked the programming interface at certain times to overcome the trial-and-error loop. As a result, the two activity variants have distinct characteristics, analysed using the graphical templates shown in LABEL:fig:TLMcontrol-features and LABEL:fig:TLMtest-features in Appendix 0.B.
* • Tool functionalities: the system provides a comprehensive set of tools for the problem solver to create and control the behaviour of the agent to solve the task, including (i) variables, such as the values of sensors or the state of the robot; (ii) operators, which represent the basic actions that the agent can perform; (iii) sequences, which are not represented by a specific block in the VPL interface, but can be created by arranging blocks in a specific order; (iv) functions, which are not represented by blocks in the VPL but refer to the possibility of conceptually grouping blocks of code associated with a particular behaviour to produce outputs given inputs; (v) events, which are directly represented in the graphical interface by sensor-action relationships, allowing the robot to perform actions in response to stimuli, such as detecting an obstacle.

* • System resettability: in the control group, the problem solvers can reset the system directly by physically moving the Thymio II agent in the environment and restarting the task by repositioning it in the garage. On the other hand, those in the test group do not have this option, as they cannot directly interact with the agent and cannot modify the algorithm, since it has to be first written and then executed.

* • System observability: the platform provides real-time visual feedback, making the system observable.

* • Task cardinality: the task has a one-to-one mapping, with an initial and final state and an algorithm.

* • Task explicitness: the elements of the task are given explicitly, as the student is provided with clear instructions on what the outcome should look like.

* • Task constraints: the algorithm is unconstrained.

* • Algorithm representation: the algorithm is written in the workspace and expressed by the set of blocks and their connections.

##### Enabling features for competencies development

This paragraph explores the enabling characteristics that support the development of competencies within the task.
The relationship between features and skills in the two activity variants is summarised in Appendix 0.B in LABEL:tab:TLMcontrol-mapping and LABEL:tab:TLMtest-mapping.

* • Problem setting: all competencies can be activated thanks to the presence of variables, sequences and functions in the tool functionalities. The presence of many tool functionalities positively affects and boosts problem setting skills. The manifest and written representation of the algorithm can further encourage the development of these skills. In the control group, since the system is resettable, data collection, pattern recognition and decomposition are promoted, while in the test group, abstraction and data representation are also encouraged. The one-to-one cardinality of data elements also facilitates data collection, pattern recognition and decomposition. The system observability also supports data collection and pattern recognition.

* • Algorithm: all competencies associated with the algorithmic concepts enabled by the tool functionalities, meaning variables, operators, sequences, functions and events, can be activated in all three artefactual environments and promote one another. These are further enhanced by the manifest and written algorithm representation, the system observability, and the explicit and unconstrained definition of the task elements. The one-to-one cardinality helps to further develop variables and operators.

* • Assessment: regarding the control group, algorithm debugging can be activated in all artefactual environments, since the algorithm has to be found, the system is resettable, and the algorithm is manifest and written; optimisation can be developed thanks to the resettability of the system; generalisation can be activated through the system’s resettability and the presence of variables and functions. The available tool functionalities and the system observability help develop these skills as well.
On the other hand, the system is non-resettable in the test group, and no assessment skills can be developed.

##### Inhibiting features for competencies development

The lack of specific features may hinder the development of particular skills.

* • Repetitions, conditionals and parallelism: non-activable in both the control and test groups, as these functionalities are unavailable in the VPL programming platform. Therefore, to develop these skills, it is necessary to change to a textual programming language such as ASEBA.

* • Algorithm debugging: non-activable in the test group, since the system is not resettable.

* • System state verification: non-activable in both the control and test groups, because the initial and final states of the system are given, and in the test group, also because the system is not resettable.

* • Constraint validation: non-activable in both the control and test groups, due to the absence of constraints on the algorithm, and in the test group, because the system is not resettable.

* • Optimisation: non-activable in the test group because the system is not resettable.

* • Generalisation: non-activable in the test group due to the non-resettability of the system.

#### 3.2.2 Remote Rescue with Thymio II (R2T2)

R2T2 is another collaborative educational robotics activity, presented by Mondada et al. (2016) to promote STEM education in schools and encourage students towards careers in these fields.

Figure 13: The Remote Rescue with Thymio II (R2T2) mission on Mars adapted from Mondada et al. (2016). Sixteen worldwide teams of pupils collaborate with 16 Thymio II robots to restart the main generator of a simulated damaged Mars power station (left) in five phases, using visual or textual programming platforms (right).

##### Components

The R2T2 activity, illustrated in Fig.
13, is a rescue operation on a Mars station, whose goal is to assess the damage to the power plant and restart the main generator by remotely controlling 16 Thymio II robots. The activity is divided into five phases, each with a specific objective. In the first phase, the robots must enter the station and push away an obstacle blocking the main door. In the second phase, the robots must stand on control spots to activate access to the generator. In the third phase, the robots must look into the generator through a small window. In the fourth phase, the robots must turn a light on when detecting the generator rotor using proximity sensors, and off when it is no longer visible, thus estimating the generator speed. In the final phase, the generator is restarted, and the mission is completed.

* • Problem solver: the group of students performing the task, who must program the agents’ behaviours to restart the main generator. The artefactual environment comprises tools designed for reasoning, such as paper and pencils and another robot that is physically accessible. The tools available to interact with the system include the programming platform (symbolic), with the two available programming environments, VPL and ASEBA, a textual programming language (Magnenat et al., 2011), and the five webcams installed around the playground, which provide delayed but continuous visual feedback (embodied) through YouTube video streams.

* • Agent: the 16 remote-controlled Thymio II, which can move around the playground, accelerating and rotating, use proximity sensors, and turn some lights on and off. All actions are considered irreversible.

* • Environment: the playground, i.e., the Mars station, characterised by different descriptors used in the different mission stages, such as the obstruction by the obstacle, the covering of the control spots and, finally, the restart of the generator.

* • Task: find the algorithm.
In the initial state, the generator is not working, while in the final state it has been restarted. The algorithm is the set of moving actions to reach the system’s final state from the initial one.

##### Characteristics

LABEL:fig:R2T2-features in Appendix 0.B provides the graphical template used to analyse the task components and characteristics.

* • Tool functionalities: the system provides a comprehensive set of tools for the problem solver to create and control the agent’s behaviour to solve the task, depending on the programming environment used. VPL offers the possibility to use variables, operators, sequences, functions and events. Additionally, ASEBA offers control flows such as repetitions and conditionals. Furthermore, parallelism is possible, since it refers to the ability to run multiple processes simultaneously, in this case, the Thymio II robots performing the rescue operation in parallel. The agents can execute their tasks concurrently without waiting for each other to complete them.

* • System resettability: the system cannot be reset due to the irreversible nature of the actions carried out by the robots. Once the robots take an action, the change in the system’s status is permanent and cannot be undone. Furthermore, the physical separation between the problem solvers and the system means there is no immediate way to reset the system.

* • System observability: the delayed but continuous visual feedback makes the system fully observable.

* • Task cardinality: the task has a one-to-one mapping, with an initial and final state and an algorithm.

* • Task explicitness: all elements are given explicitly.

* • Task constraints: the algorithm is unconstrained.

* • Algorithm representation: the algorithm is manifest and written in the programming platform.

##### Enabling features for competencies development

The relationship between features and skills is summarised in Appendix 0.B in LABEL:tab:R2T2-mapping.
* • Problem setting: all competencies can be activated thanks to the presence of variables, sequences and functions in the tool functionalities. The presence of all tool functionalities, the manifest written representation of the algorithm and the non-resettability of the system further encourage the development of these skills. The system observability supports data collection and pattern recognition. The one-to-one cardinality, in addition to these skills, stimulates decomposition. The explicit and unconstrained definition of the task elements also promotes pattern recognition, decomposition and abstraction.

* • Algorithm: competencies across all artefactual environments can be triggered, since the tool functionalities enable all algorithmic concepts. The system observability, the explicit and unconstrained definition of the task elements, and the manifest written algorithm representation further enhance these.

* • Assessment: since the system is not resettable, no assessment skills can be activated.

##### Inhibiting features for competencies development

The inability to reset the system restricts the activation of all assessment skills. The task can be adjusted by incorporating a mechanism for resetting the system to a previous state, enabling the problem solver to correct errors made during the implementation of the algorithm, explore different solutions, and learn from their mistakes. Additionally, to enable system state verification, it is necessary to omit the initial or the final state. Furthermore, to develop the constraint validation skill, it is necessary to incorporate constraints into the algorithm.

#### 3.2.3 Ozobot maze

The Ozobot Maze activity is a screenless robotics task proposed by Bryndová and Mališů (2020) aimed at teaching CT skills to primary school students in the Czech Republic.
The educational robot used in this task is the Ozobot, a small programmable robot used to introduce students to coding, equipped with sensors to follow black lines and read colour patterns, called Color Codes, that change its speed, direction and movements.

Figure 14: The Ozobot maze adapted from Bryndová and Mališů (2020). The task requires the pupil to instruct the Ozobot to cross a maze, avoiding obstacles and reaching the room where the red person is. Commands such as increasing the speed, changing direction and making some cool movements (spinning like a tornado) are given in Color Codes.

##### Components

In the Ozobot Maze activity, illustrated in Fig. 14, the robot should be guided through a maze to reach the room where the red person is.

* • Problem solver: the student who creates a suitable sequence of instructions using Color Codes to guide the Ozobot through the maze. The artefactual environment comprises tools for reasoning and interacting with the system. Predefined stickers or markers to fill the empty Codes with colour sequences (embodied) are used to give the robot the correct instructions to achieve the goal. The visual feedback (embodied) lets the problem solver observe the agent and its movements in the playground.

* • Agent: the Ozobot, which can move around in the playground by changing velocity and orientation. This action is not reversible.

* • Environment: the playground, i.e., the house map, whose state is defined by the agent’s position relative to the red person.

* • Task: find the algorithm. The initial state is the empty maze with the Ozobot positioned near the starting point. The system’s final state is the Ozobot reaching the end of the maze, in the room with the red person, with all the Color Codes filled in. The algorithm is the set of agent instructions, expressed by the Color Codes, to reach the system’s final state from the initial one.
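Conceptually, the Color Codes act as a tiny instruction set: each short colour sequence maps to one movement command. The sketch below illustrates that idea only; the colour-to-command table is invented for illustration and does not reproduce Ozobot’s actual Color Code chart.

```python
# Hypothetical colour-to-command table (NOT the real Ozobot code chart):
# each three-colour pattern read by the line sensors selects one command.
CODE_TABLE = {
    ("green", "black", "red"): "turn_left",
    ("red", "black", "green"): "turn_right",
    ("blue", "black", "blue"): "go_straight",
    ("blue", "green", "blue"): "speed_up",
}

def decode(colour_sequence):
    """Translate a flat list of colours, read three at a time, into commands.

    Unrecognised patterns fall back to the robot's default behaviour
    of simply following the black line.
    """
    commands = []
    for i in range(0, len(colour_sequence), 3):
        code = tuple(colour_sequence[i:i + 3])
        commands.append(CODE_TABLE.get(code, "follow_line"))
    return commands
```

In these terms, the student’s algorithm is exactly the colour sequence laid along the maze: `decode(["green", "black", "red", "blue", "black", "blue"])` yields `["turn_left", "go_straight"]`, and the "operators" and "sequences" identified in the characteristics below correspond to the individual codes and their order.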
##### Characteristics The characteristics of this activity have been analysed using the graphical templates shown in LABEL:fig:Ozobot-features in Appendix 0.B. * • Tool functionalities: the system provides a comprehensive set of tools for the problem solver to create and control the behaviour of the agent to solve the task, including (i) variables can be used to store values such as the position of the robot in the maze; (ii) operators are basic actions that the agent can perform, represented by Direction Codes such as moving straight, turning left or right; (iii) sequences represent the set of instructions used to control the behaviour of the robot in a step-by-step manner and that the Ozobot must follow to complete the task; (iv) repetitions are a way of repeating the same instructions multiple times and refer to the possibility of repeating the same Color Code multiple times, for example, if the agent encounters the same type of intersection repeatedly in the path and the same Color Code is used to specify the direction the agent should take; (v) conditionals are used to make decisions based on certain conditions, for example understanding what to do at an intersection; (vi) functions can be reflected in the different types of Color Codes that can be reused in different situations and map inputs to outputs, such as mapping a specific type of intersection to a specific direction. * • System resettability: the system is not resettable since it is impossible to change the Color Codes once they have been filled in. * • System observability: the real-time visual feedback makes the system observable. * • Task cardinality: the task has a one-to-one mapping. * • Task explicitness: the elements of the task are given explicitly, as the student is provided with clear instructions on what the outcome should look like. * • Task constraints: the algorithm is unconstrained. 
* • Algorithm representation: the algorithm is manifest and written, expressed by the set of Color Codes. ##### Enabling features for competencies development The relationship between features and skills is summarised in Appendix 0.B in LABEL:tab:Ozobot-mapping. This paragraph explores the enabling characteristics that support the development of competencies within the task. * • Problem setting: all competencies can be activated thanks to tool functionalities such as variables, sequences and functions. The manifest and written algorithm representation, as well as the non-resettability of the system, can further encourage the development of these skills. The system observability supports data collection and pattern recognition. The one-to-one cardinality, in addition, stimulates decomposition. The explicit and unconstrained definition of the task elements promotes pattern recognition, decomposition and abstraction. * • Algorithm: all competencies associated with the algorithmic concepts enabled by the tool functionalities, meaning variables, operators, sequences, repetitions, conditionals and functions, can be activated in all three artefactual environments and promote one another. These are further enhanced by the manifest and written representation of the algorithm, the system observability, and the explicit and unconstrained definition of the task elements. The one-to-one cardinality helps to enhance some of these skills as well. * • Assessment: since the system is not resettable, no assessment skills are activable. ##### Inhibiting features for competencies development * • Parallelism and events: non-activable since the related features are missing in the tool functionalities. To develop these skills, it is possible to switch to a different interaction tool, such as OzoBlockly, a visual programming language designed to code Ozobot Evo robots and including these functionalities. 
* • Assessment skills: the non-resettability of the system hinders the development of assessment abilities. The activity can be improved by allowing the problem solver to reset the system to a previous state, for example by letting them change the Color Code stickers and move the robot back to the starting position. This way, the problem solver can correct any mistakes made during the implementation of the algorithm, experiment with different solutions, and learn from their mistakes. To develop system state verification, it is also essential not to reveal either the initial or the final state. Moreover, constraints should be imposed on the algorithm to develop constraint validation skills. #### 3.2.4 Mini-golf challenge with micro:bit In robotics, physical computing activities involve using microcontrollers, sensors, and other electronic components to build and program interactive systems. To enhance the learning experience, various off-the-shelf robotic kits have been developed that allow students to construct robots easily and control them through a graphical user interface. In these activities, students are often engaged in an initial phase of actively constructing the system using recycled materials, electronic circuits and programming the robot. These activities evaluate the students’ understanding of algorithmic concepts, problem-solving skills, knowledge of physics and engineering, creativity, and ability to work collaboratively. Figure 15: The BBC micro:bit (left) and its block programming interface (right). One such physical computing activity is the Mini-golf challenge, proposed by Assaf et al. (2021). In this activity, students are tasked with programming a mini-golf lane’s moving and interactive elements using the BBC micro:bit (Ball et al., 2016; Microbit, 2016). The micro:bit, depicted in Fig. 
15, is a pocket-sized computer that can be programmed using Microsoft’s MakeCode editor (Makecode, 2016), which provides a user-friendly interface with colour-coded blocks similar to Scratch and the ability to switch to JavaScript to view the text-based code. Figure 16: The Mini-golf challenge adapted from Assaf et al. (2021). The task requires a group of pupils to define the behaviour of the mini-golf lane movable obstacles, sounds, and lights by programming the BBC micro:bit. ##### Components In the Mini-golf challenge, illustrated in Fig. 16, the objective is to program a mini-golf lane’s moving and interactive elements. * • Problem solver: the group of students who must program the micro:bit. The artefactual environment provides paper and pencil (embodied), a cognitive tool to support the thinking phase. Other tools are provided to interact with the system, including the toolkit (embodied) whose components can be assembled and disassembled at will, the visual programming language offered by the MakeCode editor (symbolic), and the visual feedback (embodied). * • Agent: the micro:bit, which can use sensors, control the movement and actions of the mini-golf elements, and turn LEDs and speakers on and off. All actions are considered irreversible. * • Environment: the assembled toolkit, which consists of various components, including a ball, speakers, and lights. Its state is described by the state of its elements, including the ball’s position and whether the lights and speakers are on. * • Task: creation act. The initial state is given by the assembled toolkit. The system’s final state is open-ended, as is the students’ algorithm, which defines the behaviour of the mini-golf station’s elements. ##### Characteristics The characteristics of this activity have been analysed using the graphical templates shown in LABEL:fig:minigolf-features in Appendix 0.B. * • Tool functionalities: the MakeCode editor allows using all the tool functionalities we defined. 
* • System resettability: the system can be directly reset by physically moving the ball back to its starting position, and resetting the state of the lights and speakers, for example, by turning them off. Additionally, the MakeCode editor includes a convenient button to streamline the agent’s reset process. * • System observability: the real-time visual feedback makes the system observable. * • Task cardinality: the task has a one-to-one mapping. * • Task explicitness: the initial state of the toolkit, including the ball’s position, the state of the lights, and the speakers, is not specified. * • Task constraints: no constraints are imposed on the two elements to be found. * • Algorithm representation: the algorithm is manifest and written in the programming platform. ##### Enabling features for competencies development The mapping between features and skills is summarised in Appendix 0.B in LABEL:tab:minigolf-mapping. This passage analyses the characteristics that support the development of competencies within the task. * • Problem setting: all competencies can be activated thanks to the presence of variables, sequences and functions in the tool functionalities. The availability of numerous tool functionalities, the written algorithm representation and the direct resettability of the system encourage the development of problem setting skills. The implicit description of the task elements creates an environment of uncertainty, allowing for multiple interpretations and solutions, stimulating problem setting skills. The system’s observability allows the development of data collection and pattern recognition, while the one-to-one cardinality also encourages decomposition. The unconstrained definition of task elements fosters pattern recognition, decomposition, and abstraction. * • Algorithm: all competencies can be activated since the tool functionalities enable all algorithmic concepts in all three artefactual environments. 
The system observability, the implicit and unconstrained definition of the task elements and the algorithm’s manifest written representation further enhance these. The one-to-one cardinality helps to enhance some algorithmic skills as well. * • Assessment: algorithm debugging and system state verification can be developed in all artefactual environments by the direct resettability of the system and the written representation of the algorithm; optimisation can be activated as the resettability of the system alone is sufficient; generalisation is enabled through the system’s resettability and the presence of variables and functions. Tool functionalities, system observability and the implicit definition of the task elements further support their development. ##### Inhibiting features for competencies development The constraint validation skill cannot be developed as the algorithm and the final state to be found are unconstrained. To encourage the development of this competence, the activity can be adapted by introducing defined constraints, such as a maximum number of moves for the ball to reach the hole or that certain elements of the mini-golf station must be activated in a specific order. These constraints will require students to evaluate the feasibility of their solutions within the specified limitations and assess their adherence to the established criteria. ### 3.3 Virtual activities The last domain of educational activities analysed includes CTP characterised by the presence of a virtual system. These activities typically involve programming a virtual agent to perform a specific or a set of tasks in a virtual environment. In contrast to CTP with a physical environment, virtual activities always provide a virtual interface, often including a comprehensive programming platform that allows the problem solver to use different types of programming languages, such as textual programming language and visual programming language. 
In some virtual games, the problem solver can interact directly with the agent by clicking on it. This allows for greater flexibility in terms of how the problem solver can program the virtual agent. Additionally, virtual activities often include debugging tools, which allow the problem solver to identify and fix errors in their code. #### 3.3.1 Classic Maze Figure 17: The Angry Bird hitting the Green Pig maze adapted from Studio.code.org (2020a). The problem solver must write a program to get the Angry Bird through the maze to hit the Green Pig (left) by selecting the instruction blocks (middle) and assembling them in the workspace (right). In addition to unplugged activities, Code.org is a platform that offers many coding activities for children based on Blockly, a Google framework for block-based programming (Lovett, 2017). The Classic Maze is part of the Hour of Code offered by Code.org, a worldwide effort to broaden participation in the computer science field (Studio.code.org, 2020b). In this activity, participants must use block-based programming to guide different characters through a maze. The characters include ones from popular franchises such as Angry Birds, Plants vs Zombies, and Scrat from Ice Age. In this way, participants learn the foundations of computer science and algorithmic concepts by successfully guiding the characters through the maze. We decided to analyse two activities of the Classic Maze. Appendix 0.B includes the graphical template used to analyse the components and characteristics of the task, found in LABEL:fig:ABHGP-features and LABEL:fig:PZ-features, and the mapping between the CTP features and the competencies in LABEL:tab:ABHGP-mapping and LABEL:tab:PZ-mapping. ##### Components In the first activity of the Classic Maze, presented in Román-González et al. (2018) and illustrated in Fig. 17, the Angry Bird should be guided through a maze to reach and hit a Green Pig. 
* • Problem solver: the student performing the task who must program the agent’s behaviour. The artefactual environment comprises tools designed for reasoning and interacting with the system simultaneously, including the programming platform composed of the virtual scenario (embodied artefact), the blocks and the workspace (symbolic artefacts). The system also furnishes various hints to users (embodied artefact), including video tutorials, guidance on how to use the platform, command recommendations, suggestions on the number of blocks required to solve the task, and feedback on the problem solver’s progress towards a solution. * • Agent: the Angry Bird, programmed to navigate a maze and hit the other character. The agent’s actions comprise moving forward, turning left, and turning right. Moving forward is considered a non-reversible action, whereas turning is reversible, as a turn left can be easily undone by a turn right, and vice versa. * • Environment: the virtual scenario where the two creatures are located, described by their positions. * • Task: find the algorithm. The initial state corresponds to the animals’ initial positions, while in the final state the two characters occupy the same position. The algorithm is the set of moving actions needed to reach the system’s final state from the initial one. Figure 18: The Plants vs Zombies maze adapted from Studio.code.org (2020c). The problem solver must write a program to get the Zombie through a maze to eat the plant (left) by selecting the instruction blocks (middle) to be assembled in the workspace (right). 
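The "find the algorithm" structure of these maze tasks can be made concrete with a small simulator. This is an illustrative sketch of ours: the block names mirror the move/turn operators described above, but the maze layout and the program are toy examples, not the actual Code.org level:

```python
# Hedged sketch: a toy simulator for a block program of the kind used in
# the Classic Maze. Block names, grid, and target are illustrative.

# Headings as (dx, dy): east, north, west, south.
HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def run_program(start, heading_idx, program, target):
    """Execute a sequence of operator blocks on a grid; return True if the
    agent ends on the target square (the Green Pig's position)."""
    x, y = start
    for block in program:
        if block == "move_forward":
            dx, dy = HEADINGS[heading_idx]
            x, y = x + dx, y + dy
        elif block == "turn_left":
            heading_idx = (heading_idx + 1) % 4  # rotate counter-clockwise
        elif block == "turn_right":
            heading_idx = (heading_idx - 1) % 4  # rotate clockwise
        else:
            raise ValueError(f"unknown block: {block}")
    return (x, y) == target

# One algorithm that solves a toy maze: two squares east, then one north.
program = ["move_forward", "move_forward", "turn_left", "move_forward"]
solved = run_program(start=(0, 0), heading_idx=0, program=program, target=(2, 1))
```

The one-to-one cardinality of the task corresponds here to a single `start`, a single `target`, and (for a shortest solution) a single valid `program`.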
##### Characteristics * • Tool functionalities: the programming platform enables problem solvers to reason about the task at hand and program the movements of the Angry Bird by employing a set of predefined blocks, each representing a specific action the agent is authorised to perform: (i) variables, while not explicitly delivered in the blocks of the programming platform, can be inferred from the visual feedback provided by the system, allowing the problem solver to store values such as the position of the characters; (ii) operators represent the basic actions that the agent can perform, such as moving or turning, and are represented by distinct blocks, depicted in cyan, in the visual programming language; (iii) sequences are a series of blocks to be executed in a specific order and are implicitly conveyed by the collection of blocks; (iv) functions, which are self-contained blocks of code that perform a specific task and can be executed multiple times with different inputs (e.g., different initial positions of characters), are a concept of relative complexity. It is not certain that the problem solver will recognise them as such rather than just blocks. * • System resettability: the platform provides a direct means of resetting the task through the “start over” button, even if some agent actions are irreversible. This allows the problem solver to start over and try a different approach if necessary. * • System observability: the system provides real-time visual feedback through animations and graphical representations of the system state and its changes, making it observable. The problem solver can monitor the effects of the agent’s actions on the system, allowing them to have a complete understanding of the system’s current state and to make informed decisions in their problem-solving process. 
* • Task cardinality: the task has a one-to-one mapping, with only one starting position for the animal elements, one final position for the Angry Bird to be placed, and only one algorithm to be found. * • Task explicitness: the elements of the task are given explicitly through the depiction of the scenario that clearly shows the animals’ positions. * • Task constraints: there are no constraints on the elements to be found. All the blocks provided are available without limitations. * • Algorithm representation: the algorithm is written in the workspace and expressed by the set of blocks and their connections. ##### Enabling features for competencies development This paragraph explores the enabling characteristics that support the development of competencies within the task. Certain features play a crucial role in activating skills, while others can further enhance and encourage the development of these competencies. * • Problem setting: all competencies can be activated thanks to the presence of variables, sequences, and functions in the tool functionalities. The manifest and written algorithm representation can further encourage the development of these skills. The resettability of the system and the one-to-one cardinality of data elements facilitate data collection, pattern recognition and decomposition. Observing the system also supports data collection and pattern recognition. Using variables boosts pattern recognition and decomposition, while explicit and unconstrained elements, in addition to these two skills, encourage abstraction. Sequences positively affect pattern recognition, abstraction and data representation, which is also facilitated by functions. Operators also encourage decomposition. * • Algorithm: all competencies associated with the algorithmic concepts enabled by the tool functionalities, meaning variables, operators, sequences, and functions, can be activated in all three artefactual environments. 
Features such as variables, operators, sequences, and functions help activate the algorithmic skills in a task. These are further enhanced by the manifest and written algorithm representation, system observability, and the explicit and unconstrained definition of the task elements. The one-to-one cardinality enhances variables and operators. * • Assessment: algorithm debugging can be activated in all artefactual environments due to the resettability of the system and the manifest and written representation of the algorithm; optimisation can be activated as it only requires the resettability of the system; generalisation can be activated through the system’s resettability and the presence of variables and functions. Features such as variables, operators, sequences, functions and system observability help develop algorithm debugging and optimisation, while sequences can also foster generalisation. ##### Inhibiting features for competencies development This paragraph explores the impact of missing features on skill activation, focusing on how adjusting the task can enable the development of certain competencies. * • Repetitions, conditionals, parallelism, and events: non-activable due to the absence of specific tool functionalities. These competencies cannot be triggered until these functionalities are added to the tool. * • System state verification: non-activable because both the initial and final state are provided. The task can be adjusted to activate this skill by leaving one of these states to be found. For instance, by not providing the position of the Green Pig to the problem solver, who must then determine its location, the system’s final state becomes unknown, and system state verification becomes activable. * • Constraint validation: non-activable because the algorithm to be found is unconstrained. Some constraints can be imposed on the blocks used in the task’s programming platform to activate this skill, such as limiting the agent’s ability to turn right. 
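One way to realise the constraint-validation adjustment suggested above is a simple checker that rejects programs using a forbidden block or exceeding a block budget. This is a hypothetical sketch of ours; the block names, the default forbidden block, and the limit are illustrative:

```python
# Hedged sketch: a checker for the kinds of constraints discussed above,
# e.g. forbidding "turn right" or capping the number of blocks used.
def validate_program(program, forbidden=("turn_right",), max_blocks=10):
    """Return a list of constraint violations; an empty list means the
    program satisfies all imposed constraints."""
    violations = []
    if len(program) > max_blocks:
        violations.append(f"uses {len(program)} blocks, limit is {max_blocks}")
    for block in program:
        if block in forbidden:
            violations.append(f"forbidden block used: {block}")
    return violations

errors = validate_program(["move_forward", "turn_right", "move_forward"])
```

With such a checker in place, the problem solver must assess whether their solution remains feasible within the specified limitations, which is exactly the constraint-validation skill the adjustment is meant to activate.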
##### Comparison of two Classic Maze activities The activities proposed in the Classic Maze exhibit a progressive increase in difficulty. For example, the Plants vs Zombies maze, illustrated in Fig. 18, is a similar task requiring the problem solver to find the algorithm. The main differences between this task and the previous one are the characters involved, the difficulty of the path that the agent, in this case the Zombie, must traverse to reach the plant, and the set of actions available to the agent. The programming platform for this activity also includes the possibility of using repetitions, represented in pink, adding another layer of difficulty. The features of the CTP remain unchanged from the previous task, except for the addition of the repetition functionality in the tool, which enables the activation of the related competence. #### 3.3.2 Store the Marbles Store the Marbles is a virtual programming activity available on Algorea, an online resource created by France-IOI for learning the basics of programming (ALGOREA, 2020; France-IOI.org, 2004). The activity is designed to teach students problem-solving skills and programming concepts using a visual block-based programming language. The activity is part of a series of progressive difficulty courses and exercises available on the France-IOI website. Figure 19: The Store the Marbles activity adapted from ALGOREA (2020). The task requires the problem solver to program the robot to produce an algorithm valid for different situations using a visual programming language. ##### Components In this activity, illustrated in Fig. 19, the robot should pick up the marble on its path and drop it into a hole. * • Problem solver: the student who must program the agent’s behaviour. The artefactual environment comprises tools designed for reasoning and interacting with the system simultaneously, including the programming platform composed of the virtual scenario (embodied artefact), the blocks and the workspace (symbolic artefacts). 
The system also furnishes suggestions on the number of blocks required to solve the task (embodied artefact), but it does not provide additional hints beyond this information. * • Agent: the virtual robot, programmed to move along the path, collect the marble and drop it into the hole. The agent’s actions comprise moving eastward, picking up a marble, and dropping a marble. The simple movement is considered irreversible, whereas picking up and dropping a marble are considered
to encompass the description of living systems from an elementary and fundamental viewpoint coming from first principles, we theoretical physicists need to extend our toolkit to include phenomena arising through combinatorial innovation. One of these cases is the theory of the adjacent possible, which we described extensively. We must highlight the difference between the ergodic and non-ergodic regimes as crucial to the derivation of these conclusions. While we are in the ergodic regime, objects can be repeatedly made simply because the conditions to make them occur repeatedly. For instance, in the early stages of molecular chemistry the same small molecules are independently created many times through collisions of their constituents; the Universe has no need to learn and remember how to make those molecules. The present non-ergodic Universe is radically different, because to create biologically (or technologically) complex items repeatedly requires the prescription for their creation to be learned, stored, and then executed. It is this latter regime which lies outside the realm of the conventional physics description, and for which TAP provides a phenomenological model. ### 11.2 The conclusions Some of the important conclusions drawn from this merger of cosmology and biology can be summarised as follows: * • In a theory that is well described by an underlying fundamental model, like the Standard Model of particle physics, the global space of states, the Hilbert space, does not expand. The states that the Universe goes through are vector states which were present in the model from the beginning, and do not appear unpredictably as the Universe undergoes different phase transitions. Whether or not those states are available to the Universe at a given time depends on the temperature regime the Universe is in. 
All states in the evolution of the Universe along with their effective field theory can be derived from the Standard Model (or its appropriate extension). * • In a system that has no standard model, such as biology, the configuration space genuinely and unpredictably expands in real time. As the system evolves, new microstates are (combinatorially) found and tested by the system’s basic constituents. Those states are genuinely novel, in that they could not have been derived a priori by any underlying theory. Therein lies the crucial distinction between physics and biology. The central question for cosmology that we are addressing here is: Does the Universe create large amounts of information as it evolves from early times to late times? We can compare the probable answers to this question as given by two paradigms for cosmological theory. These are: 1. 1. The Newtonian paradigm (defined above), which asserts that there is a fixed space of states on which fixed laws operate. 2. 2. An evolving-laws paradigm, which denies that there are either a fixed set of states or fixed laws. Different authors, including the present ones, formulate evolving-law paradigms differently, using various frameworks. But we all agree on the denial of the Newtonian paradigm, which is sufficient to reach a firm conclusion on the aforementioned question. That is, our differences matter much less than what we agree on. We can all agree on the following. Both paradigms have well-defined quantities called $\{\rho\}_{\rm initial\;lawful}$, the set of ‘lawful’ possible initial states of the Universe of which one, $\rho_{\rm initial}$, was the actual initial state of the Universe, and $\{\rho\}_{\rm late\;lawful}$, a set of ‘lawful’ final or late states of the Universe of which one, $\rho_{\rm final}$, will be the actual final or late state of the Universe. Different theories may possess either a final state or a late state, the latter being one after everything interesting has happened. 
There are various options for speaking of measures on sets of states. Some, like the Boltzmann or von Neumann entropies, depend on the theory and only make sense in the first of the two paradigms we are comparing. But Bateson information, defined informally as ‘the difference that makes a difference’ [82], is universal enough to let us compare the two paradigms. Let $I_{\rm initial}^{\rm NP}$ be the amount of Bateson information needed to select the actual initial state of the universe from the possible lawful initial states in the Newtonian paradigm. Let $Q^{\rm NP}(\rho_{\rm initial}^{\rm NP}\rightarrow\rho_{\rm final}^{\rm NP})$ be the information needed to predict the exact final state given the exact initial state in the Newtonian paradigm. The same quantities labeled with ‘EL’ arise in the evolving-laws paradigm. Our key question is, does the Universe learn during its evolution? In other words is $Q$ order unity or huge? Most cosmologists focus on the $I$, which is called the entropy of the initial state, or just the entropy of the Universe. We instead are interested in the $Q$, which tell us how much information the Universe learns during its evolution. This is information that is not present in the initial conditions and also is not a consequence of the actual laws. If you believe in the Newtonian paradigm, you must believe $Q^{\rm NP}=1$, as all the information needed to precisely determine the final state is present in the initial state. But an evolving-laws framework makes possible a creative universe in which $Q^{\rm EL}\gg 1$. In other words, all evolving-law cosmologists agree that the Universe creates information. We call $Q$ the creative potential of the Universe. A biological universe can create more information than was present at the Big Bang by conventional accounts. 
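A minimal numerical sketch can make this creative, information-generating regime concrete. The TAP equation discussed earlier exhibits a long, slow plateau followed by explosive, super-exponential growth. The parameter $\alpha$, the initial count, and the threshold below are illustrative choices of ours, not values taken from the literature; the closed form for the subset sum follows from the binomial theorem:

```python
# Hedged numerical sketch of TAP-style growth; parameters are illustrative.
def tap_step(M, alpha):
    """One update M -> M + sum_{i=2}^{M} alpha^i * C(M, i).

    The sum has the closed form (1 + alpha)^M - 1 - alpha*M by the binomial
    theorem, which extends smoothly to non-integer M.
    """
    return M + (1 + alpha) ** M - 1 - alpha * M

def steps_until(M, alpha, threshold, max_steps=1000):
    """Iterate the map, counting updates until M exceeds `threshold`."""
    for step in range(1, max_steps + 1):
        M = tap_step(M, alpha)
        if M > threshold:
            return step, M
    return None, M

# A slow early phase followed by explosive, super-exponential growth:
steps, M_final = steps_until(M=4.0, alpha=0.25, threshold=1e6)
```

The qualitative behaviour, a near-stationary early regime that suddenly diverges, is what underlies the super-combinatorial growth invoked in the next subsection.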
### 11.3 The outcome To summarise, we have taken the first steps in bringing the biosphere into cosmology by proposing a tool for estimating the number of classical microscopic states contained in the biosphere configuration space. Within the context of an emergent and non-reductionist view of complex systems, we argued that it is not self-evident that the contribution of the biosphere is sub- dominant to the enormous numbers coming from gravitational physics. Emergent complexity may dramatically expand the available volume of configuration space. As a demonstration, we have adapted equations emerging from the Theory of the Adjacent Possible [9], which show an explosive growth that can be super-combinatorial and hence in principle able to overcome the traditional permutational accounting of particle configuration space. Living biospheres may be the dominant source of information in the Universe! How and whether the potential associated divergence of complexity can be tamed (either in equations or in reality itself) remains to be seen. An exciting speculation is that cosmic acceleration may have a role to play in this. Has our new result worsened the problem of cosmological initial conditions? Surprisingly, probably not. The vast numbers generated via the biospheres are due to the growth in state space enabled by the emergent complexity of life. Those states simply did not exist until life came along to make them. The newborn Universe had no opportunity to occupy them, nor pays any cost in failing to do so. To end we note that one should not regard TAP as describing actual physical collisions of two or more objects to make a single new one; it operates at a much more abstract and elevated level. It quantifies the remembered ability of a system to make and remake an object. The word remembered is crucial; it is not enough to make something once, we must be able to do so again and again. 
Its existence must be encoded in some way; embedded in RNA perhaps, or passed on through folklore, or written down in a stored patent application. This requirement can become incredibly complex. To make one new leopard we need both the actual parent leopards and the information encoded in their genomes (which comes conveniently packaged within the leopards themselves). But to keep making new leopards requires the entire sustainable ecosystem in which they are embedded. Consider John Muir’s famous quote [83]: “When we try to pick out anything by itself we find that it is bound fast by a thousand invisible cords that cannot be broken, to everything in the universe.” ## Acknowledgments We would first and foremost like to acknowledge Barbara Drossel, without whom several of the connections in this work would never have been made. We thank Niayesh Afshordi, Priyal Bordia, Latham Boyle, Hal Haggard, Wim Hordijk, Jaron Lanier, Roberto Mangabeira Unger, Pierre Martin-Dussaud, Mark Neyrinck, Roger Penrose, James Rosindell, and Carlo Rovelli for discussions. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. This research was also partly supported by grants from NSERC and FQXi. This work was supported by the Fundação para a Ciência e a Tecnologia (FCT) through the research grants UIDB/04434/2020 and UIDP/04434/2020. M.C. acknowledges support from the FCT through grant SFRH/BPD/111010/2015 and the Investigador FCT Contract No. CEECIND/02581/2018 and POPH/FSE (EC). A.R.L. acknowledges support from the FCT through the Investigador FCT Contract No. CEECIND/02854/2017 and POPH/FSE (EC). M.C. and A.R.L. are supported by the FCT through the research project EXPL/FIS- AST/1418/2021. 
We are especially thankful to the John Templeton Foundation for their generous support of this project. ## References * [1] R. Rosen, Life itself, Columbia University Press (1991). * [2] R. W. Ulanowicz, A Third Window: Natural Life beyond Newton and Darwin, Templeton Press (2011). * [3] S. A. Kauffman, A World Beyond Physics: The Emergence and Evolution of Life, Oxford University Press, Oxford (2019). * [4] D. J. Nicholson, Is the cell really a machine?, J. Theor. Biol. 477, 108 (2019). * [5] G. F. R. Ellis, Emergence in Solid State Physics and Biology, Found. Phys. 50, 1098 (2020). * [6] P. W. Anderson, More is different, Science 177, 393 (1972). * [7] A. J. Leggett, On the Nature of Research in Condensed-State Physics, Found. Phys. 22, 2 (1992). * [8] R. B. Laughlin, A Different Universe: Reinventing Physics From the Bottom Down, Basic Books (2005). * [9] S. Kauffman, Reinventing the Sacred: A New View of Science, Reason, and Religion, Basic Books, New York, USA (2008). * [10] C. A. Egan and C. H. Lineweaver, A larger estimate of the entropy of the Universe, Astrophys. J. 710, 1825 (2010) [arXiv:0909.3983 [astro-ph]]. * [11] E. W. Kolb and M. S. Turner, The Early Universe, Nature 294, 521 (1981); E. W. Kolb and M. S. Turner, The Early Universe, Addison–Wesley, Redwood City (1990). * [12] S. Frautschi, Entropy in an Expanding Universe, Science 217, 593 (1982). * [13] R. Penrose, The road to reality: a complete guide to the laws of the universe, Jonathan Cape, London, UK (2004). * [14] P. Frampton and T. W. Kephart, Upper and Lower Bounds on Gravitational Entropy, JCAP 0806, 008 (2008) [arXiv:0711.0193 [gr-qc]]; P. Frampton, S. D. H. Hsu, T. W. Kephart, and D. Reeb, What is the entropy of the universe?, Class. Quant. Grav. 26, 145005 (2009) [arXiv:0801.1847 [hep-th]]. * [15] M. M. Vopson, Estimation of the information contained in the visible matter of the universe, AIP Advances 11, 105317 (2021) [arXiv:2112.04473 [physics.gen-ph]]. * [16] Y. Akrami et al. 
(the Planck collaboration), Planck 2018 results. I. Overview and the cosmological legacy of Planck, Astron. Astrophys. 641, A1 (2020) [arXiv:1807.06205 [astro-ph.CO]]. * [17] R. Ahumada et al., The Sixteenth Data Release of the Sloan Digital Sky Surveys: First Release from the APOGEE-2 Southern Survey and Full Release of eBOSS Spectra, Astrophys. J. Supp. 249, 3 (2020) [arXiv:1912.02905 [astro-ph.GA]]. * [18] D. Nelson et al., The Illustris Simulation: Public Data Release, Astronomy and Computing 13, 12 (2015) [arXiv:1504.00362 [astro-ph.CO]]. * [19] Event Horizon Telescope Collaboration, First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole, Astrophys. J. 875, L1 (2019) [arXiv:1906.11238 [astro-ph.GA]] and First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole, Astrophys. J. 875, L6 (2019) [arXiv:1906.11243 [astro-ph.GA]]. * [20] J. D. Bekenstein, Black holes and the second law, Lett. Nuovo Cim. 4, 737 (1972); J. D. Bekenstein, Black Holes and Entropy, Phys. Rev. D7, 2333 (1973); J. D. Bekenstein, Generalized second law of thermodynamics in black-hole physics, Phys. Rev. D9, 3292 (1974). * [21] S. W. Hawking, Black hole explosions?, Nature 248, 30 (1974). * [22] P. H. Frampton, Desperately seeking intermediate-mass black holes, 2009, arXiv:0904.2934 [hep-th]. * [23] G. W. Gibbons and S. W. Hawking, Cosmological event horizons, thermodynamics, and particle creation, Phys. Rev. D15, 2738 (1977). * [24] P. C. W. Davies, Why is the physical world so comprehensible?, in Complexity, Entropy and the Physics of Information edited by W. H. Zurek (Addison–Wesley, Redwood City, CA, 1990), p. 61. * [25] G. 't Hooft, Dimensional Reduction in Quantum Gravity, arXiv:gr-qc/9310026 (1993); L. Susskind, The World as a Hologram, J. Math. Phys. 36, 6377 (1995) [arXiv:hep-th/9409089]; R. Bousso, The Holographic Principle, Rev. Mod. Phys. 74, 825 (2002) [arXiv:hep-th/0203101]. * [26] F. J. 
Dyson, Time without end: Physics and biology in an open Universe, Rev. Mod. Phys. 51, 447 (1979). * [27] S. Dodelson and F. Schmidt, Modern Cosmology, 2nd edition, Academic Press (2020). * [28] C. Sagan, Cosmos: The Story of Cosmic Evolution, Science and Civilisation, Abacus (1983). * [29] G. Parisi, Statistical Field Theory, Addison–Wesley, Redwood City, CA (1988). * [30] B. Drossel, Strong emergence in condensed matter physics, 2019, arXiv:1909.01134. * [31] G. Musser, Emergence: A Review of Research on Condensed Matter and Quantum Gravity and Resonances Between These Fields, FQXi/John Templeton Foundation report, at https://www.templeton.org/wp-content/uploads/2021/12/Research-on-Emergence-Musser-1.pdf (2021). * [32] Boltzmann’s Entropy Formula (2022, March 20). In Wikipedia. https://en.wikipedia.org/wiki/Boltzmann’s_entropy_formula * [33] L. Smolin and R. Mangabeira Unger, The Singular Universe and the Reality of Time, Cambridge University Press, Cambridge (2014). * [34] L. Smolin, Time Reborn, Houghton Mifflin Harcourt, Random House Canada, and Penguin UK (2013). * [35] M. Cortês, S. A. Kauffman, A. R. Liddle, and L. Smolin, Biocosmology: Biology from a cosmological perspective, submitted simultaneously. * [36] R. P. Feynman, The Feynman lectures on physics, Basic Books (2010). * [37] E. Mayr, Systematics and the Origin of Species, Columbia Univ. Press, NY (1942); J. Huxley, Evolution: the Modern Synthesis, Harper & Brothers, New York and London (1943); M. Pigliucci and G. B. Müller (eds.), Evolution, the Extended Synthesis, MIT Press (2010). * [38] Nature Physics Editorial, The rise of quantum materials, Nature Physics 12, 105 (2016). * [39] G. Longo, M. Montévil, and S. Kauffman, No entailing laws, but enablement in the evolution of the biosphere, in Proceedings of the 14th annual conference companion on Genetic and Evolutionary Computation, New York, p. 1379 (2012), [arXiv:1201.2069 [q-bio.OT]]. * [40] S. A. 
Kauffman, Eros and Logos in Angelaki 25, 9 (2020); S. A. Kauffman and A. Roli, The world is not a theorem, Entropy 23, 1467 (2021) [arXiv:2101.00284 [physics.bio-ph]]. * [41] L. Smolin, Did the Universe evolve?, Class. Quant. Grav. 9, 173 (1992). * [42] A. Linde, D. Linde, and A. Mezhlumian, From the Big Bang Theory to the Theory of a Stationary Universe, Phys. Rev. D49, 1783 (1994) [arXiv:gr-qc/9306035]. * [43] D. Noble, A theory of biological relativity: no privileged level of causation, Interface Focus 2, 55 (2012). * [44] S. Weinberg, Dreams of a final theory, Vintage (1993). * [45] L. Smolin, Life of the Cosmos, Oxford University Press, Oxford (1997). * [46] S. A. Kauffman, Answering Schrödinger’s ‘What is Life?’, Entropy 22, 815 (2020). * [47] S. P. Zwart and T. Boekholt, Numerical verification of the microscopic time reversibility of Newton’s equations of motion: Fighting exponential divergence, Comm. Nonlin. Sci. Num. Simul. 61, 160 (2018) [arXiv:1802.00970 [astro-ph.IM]]. * [48] H. Morowitz, Energy Flow in Biology, Academic Press (1968). * [49] T. Banks, Cosmological Breaking of Supersymmetry, Int. J. Mod. Phys. A16, 910 (2001) [arXiv:hep-th/0007146]. * [50] R. D. Sorkin, The statistical mechanics of black hole thermodynamics, in Black Holes and Relativistic Stars, ed. R. M. Wald (University of Chicago Press, 1998) [arXiv:gr-qc/9705006]; T. Jacobson, On the nature of black hole entropy, AIP Conf. Proc. 493, 85 (1999) [arXiv:gr-qc/9908031]. * [51] R. Bousso, Positive vacuum energy and the N-bound, JHEP 0011, 038 (2000) [arXiv:hep-th/0010252]. * [52] L. Smolin, The strong and weak holographic principles, Nucl. Phys. B601, 209 (2001) [arXiv:hep-th/0003056]. * [53] R. Koppl, A. Devereaux, J. Herriot, and S. A. Kauffman, A simple combinatorial model of world economic history, 2018, arXiv:1811.04502 [econ.GN]. * [54] M. Steel, W. Hordijk, and S. A. Kauffman, Dynamics of a birth-death process based on combinatorial innovation, J. Theor. Biol. 
491, 110187 (2020) [arXiv:1904.03290 [q-bio.PE]]. * [55] M. Cortês, S. A. Kauffman, A. R. Liddle, and L. Smolin, The TAP equation: evaluating combinatorial innovation, appearing simultaneously. * [56] C. E. Cleland and C. F. Chyba, Defining ‘Life’, Orig. Life Evol. Biosph. 32, 387 (2002). * [57] D. W. Deamer, Assembling Life: How Can Life Begin on Earth and Other Habitable Planets?, Oxford University Press (2011). * [58] N. Lane, The vital question, Profile Books (2016). * [59] B. Damer and D. Deamer, The Hot Spring Hypothesis for an Origin of Life, Astrobiology 20, 429 (2020). * [60] S. A. Kauffman, Autocatalytic sets of proteins, J. Theor. Biol. 119, 1 (1986); J. D. Farmer, S. A. Kauffman, and N. H. Packard, Autocatalytic Replication of Polymers, Physica D 2, 50 (1986). * [61] J. C. Xavier, W. Hordijk, S. Kauffman, M. Steel, and W. F. Martin, Autocatalytic chemical networks at the origin of metabolism, Proc. R. Soc. B287, 20192377 (2020); J. Xavier and S. Kauffman, Small-molecule autocatalytic networks are universal metabolic fossils, 2022, Proc. R. Soc. A., in press. * [62] D. Sobral, J. Matthee, B. Darvish, D. Schaerer, B. Mobasher, H. J. A. Rottgering, S. Santos, and S. Hemmati, Evidence for Pop III-like stellar populations in the most luminous Lyman-$\alpha$ emitters at the epoch of re-ionisation: Spectroscopic confirmation, Astrophys. J. 808, 139 (2015) [arXiv:1504.01734]. * [63] A. Heger and S. E. Woosley, The Nucleosynthetic Signature of Population III, Astrophys. J. 567, 532 (2002) [arXiv:astro-ph/0107037]. * [64] S. A. Kauffman, D. P. Jelenfi, and G. Vattay, Theory of chemical evolution of molecule compositions in the universe, in the Miller–Urey experiment and the mass distribution of interstellar and intergalactic molecules, J. Theor. Biol. 486, 110097 (2020) [arXiv:1806.06716 [q-bio.PE]]. * [65] J. R. Cronin, W. E. Gandy, and S. Pizzarello, Amino-acids of the Murchison meteorite. I. 6 carbon acyclic primary alpha-amino alkanoic acids, J. Mol. Evol. 
17, 265 (1981); T. Koga and H. Naraoka, A new family of extraterrestrial amino acids in the Murchison meteorite, Scientific Reports 7, 636 (2017); D. P. Glavin et al., Abundant extraterrestrial amino acids in the primitive CM carbonaceous chondrite Asuka 12236, Meteor. Plan. Sci. 55, 1979 (2020). * [66] A. D. Solis, Reduced alphabet of prebiotic amino acids optimally encodes the conformational space of diverse extant protein folds, BMC Evol. Biol. 19, 158 (2019). * [67] M. Neveu, H. J. Kim, and S. A. Benner, The “strong” RNA world hypothesis: fifty years old, Astrobiology 13, 391 (2013). * [68] A. Lazcano and J. L. Bada, The 1953 Stanley L. Miller Experiment: Fifty Years of Prebiotic Organic Chemistry, Orig. Life Evol. Biosph. 33, 235 (2004). * [69] J. C. Uyeda, T. F. Hansen, S. J. Arnold, and J. Pienaar, The million-year wait for macroevolutionary bursts, PNAS 108:38, 15908 (2011). * [70] P. D. Gingerich, Rates of Evolution: A Quantitative Synthesis, Cambridge University Press, Cambridge (2019). * [71] R. May, J. Lawton, and N. Stork, Assessing Extinction Rates, in J. H. Lawton and R. M. May, eds., Extinction Rates, Oxford University Press, Oxford (1995). * [72] A. Tiessen, P. Pérez-Rodríguez, and L. J. Delaye-Arredondo, Mathematical modeling and comparison of protein size distribution in different plant, animal, fungal and microbial species reveals a negative correlation between protein size and protein number, thus providing insight into the evolution of proteomes, BMC Res Notes 5, 85 (2012). * [73] W. J. Ripple, C. Wolf, T. M. Newsome, M. Galetti, M. Alamgir, E. Crist, M. I. Mahmoud, and W. F Laurance, World Scientists’ Warning to Humanity: A Second Notice, BioScience 67, 1026 (2017). * [74] A. Weisman, The world without us, Thomas Dunne Books/St. Martin’s Press, New York (2007). * [75] P. Madau and M. Dickinson, Cosmic Star-Formation History, Ann. Rev. Astron. Astrophys. 52, 415 (2014) [arXiv:1403.0007 [astro-ph.CO]]. * [76] M. P. 
Gough, Holographic dark information energy, Entropy 13, 924 (2013) [arXiv:1105.4461 [astro-ph.CO]]. * [77] B. Menin, On the possible ratio of dark energy, ordinary energy and energy due to information, Am. J. Comp. Appl. Math. 9, 21 (2019). * [78] S. Capozziello and O. Luongo, Information entropy and dark energy evolution, Int. J. Mod. Phys. D 27, 1850029 (2018) [arXiv:1704.00195 [gr-qc]]. * [79] M. Li, A model of holographic dark energy, Phys. Lett. B603, 1 (2004) [arXiv:hep-th/0403127]; S. Wang, Y. Wang, and M. Li, Holographic dark energy, Phys. Rep. 696, 1 (2017) [arXiv:1612.00345 [astro-ph.CO]]. * [80] R. Bousso, R. Harnik, G. D. Kribs, and G. Perez, Predicting the Cosmological Constant from the Causal Entropic Principle, Phys. Rev. D76, 043513 (2007) [arXiv:hep-th/0702115]. * [81] J. García-Bellido and L. Espinosa-Portalés, Cosmic acceleration from first principles, Phys. Dark Univ. 34, 100892 (2021) [arXiv:2106.16014 [gr-qc]]. * [82] G. Bateson, Mind and Nature: A Necessary Unity, E.P. Dutton, NY (1979); M. J. Schroeder, The Difference That Makes a Difference for the Conceptualization of Information, Proceedings 2017 1(3), 221 (Proceedings of the IS4SI 2017 Summit Digitalisation for a Sustainable Society, Gothenburg, 2017). * [83] J. Muir, journal entry 1869, in The John Muir Papers, R. H. Limbaugh and K. E. Lewis, editors, 1858-1957 MICROFORM, (Stockton, CA: University of the Pacific, 1980). The Sierra Club elegantly debunks various incorrect versions of this quote at `https://vault.sierraclub.org/john_muir_exhibit/writings/misquotes.aspx` .
# Gravitational particle production of superheavy massive particles in Quintessential Inflation II: $\alpha$-attractors Llibert Aresté Saló<EMAIL_ADDRESS>School of Mathematical Sciences, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom Jaume de Haro<EMAIL_ADDRESS>Departament de Matemàtiques, Universitat Politècnica de Catalunya, Diagonal 647, 08028 Barcelona, Spain ###### Abstract We compute the gravitational production of conformally coupled superheavy particles during the phase transition from the end of inflation to the beginning of kination for $\alpha$-attractor potentials in the context of Quintessential Inflation ($\alpha$-QI), showing that the maximum value of the reheating temperature, independently of the value of the parameter $\alpha$, is near $10^{9}$ GeV. This result, which contradicts the usual belief that reheating via the production of superheavy massive particles leads to an inefficient reheating temperature, is due to the fact that in our numerical calculations we take into account the contribution of the large-wavelength modes to the reheating temperature, which never happens in analytical calculations, where only ultraviolet modes are considered. Gravitational Particle production; Quintessential Inflation; $\alpha$-attractors; Reheating Temperature. ###### pacs: 04.20.-q, 98.80.Jk, 98.80.Bp ## I Introduction The so-called gravitational particle production Parker ; gmm ; ford ; Zeldovich of superheavy particles conformally coupled with gravity, which was applied to standard inflation (potentials with a deep well) in kolb ; kolb1 ; Birrell1 ; ema , is one of the mechanisms used to reheat the universe in scenarios containing a period of inflation. Such a mechanism is also needed in Quintessential Inflation (QI) to match the inflationary period with the usual hot Big Bang universe guth . 
However, the gravitational reheating in QI is normally applied to very light fields Spokoiny ; pv ; A and only in a few papers, which deal with toy non-smooth models such as the Peebles-Vilenkin one, is it applied to massive particles H ; ha ; hap18 ; J ; hashiba ; Hashiba . On the other hand, regarding smooth QI potentials, particle creation has to be analytically studied using the complex WKB approximation hashiba1 , whose effective application is limited to the creation of particles by parabolic potentials kofman . In the present work we continue our study of particle production by smooth potentials started in ah , now focusing on a smooth exponential potential coming from $\alpha$-attractors in Quintessential Inflation ($\alpha$-QI) vardayan ; K ; benisty3 . Since the $\alpha$-attractors come from supergravity theories containing particles with only gravitational interactions, the late-time decay of these relics may jeopardize the success of the standard BBN lindley . To solve this problem one has to consider a sufficiently low reheating temperature (of the order of $10^{9}$ GeV or less) eln . Conversely, a lower bound for the reheating temperature comes from the fact that the radiation-dominated era occurs before the Big Bang Nucleosynthesis (BBN) epoch, which takes place in the $1$ MeV regime gkr . Coming back to the gravitational production of superheavy particles, we will use the well-known Hamiltonian diagonalization method (see gmmbook for a review), based on the computation of the time-dependent $\beta$-Bogoliubov coefficient, which encodes the polarization effects and also the real particles created during the phase transition. Fortunately, these polarization effects disappear when the universe evolves adiabatically, which happens soon after the beginning of kination, allowing its numerical calculation. 
Thus, in order to calculate the energy density of the produced particles, one can safely use the square modulus of the $\beta$-Bogoliubov coefficient after the beginning of kination, whose numerical value is, for the relevant modes that contribute to the reheating, of the order of $10^{-9}-10^{-10}$ depending on the superheavy masses, which in our simulations are of the order of $10^{15}-10^{17}$ GeV. Finally, once one has the value of the $\beta$-Bogoliubov coefficients, one can calculate the value of the energy density of the superheavy particles, which must decay into lighter ones before or after the end of the kination phase to form a relativistic plasma. In the former case the reheating temperature is greater than $10^{6}$ GeV and in the latter its maximum value is around $10^{9}$ GeV, which shows that the gravitational production of superheavy particles is a very efficient mechanism to reheat our universe. Throughout the manuscript we use natural units, i.e., $\hbar=c=k_{B}=1$ and the reduced Planck mass is denoted by $M_{pl}\equiv\frac{1}{\sqrt{8\pi G}}\cong 2.44\times 10^{18}$ GeV. ## II Particle creation of superheavy particles conformally coupled to gravity We consider a superheavy quantum field $\chi$ conformally coupled with gravity. In order that the polarization effects due to this quantum field do not affect the dynamics of the scalar field responsible for Quintessential Inflation, the mass of the quantum field, namely $m_{\chi}$, must be greater than the Hubble rate; in fact, the condition $H\ll m_{\chi}$ has to be satisfied (see Felder for the problems associated with light quantum fields). 
Therefore, since the most accepted idea is that inflation starts at GUT scales, where the temperature is around $10^{16}$ GeV, using the Stefan-Boltzmann law $\rho=\frac{\pi^{2}}{30}g_{*}T^{4}$, where the effective number of degrees of freedom for the Standard Model is $106.75$, for a flat FLRW spacetime at the beginning of inflation one has $H\cong 2\times 10^{13}$ GeV. This is the reason why we will choose $m_{\chi}\sim 10^{15}$ GeV or greater. To calculate the energy density of the produced particles, we will use the well-known diagonalization method, based on the Bogoliubov coefficients, which in the conformally coupled case must satisfy the first-order system of differential equations (see gmmbook for a detailed discussion) $\displaystyle\left\\{\begin{array}[]{ccc}\alpha_{k}^{\prime}(\tau)&=&\frac{\omega_{k}^{\prime}(\tau)}{2\omega_{k}(\tau)}e^{2i\int^{\tau}\omega_{k}(\bar{\tau})d\bar{\tau}}\beta_{k}(\tau)\\\ \beta_{k}^{\prime}(\tau)&=&\frac{\omega_{k}^{\prime}(\tau)}{2\omega_{k}(\tau)}e^{-2i\int^{\tau}\omega_{k}(\bar{\tau})d\bar{\tau}}\alpha_{k}(\tau),\end{array}\right.$ (3) where the time-dependent frequency is denoted by $\omega_{k}(\tau)=\sqrt{k^{2}+m_{\chi}^{2}a^{2}(\tau)}$ and $\tau$ is the conformal time. 
Finally, in terms of the $\beta$-Bogoliubov coefficient, the vacuum energy density of the $\chi$-field is given by ah $\displaystyle\rho_{\chi}(\tau)=\frac{1}{2\pi^{2}a^{4}(\tau)}\int_{0}^{\infty}k^{2}\omega_{k}(\tau)|\beta_{k}(\tau)|^{2}dk.$ (4) ### II.1 Gravitational particle creation by $\alpha$-attractors in the context of Quintessential Inflation In the present work, the potential that we will consider is an exponential $\alpha$-attractor in Quintessential Inflation, plotted in Figure 1 and given by $\displaystyle V(\varphi)=\lambda M_{pl}^{4}e^{-n\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)},$ (5) where $\lambda$, $\alpha$ and $n$ are dimensionless parameters, whose relation, in order to match the current observational data, is the following (see benisty3 for details): $\displaystyle\frac{\lambda}{\alpha}e^{n}\sim 10^{-10}\qquad\mbox{and}\qquad\lambda e^{-n}\sim 10^{-120}.$ (6) Figure 1: Plot of the exponential $\alpha$-attractor potential, for $\alpha\sim 10^{-2}$, $n\sim 10^{2}$ and $\lambda\sim 10^{-66}$. First of all, in order to calculate the energy density of the produced particles one has to integrate numerically the conservation equation for the inflaton field, namely $\displaystyle\ddot{\varphi}+3H\dot{\varphi}+V_{\varphi}=0,$ (7) where $H=\frac{1}{\sqrt{3}M_{pl}}\sqrt{\frac{\dot{\varphi}^{2}}{2}+V(\varphi)}$, with initial conditions (the value of the scalar field and its first derivative) during inflation. Since the slow-roll regime is an attractor, one only has to take initial conditions in the basin of attraction of the slow-roll solution, for example, $\varphi=\varphi_{*}$ and $\dot{\varphi}=-\frac{V_{\varphi}(\varphi_{*})}{3H_{*}}$, where the “star” denotes that the quantities are evaluated at the horizon crossing. 
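The numerical step just described can be illustrated with a self-contained sketch. This is not the authors' code: it works in reduced Planck units ($M_{pl}=1$), uses toy parameter values $\alpha=10^{-2}$, $n=124$, $\lambda=10^{-66}$ chosen only so that the constraints (6) are roughly satisfied, assumes a starting point $\varphi_{*}=-1.5$ deep in the slow-roll region, and integrates (7) with a fixed-step fourth-order Runge-Kutta scheme.

```python
import numpy as np

# Toy background evolution for the alpha-attractor potential (5).
# Units: M_pl = 1. Parameter values are illustrative only, picked so that
# lambda*e^n ~ 1e-12 and lambda*e^-n ~ 1e-120 for alpha = 1e-2.
alpha, n, lam = 1e-2, 124.0, 1e-66
s6a = np.sqrt(6.0 * alpha)

def V(phi):                       # V = lam * exp(-n tanh(phi / sqrt(6 alpha)))
    return lam * np.exp(-n * np.tanh(phi / s6a))

def Vp(phi):                      # dV/dphi
    x = phi / s6a
    return -lam * n * np.exp(-n * np.tanh(x)) / (np.cosh(x)**2 * s6a)

def hubble(phi, dphi):            # Friedmann: H = sqrt((dphi^2/2 + V) / 3)
    return np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)

def rhs(y):                       # y = (phi, dphi/dt); eq. (7)
    phi, dphi = y
    return np.array([dphi, -3.0 * hubble(phi, dphi) * dphi - Vp(phi)])

# Slow-roll-attractor initial conditions at an assumed phi_* = -1.5
phi0 = -1.5
y = np.array([phi0, -Vp(phi0) / (3.0 * hubble(phi0, 0.0))])
H0 = hubble(*y)
dt = 0.01 / H0                    # fixed step, small versus the Hubble time

for _ in range(40_000):           # fourth-order Runge-Kutta integration
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

phi, dphi = y
```

With these toy numbers the field rolls off the slow-roll plateau, crosses the steep part of the potential, and the run ends deep in the kination regime, with the kinetic energy vastly dominating the residual (exponentially suppressed) potential energy.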
Once one has obtained the evolution of the scalar field and in particular the evolution of the Hubble rate, one can compute the evolution of the scale factor, whose value at the horizon crossing we have chosen to be equal to $1$. From the evolution of the scale factor, we can see on the left-hand side of Figure 2 that a spike appears in the plot of the quantity $\omega_{k}^{\prime}/\omega_{k}^{2}$ during the phase transition from the end of inflation to the beginning of kination, that is, at the moment when the adiabatic evolution is broken and particles are gravitationally produced (see the right-hand side of Figure 2). Figure 2: Adiabatic evolution (left) and evolution of the $\beta$-Bogoliubov coefficient (right) for a heavy field with mass $m_{\chi}\sim 10^{16}$ GeV, when $\alpha\sim 10^{-1}$. Here we have used the value $k=a_{kin}H_{kin}$, which is in the range ${k}\lesssim a_{kin}m_{\chi}$, where we have observed that particles are produced. Then, we have numerically solved equation (3), with initial conditions $\alpha_{k}(\tau_{*})=1$ and $\beta_{k}(\tau_{*})=0$ at the horizon crossing (there were neither particles nor polarization effects at that moment because during the slow-roll regime the derivatives of the Hubble rate are negligible compared with the powers of $H$, i.e., the system is in the adiabatic regime). For the value $k=a_{kin}H_{kin}$, we obtain in Figure 2 that $|\beta_{k}(\tau)|^{2}$ soon stabilizes to a non-zero value after the beginning of kination, containing only particle production effects. We have performed the numerical calculations for masses $m_{\chi}\cong 10^{15}-10^{17}$ GeV and for a large range of values of $\alpha$. 
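The solution of the system (3) can likewise be sketched with a toy, self-contained version (again not the authors' code): the expansion history $a(\tau)$ below is an assumed smooth transition and the mode, mass and time span are order-one numbers in arbitrary units, so only the structure of the computation carries over. The Bogoliubov normalization $|\alpha_{k}|^{2}-|\beta_{k}|^{2}=1$, which is exactly conserved by (3), serves as a check on the integration.

```python
import numpy as np

# Toy integration of the Bogoliubov system (3) for an assumed smooth
# expansion history a(tau); all quantities are in arbitrary units.
m_chi, k = 1.0, 0.5                       # toy mass and comoving mode

def a(tau):                               # assumed scale factor (not the paper's)
    return 1.0 + 0.5 * (1.0 + np.tanh(tau))

def w(tau):                               # omega_k(tau) = sqrt(k^2 + m^2 a^2)
    return np.sqrt(k**2 + (m_chi * a(tau))**2)

def wprime(tau, h=1e-6):                  # numerical derivative of omega_k
    return (w(tau + h) - w(tau - h)) / (2.0 * h)

def rhs(tau, y):
    al, be, th = y                        # th stores the phase 2 * int omega dtau
    f = wprime(tau) / (2.0 * w(tau))
    return np.array([f * np.exp(1j * th) * be,
                     f * np.exp(-1j * th) * al,
                     2.0 * w(tau) + 0j])

# Fourth-order Runge-Kutta, starting deep in the adiabatic "in" region
tau, dt = -10.0, 1e-3
y = np.array([1.0 + 0j, 0.0 + 0j, 0.0 + 0j])   # alpha_k = 1, beta_k = 0
while tau < 10.0:
    k1 = rhs(tau, y)
    k2 = rhs(tau + dt / 2, y + dt / 2 * k1)
    k3 = rhs(tau + dt / 2, y + dt / 2 * k2)
    k4 = rhs(tau + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    tau += dt

alpha_k, beta_k = y[0], y[1]
norm = abs(alpha_k)**2 - abs(beta_k)**2   # conserved; should stay close to 1
```

For a smooth transition like this one, $|\beta_{k}|^{2}$ settles to a small non-zero constant once the evolution becomes adiabatic again, qualitatively as in Figure 2.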
We have obtained that the relevant modes that contribute significantly to the particle production are in the range $0\lesssim k\lesssim a_{kin}m_{\chi}$ (see Figure 3), leading to values of $|\beta_{k}|^{2}$ of order $10^{-9}$ for $m_{\chi}\sim 10^{15}$ GeV and values of $|\beta_{k}|^{2}$ of order $10^{-10}$ for $m_{\chi}\sim 10^{16}-10^{17}$ GeV, as one can see in Figure 4. Figure 3: Plot of the logarithm of $|\beta_{k}|^{2}$, as a function of $k$, for a heavy field with mass $m_{\chi}\sim 10^{15}$ GeV and $\alpha=0.1$. Figure 4: Plot of the square modulus of the $\beta$-Bogoliubov coefficient, as a function of $\alpha$, for a heavy field with masses $m_{\chi}\sim 10^{15}$, $10^{16}$ and $10^{17}$ GeV. Next, introducing these values of the $\beta$-Bogoliubov coefficient in the energy density (4) and taking into account that the modes that contribute to the energy density satisfy $0\lesssim k\lesssim a_{kin}m_{\chi}$ and lead practically to the same value of the $\beta$-Bogoliubov coefficient, one can safely make the approximation that, after the beginning of kination, $\omega_{k}(\tau)\cong m_{\chi}a(\tau)$, obtaining $\displaystyle\rho_{\chi}(\tau)\cong\frac{m_{\chi}^{4}}{6\pi^{2}}|\beta_{k}|^{2}\left(\frac{a_{kin}}{a(\tau)}\right)^{3}\sim\left\\{\begin{array}[]{ccc}10^{-11}m_{\chi}^{4}\left(\frac{a_{kin}}{a(\tau)}\right)^{3}&\mbox{when}&m_{\chi}\sim 10^{15}\mbox{ GeV}\\\ &&\\\ 10^{-12}m_{\chi}^{4}\left(\frac{a_{kin}}{a(\tau)}\right)^{3}&\mbox{when}&m_{\chi}\sim 10^{16}-10^{17}\mbox{ GeV},\end{array}\right.$ (11) for $\tau>\tau_{kin}$. A final remark is in order: In hashiba ; ema , the authors infer through numerical calculations using toy models that, for the relevant modes, the square of the $\beta$-Bogoliubov coefficient must decay as $e^{-\kappa m_{\chi}/H_{inf}}$, where $H_{inf}$ is the scale of inflation and $\kappa$ a dimensionless factor of order 1. 
However, throughout all the calculation the mass of the superheavy field is of the same order as, or less than, the scale of inflation. So, in practice, for the relevant modes the authors obtain in all numerical simulations $|\beta_{k}|^{2}\geq 10^{-7}$. In our case we have chosen masses which are many orders of magnitude greater than the scale of inflation. Indeed, since the power spectrum of scalar perturbations $P_{\zeta}=\frac{H_{*}^{2}}{8\pi^{2}\epsilon_{*}M_{pl}^{2}}\sim 2\times 10^{-9}$ (here, once again, the star denotes that the quantities are evaluated at the horizon crossing) leads to $H_{*}\sim 4\times 10^{-4}\sqrt{\epsilon_{*}}M_{pl}$ (with $\epsilon_{*}=\frac{3\alpha}{16}(1-n_{s})^{2}$ for the case of $\alpha$-attractors, choosing for the spectral index its central value $n_{s}=0.9649$; see planck ), one gets the following scale of inflation, $\displaystyle H_{*}\sim 1.5\sqrt{\alpha}\times 10^{13}\mbox{ GeV},$ (12) which is at least two orders of magnitude below $10^{15}$ GeV. In summary, there is gravitational production of superheavy particles, with the square of the $\beta$-Bogoliubov coefficient, for the relevant modes, of the order of $10^{-9}-10^{-10}$, but for superheavy masses the production is not suppressed by an abnormally small factor such as $e^{-\kappa m_{\chi}/H_{inf}}$. ## III The reheating process After the production of the superheavy particles, they have to decay into lighter ones which, after the thermalization process, form a relativistic plasma that depicts our hot universe. Then, this decay could occur before the end of kination (when the energy density of the inflaton becomes of the same order as that of the $\chi$-field), or after its end. ### III.1 Decay before the end of kination In this case, the energy density of the background, i.e. 
the one of the inflaton field, and the one of the relativistic plasma, when the decay is finished, that is when ${\Gamma}\sim H_{dec}=H_{kin}\left(\frac{{a}_{kin}}{a_{dec}}\right)^{3}$, will be $\displaystyle\rho_{\varphi,dec}=3{\Gamma}^{2}M_{pl}^{2}\qquad\mbox{and}\qquad\rho_{\chi,dec}=\rho_{\chi,kin}\left(\frac{{a}_{kin}}{a_{dec}}\right)^{3}\cong\frac{m_{\chi}^{4}}{6\pi^{2}}|\beta_{k}|^{2}\frac{\Gamma}{H_{kin}}.$ (13) Imposing that the end of the decay precedes the end of kination, which means $\rho_{\chi,dec}\leq\rho_{\varphi,dec}$, and taking into account that it occurs after the beginning of kination, i.e., $\Gamma\leq H_{kin}\cong 6\times 10^{-7}M_{pl}$, one gets $\displaystyle\frac{1}{18\pi^{2}}|\beta_{k}|^{2}\frac{m_{\chi}^{4}}{H_{kin}M_{pl}^{2}}\leq\Gamma\leq H_{kin}.$ (14) Finally, the reheating temperature, i.e., the temperature of the universe when the relativistic plasma in thermal equilibrium starts to dominate, which happens when $\rho_{\varphi,reh}\sim\rho_{\chi,reh}$, can be calculated as follows: Since after the decay the respective energy densities evolve as $\displaystyle\rho_{\varphi,reh}=\rho_{\varphi,dec}\left(\frac{a_{dec}}{a_{reh}}\right)^{6}\qquad\mbox{and}\qquad\rho_{\chi,reh}=\rho_{\chi,dec}\left(\frac{a_{dec}}{a_{reh}}\right)^{4},$ (15) we will have $\frac{\rho_{\chi,dec}}{\rho_{\varphi,dec}}=\left(\frac{a_{dec}}{a_{reh}}\right)^{2},$ and thus, from the Stefan-Boltzmann law $\rho_{reh}=\frac{\pi^{2}}{30}g_{reh}T_{reh}^{4}$, where $g_{reh}=106.75$ is the effective number of degrees of freedom for the Standard Model, the reheating temperature will be $\displaystyle T_{reh}=\left(\frac{30}{\pi^{2}g_{reh}}\right)^{1/4}\rho_{\chi,reh}^{\frac{1}{4}}=\left(\frac{30}{\pi^{2}g_{reh}}\right)^{1/4}\rho_{\chi,dec}^{\frac{1}{4}}\sqrt{\frac{\rho_{\chi,dec}}{\rho_{\varphi,dec}}}$ 
$\displaystyle\cong\frac{1}{3\sqrt{2}\pi^{3/2}}\left(\frac{30}{\pi^{2}g_{reh}}\right)^{1/4}|\beta_{k}|^{3/2}\left(\frac{H_{kin}}{6\Gamma}\right)^{1/4}\frac{m_{\chi}^{3}}{M_{pl}^{2}H_{kin}}M_{pl}.$ (16) So, taking into account the bound (14), the reheating temperature ranges between $\displaystyle\frac{1}{3\sqrt{2}\pi^{2}}\left(\frac{5}{g_{reh}}\right)^{1/4}|\beta_{k}|^{3/2}\frac{m_{\chi}^{3}}{M_{pl}^{2}H_{kin}}M_{pl}\leq T_{reh}\leq$ $\displaystyle\frac{1}{3\sqrt{2}\pi^{3/2}}\left(\frac{90}{g_{reh}}\right)^{1/4}|\beta_{k}|\left(\frac{m_{\chi}}{M_{pl}}\right)^{2}\sqrt{\frac{M_{pl}}{H_{kin}}}M_{pl},$ (17) which shows that $6\times 10^{5}$ GeV $\lesssim T_{reh}\lesssim 6\times 10^{8}$ GeV for $m_{\chi}\sim 10^{15}$ GeV and $T_{reh}\gtrsim 10^{8}$ GeV for $m_{\chi}\sim 10^{16}$ GeV. However, for masses of order $10^{17}$ GeV, the reheating temperature is some orders of magnitude greater than $10^{9}$ GeV. This fact, as we have already explained in the Introduction, poses a serious problem for the success of BBN. Therefore, to have a viable reheating temperature, we conclude that only superheavy particles with masses of the order $10^{15}-10^{16}$ GeV could decay before the end of kination. ### III.2 Decay after the end of kination In the case that the decay of the $\chi$-field occurs after the end of kination (recall that kination ends when $\rho_{\varphi}\sim\rho_{\chi}$), one has to impose ${\Gamma}\leq H(\tau_{end})\equiv H_{end}$, where we have denoted by $\tau_{end}$ the time at which kination ends. Taking this into account, one has $\displaystyle H^{2}_{end}=\frac{2\rho_{\varphi,end}}{3M_{pl}^{2}}$ (18) and $\displaystyle\rho_{\varphi,end}={\rho}_{\varphi,kin}\left(\frac{{a}_{kin}}{a_{end}}\right)^{6}=\frac{{\rho}_{\chi,kin}^{2}}{{\rho}_{\varphi,kin}},$ (19) where we have used that kination ends when ${{\rho}_{\chi,end}}={{\rho}_{\varphi,end}}$, meaning $\left({a}_{kin}/a_{end}\right)^{3}=\frac{{\rho}_{\chi,kin}}{{\rho}_{\varphi,kin}}$. 
So, the condition ${\Gamma}\leq H_{end}$ leads to the bound $\displaystyle\Gamma\leq\sqrt{\frac{2}{3}}\frac{\rho_{\chi,kin}}{M_{pl}\sqrt{\rho_{\varphi,kin}}}\cong\frac{\sqrt{2}}{18\pi^{2}}|\beta_{k}|^{2}\frac{m_{\chi}^{4}}{H_{kin}M_{pl}^{2}}.$ (20) On the other hand, assuming once again instantaneous thermalization, the reheating temperature (i.e., the temperature of the universe when the thermalized plasma starts to dominate) is reached when all the superheavy particles have decayed, i.e., when $H\sim\Gamma$, obtaining $\displaystyle T_{reh}=\left(\frac{30}{\pi^{2}g_{reh}}\right)^{1/4}\rho_{\chi,dec}^{1/4}=\left(\frac{90}{\pi^{2}g_{reh}}\right)^{1/4}\sqrt{{\Gamma}M_{pl}},$ (21) where we have used that, after the end of the kination regime, the energy density of the produced particles dominates over that of the inflaton field. Consequently, since the BBN epoch occurs in the $1$ MeV regime, one can find that, in that case, the reheating temperature is bounded by $\displaystyle 1\mbox{ MeV}\leq T_{reh}\leq\frac{1}{3\pi}\left(\frac{45}{\pi^{2}g_{reh}}\right)^{1/4}|\beta_{k}|\left(\frac{m_{\chi}}{M_{pl}}\right)^{2}\sqrt{\frac{M_{pl}}{H_{kin}}}M_{pl},$ (22) which for masses $m_{\chi}\sim 10^{15}-10^{17}$ GeV leads to a reheating temperature spanning the whole range of viable values $1$ MeV $\leq T_{reh}\leq 10^{9}$ GeV. 
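Plugging representative numbers into the bounds above makes the quoted ranges concrete. The following sketch uses the illustrative values $m_{\chi}=10^{15}$ GeV, $|\beta_{k}|^{2}=10^{-9}$ and $H_{kin}\cong 6\times 10^{-7}M_{pl}$ taken from the text; it evaluates the two bounds in (17) and also checks that (21), evaluated at the maximum decay rate allowed by (20), reproduces the closed-form upper bound (22):

```python
import numpy as np

# Representative numbers from the text (illustrative, m_chi ~ 1e15 GeV case).
g_reh, M_pl = 106.75, 2.44e18            # d.o.f.; reduced Planck mass [GeV]
H_kin = 6e-7 * M_pl                      # Hubble rate at the start of kination
m_chi, beta2 = 1e15, 1e-9                # superheavy mass [GeV] and |beta_k|^2
beta = np.sqrt(beta2)

# Case III.1 (decay before the end of kination): the two bounds in (17)
T_low = (1 / (3 * np.sqrt(2) * np.pi**2) * (5 / g_reh)**0.25
         * beta**1.5 * m_chi**3 / (M_pl * H_kin))
T_high_1 = (1 / (3 * np.sqrt(2) * np.pi**1.5) * (90 / g_reh)**0.25
            * beta * (m_chi / M_pl)**2 * np.sqrt(M_pl / H_kin) * M_pl)

# Case III.2 (decay after the end of kination): evaluate (21) at the
# maximum decay rate (20) and compare with the closed-form bound (22)
Gamma_max = np.sqrt(2) / (18 * np.pi**2) * beta2 * m_chi**4 / (H_kin * M_pl**2)
T_21 = (90 / (np.pi**2 * g_reh))**0.25 * np.sqrt(Gamma_max * M_pl)
T_22 = (1 / (3 * np.pi) * (45 / (np.pi**2 * g_reh))**0.25
        * beta * (m_chi / M_pl)**2 * np.sqrt(M_pl / H_kin) * M_pl)

print(f"III.1: {T_low:.1e} GeV < T_reh < {T_high_1:.1e} GeV")
print(f"III.2: T_reh^max = {T_22:.1e} GeV")
```

The two expressions in the last comparison coincide identically, since (22) is precisely (21) with the bound (20) saturated; numerically the maximum reheating temperature comes out just below $10^{9}$ GeV for $m_{\chi}\sim 10^{15}$ GeV.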
## IV A simple model containing the Cosmological Constant A very simple model comes from a linear potential plus the Cosmological Constant $\Lambda$ [22], $V(\phi)=\lambda(\phi+\sqrt{6\alpha})+\Lambda M_{pl}^{2}$, which in terms of the canonically normalized field $\varphi$, defined as $\phi=\sqrt{6\alpha}\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)$, becomes $\displaystyle V(\varphi)=\lambda\sqrt{6\alpha}\left(\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)+1\right)+\Lambda M_{pl}^{2}.$ (23) To obtain the values of the parameters, we first calculate the main slow-roll parameter $\displaystyle\epsilon\cong\frac{1}{12\alpha\cosh^{4}\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)\left(\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)+1\right)^{2}},$ (24) thus, in order for inflation to end, which happens when $\epsilon=1$, one needs $\frac{1}{12\alpha}>1$. So, we will take for example $\alpha\sim 10^{-2}$. To get the value of $\lambda$ we use the formula (12) and the fact that at the horizon crossing $\displaystyle H_{*}^{2}\cong\frac{V(\varphi_{*})}{3M_{pl}^{2}}\cong\frac{2\lambda\sqrt{6\alpha}}{3M^{2}_{pl}}.$ (25) Combining both results one gets $\frac{\lambda}{\sqrt{\alpha}}\sim 2\times 10^{-11}M_{pl}^{4}$. Finally, it is well known that, to match the current observational data, the Cosmological Constant $\Lambda$ must be of the order $10^{-120}M_{pl}^{2}$, since observations give $\Omega_{\Lambda}=\frac{\Lambda M_{pl}^{2}}{3H_{0}^{2}M_{pl}^{2}}\cong 0.7$. ###### Remark IV.1 The potential (23) does not belong to the class of Quintessential Inflation potentials because it needs a Cosmological Constant to account for the current cosmic acceleration. The difference with the potential (5) is that the latter contains two parameters, namely $\lambda$ and $n$, needed to unify the early- and late-time acceleration.
However, the potential $\lambda\sqrt{6\alpha}\left(\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)+1\right)$ only contains the parameter $\lambda$, which is determined by the power spectrum of scalar perturbations. Then, to correctly describe the late-time acceleration one needs another parameter, in this case the Cosmological Constant. ###### Remark IV.2 Note also that for the potential (5) the value of $\alpha$ is not restricted to be small as for the potential (23), because for that potential the main slow-roll parameter satisfies $\epsilon=\frac{n^{2}}{12\alpha\cosh^{4}\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)}$, so in order for inflation to end one needs $\frac{n^{2}}{12\alpha}>1$, and since $n$ is of the order $10^{2}$ (see equation (6) and also [24] for a detailed discussion) this allows a large range of values of $\alpha$. The important point is that this model leads to the same results for particle production as the exponential model studied previously in detail, and thus it leads to the same bounds for the reheating temperature. ## V Conclusions In the present work we have numerically studied the gravitational production of superheavy particles conformally coupled to gravity in $\alpha$-Quintessential Inflation. To calculate the energy density of the produced particles we have used the well-known diagonalisation method, where the key point is the calculation of the time-dependent $\beta$-Bogoliubov coefficient. This coefficient encodes both the polarization effects and the superheavy particles produced during the phase transition from the end of inflation to the beginning of kination. Fortunately, the polarization effects disappear soon after the beginning of kination, which enables us to isolate the part associated with particle production.
In fact, for the relevant modes that contribute to the particle production, the square modulus of the $\beta$-Bogoliubov coefficient is of the order $10^{-9}-10^{-10}$, depending on the mass of the superheavy particles. Once these superheavy particles have been created, they must decay into lighter ones to form a relativistic plasma, which eventually becomes dominant and matches the hot Big Bang universe. Then, two different situations arise, namely when the decay occurs before the end of the kination regime and when the decay occurs after the end of the kination regime. We have shown that in both situations the maximum value of the reheating temperature is quite large, around $10^{9}$ GeV, which dispels the belief that heavy masses suppress particle production and thus lead to an abnormally low reheating temperature. What really happens is that the main contribution to particle production comes from the long-wavelength regime, which is impossible to quantify without a numerical calculation. This is why in many papers the production of superheavy particles is simply disregarded: analytically only ultraviolet effects can be calculated, but, as is well known, the ultraviolet modes do not contribute significantly to particle production. ## Acknowledgments JdH is supported by grant MTM2017-84214-C2-1-P funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, and also in part by the Catalan Government 2017-SGR-247. L.A.S thanks the School of Mathematical Sciences (Queen Mary University of London) for the support provided. ## References
* (1) L. Parker, Phys. Rev. Lett. 21, 562 (1968).
* (2) A. A. Grib, S. G. Mamayev and V. M. Mostepanenko, Gen. Rel. Grav. 7, 535 (1976).
* (3) L. H. Ford, Phys. Rev. D 35, 2955 (1987).
* (4) Ya. B. Zeldovich and A. A. Starobinsky, JETP Lett. 26, 252 (1977).
* (5) D. J. H. Chung, E. W. Kolb and A. Riotto, Phys. Rev. D 59, 023501 (1998) [arXiv:hep-ph/9802238].
* (6) D. J. H.
Chung, P. Crotty, E. W. Kolb and A. Riotto, Phys. Rev. D 64, 043503 (2001) [arXiv:hep-ph/0104100].
* (7) N. D. Birrell and P. C. W. Davies, J. Phys. A: Math. Gen. 13, 2109 (1980).
* (8) Y. Ema, K. Nakayama and Y. Tang, JHEP 09, 135 (2018) [arXiv:1804.07471 [hep-ph]].
* (9) A. Guth, Phys. Rev. D 23, 347 (1981).
* (10) B. Spokoiny, Phys. Lett. B 315, 40 (1993) [arXiv:gr-qc/9306008].
* (11) P. J. E. Peebles and A. Vilenkin, Phys. Rev. D 59, 063505 (1999) [arXiv:astro-ph/9810509].
* (12) K. Dimopoulos, Nucl. Phys. Proc. Suppl. 95, 70 (2001) [arXiv:astro-ph/0012298].
* (13) L. Aresté Saló and J. de Haro, Eur. Phys. J. C 77, no. 11, 798 (2017) [arXiv:1707.02810 [gr-qc]].
* (14) J. de Haro and L. Aresté Saló, Phys. Rev. D 95, 123501 (2017) [arXiv:1702.04212 [gr-qc]].
* (15) J. Haro, J. Amorós and S. Pan, Eur. Phys. J. C 79, no. 6, 505 (2019) [arXiv:1901.00167 [gr-qc]].
* (16) J. Haro, W. Yang and S. Pan, JCAP 01, 023 (2019) [arXiv:1811.07371 [gr-qc]].
* (17) S. Hashiba and J. Yokoyama, JCAP 01, 028 (2019) [arXiv:1809.05410 [gr-qc]].
* (18) S. Hashiba and J. Yokoyama, Phys. Rev. D 99, 043008 (2019) [arXiv:1812.10032 [hep-ph]].
* (19) S. Hashiba and Y. Yamada, JCAP 05, 022 (2021) [arXiv:2101.07634 [hep-th]].
* (20) L. Kofman, A. Linde and A. Starobinsky, Phys. Rev. D 56, 3258-3295 (1997) [arXiv:hep-ph/9704452].
* (21) L. Aresté Saló and J. de Haro, Phys. Rev. D 104, 083544 (2021) [arXiv:2108.10795 [gr-qc]].
* (22) Y. Akrami, R. Kallosh, A. Linde and V. Vardanyan, JCAP 1806, 041 (2018) [arXiv:1712.09693 [hep-th]].
* (23) K. Dimopoulos and C. Owen, JCAP 1706, 027 (2017) [arXiv:1703.00305 [gr-qc]].
* (24) L. Aresté Saló, D. Benisty, E. I. Guendelman and J. de Haro, Phys. Rev. D 103, 123535 (2021) [arXiv:2103.07892 [astro-ph.CO]].
* (25) J. Ellis, D. V. Nanopoulos and S. Sarkar, Nucl. Phys. B 259, 175 (1985).
* (26) J. Ellis, A. Linde and D. Nanopoulos, Phys. Lett. B 118, 59 (1982).
* (27) G. F. Giudice, E. W. Kolb and A. Riotto, Phys. Rev. D 64, 023508 (2001) [arXiv:hep-ph/0005123].
* (28) A. A. Grib, S. G. Mamayev and V. M. Mostepanenko, Friedmann Laboratory Publishing for Theoretical Physics, St. Petersburg (1994).
* (29) G. Felder, L. Kofman and A. Linde, Phys. Rev. D 60, 103505 (1999) [arXiv:hep-ph/9903350].
* (30) P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594, A20 (2016) [arXiv:1502.02114 [astro-ph.CO]].
# Lower Bound for the Simplicial Volume of Closed Manifolds Covered by $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$ Jixiang Fu School of Mathematical Sciences, Fudan University, Shanghai 200433, China<EMAIL_ADDRESS>and Xiaofeng Meng School of Mathematical Sciences, Fudan University, Shanghai 200433, China<EMAIL_ADDRESS> (Date: 2021/3/14) ###### Abstract. We estimate the upper bound for the $\ell^{\infty}$-norm of the volume form on $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$ seen as a class in $H_{c}^{6}(\mathrm{PSL}_{2}\mathbb{R}\times\mathrm{PSL}_{2}\mathbb{R}\times\mathrm{PSL}_{2}\mathbb{R};\mathbb{R})$. This gives a lower bound for the simplicial volume of closed Riemannian manifolds covered by $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$. The proof of these facts yields an algorithm to compute a lower bound for the simplicial volume of closed Riemannian manifolds covered by $\big{(}\mathbb{H}^{2}\big{)}^{n}$. ###### 1991 Mathematics Subject Classification: Primary 53C20; Secondary 55N10 ## 1\. Introduction The simplicial volume is a topological invariant of manifolds introduced by Gromov ([4]) and Thurston ([7]). For an oriented closed connected $n$-dimensional manifold $M$, $H_{n}(M,\mathbb{Z})$ is generated by the fundamental class $[M]_{\mathbb{Z}}$ of $M$. Denote the image of $[M]_{\mathbb{Z}}$ via the change of coefficients map $H_{n}(M,\mathbb{Z})\hookrightarrow H_{n}(M,\mathbb{R})$ by $[M]\in H_{n}(M,\mathbb{R})$. Then the simplicial volume of $M$ is defined by $\|M\|=\mathrm{inf}\left\\{\sum_{i}|a_{i}|\ \middle|\ [\sum_{i}a_{i}\sigma_{i}]=[M]\in H_{n}(M,\mathbb{R})\right\\}.$ Moreover, the simplicial volume can also be defined for a non-orientable closed connected manifold $M$. Let $\widehat{M}$ be the oriented double covering of $M$. Then the simplicial volume of $M$ is defined by $\|M\|=\|\widehat{M}\|/2$.
Although several vanishing and non-vanishing results for the simplicial volume are now available, the exact value of a non-vanishing simplicial volume has only been calculated in a few cases. These include closed hyperbolic manifolds ([4], [7]), Hilbert modular surfaces ([5]) and closed manifolds covered by $\mathbb{H}^{2}\times\mathbb{H}^{2}$ ([2]). For the simplicial volume of products, we have the following result by [4] for closed manifolds $M$ and $N$ with dimensions $m$ and $n$, respectively. $\|M\|\cdot\|N\|\leq\|M\times N\|\leq\binom{n+m}{n}\|M\|\cdot\|N\|.$ (1) When $M$ and $N$ are closed surfaces, it has been proved in [2] that $\|M\times N\|=\frac{3}{2}\|M\|\cdot\|N\|.$ (2) Combining (1) and (2), we get $\dfrac{3}{2}\prod_{i=1}^{3}\|M_{i}\|\leq\Bigg{\|}\prod_{i=1}^{3}M_{i}\Bigg{\|}\leq\dfrac{45}{2}\prod_{i=1}^{3}\|M_{i}\|,$ for closed surfaces $M_{i}$, $i=1,2,3$. In this paper we improve the lower bound in the above estimate to the following one and, furthermore, offer an algorithm to compute a lower bound for the simplicial volume of closed Riemannian manifolds covered by $(\mathbb{H}^{2})^{n}$. ###### Theorem 1. Let $M_{i}$ be a closed surface, where $i=1,2,3$. Then $\Bigg{\|}\prod_{i=1}^{3}M_{i}\Bigg{\|}\geq\frac{45}{11}\prod_{i=1}^{3}\|M_{i}\|.$ Recall the proportionality principle by Gromov in [4]. Let $M$ be an $n$-dimensional closed Riemannian manifold. Then $\|M\|=\dfrac{\mathrm{Vol(M)}}{\|[\omega_{\widetilde{M}}]_{c}^{G}\|_{\infty}},$ (3) where $\widetilde{M}$ is the universal covering of $M$, $G$ is the group of orientation-preserving isometries of $\widetilde{M}$, $\omega_{\widetilde{M}}$ is the volume form of $\widetilde{M}$ and $[\omega_{\widetilde{M}}]_{c}^{G}$ is the volume class of $\widetilde{M}$ viewed as a class in $H^{n}(C_{c}^{*}(\widetilde{M})^{G})$. According to [7], the constant $\|[\omega_{\widetilde{M}}]_{c}^{G}\|_{\infty}$ is $\pi$ for a closed oriented surface $M$ supporting a hyperbolic structure.
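For completeness, the constants $\frac{3}{2}$ and $\frac{45}{2}$ in the last display follow by applying (1) to the pair $(M_{1}\times M_{2},\,M_{3})$, so that $m=4$, $n=2$ and $\binom{6}{2}=15$, and then using (2) for $\|M_{1}\times M_{2}\|$:

```latex
\Bigg\|\prod_{i=1}^{3}M_{i}\Bigg\|
  \geq \|M_{1}\times M_{2}\|\cdot\|M_{3}\|
  \stackrel{(2)}{=} \frac{3}{2}\prod_{i=1}^{3}\|M_{i}\|,
\qquad
\Bigg\|\prod_{i=1}^{3}M_{i}\Bigg\|
  \leq \binom{6}{2}\,\|M_{1}\times M_{2}\|\cdot\|M_{3}\|
  \stackrel{(2)}{=} 15\cdot\frac{3}{2}\prod_{i=1}^{3}\|M_{i}\|
  = \frac{45}{2}\prod_{i=1}^{3}\|M_{i}\|.
```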
According to [4], the simplicial volume of a closed connected manifold admitting a self-map of non-trivial degree (i.e., not equal to $-1$, $0$, or $1$) vanishes. This implies that the simplicial volume of a closed surface covered by the $2$-sphere or the torus vanishes. Therefore, Theorem 1 naturally holds when one of the universal coverings of $M_{1}$, $M_{2}$ and $M_{3}$ is the $2$-sphere or the Euclidean plane. To prove Theorem 1, we only need to consider the case where the universal covering of $M_{1}\times M_{2}\times M_{3}$ is $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$. Hence the following theorem implies Theorem 1. ###### Theorem 2. Let $M$ be a closed Riemannian manifold whose universal covering is $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$. Then $\|M\|\geq\frac{45}{11\pi^{3}}\mathrm{Vol}(M).$ To prove Theorems 1 and 2, we recall a way to compute the proportionality constant $\frac{\|M\|}{\mathrm{Vol}(M)}$ provided by Bucher-Karlsson in [2] for closed locally symmetric spaces of non-compact type. Let $M$ be an $n$-dimensional closed locally symmetric space of non-compact type. We use the same notations as in (3). Let $\mathcal{J}$ be the Van Est isomorphism mapping from $A^{n}(\widetilde{M})^{G}$ to $H_{c}^{n}(G,\mathbb{R})$. Then $\|M\|=\dfrac{\mathrm{Vol}(M)}{\|\mathcal{J}\big{(}\omega_{\widetilde{M}}\big{)}\|_{\infty}}.$ We establish the desired inequality as follows. ###### Theorem 3. Let $\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}}\in A^{6}(\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2},\mathbb{R})$ be the Riemannian volume form on $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$. Let $\mathcal{J}$ be the Van Est isomorphism mapping from $A^{6}(\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2},\mathbb{R})$ to $H_{c}^{6}(\mathrm{PSL}_{2}\mathbb{R}\times\mathrm{PSL}_{2}\mathbb{R}\times\mathrm{PSL}_{2}\mathbb{R},\mathbb{R})$.
Then $\|\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})\|_{\infty}\leq\frac{11}{45}\pi^{3}.$ We briefly outline the content of each section. To estimate the upper bound for $\|\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})\|_{\infty}$, we need an explicit cocycle in the class $\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})$. In Section 2 we recall the definition of continuous (bounded) cohomology of Lie groups and select a cocycle $\pi^{3}\cdot\Theta_{\theta}$ representing $\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})$. In Section 3 we recall another bounded cohomology $H_{\delta}^{k}\big{(}C_{b}^{*}(\mathbb{T}^{3},\mathbb{R})^{(PSL_{2}\mathbb{R})^{3}}\big{)}$ which is isometrically isomorphic to the continuous bounded cohomology of $(PSL_{2}\mathbb{R})^{3}$. This allows us to estimate the upper bound of $\|[\Theta_{\theta}]\|_{\infty}$ by calculating the $l^{\infty}$-norm of a selected cocycle $\Theta$ in $C_{b}^{6}(\mathbb{T}^{3},\mathbb{R})^{(PSL_{2}\mathbb{R})^{3}}$. We show that $\|\Theta\|_{\infty}$ can be computed by reducing the computation to a finite enumeration problem. This proves Theorem 3. Finally, in Section 4 we give several conjectures related to the simplicial volume of closed manifolds covered by $\big{(}\mathbb{H}^{2}\big{)}^{n}$. ## 2\. The volume form on $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$ Before giving a cocycle representing $\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})$, we first recall some definitions. ### 2.1. The continuous (bounded) cohomology of Lie groups Let $G$ be a connected Lie group.
For every $k\in\mathbb{N}$, we define the complex $C^{k}(G,\mathbb{R})$ by $C^{k}(G,\mathbb{R})=\left\\{f:G^{k+1}\rightarrow\mathbb{R}\ \middle|\ \text{f is continuous}\right\\}.$ Define the coboundary $d^{k}:C^{k}(G,\mathbb{R})\rightarrow C^{k+1}(G,\mathbb{R})$ by $(d^{k}f)(g_{0},...,g_{k+1})=\sum_{i=0}^{k+1}(-1)^{i}f(g_{0},...,\hat{g}_{i},...,g_{k+1}),$ for all $f$ in $C^{k}(G,\mathbb{R})$ and all $(g_{0},...,g_{k+1})$ in $G^{k+2}$. Define the $G$-action on $C^{k}(G,\mathbb{R})$ by $(h\cdot f)(g_{0},...,g_{k})=f(h^{-1}g_{0},...,h^{-1}g_{k}),$ for all $f$ in $C^{k}(G,\mathbb{R})$, all $h\in G$ and all $(g_{0},...,g_{k})\in G^{k+1}$. Let $C^{k}(G,\mathbb{R})^{G}$ be the $G$-invariant elements in $C^{k}(G,\mathbb{R})$. Set $C_{b}^{k}(G,\mathbb{R})$ to be the subspace of $C^{k}(G,\mathbb{R})$ consisting of all bounded functions and $d_{b}^{k}=d^{k}|_{C_{b}^{k}(G,\mathbb{R})}$. Restrict the $G$-action to $C_{b}^{k}(G,\mathbb{R})$. Then $C_{b}^{k}(G,\mathbb{R})^{G}\coloneqq C_{b}^{k}(G,\mathbb{R})\cap C^{k}(G,\mathbb{R})^{G}$ is the subspace of $C_{b}^{k}(G,\mathbb{R})$ with $G$-invariant elements. Therefore $\big{(}C^{k}(G,\mathbb{R})^{G},d\big{)}$ and $\big{(}C_{b}^{k}(G,\mathbb{R})^{G},d_{b}\big{)}$ induce the continuous cohomology $H_{c}^{k}(G,\mathbb{R})$ of $G$ and the continuous bounded cohomology $H_{cb}^{k}(G,\mathbb{R})$ of $G$, respectively. For all $f\in C_{b}^{k}(G,\mathbb{R})$, let $\|f\|_{\infty}$ be the $l^{\infty}$-norm of $f$. This induces the semi-norms of $H_{cb}^{k}(G,\mathbb{R})$ and $H_{c}^{k}(G,\mathbb{R})$, both of which we still denote by $\|\cdot\|_{\infty}$. Here we use a form of the Van Est isomorphism introduced in [3]. Let $G$ be a Lie group, $K<G$ be a maximal compact subgroup of $G$ and $X=G/K$ be the associated symmetric space. For $k\in\mathbb{N}$, define $\mathrm{A}^{k}(X,\mathbb{R})$ to be the space of differential $k$-forms on $X$. The Lie group $G$ acts on $\mathrm{A}^{k}(X,\mathbb{R})$ by pullbacks.
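The coboundary $d^{k}$ above is purely combinatorial, so the chain-complex identity $d^{k+1}\circ d^{k}=0$ can be checked mechanically. A minimal sketch (the finite "group" $\{0,1,2\}$ and the sample cochain are illustrative choices, not from the text; continuity and $G$-invariance play no role in the identity):

```python
from itertools import product

def coboundary(f, k):
    """Homogeneous coboundary of a k-cochain f:
    (d^k f)(g_0,...,g_{k+1}) = sum_i (-1)^i f(..., g_i omitted, ...)."""
    def df(*gs):  # expects k + 2 arguments
        return sum((-1) ** i * f(*(gs[:i] + gs[i + 1:])) for i in range(k + 2))
    return df

# Check d o d = 0 on an arbitrary 1-cochain over a finite set.
f = lambda g0, g1: (g0 - g1) ** 2 + g0
ddf = coboundary(coboundary(f, 1), 2)
assert all(ddf(*gs) == 0 for gs in product(range(3), repeat=4))
```

The alternating-sign cancellation that makes `ddf` vanish is exactly the one used throughout Section 3.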
Let $\mathrm{A}^{k}(X,\mathbb{R})^{G}$ be the subspace of $\mathrm{A}^{k}(X,\mathbb{R})$ with $G$-invariant elements. Denote the Van Est isomorphism by $\mathcal{J}:\mathrm{A}^{k}(X,\mathbb{R})^{G}\stackrel{{\scriptstyle\cong}}{{\longrightarrow}}H_{c}^{k}(G,\mathbb{R})$. ### 2.2. The $2$-form representing $\mathcal{J}(\omega_{\mathbb{H}^{2}})$ Let $\omega_{\mathbb{H}^{2}}\in A^{2}(\mathbb{H}^{2},\mathbb{R})$ be the volume form on $\mathbb{H}^{2}$. View $\mathbb{H}^{2}$ as the upper half-plane $\big{\\{}(x,y)\in\mathbb{R}^{2}|y>0\big{\\}}\subset\mathbb{C}$. Then $PSL_{2}\mathbb{R}$ acts on $\mathbb{R}^{2}$ by the Möbius transformations. This action can be restricted to $\big{\\{}(x,0)\in\mathbb{R}^{2}\big{\\}}$. Notice that through $z\mapsto\dfrac{z-i}{z+i}$ the upper half-plane is identified with the unit disc $\\{|z|<1\\}\subset\mathbb{C}$ and the $PSL_{2}\mathbb{R}$-action on $\big{\\{}(x,0)\in\mathbb{R}^{2}\big{\\}}$ can induce a $PSL_{2}\mathbb{R}$-action on $\mathbb{S}^{1}$. Define a function $Or$ by $\begin{array}[]{cccl}\mathrm{Or}:&(\mathbb{S}^{1})^{3}&\longrightarrow&\mathbb{R}\\\ &(\theta_{0},\theta_{1},\theta_{2})&\longmapsto&\begin{cases}+1&\text{if }\theta_{0},\theta_{1},\theta_{2}\text{ are distinct and positively oriented,}\\\ -1&\text{if }\theta_{0},\theta_{1},\theta_{2}\text{ are distinct and negatively oriented,}\\\ 0&\text{if }\theta_{0},\theta_{1},\theta_{2}\text{ are not distinct}.\end{cases}\end{array}$ We fix a point $\theta$ in $\mathbb{S}^{1}$. Define a cocycle $\mathrm{Or}_{\theta}$ in $C_{b}^{2}(PSL_{2}\mathbb{R},\mathbb{R})^{PSL_{2}\mathbb{R}}$ by $\mathrm{Or}_{\theta}(g_{0},g_{1},g_{2})=\mathrm{Or}(g_{0}\theta,g_{1}\theta,g_{2}\theta)$ for all $(g_{0},g_{1},g_{2})$ in $(PSL_{2}\mathbb{R})^{3}$. Then it is easy to check that $\mathcal{J}(\omega_{\mathbb{H}^{2}})=\pi\cdot[Or_{\theta}]$. ### 2.3. The $6$-form representing $\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})$ Recall the definition of cup product. 
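A concrete sketch of $\mathrm{Or}$, parametrizing $\mathbb{S}^{1}$ by $[0,1)$ via $x\mapsto e^{2\pi ix}$ (so increasing $x$ is the positive orientation; this parametrization is an illustrative assumption). The simplicial coboundary of $\mathrm{Or}$ vanishes, which is what makes $\mathrm{Or}_{\theta}$ a cocycle, and this can be checked numerically:

```python
import random

def orient(t0, t1, t2):
    """Or on S^1 = [0,1): +1 / -1 for positively / negatively oriented
    distinct triples, 0 if the three points are not distinct."""
    if t0 == t1 or t1 == t2 or t0 == t2:
        return 0
    # (t0, t1, t2) is positively oriented iff, travelling counterclockwise
    # from t0, we reach t1 before t2.
    return 1 if (t1 - t0) % 1.0 < (t2 - t0) % 1.0 else -1

def delta_orient(t0, t1, t2, t3):
    # Simplicial coboundary of Or evaluated on a quadruple of points.
    return (orient(t1, t2, t3) - orient(t0, t2, t3)
            + orient(t0, t1, t3) - orient(t0, t1, t2))

random.seed(1)
for _ in range(500):
    assert delta_orient(*(random.random() for _ in range(4))) == 0  # Or is a cocycle
```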
Take a $p$-cochain $c^{p}$ in $C^{p}\big{(}(PSL_{2}\mathbb{R})^{3},\mathbb{R}\big{)}$ and a $q$-cochain $c^{q}$ in $C^{q}\big{(}(PSL_{2}\mathbb{R})^{3},\mathbb{R}\big{)}$. The cup product $c^{p}\cup c^{q}$ is defined by $(c^{p}\cup c^{q})(g_{0},...,g_{p+q})=c^{p}(g_{0},...,g_{p})\cdot c^{q}(g_{p},...,g_{p+q}),$ for all $g_{i}=(g_{i}^{1},g_{i}^{2},g_{i}^{3})$ in $(PSL_{2}\mathbb{R})^{3}$, $i=0,...,p+q$. This induces a cup product on classes. Recall that the alternation of a $p$-cochain $c^{p}$ in $C^{p}\big{(}(PSL_{2}\mathbb{R})^{3},\mathbb{R}\big{)}$ is $\mathrm{Alt}(c^{p})(g_{0},...,g_{p})=\dfrac{1}{(p+1)!}\sum_{\sigma\in Sym(p+1)}sign(\sigma)c^{p}(g_{\sigma(0)},...,g_{\sigma(p)})$ for all $g_{i}$ in $(PSL_{2}\mathbb{R})^{3}$, $i=0,...,p$. Note that for a $p$-cocycle $f$ we have $[\mathrm{Alt}(f)]=[f]$ and $\|\mathrm{Alt}(f)\|_{\infty}\leq\|f\|_{\infty}$. Let $p^{\mathbb{H}}_{i}$ be the $i$-th projection from $\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$ to $\mathbb{H}^{2}$ for $i=1,2,3$. Let $p^{PSL_{2}\mathbb{R}}_{i}$ be the $i$-th projection from $PSL_{2}\mathbb{R}\times PSL_{2}\mathbb{R}\times PSL_{2}\mathbb{R}$ to $PSL_{2}\mathbb{R}$ for $i=1,2,3$. Let $p^{\mathbb{T}}_{i}$ be the $i$-th projection from $\mathbb{T}^{3}\times\mathbb{T}^{3}\times\mathbb{T}^{3}$ to $\mathbb{T}^{3}$ for $i=1,2,3$, where $\mathbb{T}^{3}$ is $\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S}^{1}$. Define a function $\Theta:(\mathbb{T}^{3})^{7}\rightarrow\mathbb{R}$ by $\begin{split}\Theta(\theta_{0},...,\theta_{6})&=\mathrm{Alt}\big{(}(p_{1}^{\mathbb{T}})^{*}(Or)\cup(p_{2}^{\mathbb{T}})^{*}(Or)\cup(p_{3}^{\mathbb{T}})^{*}(Or)\big{)}(\theta_{0},...,\theta_{6})\\\ &=\dfrac{1}{7!}\sum_{\sigma\in Sym(7)}sign(\sigma)\prod_{i=1}^{3}Or(\theta_{\sigma(2i-2)}^{i},\theta_{\sigma(2i-1)}^{i},\theta_{\sigma(2i)}^{i})\end{split}$ (4) for all $\theta_{i}=(\theta_{i}^{1},\theta_{i}^{2},\theta_{i}^{3})$ in $\mathbb{T}^{3}$, $i=0,...,6$. Let $(PSL_{2}\mathbb{R})^{3}$ act on $\mathbb{T}^{3}$ diagonally.
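A direct (if brute-force) sketch of formula (4), summing over all $7!$ permutations with the orientation cocycle on each circle factor. The $[0,1)$ parametrization of $\mathbb{S}^{1}$ is an assumption for illustration; by construction the result is alternating and, by Proposition 5 below, bounded by $11/45$ in absolute value.

```python
from itertools import permutations

def orient(a, b, c):
    # Or on S^1 = [0,1): 0 unless distinct, else +1/-1 for the two orientations.
    if a == b or b == c or a == c:
        return 0
    return 1 if (b - a) % 1.0 < (c - a) % 1.0 else -1

def perm_sign(p):
    # Sign of a permutation via inversion count.
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def theta(pts):
    """Formula (4): pts is a list of 7 points of T^3, each a triple in [0,1)^3."""
    total = 0
    for s in permutations(range(7)):
        total += (perm_sign(s)
                  * orient(pts[s[0]][0], pts[s[1]][0], pts[s[2]][0])   # first circle
                  * orient(pts[s[2]][1], pts[s[3]][1], pts[s[4]][1])   # second circle
                  * orient(pts[s[4]][2], pts[s[5]][2], pts[s[6]][2]))  # third circle
    return total / 5040  # 7! = 5040
```

Each call costs $7!=5040$ terms, so this is only practical for spot checks; Section 3 collapses the sum to $1260$ terms.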
Fix a point $\theta=(\theta^{1},\theta^{2},\theta^{3})$ in $\mathbb{T}^{3}$. Define a $6$-form $\Theta_{\theta}$ in $C^{6}\big{(}(PSL_{2}\mathbb{R})^{3},\mathbb{R}\big{)}$ by $\Theta_{\theta}(g_{0},...,g_{6})=\Theta(g_{0}\theta,...,g_{6}\theta)$ for all $g_{i}=(g_{i}^{1},g_{i}^{2},g_{i}^{3})$ in $(PSL_{2}\mathbb{R})^{3}$, $i=0,...,6$. ###### Proposition 4. $\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})=[\pi^{3}\cdot\Theta_{\theta}]$. ###### Proof. By (4), we have $\begin{split}[\pi^{3}\cdot\Theta_{\theta}]&=\pi^{3}\cdot\Big{[}Alt\big{(}(p^{PSL_{2}\mathbb{R}}_{1})^{*}(Or_{\theta^{1}})\cup(p^{PSL_{2}\mathbb{R}}_{2})^{*}(Or_{\theta^{2}})\cup(p^{PSL_{2}\mathbb{R}}_{3})^{*}(Or_{\theta^{3}})\big{)}\Big{]}\\\ &=\pi^{3}\cdot\Big{[}(p^{PSL_{2}\mathbb{R}}_{1})^{*}(Or_{\theta^{1}})\cup(p^{PSL_{2}\mathbb{R}}_{2})^{*}(Or_{\theta^{2}})\cup(p^{PSL_{2}\mathbb{R}}_{3})^{*}(Or_{\theta^{3}})\Big{]}\\\ &=\Big{[}\pi\cdot(p^{PSL_{2}\mathbb{R}}_{1})^{*}(Or_{\theta^{1}})\Big{]}\cup\Big{[}\pi\cdot(p^{PSL_{2}\mathbb{R}}_{2})^{*}(Or_{\theta^{2}})\Big{]}\cup\Big{[}\pi\cdot(p^{PSL_{2}\mathbb{R}}_{3})^{*}(Or_{\theta^{3}})\Big{]}\\\ &=\mathcal{J}\big{(}(p^{\mathbb{H}}_{1})^{*}(\omega_{\mathbb{H}^{2}})\big{)}\cup\mathcal{J}\big{(}(p^{\mathbb{H}}_{2})^{*}(\omega_{\mathbb{H}^{2}})\big{)}\cup\mathcal{J}\big{(}(p^{\mathbb{H}}_{3})^{*}(\omega_{\mathbb{H}^{2}})\big{)}\\\ &=\mathcal{J}\big{(}(p^{\mathbb{H}}_{1})^{*}(\omega_{\mathbb{H}^{2}})\wedge(p^{\mathbb{H}}_{2})^{*}(\omega_{\mathbb{H}^{2}})\wedge(p^{\mathbb{H}}_{3})^{*}(\omega_{\mathbb{H}^{2}})\big{)}\\\ &=\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}}).\end{split}$ Therefore this proposition is proved. ∎ ## 3\. Upper bound of $\|\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}}\|_{\infty}$ Denote $(PSL_{2}\mathbb{R})^{3}$ by $G$. For the convenience of calculation, we introduce another complex $\big{(}C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})^{G},\delta\big{)}$. 
For every $k\in\mathbb{N}$, define $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})$ by $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})=\left\\{f:(\mathbb{T}^{3})^{k+1}\rightarrow\mathbb{R}\ \middle|\ \text{f is continuous, measurable and bounded}\right\\}.$ The coboundary $\delta:C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})\rightarrow C_{b}^{k+1}(\mathbb{T}^{3},\mathbb{R})$ is defined by $(\delta f)(\theta_{0},...,\theta_{k+1})=\sum_{i=0}^{k+1}(-1)^{i}f(\theta_{0},...,\hat{\theta}_{i},...,\theta_{k+1})$ for all $f$ in $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})$ and all $\theta_{i}$ in $\mathbb{T}^{3}$, $i=0,...,k+1$. Define the action of $G$ on $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})$ by $(g\cdot f)(\theta_{0},...,\theta_{k})=f(g^{-1}\theta_{0},...,g^{-1}\theta_{k}).$ Denote the $G$-invariant elements in $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})$ by $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})^{G}$. Then the complex $\big{(}C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})^{G},\delta\big{)}$ induces a cohomology $H_{\delta}^{k}\big{(}C_{b}^{*}(\mathbb{T}^{3},\mathbb{R})^{G}\big{)}$. Let $\|f\|_{\infty}$ be the $l^{\infty}$-norm of $f$ in $C_{b}^{k}(\mathbb{T}^{3},\mathbb{R})$. This induces the semi-norm of $H_{\delta}^{k}\big{(}C_{b}^{*}(\mathbb{T}^{3},\mathbb{R})^{G}\big{)}$, which we still denote by $\|\cdot\|_{\infty}$. Let $P$ be the subgroup of $PSL_{2}\mathbb{R}$ defined below: $P=\left\\{\begin{pmatrix}a&0\\\ c&a^{-1}\end{pmatrix}\in SL_{2}\mathbb{R}\ \middle|\ a\in\mathbb{R}\setminus\\{0\\},\ c\in\mathbb{R}\right\\}\bigg{/}\\{\pm 1\\}.$ Notice that $\mathbb{T}^{3}=(PSL_{2}\mathbb{R})^{3}\big{/}P^{3}$ and $P$ is amenable. Using [6, Corollary 7.5.9], $\big{(}H_{cb}^{k}(G,\mathbb{R}),\|\ \|_{\infty}\big{)}$ is isometrically isomorphic to $\big{(}H_{\delta}^{k}(C_{b}^{*}(\mathbb{T}^{3},\mathbb{R})^{G}),\|\ \|_{\infty}\big{)}$. Note that this isomorphism maps $[\Theta_{\theta}]$ to $[\Theta]$.
Hence $\|\mathcal{J}(\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}})\|_{\infty}=\inf\left\\{\|f\|_{\infty}\ \middle|\ f\in[\Theta_{\theta}]\right\\}\leq\|\Theta\|_{\infty}.$ In conclusion, to estimate the upper bound of $\|\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}}\|_{\infty}$ we need to calculate $\|\Theta\|_{\infty}$. ###### Proposition 5. $\|\Theta\|_{\infty}=\dfrac{11}{45}$. Before using an algorithm to compute the exact value of $\|\Theta\|_{\infty}$, we need to simplify formula (4); otherwise the computation would take prohibitively long. For all $(\theta_{0},...,\theta_{6})$ in $(\mathbb{T}^{3})^{7}$, we have $\begin{split}\Theta(\theta_{0},...,\theta_{6})&=\dfrac{1}{7!}\sum_{\sigma\in Sym(7)}sign(\sigma)Or\Big{(}\theta_{\sigma(0)}^{1},\theta_{\sigma(1)}^{1},\theta_{\sigma(2)}^{1}\Big{)}\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\Big{(}\theta_{\sigma(2)}^{2},\theta_{\sigma(3)}^{2},\theta_{\sigma(4)}^{2}\Big{)}Or\Big{(}\theta_{\sigma(4)}^{3},\theta_{\sigma(5)}^{3},\theta_{\sigma(6)}^{3}\Big{)}\\\ &=\dfrac{1}{7!}\sum_{\sigma\in Sym(7)}sign(\sigma)Or\Big{(}\theta_{\sigma(0)}^{1},\theta_{\sigma(1)}^{1},\theta_{\sigma(2)}^{1}\Big{)}\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\Big{(}\theta_{\sigma(0)}^{2},\theta_{\sigma(3)}^{2},\theta_{\sigma(4)}^{2}\Big{)}Or\Big{(}\theta_{\sigma(4)}^{3},\theta_{\sigma(5)}^{3},\theta_{\sigma(6)}^{3}\Big{)}\\\ &=\dfrac{4}{7!}\sum_{\begin{subarray}{c}\sigma\in Sym(7)\\\ \sigma(1)<\sigma(2),\ \sigma(5)<\sigma(6)\end{subarray}}sign(\sigma)Or\Big{(}\theta_{\sigma(0)}^{1},\theta_{\sigma(1)}^{1},\theta_{\sigma(2)}^{1}\Big{)}\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\Big{(}\theta_{\sigma(0)}^{2},\theta_{\sigma(3)}^{2},\theta_{\sigma(4)}^{2}\Big{)}Or\Big{(}\theta_{\sigma(4)}^{3},\theta_{\sigma(5)}^{3},\theta_{\sigma(6)}^{3}\Big{)}.\end{split}$ (5) View $\mathbb{S}^{1}$ as the quotient space $[0,1]/\sim_{f}$, where $f(x)=e^{2\pi ix}$.
Denote $\theta_{i}$ by $\big{(}[x_{i}],[y_{i}],[z_{i}]\big{)}$, where $x_{i},y_{i},z_{i}\in[0,1]$, for $i=0,...,6$. To prove Proposition 5, we need the following lemmas. ###### Lemma 6. If the elements in $\\{\theta_{j}^{i}\\}_{j=0}^{6}$ are pairwise distinct for $i=1,2,3$, then $|\Theta(\theta_{0},...,\theta_{6})|\leq\dfrac{11}{45}.$ ###### Proof. Since $\Theta$ is alternating, we can assume that $x_{0}<...<x_{6}<1$. We notice that for all $\theta_{i}=[x_{i}]\in\mathbb{S}^{1}$, where $i=0,1,2$, $Or(\theta_{0},\theta_{1},\theta_{2})=sign(x_{0},x_{1},x_{2}).$ Here we abuse the notation $sign$. If the elements of $\\{x_{i}\\}_{i=0}^{k}$ are pairwise distinct, assuming $x_{0}<...<x_{k}$, we define $sign(x_{i_{0}},...,x_{i_{k}})$ to be $sign(i_{0},...,i_{k})$. If the elements of $\\{x_{i}\\}_{i=0}^{k}$ are not pairwise distinct, we define $sign(x_{i_{0}},...,x_{i_{k}})$ to be $0$. Therefore we only need to consider the order of $\\{y_{i}\\}$ and $\\{z_{i}\\}$. Moreover, notice that if we take $g=(g_{1},g_{2},g_{3})$ in $(PSL_{2}\mathbb{R})^{3}$ with each $g_{i}$ acting as a rotation or reflection of $\mathbb{S}^{1}$ for $i=1,2,3$, then $|(g\cdot\Theta)(\theta_{0},...,\theta_{6})|=|\Theta(\theta_{0},...,\theta_{6})|$ (6) for all $\theta_{i}\in\mathbb{T}^{3}$, $i=0,...,6$. Thus we can assume $y_{0}=z_{0}=0$, $y_{1}<y_{2}$ and $z_{1}<z_{2}$. Therefore we only need to consider $360$ possible orders of $\\{y_{i}\\}_{i=0}^{6}$, as well as of $\\{z_{i}\\}_{i=0}^{6}$. We denote these possible orders by a $7\times 360$ matrix $P$, where each column represents a possible order. For example, a column $(2,3,5,1,4,6,7)^{t}$ represents $y_{3}<y_{0}<y_{1}<y_{4}<y_{2}<y_{5}<y_{6}$ or $z_{3}<z_{0}<z_{1}<z_{4}<z_{2}<z_{5}<z_{6}$. Let $S$ be a $7\times 1260$ matrix, where each column represents a permutation $\sigma$ in $Sym(1,...,7)$ satisfying $\sigma(2)<\sigma(3)$ and $\sigma(6)<\sigma(7)$.
To further simplify the computation, we define two $1260\times 360$ matrices $P_{y360}$ and $P_{z360}$ by $(P_{y360})_{i,j}=sign\big{(}P_{(S_{1,i}),j},P_{(S_{4,i}),j},P_{(S_{5,i}),j}\big{)}$ and $(P_{z360})_{i,j}=sign\big{(}P_{(S_{5,i}),j},P_{(S_{6,i}),j},P_{(S_{7,i}),j}\big{)}.$ We define a $1260\times 1$ matrix $P_{sign}$ by $(P_{sign})_{i,1}=sign(S_{1,i},...,S_{7,i})\cdot sign(S_{1,i},S_{2,i},S_{3,i}).$ We apply the input values $P_{y}=P_{y360}$, $P_{z}=P_{z360}$ and $P_{sign}=P_{sign}$ to Algorithm 1 and denote the corresponding output by $O_{360}$. Then we get all $360\times 360$ possible values (possibly repeated) $O_{360}$ of $\Theta$.

Algorithm 1: possible values of $\Theta(\theta_{0},...,\theta_{6})$.
Input: matrices $P_{y}$, $P_{z}$ and $P_{sign}$;
Output: matrix $O$;
1: Set $p$ and $q$ to be the number of columns of $P_{y}$ and $P_{z}$, respectively;
2: Set $O$ to be a $p\times q$ zero matrix;
3: for each $i=1,...,p$ do
4: for each $j=1,...,q$ do
5: for each $l=1,...,1260$ do
6: $(O)_{i,j}=(O)_{i,j}+\dfrac{1}{1260}(P_{sign})_{l,1}\cdot(P_{y})_{l,i}\cdot(P_{z})_{l,j}$;
7: end for
8: end for
9: end for

Hence in this case $|\Theta(\theta_{0},...,\theta_{6})|\leq\dfrac{11}{45}$. For $(\theta_{0},...,\theta_{6})$ defined as in Figure 1, $\Theta$ attains its maximal value $\dfrac{11}{45}$. [Figure 1: three circles marked with the points $[x_{i}]$, $[y_{i}]$ and $[z_{i}]$, $i=0,...,6$.] ∎ ###### Lemma 7. If $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ has exactly $3$ distinct points in $\mathbb{S}^{1}$, then $|\Theta(\theta_{0},...,\theta_{6})|<\dfrac{11}{45}.$ ###### Proof.
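As a sanity check on the reduction behind this enumeration, formula (5) ($1260$ permutations, factor $4$) can be compared numerically against the direct definition (4) (all $5040$ permutations). A hedged sketch: the $[0,1)$ parametrization of $\mathbb{S}^{1}$ and the sample configuration are illustrative assumptions.

```python
from itertools import permutations

def orient(a, b, c):
    if a == b or b == c or a == c:
        return 0
    return 1 if (b - a) % 1.0 < (c - a) % 1.0 else -1

def perm_sign(p):
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

# The 7!/4 = 1260 permutations appearing in formula (5) (0-indexed constraints).
PERMS_1260 = [s for s in permutations(range(7)) if s[1] < s[2] and s[5] < s[6]]
assert len(PERMS_1260) == 1260

def theta_direct(pts):
    """Formula (4): the full alternating sum over Sym(7)."""
    return sum(perm_sign(s)
               * orient(pts[s[0]][0], pts[s[1]][0], pts[s[2]][0])
               * orient(pts[s[2]][1], pts[s[3]][1], pts[s[4]][1])
               * orient(pts[s[4]][2], pts[s[5]][2], pts[s[6]][2])
               for s in permutations(range(7))) / 5040

def theta_reduced(pts):
    """Formula (5): the same sum collapsed onto 1260 permutations, factor 4."""
    return 4 * sum(perm_sign(s)
                   * orient(pts[s[0]][0], pts[s[1]][0], pts[s[2]][0])
                   * orient(pts[s[0]][1], pts[s[3]][1], pts[s[4]][1])
                   * orient(pts[s[4]][2], pts[s[5]][2], pts[s[6]][2])
                   for s in PERMS_1260) / 5040
```

Looping `theta_reduced` over the $360\times 360$ order types, with the $x$-coordinates fixed in increasing order, is exactly what Algorithm 1 tabulates, and by Proposition 5 the maximum absolute value found is $11/45$.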
By (5), we have $\begin{split}\Theta(\theta_{0},...,\theta_{6})&=\dfrac{1}{7!}\sum_{\sigma\in Sym(7)}sign(\sigma)Or\big{(}[x_{\sigma(0)}],[x_{\sigma(1)}],[x_{\sigma(2)}]\big{)}\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ Or\big{(}[y_{\sigma(0)}],[y_{\sigma(3)}],[y_{\sigma(4)}]\big{)}Or\big{(}[z_{\sigma(4)}],[z_{\sigma(5)}],[z_{\sigma(6)}]\big{)}\\\ &=\dfrac{1}{7!}\sum_{\begin{subarray}{c}a<b<c\\\ \\{a,b,c\\}\subset\\{0,...,6\\}\end{subarray}}2\cdot Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\\{i,j,k,l\\}=\\{0,...,6\\}\backslash\\{a,b,c\\}\\\ k<l\end{subarray}}\\\ &\ \ \ \ 2\cdot sign(a,b,c,i,j,k,l)\Big{[}Or\big{(}[y_{a}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}+\\\ &\ \ \ \ Or\big{(}[y_{b}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}+\\\ &\ \ \ \ Or\big{(}[y_{c}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}\Big{]}.\end{split}$ ###### Claim 1. For three fixed elements $a,b,c\in\\{0,...,6\\}$ satisfying $a<b<c$ there are at least $\dfrac{1}{3}$ of the items in $\begin{split}&Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\\{i,j,k,l\\}=\\{0,...,6\\}\backslash\\{a,b,c\\}\\\ k<l\end{subarray}}2\cdot sign(a,b,c,i,j,k,l)\\\ &\ \ \ \ \ \ \ \ \ \ \Big{[}Or\big{(}[y_{a}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}+Or\big{(}[y_{b}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}+\\\ &\ \ \ \ \ \ \ \ \ \ Or\big{(}[y_{c}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}\Big{]}\end{split}$ that vanish or cancel with each other. ###### Proof of Claim 1. Fix $\\{a,b,c\\}$ in $\\{0,...,6\\}$, and denote $\\{0,...,6\\}\backslash\\{a,b,c\\}$ by $\\{i_{0},...,i_{3}\\}$. We can assume that $z_{0}\leq z_{1}\leq...\leq z_{6}$ as $\Theta$ is alternating. Assuming $i_{0}<...<i_{3}$, we have $z_{i_{0}}\leq...\leq z_{i_{3}}$.
If $\theta_{i_{0}}^{3}$, $\theta_{i_{1}}^{3}$, $\theta_{i_{2}}^{3}$ and $\theta_{i_{3}}^{3}$ are four pairwise distinct points in $\mathbb{S}^{1}$, we have $\begin{split}&\ \ \ Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\\{i,j,k,l\\}=\\{0,...,6\\}\backslash\\{a,b,c\\}\\\ k<l\end{subarray}}2\cdot sign(a,b,c,i,j,k,l)\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\big{(}[y_{a}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}\\\ &=Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\tau\in Sym(i_{0},...,i_{3})\\\ \tau(i_{2})<\tau(i_{3})\end{subarray}}2\cdot sign(a,b,c,\tau)sign(\tau(i_{1}),\tau(i_{2}),\tau(i_{3}))\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\big{(}[y_{a}],[y_{\tau(i_{0})}],[y_{\tau(i_{1})}]\big{)}\\\ &=Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\tau\in Sym(i_{0},...,i_{3})\\\ \tau(i_{2})<\tau(i_{3})\end{subarray}}2\cdot(-1)^{a+b+c+\alpha(i_{0})}Or\big{(}[y_{a}],[y_{\tau(i_{0})}],[y_{\tau(i_{1})}]\big{)},\end{split}$ (7) where for $\sigma(i_{0})=i_{p}$, $\alpha\big{(}\sigma(i_{0})\big{)}=p$. Notice that the items corresponding to $\sigma=(a,b,c,i_{3},i_{1},i_{0},i_{2})$ cancel the items corresponding to $\sigma=(a,b,c,i_{1},i_{3},i_{0},i_{2})$, and the items corresponding to $\sigma=(a,b,c,i_{2},i_{0},i_{1},i_{3})$ cancel the items corresponding to $\sigma=(a,b,c,i_{0},i_{2},i_{1},i_{3})$. Therefore at least $\dfrac{1}{3}$ of the items in (7) cancel with each other. If at least two of the points $\theta_{i_{0}}^{3}$, $\theta_{i_{1}}^{3}$, $\theta_{i_{2}}^{3}$ and $\theta_{i_{3}}^{3}$ coincide, then $\dfrac{\binom{2}{1}}{\binom{4}{3}}=\dfrac{1}{2}$ of the items in (7) vanish.
The same goes for $\begin{split}&\ \ \ Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\\{i,j,k,l\\}=\\{0,...,6\\}\backslash\\{a,b,c\\}\\\ k<l\end{subarray}}2\cdot sign(a,b,c,i,j,k,l)\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\big{(}[y_{b}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}\end{split}$ and $\begin{split}&\ \ \ Or\big{(}[x_{a}],[x_{b}],[x_{c}]\big{)}\sum_{\begin{subarray}{c}\\{i,j,k,l\\}=\\{0,...,6\\}\backslash\\{a,b,c\\}\\\ k<l\end{subarray}}2\cdot sign(a,b,c,i,j,k,l)\\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Or\big{(}[y_{c}],[y_{i}],[y_{j}]\big{)}Or\big{(}[z_{j}],[z_{k}],[z_{l}]\big{)}.\end{split}$ Hence we get the required result. Now that we have these facts, we can prove this lemma in 4 cases. ###### Case 1. There are $1$, $1$ and $5$ points in $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ that respectively have the same values in $\mathbb{S}^{1}$. We notice that $Or\big{(}[x_{0}],[x_{1}],[x_{2}]\big{)}=0$ when there exists $i\neq j$ such that $[x_{i}]=[x_{j}]$. Hence there are only $\dfrac{\binom{5}{1}}{\binom{7}{3}}=\dfrac{1}{7}<\dfrac{11}{45}$ of the items in (4) which may not vanish. Therefore this case stands. ###### Case 2. There are $1$, $2$ and $4$ points in $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ that respectively have the same values in $\mathbb{S}^{1}$. Similarly to Case 1, there are only $\dfrac{\binom{2}{1}\cdot\binom{4}{1}}{\binom{7}{3}}=\dfrac{8}{35}<\dfrac{11}{45}$ of the items in (4) which may not vanish. Therefore this case stands. ###### Case 3. There are $1$, $3$ and $3$ points in $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ that respectively have the same values in $\mathbb{S}^{1}$. According to Claim 1, there are only $\dfrac{\binom{3}{1}\cdot\binom{3}{1}}{\binom{7}{3}}\times\dfrac{2}{3}=\dfrac{6}{35}<\dfrac{11}{45}$ of the items in (4) which may not vanish or cancel with each other. Therefore this case stands.
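The surviving-term fractions quoted in Cases 1–3 reduce to simple binomial arithmetic; the following exact-arithmetic sketch (ours, written with the standard convention $\binom{7}{3}=35$) checks them against the bound $\dfrac{11}{45}$:

```python
from fractions import Fraction as F
from math import comb

# Exact check of the counts in Cases 1-3: the fraction of possibly
# surviving terms over the C(7, 3) = 35 choices of {a, b, c}.
case1 = F(comb(5, 1), comb(7, 3))                         # 1, 1, 5 pattern
case2 = F(comb(2, 1) * comb(4, 1), comb(7, 3))            # 1, 2, 4 pattern
case3 = F(comb(3, 1) * comb(3, 1), comb(7, 3)) * F(2, 3)  # 1, 3, 3 pattern
assert (case1, case2, case3) == (F(1, 7), F(8, 35), F(6, 35))
assert max(case1, case2, case3) < F(11, 45)  # all strictly below the bound
```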
###### Case 4. There are $2$, $2$ and $3$ points in $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ that respectively have the same values in $\mathbb{S}^{1}$. Similarly to Case 3, there are only $\dfrac{\binom{2}{1}\cdot\binom{2}{1}\cdot\binom{3}{1}}{\binom{7}{3}}\times\dfrac{2}{3}=\dfrac{8}{35}<\dfrac{11}{45}$ of the items in (4) which may not vanish or cancel with each other. Therefore this case stands. In conclusion, this lemma is proved. ∎ ###### Lemma 8. If $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ has at least $4$ distinct points in $\mathbb{S}^{1}$, $|\Theta(\theta_{0},...,\theta_{6})|<\dfrac{11}{45}.$ ###### Proof. We prove this lemma in $3$ cases. ###### Case 5. There are $1$, $1$, $1$ and $4$ points in $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ that respectively have the same values in $\mathbb{S}^{1}$. When there are $2$ points in $\\{\theta_{j}^{2}\\}_{j=0}^{6}$ that have the same value in $\mathbb{S}^{1}$, assuming that $x_{0}=...=x_{3}<x_{4}<x_{5}<x_{6}<1$, the worst case is $y_{4}=y_{5}$. Taking a closer look at the proof of Claim 1, we see that there are at most $\dfrac{(1+4)\times\frac{2}{3}+(2+4)\times\frac{2}{3}\times\frac{8+2}{12}}{\binom{7}{3}}=\dfrac{2}{9}<\dfrac{11}{45}$ of the items in (4) that may not vanish or cancel with each other. Therefore we can assume that $\\{\theta_{j}^{i}\\}_{j=0}^{6}$ are pairwise distinct for $i=2$ and $3$, respectively. We use Algorithm 3 to prove this. Let $(x_{1},...,x_{7})$ be $(1,1,1,1,2,3,4)$. Define a $1260\times 1$ matrix $P_{sign1114}$ by $(P_{sign1114})_{i,1}=sign(S_{1,i},...,S_{7,i})\cdot sign(x_{\scriptscriptstyle S_{1,i}},x_{\scriptscriptstyle S_{2,i}},x_{\scriptscriptstyle S_{3,i}}).$ Here we input $P_{y}=P_{y360}$, $P_{z}=P_{z360}$ and $P_{sign}=P_{sign1114}$. Denote the corresponding output by $O$. The maximal value of entries of $O$ is $\dfrac{2}{15}$, which is smaller than $\dfrac{11}{45}$. Therefore this case stands. ###### Case 6.
There are $1$, $1$, $2$ and $3$ points in $\\{\theta_{j}^{1}\\}_{j=0}^{6}$ that respectively have the same values in $\mathbb{S}^{1}$. Let $P_{2}$ be a $7\times\dfrac{7!}{2!}$ (i.e., $7\times 2520$) matrix in which each column represents a permutation of $(1,1,2,3,4,5,6)$. Let $P_{3}$ be a $7\times\dfrac{7!}{2!\times 2!}$ (i.e., $7\times 1260$) matrix in which each column represents a permutation of $(1,1,2,2,3,4,5)$. Let $P_{4}$ be a $7\times\dfrac{7!}{2!\times 2!}$ (i.e., $7\times 1260$) matrix in which each column represents a permutation of $(1,1,2,3,3,4,5)$. Let $P_{5}$ be a $7\times\dfrac{7!}{3!}$ (i.e., $7\times 840$) matrix in which each column represents a permutation of $(1,1,1,2,3,4,5)$. Let $P_{6}$ be a $7\times\dfrac{7!}{2!\times 2!\times 2!}$ (i.e., $7\times 630$) matrix in which each column represents a permutation of $(1,1,2,2,3,3,4)$. Let $P_{7}$ be a $7\times\dfrac{7!}{3!\times 2!}$ (i.e., $7\times 420$) matrix in which each column represents a permutation of $(1,1,1,2,2,3,4)$. Let $P_{8}$ be a $7\times\dfrac{7!}{3!\times 2!}$ (i.e., $7\times 420$) matrix in which each column represents a permutation of $(1,1,1,2,3,3,4)$. Define a $7\times 7710$ matrix $P_{r}$ by stacking $P$, $P_{2}$,…,$P_{7}$ and $P_{8}$ horizontally. Let $P_{yr}$ be defined by $(P_{yr})_{k,l}=sign\big{(}(P_{r})_{(S_{1,k}),l},(P_{r})_{(S_{4,k}),l},(P_{r})_{(S_{5,k}),l}\big{)}.$ Let $P_{zr}$ be defined by $(P_{zr})_{k,l}=sign\big{(}(P_{r})_{(S_{5,k}),l},(P_{r})_{(S_{6,k}),l},(P_{r})_{(S_{7,k}),l}\big{)}.$ Let $(x_{1},...,x_{7})$ be $(1,1,1,2,2,3,4)$. Define a $1260\times 1$ matrix $P_{sign1123}$ by $(P_{sign1123})_{i,1}=sign(S_{1,i},...,S_{7,i})\cdot sign(x_{S_{1,i}},x_{S_{2,i}},x_{S_{3,i}}).$ Input $P_{y}=P_{yr}$, $P_{z}=P_{zr}$ and $P_{sign}=P_{sign1123}$ to get the corresponding output $O1123$. Define $(x_{1},...,x_{7})=(1,1,1,2,3,3,4)$.
Define a $1260\times 1$ matrix $P_{sign1213}$ by $(P_{sign1213})_{i,1}=sign(S_{1,i},...,S_{7,i})\cdot sign(x_{S_{1,i}},x_{S_{2,i}},x_{S_{3,i}}).$ Input $P_{y}=P_{yr}$, $P_{z}=P_{zr}$ and $P_{sign}=P_{sign1213}$ to get the corresponding output $O1213$. Notice that the maximal values of entries of $O1123$ and $O1213$ are both $\dfrac{7}{45}$. Notice that $\begin{split}\Theta\Big{(}\big{(}[x_{0}],[y_{0}],[z_{0}]\big{)},...,\big{(}[x_{6}],[y_{6}],[z_{6}]\big{)}\Big{)}&=\Theta\Big{(}\big{(}[y_{0}],[x_{0}],[z_{0}]\big{)},...,\big{(}[y_{6}],[x_{6}],[z_{6}]\big{)}\Big{)}\\\ &=\Theta\Big{(}\big{(}[x_{0}],[z_{0}],[y_{0}]\big{)},...,\big{(}[x_{6}],[z_{6}],[y_{6}]\big{)}\Big{)}.\end{split}$ (8) Therefore this case stands. ###### Case 7. All other cases not included above. Using equations (6) and (8), we only need to apply Algorithm 3 as follows. Let $(x^{i}_{1},...,x^{i}_{7})$ be $(1,1,2,2,3,3,4)$, $(1,1,1,2,3,4,5)$, $(1,1,2,2,3,4,5)$ and $(1,1,2,3,4,5,6)$ respectively for $i=1,...,4$. Define four $1260\times 1$ matrices $\\{P_{sign}^{i}\\}_{i=1}^{4}$ by $(P_{sign}^{i})_{j,1}=sign(S_{1,j},...,S_{7,j})\cdot sign\big{(}x^{i}_{S_{1,j}},x^{i}_{S_{2,j}},x^{i}_{S_{3,j}}\big{)}$ respectively for $i=1,...,4$. Input $P_{y}=P_{yr}$, $P_{z}=P_{zr}$ and $P_{sign}=P_{sign}^{i}$ to get the corresponding outputs $O^{i}$ for $i=1,...,4$. The maximal values of entries of $O^{i}$ are $\dfrac{8}{45}$, $\dfrac{8}{45}$, $\dfrac{1}{5}$ and $\dfrac{2}{9}$ for $i=1,...,4$, respectively. Therefore this case stands. ∎ Consequently, Proposition 5 is proved, and we get the desired estimate $\|\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}}\|_{\infty}\leq\dfrac{11\pi^{3}}{45}$. This concludes the proof of Theorem 3. Furthermore, Algorithm 3 can easily be generalized to provide an algorithm computing a lower bound on the simplicial volume of closed Riemannian manifolds covered by $\big{(}\mathbb{H}^{2}\big{)}^{n}$. ## 4.
Conjectures

Denote by $\Theta^{\prime}$ the $4$-cocycle representing $\omega_{\mathbb{H}^{2}\times\mathbb{H}^{2}}$ in [2, Proposition 7]. Notice that at the value $(\theta_{0},...,\theta_{4})=\Big{(}\big{(}[x_{0}],[y_{0}]\big{)},...,\big{(}[x_{4}],[y_{4}]\big{)}\Big{)}$ shown in Figure 2, $\Theta^{\prime}$ attains its maximal volume $\dfrac{2}{3}$.

Figure 2. [The extremal configuration: the points $[x_{0}],...,[x_{4}]$ and $[y_{0}],...,[y_{4}]$ on the two circle factors.]

Comparing Figure 1 with Figure 2, we notice the following pattern.

###### Conjecture 9. Define a $2n$-form $\Theta_{n}$ in $C_{b}^{2n}(\mathbb{T}^{n},\mathbb{R})$ as $\Theta_{n}(\theta_{0},...,\theta_{2n})=\dfrac{\pi^{n}}{(2n+1)!}\sum_{\sigma\in Sym(2n+1)}sign(\sigma)\prod_{i=1}^{n}Or\big{(}\theta_{\sigma(2i-2)}^{i},\theta_{\sigma(2i-1)}^{i},\theta_{\sigma(2i)}^{i}\big{)}$ for all $\theta_{i}=(\theta_{i}^{1},...,\theta_{i}^{n})$ in $\mathbb{T}^{n}$, $i=0,...,2n$. Then $\|\Theta_{n}\|_{\infty}=\Theta_{n}(\theta_{0},...,\theta_{2n})$, where $\theta^{k}_{i}=e^{\frac{2(ki)\pi\sqrt{-1}}{2n+1}}$ for $k=1,...,n$ and $i=0,...,2n$.

We already have $\|\omega_{(\mathbb{H}^{2})^{n}}\|_{\infty}\leq\|\Theta_{n}\|_{\infty}$ and the proportionality principle for locally symmetric spaces. One can further ask whether this inequality is actually an equality.

###### Conjecture 10. Let $M$ be a closed oriented manifold covered by $\big{(}\mathbb{H}^{2}\big{)}^{n}$. Then the simplicial volume $\|M\|=\dfrac{\mathrm{Vol}(M)}{\|\Theta_{n}\|_{\infty}}$.

If these two conjectures are true, we can get the exact value of the simplicial volume of all closed oriented manifolds covered by $\big{(}\mathbb{H}^{2}\big{)}^{n}$.

## References

* [1] M. Bucher-Karlsson: The proportionality constant for the simplicial volume of locally symmetric spaces. Colloq. Math. 111, 183–198 (2008)
* [2] M. Bucher-Karlsson: The simplicial volume of closed manifolds covered by $\mathbb{H}^{2}\times\mathbb{H}^{2}$. J. Topol. 1(3), 584–602 (2008)
* [3] J. L.
Dupont: Simplicial de Rham cohomology and characteristic classes of flat bundles. Topology 15(3), 233–245 (1976)
* [4] M. Gromov: Volume and bounded cohomology. Publ. Math. Inst. Hautes Études Sci. 56, 5–99 (1982)
* [5] C. Löh, R. Sauer: Simplicial volume of Hilbert modular varieties. Comment. Math. Helv. 84(3), 457–470 (2009)
* [6] N. Monod: Continuous bounded cohomology of locally compact groups. Lecture Notes in Math. 1758, Springer-Verlag, New York (2001)
* [7] W. Thurston: The geometry and topology of 3-manifolds. Lecture notes, Princeton (1978)
# Integrating Elastic Bands to Enhance Performance for Textile Robotics Cem Suulker1, Sophie Skach1, and Kaspar Althoefer1 1All authors are with the Centre for Advanced Robotics, at the School of Engineering and Materials Science, Queen Mary University of London, United Kingdom<EMAIL_ADDRESS> ## I Introduction The field of soft robotics is growing rapidly: innovative elastic materials are replacing heavy metallic links, and soft inflatable actuators are taking the place of electromechanical rotational motors. The use of flexible textile materials in human-robot interaction has also been shown to offer attractive design options due to their safe nature [1, 2]. The creation of soft robots and actuators often involves the use of various materials and methods from the clothing industry. One crucial variable is the stretch quality of the fabric material. Knitted fabrics are commonly used for their stretchiness, but they often stretch in all directions, which is usually not desirable. Woven fabrics are normally non-stretchy, but integrating elastane yarn into the weft makes them stretch more in one direction than the other. This quality makes them more suitable for creating actuators, and they are also more durable than knitted fabrics. While a coating can make these structures airtight, it also eliminates the material's ability to stretch. Another approach to creating stretch in soft robotic structures is to increase the material density using methods such as pleats and ruffles. The use of pleats in soft robotics has been extensively researched and is considered an effective method [1]. However, the use of ruffles has not received as much attention. Ruffles are often used in the neck area of clothing to gather the fabric material densely. From a soft robotics perspective, this gathered dense material can be inflated to create high elongation.
Various types of elastic bands can be used to create ruffles, but in this extended abstract we focus on braided elastic bands. These bands are used for storing energy in soft robotics; however, their full potential is realized when they are integrated into the fabric structure using the ruffles method. In this extended abstract, we showcase two soft robotic applications that utilize elastic bands to enhance the system's performance. Figure 1: Before and after the integration of elastic bands to an actuator. ## II Application: Bending Actuator for Wearables To create bending in an inflatable textile actuator, an imbalance between two layers of fabric must be achieved. One way to create this imbalance is pleating [1]: excess material of one layer is folded and stitched onto the other, and unfolds when inflated. This approach, however, has potential drawbacks, e.g. when the space between the layers is too small for the pleats to unfold (which, for finger-sized designs, can readily occur). Another technique to create such an imbalance is the use of different types of fabric with varying elasticity. This way, one layer stretches more when actuated, and the structure bends towards the less elastic one. Integrating a braided elastic band into the stretch-fabric design with the ruffles technique has been shown to be more effective in terms of blocking force and bending angle capabilities [3]. ### II-1 Materials To create this textile actuator, two layers with different textile properties are selected. Bottom: a plain cotton weave (light fabric in Fig. 1). Top: a cotton mix with elastane yarn integrated into the weft of the fabric (dark fabric in Fig. 1). To enhance the fabric's stretch behavior and create the desired material imbalance between layers, an additional support material is integrated when assembling the actuators: a braided elastic band, also called elastics (see Figure 1), commonly used in clothing to create ruffles or elastic waistbands.
It consists of braided polyester and a small amount of thin rubber, making it durable and extremely stretchy. ### II-2 Fabrication First, the mono-directional stretch black fabric was cut 80% longer than the cotton bottom-layer fabric. The elastic band was integrated on the side seam between the top and bottom layer: it was first stitched onto the top layer while being stretched, and then sewn onto the bottom layer in a relaxed state. This enabled the top layer to reach 180% of the bottom layer's length. The actuator is equipped with a latex bladder to ensure airtightness. Figure 2: Blocking force output and flexion angle versus pressure for the stretch fabric actuator and the elastic band integrated actuator. Integration of the elastic band significantly boosts the performance. ### II-3 Results Two important parameters for soft bending actuators for wearables are flexion angle and blocking force [4]. For example, for rehabilitation or assistive hand exoskeletons, each finger should be unrestricted up to its maximum flexion angle, and actuators should apply 10–15 N of blocking force to the fingers [5]. As shown in Fig. 2, the elastic band integration boosts both of these critical parameters. The force capability increases from approximately 10 N to 25 N, and the maximum bending angle increases from approximately 180 degrees to 360 degrees. ## III Application: Soft Cap for Eversion Robots Growing robots based on the eversion principle are known for their ability to rapidly extend from within, along their longitudinal axis, and in doing so reach deep into hard-to-access, remote spaces. Because of this unique movement principle, maintaining a payload at the tip is a major challenge. Various tip mechanisms have been proposed, including complex, rigid designs that may not be compatible with functional hardware.
To address these shortcomings, we proposed a soft, entirely fabric-based cylindrical cap that can be easily slipped onto the tip of eversion robots [6]. We created a series of caps of different sizes and materials and conducted an experimental study to evaluate their effectiveness in maintaining their position and transporting payloads, such as a camera, across long distances. We also assessed the caps' ability to navigate through narrow openings and found that our soft, flexible cap does not significantly hinder the robot's flexibility or overall maneuverability. Our design offers a solution to the challenge of maintaining sensory payloads at the tip of eversion robots without compromising their performance or flexibility. Figure 3: a) Pattern for the stretch fabric cap. b) Pattern for the elastic band integrated cap. ### III-1 Materials & Fabrication To keep the cap attached to the tip of the eversion body, the main concept is based on squeezing the cap against the body. We devised two different designs to achieve this squeezing motion: one using a stretch fabric and the other utilizing ruffles with elastic bands. See Fig. 3 for the patterns of the caps. ### III-2 Results To assess the performance of the caps, we subjected them to a series of challenges that evaluated their ability to adapt to varying layer thicknesses and to objects protruding from the robot body, as well as their navigability and squeezability. The performance of the caps was measured using a percentage index based on the number of challenges that the caps successfully completed. The stretch fabric caps achieved a performance rating of 84-86%, while the elastic band integrated caps achieved a rating of 90-96% (Fig. 4). This improvement can be attributed to the smaller contact region between the two fabrics: 15 cm for the stretch fabric caps versus only 1 cm for the elastic band integrated caps.
Figure 4: The eversion robot with a stretchy cap and with an elastic band integrated cap. Integrating the elastic band boosts the performance of the robot. ## IV Conclusions The elastic bands integrated using the ruffles technique proved effective in enhancing the performance of the soft robotic structures. In the actuator application, the elastic bands greatly increased the bending and force capabilities of the structure, while in the eversion robot cap application, the elastic bands slightly improved the performance by maintaining the sensory payload at the tip without restricting the eversion process. These findings demonstrate the potential of using elastic bands and textile techniques in soft robotics to create more efficient and adaptable structures. ## References * [1] L. Cappello, J. T. Meyer, K. C. Galloway, J. D. Peisner, R. Granberry, D. A. Wagner, S. Engelhardt, S. Paganoni, and C. J. Walsh, “Assisting hand function after spinal cord injury with a fabric-based soft robotic glove,” _Journal of Neuroengineering and Rehabilitation_ , vol. 15, no. 1, pp. 1–10, 2018. * [2] C. Suulker, A. Hassan, S. Skach, and K. Althoefer, “A comparison of silicone and fabric inflatable actuators for soft hand exoskeletons,” in _2022 IEEE 5th International Conference on Soft Robotics (RoboSoft)_. IEEE, 2022, pp. 735–740. * [3] C. Suulker, S. Skach, and K. Althoefer, “Soft robotic fabric actuator with elastic bands for high force and bending performance in hand exoskeletons,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 4, pp. 10 621–10 627, 2022. * [4] C. Suulker, S. Skach, and K. Althoefer, “A fabric soft robotic exoskeleton with novel elastic band integrated actuators for hand rehabilitation,” _arXiv preprint arXiv:2212.07206_ , 2022. * [5] C. D. Takahashi, L. Der-Yeghiaian, V. Le, R. R. Motiwala, and S. C. Cramer, “Robot-based hand motor therapy after stroke,” _Brain_ , vol. 131, no. 2, pp. 425–437, 2008. * [6] C. Suulker, S. Skach, D. Kaleel, T.
Abrar, Z. Murtaza, D. Suulker, and K. Althoefer, “Soft cap for eversion robots,” _arXiv preprint arXiv:2301.12862_ , 2023.
Unimodality and peak location of the characteristic polynomials of two distance matrices of trees

Rakesh Jana, Iswar Mahato, Sivaramakrishnan Sivasubramanian

Department of Mathematics, Indian Institute of Technology Bombay, Mumbai 400076, India

Unimodality of the normalized coefficients of the characteristic polynomial of distance matrices of trees is known, and bounds on the location of its peak (the largest coefficient) are also known. Recently, an extension of these results to distance matrices of block graphs was given. In this work, we extend these results to two additional distance-type matrices associated with trees: the Min-4PC matrix and the 2-Steiner distance matrix. We show that the sequences of coefficients of the characteristic polynomials of these matrices are both unimodal and log-concave. Moreover, we find the peak location for the coefficients of the characteristic polynomials of the Min-4PC matrix of any tree on $n$ vertices. Further, we show that the Min-4PC matrix of any tree on $n$ vertices is isometrically embeddable in $\RR^{n-1}$ equipped with the $\ell_1$ norm.

Keywords: Unimodal, Log-concave, Characteristic polynomial, Four-point condition, Steiner distance. MSC [2020]: 05C50, 05C05, 05C12, 15A18.

§ INTRODUCTION

Let $A = a_0,a_1,\cdots,a_m$ be a sequence of real numbers. The sequence $A$ is called unimodal if there exists an index $k$ with $1 \leq k \leq m-1$ such that $a_{j-1}\leq a_j$ when $j\leq k$ and $a_j\geq a_{j+1}$ when $j\geq k$. The sequence $A$ is called log-concave if $a_i^2\geq a_{i-1}a_{i+1}$ when $1 \leq i \leq m-1$. Log-concavity and unimodality are significant properties with applications across various areas; for example, algebra (see [6] by Brändén and [14] by Stanley), probability theory (see [13] by Prékopa), and combinatorics and geometry [14]. These applications emphasize the importance of understanding and identifying log-concave sequences in different mathematical contexts.
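For concreteness, the two definitions above are easy to test by machine; a small illustrative sketch (the helper names below are ours, not from the paper):

```python
# Direct numeric versions of the definitions of log-concavity and unimodality.
def is_log_concave(a):
    return all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))

def is_unimodal(a):
    i = 0
    while i + 1 < len(a) and a[i] <= a[i + 1]:   # rise to a peak ...
        i += 1
    while i + 1 < len(a) and a[i] >= a[i + 1]:   # ... then fall
        i += 1
    return i == len(a) - 1

row6 = [1, 6, 15, 20, 15, 6, 1]   # binomial coefficients C(6, k)
assert is_log_concave(row6) and is_unimodal(row6)
assert not is_unimodal([1, 3, 1, 3])
assert not is_log_concave([1, 1, 4])   # 1^2 < 1 * 4
```

As is well known (and used repeatedly below), a positive log-concave sequence is automatically unimodal.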
For the distance matrix of a tree, Graham and Lovász in <cit.> conjectured unimodality of the normalized coefficients of the characteristic polynomial of the distance matrix of trees and also conjectured the location of the peak(s). The unimodality part was proved by Aalipour et al. in [1], and the conjectured peak location was disproved by Collins in [8]. Two points are noteworthy. Firstly, for a square matrix $M$, the definition of its characteristic polynomial used in all earlier papers is $\chi_M(x) = \det(M - xI)$, and this does not always make $\chi_M(x)$ monic. We thus change the definition slightly and define \begin{equation} \label{eqn:charpoly_monic} \charpoly_M(x) = \det(xI - M). \end{equation} This small change removes the need to multiply the coefficients by a power of $(-1)$ whose exponent depends on $n$. Hence, for the rest of this paper, for square matrices $M$, we define $\charpoly_M(x)$ using (<ref>) and make the needed small changes to results before quoting them from the literature. Secondly, normalizing the coefficients of the characteristic polynomial is not important for unimodality, as we get similar results by scaling the coefficients $c_k$ of the characteristic polynomial by $\alpha^k$, where $\alpha$ is a positive real number. This point is also mentioned by both Abiad et al. in <cit.> and by Aalipour et al. in [1]. However, to determine the peak location of a unimodal sequence (or for bounds on the peak location), it is important to know whether we take the coefficients of the characteristic polynomial or its normalized version. In this paper, for results on the peak location, we consider the sequence of coefficients of $\charpoly_M(x)$ without any normalization. Abiad et al. in [2] also give their results for the coefficients of the un-normalized characteristic polynomial. Let $T$ be a tree with $n$ vertices and let $D$ be its distance matrix.
Let $\charpoly_D(x) = \det (xI - D) = \sum_{k=0}^n c_k x^k$ be $D$'s characteristic polynomial. By definition, as $\charpoly_D(x)$ is a monic polynomial, $c_n = 1$. We further have $c_{n-1} = 0$, as $c_{n-1} = -\Tr(D)$, which is zero (as all diagonal entries of distance matrices are zero). In [1], Aalipour et al. proved the following result. With the notation above, let $\displaystyle d_k = \frac{-1}{2^{n-k-2}} c_k$ be the normalized coefficients of $\charpoly_D(x)$. Then, the sequence $d_k$ as $k$ varies from $0$ to $n-2$ is unimodal and log-concave. The proof of Theorem <ref> uses real-rootedness of $\charpoly_D(x)$ to show log-concavity. For unimodality, they need the following result of Edelberg, Garey and Graham (see <cit.>), which states that when $0 \leq k \leq n-2$, $c_k$ is negative (and hence $d_k$ is positive). With $T$ and $D$ as above, let $\displaystyle \charpoly_D(x) = \sum_{k=0}^n c_k x^k.$ Then, for $0 \leq k \leq n-2$, we have $c_k < 0$. An extension of these results to distance matrices of block graphs was obtained by Abiad et al. in [2]. The authors showed unimodality results for coefficients in the characteristic polynomial of distance matrices of block graphs along similar lines and gave bounds on the peak location for some block graphs. In this paper, we extend such results to two other matrices. Both matrices are defined for trees $T$. The first, $\Min_T$, is very similar to the distance matrix $D_T$ of a tree $T$, while the second is the 2-Steiner distance matrix $\DD_2(T)$ of a tree $T$. This second matrix does not have zero diagonal entries, but our proof goes through nonetheless. Both of these are $\binom{n}{2} \times \binom{n}{2}$ matrices but are not full rank matrices (see [3, 4]), so our results are for the restriction of these matrices to a basis for their respective row spaces. Let $T$ be a tree on $n$ vertices and let $\VV$ be the set of 2-element subsets of the vertices of $T$. Clearly $|\VV| = \binom{n}{2}$.
Define the following $\binom{n}{2} \times \binom{n}{2}$ matrices whose rows and columns are indexed by elements of $\VV$. Let $d_{i,j}$ be the distance between $i$ and $j$ in $T$. For four vertices $i,j,k$ and $l$ from the vertex set of $T$, define the set $$S_{i,j,k,l} = \{ d_{i,l} + d_{j,k}, d_{i,k} + d_{j,l}, d_{i,j} + d_{k,l}\}.$$ Tree distances are special, and Buneman in [7] showed that for all choices $i,j,k,l$ of four vertices, among the three terms in $S_{i,j,k,l}$, the second maximum value equals the maximum value. This inspired the definition of the following $\binom{n}{2} \times \binom{n}{2}$ matrices. Define the $\Min_T$ matrix as follows. For $\{i,j\}, \{k,l\} \in \VV$, the entry of $\Min_T$ corresponding to the row $\{i,j\}$ and the column $\{k,l\}$ is the minimum entry of $S_{i,j,k,l}$. One can also define the $\Max_T$ matrix by changing the word “minimum” in the previous sentence to “maximum”. For a tree $T$, Azimi and Sivasubramanian in [3] defined $\DD_2(T)$, the 2-Steiner distance matrix of a tree $T$, as follows. For $\{i,j\}, \{k,l\} \in \VV$, the entry of $\DD_2(T)$ corresponding to the row $\{i,j\}$ and column $\{k,l\}$ is the minimum number of edges among all connected subtrees of $T$ whose vertex set contains the four vertices $i,j,k$ and $l$. Azimi and Sivasubramanian showed that $\DD_2(T)$ is the average of the $\Max_T$ and $\Min_T$ matrices, that is, $\DD_2(T) = \frac{1}{2} \Big(\Max_T + \Min_T \Big)$. Bapat and Sivasubramanian in [4] studied the $\Min_T$ matrix and showed results on its rank and its invariant factors. Consider a tree $T = (V,E)$ on $n$ vertices. Let $j,k \in V$ with $j\not= k$ be two vertices and let $f = \{j,k\} \not\in E$ be a non-edge of $T$ with $d_{j,k} = d > 1$ (that is, the distance in $T$ between $j$ and $k$ is $d$). Bapat and Sivasubramanian in [4] proved that $\rk(\Min_T) = n$ and that the set $B = E \cup \{f\}$ forms a basis of $\Min_T$'s row space.
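Buneman's four-point condition quoted above is easy to verify exhaustively for a small tree metric; a quick sketch of ours, using the path metric $d(i,j)=|i-j|$ (the real line is itself a tree):

```python
from itertools import product

# For every quadruple of points, the two largest of the three sums in
# S_{i,j,k,l} coincide -- Buneman's four-point condition for tree metrics.
d = lambda u, v: abs(u - v)
for i, j, k, l in product(range(6), repeat=4):
    S = sorted([d(i, l) + d(j, k), d(i, k) + d(j, l), d(i, j) + d(k, l)])
    assert S[1] == S[2]
```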
Our first result is the following about $\Min_T[B,B]$, the submatrix of $\Min_T$ restricted to both the rows and columns in $B$. With the notation above, let $N = \Min_T[B,B]$ and $\charpoly_N(x) = \sum_{k=0}^n a_kx^k$. Then, the sequence $|a_k|$ as $k$ varies from $0$ to $n-2$ is unimodal and log-concave. If $|a_t| = \max_{0 \leq k \leq n-2} |a_k|$ is the largest coefficient in absolute value, then $\lfloor \frac{n-2}{3} \rfloor \leq t \leq \lceil \frac{n+1}{3}\rceil$. When $T$ is a tree of order $n$ with $p$ leaves and $B \in \BB$ is a basis of $\DD_2(T)$'s row space, the authors in <cit.> proved that $\DD_2(T)[B,B]$ has $2n-p-2$ negative eigenvalues and one positive eigenvalue. In this paper, we show that when $B$ is a basis of $\Min_T$'s row space, we get an analogous statement for the matrix $\Min_T[B,B]$. This is proved in two ways; our first proof is Theorem <ref>, proved in Section <ref>. Our second proof is more general and is of independent interest, as it gives some corollaries about hypermetricity and negative-type metric spaces which we do not get from our first proof. In Section <ref>, we give an isometric embedding of $T$'s $\binom{n}{2} \times \binom{n}{2}$ $\Min_T$ matrix into $\RR^{n-1}$ equipped with the $\ell_1$ norm. We prove the following result. Let $T$ be a tree having $n$ vertices. Then, $\Min_T$ is isometrically $\ell_1$-embeddable in $\RR^{n-1}$. Our proof is surprisingly easy and appears in Section <ref>. For all trees $T$, it follows from the theory of isometrically $\ell_1$-embeddable finite metric spaces (see Deza and Laurent <cit.>) that the $\Min_T$ matrix has exactly one positive eigenvalue. By standard interlacing arguments, restricting $\Min_T$ to elements from a basis $B$, if $\Min_T[B,B]$ is a full rank matrix having rank $r$, one infers that $\Min_T[B,B]$ has $r-1$ negative eigenvalues and 1 positive eigenvalue.
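To illustrate the statements above, one can assemble $\Min_T$ for a small tree and check the rank and single-positive-eigenvalue claims numerically. A sketch of ours for the path tree on $5$ vertices (function names are our own):

```python
import numpy as np
from itertools import combinations

# Build Min_T for the path tree P_n, whose metric is d(i, j) = |i - j|:
# each entry is the minimum of the three pairing sums in S_{i,j,k,l}.
def build_min_T(n, d):
    pairs = list(combinations(range(n), 2))
    M = np.zeros((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            M[a, b] = min(d(i, l) + d(j, k), d(i, k) + d(j, l), d(i, j) + d(k, l))
    return M

M = build_min_T(5, lambda u, v: abs(u - v))
assert M.shape == (10, 10)                        # binom(5, 2) = 10
assert np.linalg.matrix_rank(M) == 5              # rk(Min_T) = n
assert np.sum(np.linalg.eigvalsh(M) > 1e-9) == 1  # one positive eigenvalue
```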
A distance matrix $D = (d_{i,j})_{1 \leq i,j \leq n}$ is said to be hypermetric if \begin{equation} \label{eqn:ineq-for-metr} \sum_{1 \leq i < j \leq n} x_ix_jd_{i,j} \leq 0 \end{equation} for all $x \in \ZZ^n$ with $\sum_{i=1}^n x_i = 1$ (here, $x_i$ is the $i$-th component of $x$). If inequality (<ref>) holds for all $x \in \ZZ^n$ with $\sum_{i=1}^n x_i = 0$, then $D$ is said to be of negative type. It is known (see <cit.>) that if a distance matrix $D$ is isometrically embeddable in an $\ell_1$ space, then it is both hypermetric and of negative type. Though for any tree $T$ the matrix $\Min_T$ satisfies the triangle inequality, proving this takes some work. Remark <ref> shows that this can be obtained as a simple consequence of our isometric embedding. Azimi and Sivasubramanian in [3] considered the matrix $\DD_2(T)$. Note that the diagonal entry of $\DD_2(T)$ corresponding to the row and column indexed by $\{i,j\} \in \VV$ equals $d_{i,j}$, which is the tree distance between $i$ and $j$. Hence, $\DD_2(T)$ does not have zero entries on its diagonal (indeed, all its main diagonal entries are positive). For a tree $T$ of order $n$ with $p$ leaves, Azimi and Sivasubramanian showed that $\rk(\DD_2(T)) = 2n-p-1$, gave a class $\BB$ of bases for its row space and obtained the determinant of $\DD_2(T)[B,B]$, the restriction of $\DD_2(T)$ to the entries in rows and columns from $B \in \BB$. In this article, we obtain the following result about $\DD_2(T)[B,B]$. Let $T$ be a tree on $n$ vertices and let $T$ have $p$ leaves. With the notation above, for any $B \in \BB$, consider $P = \DD_2(T)[B,B]$ and let $\charpoly_P(x) = \sum_{k=0}^{2n-p-1} a_kx^k$. Then, the sequence $|a_k|$ as $k$ varies from $0$ to $2n-p-2$ is unimodal and log-concave. Though we start with singular matrices, by restricting attention to entries from a basis, we consider two full rank $r \times r$ matrices $M$.
Further, all our matrices will have exactly one positive eigenvalue and $(r-1)$ negative eigenvalues. Note that the result quoted below is on matrices with all diagonal entries being 0. We give a mild generalization to $\charpoly_M(x)$ where $M$ has exactly one positive eigenvalue and has non-negative entries along its main diagonal. This is given as Theorem <ref> in Section <ref>. We need this version for the matrix $\DD_2(T)[\BB,\BB]$, as it has non-zero entries on its diagonal. A uniform proof giving bounds on the peak location of the coefficients of $\charpoly_{\DD_2(T)[B,B]}(x)$ for all trees $T$ seems hard. So, we consider three special cases, the star tree, the bi-star tree $S_{1,n-3}$ and the path tree, and obtain bounds on the index $t$ at which $|a_t| = \max_{0 \leq k \leq 2n-p-3} |a_k|$, the largest coefficient in absolute value in their respective characteristic polynomials, is attained. For the star and the bi-star our bounds are tight and are given as Theorem <ref> and Theorem <ref> in Subsections <ref> and <ref>, respectively. For the path tree, we give an upper bound on the peak location as Theorem <ref> in Subsection <ref>, and we conjecture the value of the peak location. § UNIMODALITY AND LOG-CONCAVITY For unimodality, we will need the idea of real rootedness of polynomials with real coefficients. The following result <cit.> is known. Let $p(x)=\sum_{k=0}^n a_kx^k$ be a real-rooted polynomial with real coefficients. * Then its coefficient sequence $a_0,a_1,\hdots,a_n$ is log-concave. * If a sequence $a_0,a_1,\hdots,a_n$ is both positive and log-concave, then it is unimodal. For any real and symmetric matrix $M$, by the Spectral Theorem, $\charpoly_M(x)$ is real rooted and so the first part of Lemma <ref> applies. When all eigenvalues of $M$ are negative, it is easy to see that all coefficients of $\charpoly_M(x)$ are positive. 
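The content of the Lemma is easy to illustrate on a concrete matrix. In the following sketch (plain Python with exact arithmetic; the Faddeev–LeVerrier helper charpoly is our own), we take $M = -(J+I)$, all of whose eigenvalues are negative, and check that the coefficients of $\charpoly_M(x)$ are positive, log-concave and unimodal:

```python
from fractions import Fraction

def charpoly(M):
    # Faddeev-LeVerrier over exact rationals; returns [1, c1, ..., cn]
    # for x^n + c1 x^{n-1} + ... + cn (coefficients by decreasing degree)
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    Mk = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    cs = [Fraction(1)]
    for k in range(1, n + 1):
        Mk = [[sum(A[i][l] * Mk[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
        ck = -sum(Mk[i][i] for i in range(n)) / k
        cs.append(ck)
        for i in range(n):
            Mk[i][i] += ck
    return cs

n = 6
M = [[-1 - (i == j) for j in range(n)] for i in range(n)]  # -(J + I): eigenvalues -(n+1), -1, ..., -1
cs = charpoly(M)                  # equals the coefficients of (x + n + 1)(x + 1)^{n-1}
assert cs == [1, 12, 45, 80, 75, 36, 7]

a = list(reversed(cs))            # a[k] = coefficient of x^k
assert all(c > 0 for c in a)                                  # all eigenvalues negative
assert all(a[k] ** 2 >= a[k - 1] * a[k + 1] for k in range(1, n))   # log-concave
peak = max(range(n + 1), key=lambda k: a[k])
assert all(a[k] <= a[k + 1] for k in range(peak))             # unimodal: rises ...
assert all(a[k] >= a[k + 1] for k in range(peak, n))          # ... then falls
```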
When $M = (m_{i,j})_{1 \leq i,j \leq n}$ is an $n \times n$ real, symmetric matrix with $m_{i,i} = 0$ for $1 \leq i \leq n$, and if $M$ has exactly one positive eigenvalue, then the proof of Theorem <ref> can be extended to show that almost all the coefficients of $\charpoly_M(x)$ are negative. This is the main point of the result quoted above. Below, we mildly generalize this to include real, symmetric matrices which have a non-negative trace. Recall that the inertia of a real symmetric matrix $M$ is the triple $\Inertia(M) = \big(n_{+}(M),n_{-}(M),n_{0}(M)\big)$. Here, $n_{+}(M),$ $n_{-}(M)$ and $n_{0}(M)$ denote the number of positive, negative and zero eigenvalues of $M$ respectively. Consider a real, symmetric matrix $M$ of order $n$ with $\Tr(M)\geq 0$ and $\charpoly_M(x)=\sum_{k=0}^n a_kx^k$. Let $\Inertia(M)=(1,r-1,n-r)$, with $2\leq r\leq n$. If $\Tr(M)=0$, then $a_k<0$ when $n-r\leq k\leq n-2$ and if $\Tr(M)> 0$, then $a_k<0$ when $n-r\leq k\leq n-1$. Let the non-zero eigenvalues of $M$ be $\lambda_1, -\lambda_2, -\lambda_3, \ldots, -\lambda_r$ and let the eigenvalue $0$ occur with multiplicity $n-r$. Here, we assume that $\lambda_i > 0$ when $1 \leq i \leq r$ and that the $\lambda_i$'s need not be distinct. Define $g_0 = 1$ and when $k \geq 1$, define $g_k$ to be the sum of all $k$-fold products of $\lambda_2, \ldots, \lambda_r$. Clearly, $g_k > 0$ when $1 \leq k \leq r-1$. Further \begin{eqnarray} \charpoly_M(x) & = & x^{n-r}(x-\lambda_1)\prod_{i=2}^r(x+\lambda_i) = x^{n-r}(x - \lambda_1)\Bigg( \sum_{k=0}^{r-1} g_k x^{r-1-k}\Bigg) \nonumber \\ & = & \Bigg( x^n + \sum_{k=1}^{r-1} \big( g_k - \lambda_1g_{k-1} \big) x^{n-k} -\lambda_1 g_{r-1} x^{n-r} \Bigg) \label{eqn:imp} \end{eqnarray} Writing $t = \Tr(M) \geq 0$, we have $\lambda_1=g_1+t$. Hence, $g_k - \lambda_1g_{k-1}=g_k - (g_1+t)g_{k-1} =(g_k-g_1g_{k-1})-tg_{k-1}<0 $ as we have $g_1g_{k-1} \geq g_k$ and $t\geq 0$, and $ -\lambda_1 g_{r-1}= -(g_1+t) g_{r-1}<0$. Moreover, $a_{n-1}=-\Tr(M)=-t$. Hence, when $n-r\leq k\leq n-1$ and $t>0$, we have $a_k<0$. 
Likewise, when $n-r\leq k\leq n-2$ and $t=0$, we have $a_k<0$, completing the proof. The following corollary of Theorem <ref> can be drawn. Let $M$ be a real and symmetric matrix of order $n$ with $\charpoly_M(x)=\sum_{k=0}^n a_kx^k$ and $\Inertia(M)=(1,n-1,0)$. * If $\Tr(M)=0$, then the sequence $|a_0|, |a_1|, \hdots, |a_{n-2}|$ of the absolute values of its coefficients from $\charpoly_M(x)$ is log-concave and unimodal. * If $\Tr(M)>0$, then the sequence $|a_0|, |a_1|, \hdots, |a_{n-2}|, |a_{n-1}|$ of the absolute values of its coefficients from $\charpoly_M(x)$ is log-concave and unimodal. Since $M$ is a real, symmetric matrix, $\charpoly_M(x)$ is real-rooted and hence by Lemma <ref>, it follows that the sequence $a_0,a_1,\hdots,a_{n-2},a_{n-1}, a_n$ is log-concave. 1. By Theorem <ref>, we get $a_k<0$ when $0\leq k\leq n-2$. Since all terms $a_0,a_1,\hdots,a_{n-2}$ are negative, the sequence comprising their absolute values $(|a_k|)_{k=0}^{n-2}$ is log-concave and positive. By Lemma <ref>, $|a_0|, |a_1|, \hdots, |a_{n-2}|$ is unimodal. 2. By Theorem <ref>, we have $a_k<0$ for $0\leq k\leq n-1$. As all the terms $a_0,a_1,\hdots,a_{n-1}$ are negative, the sequence comprising their absolute values $(|a_k|)_{k=0}^{n-1}$ is log-concave and positive. By Lemma <ref>, $|a_0|, |a_1|, \hdots, |a_{n-1}|$ is unimodal. § THE $\MIN_T$ MATRIX OF A TREE $T$ Let $T= \big(V,E\big)$ be a tree with $V=\{1,2,\hdots,n\}$. Further, let $E=\{e_1,e_2,\hdots,e_{n-1}\}$. Let $i,j \in V$ be such that $f=\{i,j\} \not\in E$ is a non-edge of $T$ with $d_{i,j} = d > 1$. Bapat and Sivasubramanian in [4] showed that $B = E \cup \{f\}$ is a basis of $\Min_T$'s row space. Consider the $n \times n$ matrix $N = \Min_T[B,B]$ obtained by restricting the matrix $\Min_T$ to its rows and columns in $B$. We start this section with the following result. Let $N = \Min_T[B,B]$ be the matrix as described above. Then, $N$ has $(n-1)$ negative eigenvalues and one positive eigenvalue. 
In our proof, we use the Schur complement formula for inertia. The matrix $N$ restricted to the rows and columns indexed by $E$ is $K = 2(J-I)$ (see <cit.>), whose inverse is also presented in <cit.>. Clearly, $K$ has $(n-2)$ negative eigenvalues and one positive eigenvalue. Further, let $x_f$ be an $(n-1)$-dimensional column vector with its components indexed by $e \in E$, its $e$-th component being $x_f(e) = \Min_T(f,e)$. Then, the Schur complement of $K$ in $N$ equals $p = 0 - x_f^tK^{-1}x_f$. Using the formula for $K^{-1}$ from <cit.>, one checks that $p < 0$. Since $\Inertia(N) = \Inertia(K) + \Inertia(p)$, we get that $N$ has only one positive eigenvalue and $n-1$ negative eigenvalues. To give our proof of Theorem <ref>, we compute $\charpoly_N(x)$ using equitable partitions. We first recall the definition of an equitable partition of a matrix $M$. Let $M$ be an $n \times n$ real, symmetric matrix and index the rows and columns of $M$ by elements of the set $X$. Let $\Pi=\{X_1,X_2,\hdots,X_p\}$ be a partition of the set $X$ and let $M$ be partitioned according to $\Pi$ as \[M=\left( {\begin{array}{cccc} M_{11} & M_{12} &\hdots & M_{1p}\\ M_{21} & M_{22} &\hdots & M_{2p}\\ \vdots &\vdots & \ddots & \vdots\\ M_{p1} & M_{p2}& \hdots &M_{pp}\\ \end{array} } \right).\] Here, $M_{ij}$ denotes the block submatrix of $M$ induced by the rows in $X_i$ and the columns in $X_j$. If the row sum of each block $M_{ij}$ is a constant, then the partition $\Pi$ is called an equitable partition. Let $q_{ij}$ denote the average row sum of $M_{ij}$. The matrix $Q=(q_{ij})$ is called the quotient matrix of $M$ with respect to $\Pi$. Next, we state a well-known result (see <cit.>) connecting the spectrum of a quotient matrix arising from an equitable partition to the spectrum of the original matrix. Let $Q$ be a quotient matrix of any real, symmetric, square matrix $M$ arising from an equitable partition. Then, all eigenvalues of $Q$ are eigenvalues of $M$. 
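The Lemma can be checked on a small example. In the sketch below (plain Python over exact rationals; charpoly and polyrem are our own helpers), we take $M = 2(J-I)$ with the equitable partition into the first index and the rest, build the quotient matrix $Q$ from block row sums, and verify that $\charpoly_Q(x)$ divides $\charpoly_M(x)$:

```python
from fractions import Fraction

def charpoly(M):
    # Faddeev-LeVerrier over exact rationals; returns [1, c1, ..., cn]
    # for x^n + c1 x^{n-1} + ... + cn
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    Mk = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    cs = [Fraction(1)]
    for k in range(1, n + 1):
        Mk = [[sum(A[i][l] * Mk[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
        ck = -sum(Mk[i][i] for i in range(n)) / k
        cs.append(ck)
        for i in range(n):
            Mk[i][i] += ck
    return cs

def polyrem(p, q):
    # remainder of polynomial division, coefficient lists by decreasing degree
    p = [Fraction(x) for x in p]
    while len(p) >= len(q):
        f = p[0] / q[0]
        for i in range(len(q)):
            p[i] -= f * q[i]
        p.pop(0)  # the leading coefficient is now exactly zero
    return p

n = 6
M = [[0 if i == j else 2 for j in range(n)] for i in range(n)]  # M = 2(J - I)
cells = [[0], list(range(1, n))]
Q = [[sum(M[cells[a][0]][j] for j in cells[b]) for b in range(len(cells))]
     for a in range(len(cells))]
# the partition is equitable: every row of a cell has the same block row sums
for a, cell in enumerate(cells):
    for r in cell:
        assert [sum(M[r][j] for j in cells[b]) for b in range(len(cells))] == Q[a]
assert Q == [[0, 2 * (n - 1)], [2, 2 * (n - 2)]]
# the Lemma: charpoly of the quotient divides charpoly of the original matrix
assert all(c == 0 for c in polyrem(charpoly(M), charpoly(Q)))
```

Here $\charpoly_M(x) = (x-2(n-1))(x+2)^{n-1}$ and $\charpoly_Q(x) = (x-2(n-1))(x+2)$, so the divisibility is visible by hand as well.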
We next find the spectrum of $\Min_T[B,B]$ for any tree $T$ of order $n$. For any tree $T$ on $n$ vertices, the eigenvalues of $\Min_T[B,B]$ are $-2$ with multiplicity $n-3$, and the three roots of the cubic polynomial $g(x)=x^3-(2n-6)x^2-(nd^2-5d^2+2nd-2d+5n-9)x-2(d-1)^2(n-1)$. Let $f = \{i,j\}$ with $d_{i,j} = d$. By relabelling, we can assume that the edges $e_1, e_2, \ldots, e_d$ are on the $ij$-path in $T$. Let $E_1=\{e_1,e_2,\hdots,e_d\}$ and $E_2=\{e_{d+1},\hdots,e_{n-1}\}$. Let $J$ denote a matrix all of whose entries are 1 (of appropriate dimension), $I$ the identity matrix (of appropriate dimension), and $\bone$ a column vector all of whose components are 1 (of appropriate dimension). With these, $N=\Min_T[B,B]$ can be written as \[ N= \begin{blockarray}{cccc} & E_1 & E_2 & f\\ \begin{block}{c(ccc)} E_1 & 2(J-I) & 2J & (d-1) \bone \\ E_2 & 2J & 2(J-I) & (d+1) \bone \\ f & (d-1)\bone^t & (d+1)\bone^t & 0\\ \end{block} \end{blockarray} ~.\] Let $e(i,j)$ denote the $n$-dimensional column vector that has its $i$-th component $1$, its $j$-th component $-1$ and all other components as $0$. If $S=\{e(j,j+1):1\leq j\leq d-1\}\cup \{e(j,j+1):d+1\leq j\leq n-2\}$, then for any $\textbf{x}\in S$, we have $N\textbf{x}=-2\textbf{x}$. Note that $|S|=n-3$, and that all vectors in $S$ are linearly independent. Therefore, $-2$ is an eigenvalue of $N$ with multiplicity at least $n-3$. Recall that $E_1=\{e_1,e_2,\hdots,e_d\}$ and $E_2=\{e_{d+1},\hdots,e_{n-1}\}$. Then it is easy to check that $\Pi_1=E_1 \cup E_2\cup \{f\}$ is an equitable partition of $N$ with quotient matrix \[Q_{\Pi_1}= \left ( {\begin{array}{ccc} 2(d-1) & 2(n-d-1) & d-1\\ 2d & 2(n-d-2) & d+1 \\ d(d-1) & (d+1)(n-d-1) & 0 \\ \end{array} } \right).\] By a direct calculation, the characteristic polynomial of $Q_{\Pi_1}$ is the cubic $g(x)$ above. By Lemma <ref>, all eigenvalues of $Q_{\Pi_1}$ are eigenvalues of $N$ as well. Since $g(-2)\neq 0$, the eigenvalues of $N$ are $-2$ with multiplicity $n-3$, and the roots of $g(x)=0$. This completes the proof. 
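The statement just proved can be verified numerically on a concrete instance. The sketch below (plain Python with exact rationals; the helpers and the min-over-pairings formula for the $\Min_T$ entries are our own assumptions) builds $N = \Min_T[B,B]$ for the path $P_6$ with $f = \{1,4\}$ (so $d=3$), and checks the factorization $\charpoly_N(x) = (x+2)^{n-3}g(x)$ together with the sign and peak-location claims of our earlier results:

```python
from fractions import Fraction

def charpoly(M):
    # Faddeev-LeVerrier over exact rationals; [1, c1, ..., cn] by decreasing degree
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    Mk = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    cs = [Fraction(1)]
    for k in range(1, n + 1):
        Mk = [[sum(A[i][l] * Mk[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
        ck = -sum(Mk[i][i] for i in range(n)) / k
        cs.append(ck)
        for i in range(n):
            Mk[i][i] += ck
    return cs

def polymul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

n, d = 6, 3
dist = lambda i, j: abs(i - j)                      # tree distances on the path 1-2-...-6
B = [(i, i + 1) for i in range(1, n)] + [(1, 4)]    # edges e_1,...,e_5 and f = {1,4}

def mint(p, q):
    # assumed Min_T entry: the smallest of the three pairing sums
    (i, j), (s, t) = p, q
    return min(dist(i, j) + dist(s, t), dist(i, s) + dist(j, t), dist(i, t) + dist(j, s))

N = [[mint(p, q) for q in B] for p in B]
Q = [[2 * (d - 1), 2 * (n - d - 1), d - 1],         # the quotient matrix Q_{Pi_1}
     [2 * d, 2 * (n - d - 2), d + 1],
     [d * (d - 1), (d + 1) * (n - d - 1), 0]]
g = charpoly(Q)
assert g == [1, -(2 * n - 6), -(n * d * d - 5 * d * d + 2 * n * d - 2 * d + 5 * n - 9),
             -2 * (d - 1) ** 2 * (n - 1)]
cube = polymul(polymul([1, 2], [1, 2]), [1, 2])     # (x+2)^{n-3} with n-3 = 3
assert charpoly(N) == polymul(cube, g)

a = list(reversed(charpoly(N)))                     # a[k] = coefficient of x^k
assert all(a[k] < 0 for k in range(n - 1))          # trace-zero case of the Corollary
peak = max(range(n - 1), key=lambda k: abs(a[k]))
assert (n - 2) // 3 <= peak <= -(-(n + 1) // 3)     # floor((n-2)/3) <= t <= ceil((n+1)/3)
```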
We proceed to give our proof of Theorem <ref>. Proof of Theorem <ref>: For a tree $T$ of order $n$, by Theorem <ref>, we have $\Inertia(\Min_T[B,B])=(1,n-1,0)$. Hence, by Corollary <ref>, the sequence $|a_0|, |a_1|, \cdots,|a_{n-2}|$ is unimodal and log-concave. Now, we have to find the peak location of this unimodal sequence. By Theorem <ref>, it follows that the characteristic polynomial of $\Min_T[B,B]$ is $f(x)=(x+2)^{n-3}(x^3+b_1x^2+c_1x+d_1),$ where $b_1=-(2n-6)$, $c_1=-(nd^2-5d^2+2nd-2d+5n-9)$ and $d_1=-2(d-1)^2(n-1)$. Let $a_k$ be the coefficient of $x^k$ in $f(x)$. One can check that \begin{align*} a_0&=d_1\binom{n-3}{0}2^{n-3}=-2(d-1)^2(n-1)2^{n-3}, \\ a_1&=\bigg[2c_1\binom{n-3}{0}+d_1\binom{n-3}{1}\bigg]2^{n-4}\\ &=-\big[(nd^2-5d^2+2nd-2d+5n-9)+(n-3)(d-1)^2(n-1)\big]2^{n-3} \\ a_{n-1}&=2\binom{n-3}{n-4}+b_1\binom{n-3}{n-3}=2(n-3)-2(n-3)=0,~~ a_n=1, \end{align*} and for $3\leq k\leq n-3$ \begin{align*} a_k &=\bigg[8\binom{n-3}{k-3}+4b_1\binom{n-3}{k-2}+2c_1\binom{n-3}{k-1}+d_1\binom{n-3}{k}\bigg]2^{n-k-3} \\ %&= \binom{n-3}{k-3}2^{n-k-3}\bigg(8+\frac{4b_1(n-k)}{k-2}+ \frac{2c_1(n-k)(n-k-1)}{(k-2)(k-1)}+\frac{d_1(n-k)(n-k-1)(n-k-2)}{(k-2)(k-1)k}\bigg)\\ &= \binom{n-3}{k-3}2^{n-k-3} f_1(n,k), ~~ \text{where}\\ f_1(n,k)&=8+\frac{4b_1(n-k)}{k-2}+ \frac{2c_1(n-k)(n-k-1)}{(k-2)(k-1)}+\frac{d_1(n-k)(n-k-1)(n-k-2)}{(k-2)(k-1)k}\\ &=8-\frac{8(n-3)(n-k)}{k-2}- \frac{2(nd^2-5d^2+2nd-2d+5n-9)(n-k-1)(n-k)}{(k-1)(k-2)}\\ &~~~~-\frac{2(d-1)^2(n-1)(n-k)(n-k-1)(n-k-2)}{(k-2)(k-1)k}. \end{align*} Thus, $|a_k|=\binom{n-3}{k-3}2^{n-k-3} |f_1(n,k)|$. When $n \geq 8$, it is easy to check that $|a_0|\leq |a_1|\leq |a_2|$ and $|a_{n-3}|\geq |a_{n-2}|$. 
Further, when $3\leq k\leq n-3$, we have \begin{align*} |a_k|- |a_{k-1}| &= \binom{n-3}{k-3}2^{n-k-3} |f_1(n,k)|-\binom{n-3}{k-4}2^{n-k-2} |f_1(n,k-1)|\\ %&= \binom{n-3}{k-4}2^{n-k-3} \bigg[\bigg(\frac{n-k+1}{k-3}\bigg)|f_1(n,k)|-2|f_1(n,k-1)|\bigg]\\ &= \binom{n-3}{k-4}2^{n-k-3} \bigg[\frac{8(n-2)(n^2-4kn+4n+3k^2-4k-1)}{(k-3)(k-2)}\\ &~~~~~~~~+\frac{2(nd^2-5d^2+2nd-2d+5n-9)(n-k)(n-k+1)}{(k-2)(k-3)}\cdot \bigg(\frac{n-3k+1}{k-1}\bigg)\\ &~~~~~~~~+\frac{2(d-1)^2(n-1)(n-k-1)(n-k)(n-k+1)}{(k-1)(k-2)(k-3)}\cdot \bigg(\frac{n-3k-2}{k}\bigg)\bigg]. \end{align*} Hence, when $3\leq k\leq n-3$, one can verify that $|a_k|\geq |a_{k-1}|$ if and only if $k\leq \frac{n-2}{3}$ and $|a_k|\leq |a_{k-1}|$ if and only if $k\geq \frac{n+4}{3}$. Thus, when $n \geq 8$, we have $|a_0|\leq |a_1|\leq |a_2| \leq \hdots \leq |a_{\lfloor \frac{n-2}{3} \rfloor}|$ and $|a_{\lceil \frac{n+4}{3}-1\rceil}| \geq |a_{\lceil \frac{n+4}{3}\rceil}| \geq \hdots \geq |a_{n-3}|\geq |a_{n-2}|$. Hence, if $|a_t|=\max_{0\leq k\leq n-2} |a_k|$, then $ \lfloor \frac{n-2}{3} \rfloor \leq t \leq \lceil \frac{n+1}{3}\rceil$. This completes the proof. § ISOMETRICALLY EMBEDDING $\MIN_T$ IN $\ELL_1$ SPACE For any tree $T$ having $n$ vertices, we show that the $\Min_T$ matrix is isometrically embeddable in $\RR^{n-1}$ equipped with the $\ell_1$ norm. This gives an alternate proof that the matrix $\Min_T$ has $r-1$ negative eigenvalues and one positive eigenvalue, where $r$ is the rank of $\Min_T$. Identify the $(n-1)$ dimensions of $\RR^{n-1}$ with the edges of $T$. For $\{i,j\} \in \VV$, the embedding $\phi_{ \{i,j\} }$ maps $\{i,j\}$ to the incidence vector of the unique path $P_{i,j}$ between $i$ and $j$ in $T$. We illustrate by an example. Let $T$ be the tree given in Figure <ref> with edge set $E = \{e_1,e_2,e_3, e_4\}$. For brevity, for $\{i,j\} \in \VV$, we omit the comma in the subscript and denote $\phi_{i,j}$ in Figure <ref> as $\phi_{ij}$. Let $f = \{1,4\} \in \VV$. 
The set of edges on the path $P_{1,4}$ between the vertices $1$ and $4$ is clearly $P_{1,4} = \{e_1,e_2 \}$ and thus, the column vector $\phi_{14} = (1,1,0,0)^t$. This column vector $\phi_{14}$ is illustrated with a different colour in Figure <ref>. $\begin{array}{l|r|r|r|r|r| r|r|r|r|r|} & \phi_{12} & \phi_{13} & \phi_{14} & \phi_{15} & \phi_{23} & \phi_{24} & \phi_{25} & \phi_{34} & \phi_{35} & \phi_{45} \\ \hline e_1 & 1 & 1 & \fby{1} & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ e_2 & 0 & 0 & \fby{1} & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ e_3 & 0 & 1 & \fby{0} & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ e_4 & 0 & 0 & \fby{0} & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ \end{array}$ A tree and its embedding. Column $\phi_{14}$ is illustrated on the left. After seeing the example in Figure <ref>, we are now ready for the proof of Theorem <ref>. (Of Theorem <ref>): We identify the $n-1$ dimensions of $\RR^{n-1}$ with the edges of $T$. Consider the embedding $\phi: \VV \rightarrow \RR^{n-1}$ described above. Thus $\phi_{i,j}$ is the incidence vector of the path $P_{i,j}$. For all $i,j, s,t \in V(T)$, we show that $\Min_T(\{i,j\}, \{s,t\}) = \lVert \phi_{i,j} - \phi_{s,t} \rVert_1$. We consider two cases depending on whether the path $P_{i,j}$ intersects the path $P_{s,t}$. Case 1, (when $P_{i,j} \cap P_{s,t} = \emptyset$): In this case, note that $\lVert \phi_{i,j} - \phi_{s,t} \rVert_1 = d_{i,j} + d_{s,t}$. If a vertex $\alpha$ on the path $P_{i,j}$ and a vertex $\beta$ on the path $P_{s,t}$ are chosen such that $d_{\alpha,\beta}$ is the smallest among all such choices, then as $d_{\alpha,\beta} \geq 0$, we have $d_{i,j} + d_{s,t} \leq d_{i,t} + d_{j,s}$ and $d_{i,j} + d_{s,t} \leq d_{i,s} + d_{j,t}$. Thus, $\Min_T(\{i,j\}, \{s,t\}) = d_{i,j} + d_{s,t}= \lVert \phi_{i,j} - \phi_{s,t} \rVert_1$. Case 2, (when $P_{i,j} \cap P_{s,t} \not= \emptyset$): Let $S = P_{i,j} \cap P_{s,t}$. 
As $T$ is a tree, it is easy to see that $S$ is a set of edges on a path from $\alpha$ to $\beta$, where $\alpha, \beta \in V(T)$. That is, $d_{\alpha,\beta} = |S|$. In this case, as the $|S|$ shared edges cancel in the difference, it is easy to see that $\lVert \phi_{i,j} - \phi_{s,t} \rVert_1 = d_{i,j} + d_{s,t} - 2d_{\alpha,\beta}$. It is now clear that the minimum element of the set $S_{i,j,s,t}$ is $d_{i,j} + d_{s,t} - 2d_{\alpha,\beta}$, completing the proof. We make two remarks from the proof of Theorem <ref>. In the proof of Theorem <ref>, note that the images $\phi_{i,j}$ are vectors in $\{0,1\}^{n-1}$. Thus, for any tree $T$ having $n$ vertices, its $\Min_T$ matrix is isometrically embeddable in the $(n-1)$ dimensional hypercube equipped with the Hamming metric. This is easily seen to be stronger than being isometrically embeddable in $\ell_1$ space. Theorem <ref> shows that the $\Min_T$ matrix satisfies the triangle inequality. The following corollary easily follows from Theorem <ref>. For any tree $T$, the matrix $\Min_T$ is hypermetric, is of negative type and has exactly one positive eigenvalue. § THE 2-STEINER DISTANCE MATRIX $\DD_2(T)$ OF A TREE $T$ Recall that for a tree $T$ having $n$ vertices and $p$ pendant vertices, Azimi and Sivasubramanian in <cit.> showed that its 2-Steiner distance matrix $\DD_2(T)$ has rank $r = 2n-p-1$. They also gave the following basis. For a tree $T$ of order $n$ with $p$ leaves, let $B_1,B_2,\hdots,B_{n-p}$ be the blocks of its line graph $\LG(T)$ such that $|B_i|=b_i$ for $ i=1,\ldots,n-p$. If $e_i\in V(\LG(T))$, $i=1,\ldots,n-1$ and $f_j$, $j=1,\ldots,n-p$, is the symmetric difference of endpoints of edge $f_j^\prime \in B_j$ in $\LG(T)$, then $B=\{e_1,e_2,\hdots,e_{n-1},f_1,\hdots,f_{n-p}\}$ forms a basis for the row space of $\DD_2(T)$. Below, we provide the proof of the first part of Theorem <ref>, followed by Corollary <ref>, which shows that the sequence $|a_0|, |a_1|, \hdots, |a_{r-1}|$ is unimodal and log-concave. 
(Of Theorem <ref> :) Let $T$ be a tree of order $n$ with $p$ pendant vertices and $r = 2n-p-1$. Azimi and Sivasubramanian in <cit.> showed that the matrix $\DD_2(T)[B,B]$ has $r-1$ negative eigenvalues and one positive eigenvalue. Hence, by Corollary <ref>, it follows that the sequence $|a_0|, |a_1|, \hdots,|a_{r-1}|$ is unimodal and log-concave, completing the first part. For the second part of Theorem <ref>, we give our bounds on the peak location of the coefficients of $\charpoly_{\DD_2(T)[B,B]}(x)$. As we consider three families of trees, the star $S_n$, the bi-star $S_{1,n-3}$ and the path $P_n$ on $n$ vertices, we trifurcate our proof into three subsections. §.§ Peak location for star trees For a star $S_n$ on $n$ vertices, $B=E\cup \{f\}$ is a basis of $\DD_2(S_n)$, where $E$ is the edge set of $S_n$ and $f=\{i,j\}\notin E$ for some two vertices $i,j$ of $S_n$. In the following theorem, we find the spectrum of $\DD_2(S_n)[B,B]$. For a star $S_n$ on $n$ vertices, the eigenvalues of $\DD_2(S_n)[B,B]$ are $-1$ with multiplicity $n-3$ and the roots of the cubic polynomial $g(x)=x^3-2(n-1)x^2-7(n-2)x-(n-1).$ Let $S_n$ have vertex set $V=\{1,2,\hdots,n\}$. Let $E(S_n)=\{e_i = \{1,i+1\}: 1 \leq i < n\}$ be its edge set. Without loss of generality, assume that $f=\{2,3\}\notin E(S_n)$ and $B=\{e_1,e_2,\hdots,e_{n-1},f\}$. \[ \DD_2(S_n)[B,B] = \begin{blockarray}{cccccc} & e_1 & e_2 & \hdots & e_{n-1} & f\\ \begin{block}{c(ccccc)} e_1 & 1 & 2 & \hdots & 2 & 2 \\ e_2 & 2 & 1 & \hdots & 2 & 2 \\ % e_3 & 2 & 2 & \hdots & 2 & 3 \\ \vdots & \vdots & \vdots &\vdots &\ddots &\vdots & \vdots \\ e_{n-1} & 2 & 2 & \hdots & 1 & 3 \\ f & 2 & 2 & \hdots & 3 & 2\\ \end{block} \end{blockarray} .\] Let $e(i,j)$ be the $n$-dimensional column vector with its $i$-th component $1$, its $j$-th component $-1$ and all other components being $0$. If $X=\{e(1,2)\}\cup \{e(j,j+1):3\leq j\leq n-2\}$, then for any $\textbf{x}\in X$, we clearly have $\big(\DD_2(S_n)[B,B]\big)\textbf{x}=-\textbf{x}$. 
Note that $|X|=n-3$, and all vectors in $X$ are linearly independent. Therefore, $-1$ is an eigenvalue of $\DD_2(S_n)[B,B]$ with multiplicity at least $n-3$. Let $E_1=\{e_1,e_2\}$, $E_2=\{e_3,\hdots,e_{n-1}\}$ and $\Pi_2: E_1 \cup E_2 \cup \{f\}$. It is easy to see that $\Pi_2$ is an equitable partition of $\DD_2(S_n)[B,B]$ and gives rise to the quotient matrix \[Q_{\Pi_2}= \left ( {\begin{array}{ccc} 3 & 2(n-3) & 2\\ 4 & 2(n-4)+1 & 3 \\ 4 & 3(n-3) & 2 \\ \end{array} } \right). \mbox{ A simple computation gives } \] \begin{equation} \label{eqn:charpoly_equi_partn} \charpoly_{Q_{\Pi_2}}(x) = g(x)=x^3-2(n-1)x^2-7(n-2)x-(n-1). \end{equation} By Lemma <ref>, all eigenvalues of $Q_{\Pi_2}$ are eigenvalues of $\DD_2(S_n)[B, B]$. Since $g(-1)\neq 0$, the eigenvalues of $\DD_2(S_n)[B, B]$ are $-1$ with multiplicity $n-3$, and the roots of $g(x)=0$, completing the proof. In our next result, we determine the peak location of the coefficients of $\charpoly_{\DD_2(S_n)[B,B]}(x)$ up to an interval of constant size that is independent of $n$. Let $B$ be the basis of $\DD_2(S_n)$ used in Theorem <ref>. If $a_0,a_1,\hdots,a_n$ are the coefficients of the characteristic polynomial of $\DD_2(S_n)[B,B]$ and $|a_t|=\max |a_k|$, then $\lfloor \frac{n-2}{2} \rfloor \leq t \leq \lceil \frac{n}{2}\rceil$. By Theorem <ref> and (<ref>), we have $f(x)=\charpoly_{\DD_2(S_n)[B,B]}(x)=(x+1)^{n-3}\big(x^3-2(n-1)x^2-7(n-2)x-(n-1)\big)$. If $a_k$ is the coefficient of $x^k$ in $f(x)$, then it is easy to see that \begin{align*} a_0 &=-(n-1),~~ a_1 =-(n^2+3n-11),~~a_2=-\frac{1}{2}(n^3+6n^2-47n+68),\\ a_k &= -\bigg[-\binom{n-3}{k-3}+2(n-1)\binom{n-3}{k-2}+7(n-2)\binom{n-3}{k-1}+(n-1)\binom{n-3}{k}\bigg]\\ &~~~~~\text{when}~ 3\leq k\leq n-3,\\ a_{n-2}&=-\frac{1}{2}(3n^2+5n-28),~~ a_{n-1}=-(n+1), ~~\text{and} ~~ a_n=1. \end{align*} It is easy to check that $|a_0|\leq |a_1|\leq |a_2|$ and $|a_{n-2}|\geq |a_{n-1}|\geq |a_{n}|$ when $n\geq 6$. 
When $4\leq k\leq n-3$, one can check that \begin{align*} &|a_k|- |a_{k-1}|\\ &= \binom{n-3}{k-4} \bigg[\frac{(2n^3+4n^2-6kn^2+4k^2n-3kn-2k^2+4)}{(k-2)(k-3)}+\bigg(\frac{7(n-2)(n-k)(n-k+1)}{(k-1)(k-2)}\bigg)\cdot\\ &~~~~~~~~~~~~~~~~~~~~~~\bigg(\frac{n-2k+2}{k-3}\bigg)+\bigg(\frac{(n-1)(n-k-1)(n-k)(n-k+1)}{(k-1)(k-2)(k-3)}\bigg)\cdot \bigg(\frac{n-2k-2}{k}\bigg)\bigg]. \end{align*} Hence, when $4\leq k\leq n-3$, it is easy to verify that $|a_k|\geq |a_{k-1}|$ if and only if $k\leq \frac{n-2}{2}$ and $|a_k|\leq |a_{k-1}|$ if and only if $k\geq \frac{n+2}{2}$. Thus, we have $|a_0|\leq |a_1|\leq |a_2| \leq \hdots \leq |a_{\lfloor \frac{n-2}{2} \rfloor}|$ and $|a_{\lceil \frac{n+2}{2}-1\rceil}| \geq |a_{\lceil \frac{n+2}{2}\rceil}| \geq \hdots \geq |a_{n-3}|\geq |a_{n-2}|$. Hence, if $|a_t|=\max_{0\leq k\leq n-2} |a_k|$, then $\lfloor \frac{n-2}{2} \rfloor \leq t \leq \lceil \frac{n}{2}\rceil$, completing our proof. §.§ Peak location for the bi-star $S_{1,n-3}$ Let $S_{1,n-3}$ be a tree on $n$ vertices obtained from the path $P_2$ with edge $\{v_1,v_2\}$ by attaching a pendant vertex $v_0$ to $v_1$ and $(n-3)$ pendant vertices $v_3,v_4,\hdots,v_{n-1}$ to $v_2$. Let $e_1=\{v_0,v_1\}, e_2=\{v_1,v_2\}$ and $e_i=\{v_2,v_i\}$ for $3\leq i \leq n-1$. Since $S_{1,n-3}$ has $n-2$ pendant vertices, two types of bases $B_1$ and $B_2$ are output by the algorithm given by Azimi and Sivasubramanian (see Remark <ref>). These are \begin{align*} B_1&=\{e_1,e_2,\hdots,e_{n-1},f_1,f_2\}~ \text{where}~ f_1=\{v_0,v_2\},f_2=\{v_1,v_3\}~\text{and}\\ B_2&=\{e_1,e_2,\hdots,e_{n-1},f_1,f_2\}~ \text{where}~ f_1=\{v_0,v_2\},f_2=\{v_3,v_4\}. \end{align*} We find the eigenvalues of both $\DD_2(S_{1,n-3})[B_1,B_1]$ and $\DD_2(S_{1,n-3})[B_2,B_2]$. Let $B_1$ and $B_2$ be the bases of $S_{1,n-3}$ as mentioned above. 
Then, * the eigenvalues of $\DD_2(S_{1,n-3})[B_1,B_1]$ are $-1$ with multiplicity $n-5$ and the roots of the polynomial $h_1(x)=x^6-2(n-1)x^5-(21n-36)x^4-(54n-126)x^3-(45n-110)x^2-(13n-28)x-(n-1)$ * the eigenvalues of $\DD_2(S_{1,n-3})[B_2,B_2]$ are $-1$ with multiplicity $n-5$ and the roots of the polynomial $h_2(x)$ With the given labelling, we have \begin{align*} \DD_2(S_{1,n-3})[B_1,B_1]&=\begin{blockarray}{cccccccc} %& e_1 & e_2 & e_3 & e_4 & \hdots & e_{n-1} & f_1 & f_2\\ & e_1 & e_2 & e_3 & \hdots & e_{n-1} & f_1 & f_2\\ \begin{block}{c(ccccccc)} e_1 & 1 & 2 & 3 & \hdots & 3 & 2 & 3 \\ e_2 & 2 & 1 & 2 & \hdots & 2 & 2 & 2 \\ e_3 & 3 & 2 & 1 & \hdots & 2 & 3 & 2 \\ %e_4 & 3 & 2 & 2 & \hdots & 2 & 3 & 3 \\ \vdots & \vdots & \vdots &\vdots &\ddots &\vdots &\vdots & \vdots \\ e_{n-1} & 3 & 2 & 2 & \hdots & 1 & 3 & 3 \\ f_1 & 2 & 2 & 3 & \hdots & 3 & 2 & 3 \\ f_2 & 3 & 2 & 2 & \hdots & 3 & 3 & 2 \\ \end{block} \end{blockarray}\quad \text{and}\\ \DD_2(S_{1,n-3})[B_2,B_2]&=\begin{blockarray}{ccccccccc} %& e_1 & e_2 & e_3 & e_4 & e_5 & \hdots & e_{n-1} & f_1 & f_2\\ & e_1 & e_2 & e_3 & e_4 & \hdots & e_{n-1} & f_1 & f_2\\ \begin{block}{c(cccccccc)} e_1 & 1 & 2 & 3 & 3 & \hdots & 3 & 2 & 4 \\ e_2 & 2 & 1 & 2 & 2 & \hdots & 2 & 2 & 3 \\ e_3 & 3 & 2 & 1 & 2 & \hdots & 2 & 3 & 2 \\ e_4 & 3 & 2 & 2 & 1 & \hdots & 2 & 3 & 2 \\ %e_5 & 3 & 2 & 2 & 2 & \hdots & 2 & 3 & 3 \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots &\vdots &\vdots & \vdots \\ e_{n-1} & 3 & 2 & 2 & 2 & \hdots & 1 & 3 & 3 \\ f_1 & 2 & 2 & 3 & 3 & \hdots & 3 & 2 & 4 \\ f_2 & 4 & 3 & 2 & 2 & \hdots & 3 & 4 & 2 \\ \end{block} \end{blockarray}~. \end{align*} As before, let $e(i,j)$ be the $(n+1)$-dimensional column vector with its $i$-th component $1$, its $j$-th component $-1$ and all other components being $0$. If $X= \{e(j,j+1):4\leq j\leq n-2\}$ and $Y=\{e(3,4)\}\cup \{e(j,j+1):5\leq j\leq n-2\}$, then for any $\textbf{x}\in X$ and $\textbf{y}\in Y$, we have $\big(\DD_2(S_{1,n-3})[B_1,B_1]\big)\textbf{x} =-\textbf{x}$ and $\big(\DD_2(S_{1,n-3})[B_2,B_2]\big)\textbf{y}=-\textbf{y}$. 
Note that $|X|=|Y|=n-5$, and that all vectors in both $X$ and $Y$ are linearly independent. Therefore, $-1$ is an eigenvalue of both $\DD_2(S_{1,n-3})[B_1,B_1]$ and $\DD_2(S_{1,n-3})[B_2,B_2]$ with multiplicity at least $n-5$. If $E_1=\{e_4,\hdots,e_{n-1}\}$, then it is easy to see that $\Pi_3:\{e_1\}\cup \{e_2\}\cup \{e_3\}\cup E_1 \cup \{f_1\}\cup \{f_2\}$ is an equitable partition of $\DD_2(S_{1,n-3})[B_1,B_1]$ with the quotient matrix given below. A simple computation gives the characteristic polynomial of $Q_{\Pi_3}$ to be $h_1(x)$. By Lemma <ref>, the eigenvalues of $Q_{\Pi_3}$ are eigenvalues of $\DD_2(S_{1,n-3})[B_1,B_1]$. Since $h_1(-1)\neq 0$, the eigenvalues of $\DD_2(S_{1,n-3})[B_1,B_1]$ are $-1$ with multiplicity $n-5$, and the roots of $h_1(x)=0$. If $E_2=\{e_3,e_4\}$ and $E_3=\{e_5,\hdots,e_{n-1}\}$, then it is easy to see that $\Pi_4:\{e_1\}\cup \{e_2\}\cup E_2 \cup E_3 \cup \{f_1\}\cup \{f_2\}$ is an equitable partition of $\DD_2(S_{1,n-3})[B_2,B_2]$ with the quotient matrix given below. \[ \left ( {\begin{array}{cccccc} 1 & 2 & 3 & 3(n-4) & 2 & 3\\ 2 & 1 & 2 & 2(n-4) & 2 & 2\\ 3 & 2 & 1 & 2(n-4) & 3 & 2\\ 3 & 2 & 2 & 2(n-4)-1 & 3 & 3\\ 2 & 2 & 3 & 3(n-4) & 2 & 3\\ 3 & 2 & 2 & 3(n-4) & 3 & 2\\ \end{array} } \right) \mbox{ and } \left ( {\begin{array}{cccccc} 1 & 2 & 6 & 3(n-5) & 2 & 4\\ 2 & 1 & 4 & 2(n-5) & 2 & 3\\ 3 & 2 & 3 & 2(n-5) & 3 & 2\\ 3 & 2 & 4 & 2(n-5)-1 & 3 & 3\\ 2 & 2 & 6 & 3(n-5) & 2 & 4\\ 4 & 3 & 4 & 3(n-5) & 4 & 2\\ \end{array} } \right)\] The characteristic polynomial of $Q_{\Pi_4}$ clearly equals $h_2(x)$, whose trailing terms are $(15n-34)x-(n-1)$. By Lemma <ref>, all eigenvalues of $Q_{\Pi_4}$ are eigenvalues of $\DD_2(S_{1,n-3})[B_2,B_2]$. Since $h_2(-1)\neq 0$, the eigenvalues of $\DD_2(S_{1,n-3})[B_2,B_2]$ are $-1$ with multiplicity $n-5$, and the roots of $h_2(x)=0$. For both $\DD_2(S_{1,n-3})[B_1,B_1]$ and $\DD_2(S_{1,n-3})[B_2,B_2]$, we determine the peak location of the coefficients of their characteristic polynomials in the next result. 
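Before turning to peak locations, the eigenvalue statement for $B_1$ can be sanity-checked numerically. The sketch below (plain Python with exact rationals; charpoly, polymul and the Steiner-distance helper are our own) builds $\DD_2(S_{1,n-3})[B_1,B_1]$ for $n=9$ and checks that its characteristic polynomial equals $(x+1)^{n-5}\charpoly_{Q_{\Pi_3}}(x)$:

```python
from fractions import Fraction

def charpoly(M):
    # Faddeev-LeVerrier over exact rationals; [1, c1, ..., cn] by decreasing degree
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    Mk = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    cs = [Fraction(1)]
    for k in range(1, n + 1):
        Mk = [[sum(A[i][l] * Mk[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
        ck = -sum(Mk[i][i] for i in range(n)) / k
        cs.append(ck)
        for i in range(n):
            Mk[i][i] += ck
    return cs

def polymul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def steiner(edges, S):
    # edges of the minimal subtree spanning vertex set S: an edge is needed
    # exactly when deleting it leaves vertices of S on both sides
    cnt = 0
    for e in edges:
        rest = [f for f in edges if f != e]
        comp, grew = {e[0]}, True
        while grew:
            grew = False
            for u, v in rest:
                if (u in comp) != (v in comp):
                    comp |= {u, v}
                    grew = True
        if any(s in comp for s in S) and any(s not in comp for s in S):
            cnt += 1
    return cnt

n = 9                                                       # bi-star S_{1,n-3} on v0,...,v8
edges = [(0, 1), (1, 2)] + [(2, i) for i in range(3, n)]    # e1, e2, e3,...,e8
B1 = edges + [(0, 2), (1, 3)]                               # plus f1={v0,v2}, f2={v1,v3}
M = [[steiner(edges, set(p) | set(q)) for q in B1] for p in B1]

Q = [[1, 2, 3, 3 * (n - 4), 2, 3],                          # quotient matrix Q_{Pi_3}
     [2, 1, 2, 2 * (n - 4), 2, 2],
     [3, 2, 1, 2 * (n - 4), 3, 2],
     [3, 2, 2, 2 * (n - 4) - 1, 3, 3],
     [2, 2, 3, 3 * (n - 4), 2, 3],
     [3, 2, 2, 3 * (n - 4), 3, 2]]

lhs = charpoly(M)
rhs = polymul([1, 4, 6, 4, 1], charpoly(Q))                 # (x+1)^{n-5} * charpoly(Q)
assert lhs == rhs
```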
Let $S_{1,n-3}$ be the tree on $n$ vertices as mentioned above. * Let $a_0,a_1,\hdots,a_n, a_{n+1}$ be the coefficients of $\charpoly_{\DD_2(S_{1,n-3})[B_1,B_1]}(x)$ and $|a_t|=\max |a_k|$. Then $\lfloor \frac{n-4}{2} \rfloor \leq t \leq \lceil \frac{n+4}{2}\rceil$. * Let $b_0,b_1,\hdots,b_n,b_{n+1}$ be the coefficients of $\charpoly_{\DD_2(S_{1,n-3})[B_2,B_2]}(x)$ and $|b_t|=\max |b_k|$. Then $\lfloor \frac{n-4}{2} \rfloor \leq t \leq \lceil \frac{n+4}{2}\rceil$. Since our proofs for both parts are very similar, we give details for the first part and only sketch details of the second part. Proof of item 1. By Theorem <ref>, it follows that the characteristic polynomial of $\DD_2(S_{1,n-3})[B_1,B_1]$ is $h(x)=(x+1)^{n-5}h_1(x)$. If $a_k$ is the coefficient of $x^k$ in $h(x)$, then, we have \begin{align*} a_0 &=-(n-1),~~~a_1 =-(n^2+7n-23),~~~a_2=-\frac{1}{2}(n^3+14n^2-55n+30),\\ a_3 &=-\frac{1}{6}(n^4+20n^3-118n^2+91n+234),\\ a_k&=- \bigg[(n-1)\binom{n-5}{k}+(13n-28)\binom{n-5}{k-1}+(45n-110)\binom{n-5}{k-2}+(54n-126)\cdot\\ &~~~\binom{n-5}{k-3}+(21n-36)\binom{n-5}{k-4}+2(n-1)\binom{n-5}{k-5}-\binom{n-5}{k-6}\bigg]~ \text{for}~ 6\leq k\leq n-5,\\ a_{n-2}&=-\frac{1}{6}(5n^3+72n^2-383n+354),~a_{n-1}=-\frac{1}{2}(3n^2+29n-82),~ a_n=-(n+3),~ a_{n+1}=1. \end{align*} It is easy to check when $n \geq 6$ that $|a_0|\leq |a_1|\leq |a_2|$ and that $|a_{n-2}|\geq |a_{n-1}|\geq |a_{n}|$. When $7\leq k\leq n-5$, one can check that $|a_k|\geq |a_{k-1}|$ if and only if $k\leq \frac{n-4}{2}$ and $|a_k|\leq |a_{k-1}|$ if and only if $k\geq \frac{n+6}{2}$. Thus, we have $|a_0|\leq |a_1|\leq |a_2| \leq \hdots \leq |a_{\lfloor \frac{n-4}{2} \rfloor}|$ and $|a_{\lceil \frac{n+6}{2}-1\rceil}| \geq |a_{\lceil \frac{n+6}{2}\rceil}| \geq \hdots \geq |a_{n-2}|\geq |a_{n-1}|\geq |a_{n}|$. 
Hence, if $|a_t|=\max_{0\leq k\leq n} |a_k|$, then $\lfloor \frac{n-4}{2} \rfloor \leq t \leq \lceil \frac{n+4}{2}\rceil$. This completes the proof of the first part. Proof of item 2. By Theorem <ref>, it follows that the characteristic polynomial of $\DD_2(S_{1,n-3})[B_2,B_2]$ is $(x+1)^{n-5}h_2(x)$. As the rest of the proof is similar to the first case, we omit its details. §.§ Bounds on the peak location of the Path For a matrix $M$ and index sets $\alpha$ and $\beta$, the submatrix of $M$ restricted to the rows in $\alpha$ and the columns in $\beta$ is denoted by $M[\alpha,\beta]$. When $\alpha=\beta$, we use the notation $M[\alpha]$ to denote the principal submatrix $M[\alpha,\alpha]$ of $M$. We also use the notation $M(\alpha|\beta)$ to denote the submatrix of $M$ obtained by deleting the rows corresponding to $\alpha$ and the columns corresponding to $\beta$. We recall some results from [3]. \begin{lemma} \label{lem:steiner-v} Suppose $T$ is a tree of order $n$ with $p$ leaves. Let $B$ be a basis of $\DD_2(T)$ as defined in Remark <ref> with $B =\{e_1,e_2,\ldots,e_{n-1},f_1,\ldots,f_{n-p}\}$ and let $\v$ be the column vector defined as $v_{e_i} = 1- \sum_{j:\, e_i\in f_j'} (|B_j|-1)$ and $v_{f_i} = |B_i|-1$, where $f_i'\in B_i$. Then $\1^t\v=1$ and $\DD_2(T)[B,B]\v=(n-1)\1$. \end{lemma} \begin{remark} \label{remark:v-for-path} When $T=P_n$, a path on $n$ vertices, the vector $\v$ defined in Lemma \ref{lem:steiner-v} is given by $v_{f_j} =1$ for $1\leq j\leq n-p$, and for $1\leq i\leq n-1$, $v_{e_i}=\begin{cases} 0 & \text{if $e_i$ is a pendant edge},\\ -1 & \text{otherwise.} \end{cases}$ \end{remark} \begin{remark} \label{remark: basis Pn} Let $P_n$ be the path on $n$ vertices with edges $e_i=\{i,i+1\}$ for $i=1,\ldots, n-1$. Let $B=(e_1,f_1,e_2,f_2,\ldots, e_{n-2},f_{n-2},e_{n-1})$ be the ordered basis for the row space of $\DD_2(P_n)$ where $f_j=\{j,j+2\}$, for $j=1,\ldots, n-2$. We follow this particular ordering of $B$ to order the rows and columns of $\DD_2(P_n)[B,B]$. 
\end{remark} We denote by $\DPn$ the matrix $\DD_2(P_n)[B, B]$, that is, $\DPn:=\DD_2(P_n)[B, B]$. \begin{remark}\label{remark:laplacian-like} By the definition of the Laplacian-type matrix outlined in \cite[Page 77]{aliazimi-siva-steiner-2-dist}, we define the matrix $L$ whose rows and columns are indexed by the elements of $B$ with entries as follows: the entries $L(e_i,e_j)$ and $L(f_i,f_j)$ are zero if $i\neq j$. Further, define $$L(x,x) =\begin{cases} 2 & \text{if $x\in B\setminus\{e_1,e_{n-1}\}$},\\ 1 & \text{if $x\in \{e_1,e_{n-1}\}$}, \end{cases} \quad \text{and} \quad L(e_i,f_k) = \begin{cases} -1 & \text{if $i\in\{k-1,k\}$},\\ 0 & \text{otherwise}. \end{cases}$$ Note that $L$ is the symmetric tridiagonal matrix $$\left(\begin{array}{cccccc} 1 & -1 & 0 & 0 & \cdots & 0 \\ -1 & 2 & -1 & 0 & \cdots & 0 \\ 0 & -1 & 2 & -1 & \cdots & 0 \\ \vdots & \vdots &\ddots & \ddots &\ddots & \vdots \\ 0 & 0 & \cdots & -1 & 2 & -1 \\ 0 & 0 & \cdots & 0 & -1 & 1 \end{array}\right).$$ \end{remark} The following result is a special case of \cite[Theorem 1 and 2]{aliazimi-siva-steiner-2-dist} and provides the determinant and inverse of $\DPn$. \begin{theorem}\label{th:det-inv-st-path} Let $P_n$ be the path on $n$ vertices and let $B=(e_1,f_1,e_2,f_2,\ldots,e_{n-2},f_{n-2},e_{n-1})$ be the ordered basis of $\DD_2(P_n)$ as defined in Remark \ref{remark: basis Pn}. Then $\det\DPn = (n-1)$ and $ \DPn^{-1} = -L +\frac{1}{n-1} \v \v^t.$ \end{theorem} We need the following result (see Horn and Johnson \cite[Page 18] {horn-johnson-matrix-analysis}), about the blocks in the inverse of a partitioned nonsingular matrix $M$. \begin{lemma}\label{lem:partition-inverse} Let $M$ be a nonsingular matrix and $\alpha$ be a subset of the index set of $M$'s rows and columns. Let $\alpha^c$ denote the complement set of $\alpha$ and suppose $M^{-1}[\alpha]$ and $M[\alpha^c]$ are invertible. 
Then, $$\left(M^{-1}[\alpha]\right)^{-1}= M[\alpha]-M\left[\alpha, \alpha^c\right]M\left[\alpha^c\right]^{-1} M\left[\alpha^c, \alpha\right].$$ \end{lemma} In our next result, we find the principal minors of $\DPn$ of size $2n-4$. \begin{theorem}\label{th:principal-monors} Let $P_n$ be a path on $n\geq 3$ vertices and $\DD_2(P_n)$ be its $2$-Steiner distance matrix. If $B$ is a basis of $\DD_2(P_n)$'s row space and $\alpha \in B$, then $$\det \DPn(\alpha | \alpha) = \begin{cases} -(n-1) & \text{ if $\alpha$ is a pendant edge in $P_n$},\\ -(2n-3) & \text{otherwise}. \end{cases}$$ \end{theorem} \begin{proof} Our proof is by induction on $n$. Let $B=\{e_1,f_1,e_2\}$ be a basis of $\DD_2(P_3)$. Note that $\DP3=\left(\begin{array}{rrr} 1 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 1 \end{array}\right)$. Clearly $\det\DP3(f_1 | f_1)=-3$ and $\det\DP3(e_i | e_i)=-2$ for $i=1,2$. Hence, the result holds when $n=3$. Further, note that $\DP4 = \left(\begin{array}{rrrrr} 1 & 2 & 2 & 3 & 3 \\ 2 & 2 & 2 & 3 & 3 \\ 2 & 2 & 1 & 2 & 2 \\ 3 & 3 & 2 & 2 & 2 \\ 3 & 3 & 2 & 2 & 1 \end{array}\right).$ One can verify that $\det \DP4(e_1|e_1) =\det \DP4(e_3|e_3)=-3$ and that $\det\DP4(\alpha|\alpha)=-5$ for $\alpha\in \{f_1,e_2,f_2\}$. Hence, our result is also true when $n=4$. Assume that the statement is true for all paths on $k$ vertices, where $k\leq n-1$. Let $P_n$ be the path on $n>4$ vertices with $e_i=\{i,i+1\}$ for $i=1,\ldots, n-1$ and $f_j=\{j,j+2\}$ for $j=1,\ldots, n-2$. Let $B_n=(e_1,f_1,e_2,f_2,\ldots,e_{n-2},f_{n-2},e_{n-1})$ be an ordered basis of $\DD_2(P_n)$'s row space and $\DPn= \DD_2(P_n)[B_n,B_n]$. Further note that $\dST(e_1,b_i) =\dST(f_1,b_i)$ for each $b_i\in B_n\setminus \{e_1\}$ and $\dST(f_1,b_i) =\dST(e_2,b_i)$ for each $b_i\in B_n\setminus \{e_1,f_1\}$. For $x\in B_n$, we write $r_x$ (respectively $c_x$) to denote the row (respectively column) corresponding to $x$. 
By performing the elementary row and column operations $r_{f_1}=r_{f_1}-r_{e_2}$ and $c_{f_1}=c_{f_1}-c_{e_2}$ on the matrix $\DPn(e_1|e_1)$ we get $$\left( \begin{array}{r|c} -1 & \1^t\\ \hline \1 & \DP{n-1} \end{array}\right),$$ where $\DP{n-1}=\DD_2(P_{n-1})[B_{n-1},B_{n-1}]$. By Lemma \ref{lem:steiner-v}, there exists $\v$ such that $\1^t\v=1$ and $\DP{n-1}\v=(n-2)\1$. By Schur complements and the determinantal formula \cite[sec. 0.8.5]{horn-johnson-matrix-analysis} and Theorem \ref{th:det-inv-st-path}, we get \begin{align*} \det \DPn(e_1|e_1) = \det(\DP{n-1}) \left(-1- \1^t \DP{n-1}^{-1}\1\right) = (n-2) \left(-1- \frac{\1^t \v}{n-2}\right) =-(n-1). \end{align*} Analogously, we have $ \det \DPn(e_{n-1}|e_{n-1}) =-(n-1)$. Let $\alpha\in \{f_1,e_{2},\ldots, e_{n-2}, f_{n-2}\}$. Since $n>4$, without loss of generality, we may assume that $\{e_1,f_1,e_2\} \subset \alpha^c$. By performing the row and column operations $r_{f_1}=r_{f_1}-r_{e_2}$ and $c_{f_1}=c_{f_1}-c_{e_2}$ on the matrix $\DPn(\alpha|\alpha)$ we get \begin{align*} \DPn(\alpha|\alpha) \sim \left(\begin{array}{rr|c} -2 & 1 & \1^t \\ 1 & -1 & \0^t \\ \hline \1 & \0 & \DP{n-1}(\alpha|\alpha) \end{array}\right). \end{align*} Again, by applying Schur complements and the determinantal formula, we get \begin{equation} \det \DPn(\alpha|\alpha) =\det \DP{n-1}(\alpha|\alpha) \det \left[ \left(\begin{array}{cc} -2 & 1 \\ 1 & -1 \end{array}\right) - X^t \DP{n-1}(\alpha|\alpha)^{-1} X \right], \label{eq:schur-complement-minor} \end{equation} where $X= \begin{pmatrix} \1 & \0 \end{pmatrix} $. By Lemma \ref{lem:partition-inverse}, we get \begin{equation}\label{eq:inverse-D-partition} \DP{n-1}[\alpha^c]^{-1} = \DP{n-1}^{-1}[\alpha^c] - \DP{n-1}^{-1}[\alpha^c,\alpha] \left(\DP{n-1}^{-1}[\alpha]\right)^{-1}\DP{n-1}^{-1}[\alpha,\alpha^c]. \end{equation} Let $L$ be the Laplacian-like matrix for the tree $P_{n-1}$, as described in Remark \ref{remark:laplacian-like}. Suppose $v[\alpha] =t$. Clearly $t\in\{-1,1\}$. 
By Theorem \ref{th:det-inv-st-path}, we get \begin{equation}\label{eq:m11} \DP{n-1}^{-1}[\alpha] = -L[\alpha] + \frac{1}{n-2} v[\alpha](v[\alpha])^t = -2+\frac{1}{n-2} =- \frac{2n-5}{n-2}. \end{equation} Note that $\1^tv[\alpha^c] +v[\alpha] =1$. Since $L\1=\0$, it follows that $\1^tL[\alpha^c]\1 = 2$. Hence, by Theorem \ref{th:det-inv-st-path}, we get \begin{equation} \label{eq:m12} X^t \DP{n-1}^{-1}[\alpha^c] X = - \begin{pmatrix} \1^t L[\alpha^c]\1 & 0 \\ 0 & 0 \end{pmatrix} + \frac{1}{n-2} \begin{pmatrix} \1^t v[\alpha^c] (v[\alpha^c])^t\1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} -2 + \dfrac{(1-t)^2}{n-2} & 0\\0&0 \end{pmatrix}. \end{equation} Further, note that \begin{equation} \label{eq:m13} X^t \DP{n-1}^{-1}[\alpha^c,\alpha] = X^t \left[ -L[\alpha^c,\alpha] + \frac{1}{n-2} v[\alpha^c](v[\alpha])^t\right] = \begin{pmatrix} 2 + \dfrac{t(1-t)}{n-2} \\0 \end{pmatrix}. \end{equation} \begin{equation} \label{eq:m14} \DP{n-1}^{-1}[\alpha,\alpha^c] X = \left[ -L[\alpha,\alpha^c] + \frac{1}{n-2} v[\alpha](v[\alpha^c])^t\right]X = \begin{pmatrix} 2 + \dfrac{t(1-t)}{n-2} & 0 \end{pmatrix}. \end{equation} By \eqref{eq:inverse-D-partition}, \eqref{eq:m11}, \eqref{eq:m12}, \eqref{eq:m13}, and \eqref{eq:m14}, we get \begin{equation*} X^t \DP{n-1}(\alpha|\alpha)^{-1} X = \begin{pmatrix} -2 + \dfrac{(1-t)^2}{n-2} & 0\\0&0 \end{pmatrix} +\frac{n-2}{2n-5} \begin{pmatrix} \left[2 + \dfrac{t(1-t)}{n-2} \right]^2 & 0\\0&0 \end{pmatrix} . \end{equation*} On simplification we get \begin{equation} \label{eq:m15} X^t \DP{n-1}(\alpha|\alpha)^{-1} X = \begin{pmatrix} \frac{2}{2n-5}+f(t)& 0\\0&0 \end{pmatrix}, \end{equation} where $f(t) = \frac{(1-t)^2}{n-2}+\frac{4t(1-t)}{2n-5}+\frac{t^2(1-t)^2}{(n-2)(2n-5)} $. It is easy to check that $f(\pm 1)=0$. Hence, it follows from \eqref{eq:m15} that \begin{equation} \label{eq:m16} X^t \DP{n-1}(\alpha|\alpha)^{-1} X = \begin{pmatrix} \frac{2}{2n-5}& 0\\0&0 \end{pmatrix}. 
\end{equation} By \eqref{eq:schur-complement-minor} and \eqref{eq:m16} we get $$\det \DPn(\alpha|\alpha) = \frac{2n-3}{2n-5}\, \det \DP{n-1}(\alpha|\alpha).$$ Thus, the result follows by induction and our proof is complete. \end{proof} We next present our upper bound on the peak location for the coefficients of the characteristic polynomial of $\DPn=\DD_2(P_n)[B,B]$, where the basis $B$ is defined as in Remark \ref{remark: basis Pn}. \begin{theorem} \label{th: peak-steiner-path} Let $P_n$ be a path on $n> 2$ vertices and $\charpoly_{\DPn}(x) = \sum_{i=0}^{2n-3}a_ix^i$. If $|a_\ell| = \max\{|a_0|,|a_1|, \ldots, |a_{2n-4}|\}$, then $\ell \leq \left\lfloor\dfrac{7n}{5}\right\rfloor$. \end{theorem} \begin{proof} To prove the result we will use \cite[Lemma 3.2(1)]{Abiad-Brimkov-Hayat-Khramova-Koolen-dist-char-poly}. Since $\det (\DPn) = (n-1)$, we have $|a_0| = n-1$. Furthermore, by Theorem \ref{th:principal-monors}, the sum of all principal minors of $\DD_2(P_n)[B,B]$ of size $2n-4$ is given by $$-(2n-5)(2n-3) -2(n-1) = -(4n^2-14n+13).$$ It follows that $|a_1|= 4n^2-14n+13$. Now note that $$\dfrac{(2n-3)-j}{(2n-3)(j+1)} \cdot \dfrac{4n^2-14n+13}{n-1} <1 \iff j > \dfrac{(2n-3)(4n^2-15n+14)}{3(2n-3)(n-2)+2(n-1)} =: f(n).$$ Suppose $g(n) = \frac{7n}{5}$. Since $g(3)>f(3)$ and $g'(n) - f'(n) > 0$ for $n > 2$, we have $g(n)>f(n)$ for all $n\geq 3$. Hence, by \cite[Lemma 3.2(1)]{Abiad-Brimkov-Hayat-Khramova-Koolen-dist-char-poly}, it follows that $\ell \leq \left\lfloor\dfrac{7n}{5}\right\rfloor$. \end{proof} Note that Theorem \ref{th: peak-steiner-path} only provides an upper bound for the peak location of the unimodal sequence $\{|a_0|,\ldots, |a_{2n-4}|\}$ associated to the path $P_n$. One can use the approach mentioned in \cite[Lemma 3.2(2)]{Abiad-Brimkov-Hayat-Khramova-Koolen-dist-char-poly} to get a lower bound on the peak location. However, to use \cite[Lemma 3.2(2)]{Abiad-Brimkov-Hayat-Khramova-Koolen-dist-char-poly}, a suitable estimate of $a_{2n-4}$ and $a_{2n-5}$ is required. 
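The quantities used in the proof above are easy to check numerically. The following Python sketch (standard library only; the function names are ours) rebuilds $\DD_2(P_n)[B,B]$ from the fact that the Steiner distance of a vertex set $S$ in $P_n$ is $\max S-\min S$, computes the characteristic polynomial exactly via the Faddeev--LeVerrier recursion, and locates the peak coefficient for small $n$.

```python
from fractions import Fraction

def steiner_basis_matrix(n):
    # Ordered basis B = (e_1, f_1, e_2, f_2, ..., f_{n-2}, e_{n-1}) of 2-subsets
    # of {1,...,n}: e_i = {i, i+1}, f_j = {j, j+2}.  For the path P_n the Steiner
    # distance of a vertex set S is max(S) - min(S), so the (S, T) entry of
    # D_2(P_n)[B, B] is max(S u T) - min(S u T).
    B = []
    for i in range(1, n - 1):
        B.append((i, i + 1))   # e_i
        B.append((i, i + 2))   # f_i
    B.append((n - 1, n))       # e_{n-1}
    return [[max(S + T) - min(S + T) for T in B] for S in B]  # size 2n-3

def charpoly_coeffs(M):
    # Faddeev-LeVerrier recursion: exact coefficients a_0, ..., a_m of
    # det(xI - M), returned lowest degree first (so a_m = 1).
    m = len(M)
    I = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    N = [row[:] for row in I]
    cs = [Fraction(1)]
    for k in range(1, m + 1):
        MN = [[sum(Fraction(M[i][t]) * N[t][j] for t in range(m))
               for j in range(m)] for i in range(m)]
        ck = -sum(MN[i][i] for i in range(m)) / k
        cs.append(ck)
        N = [[MN[i][j] + ck * I[i][j] for j in range(m)] for i in range(m)]
    return cs[::-1]

def peak_location(n):
    # Index l maximizing |a_l| over a_0, ..., a_{2n-4}; the leading
    # coefficient a_{2n-3} = 1 is excluded, as in the theorem statement.
    a = charpoly_coeffs(steiner_basis_matrix(n))[:-1]
    return max(range(len(a)), key=lambda i: abs(a[i]))
```

For small $n$ this agrees with the values derived above: $|a_0|=n-1$, $|a_1|=4n^2-14n+13$, $|a_{2n-4}|=3n-5$, and $\ell\leq\lfloor 7n/5\rfloor$.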
In the case of $P_n$, even if $a_{2n-4}$ and $a_{2n-5}$ are known exactly, \cite[Lemma 3.2(2)]{Abiad-Brimkov-Hayat-Khramova-Koolen-dist-char-poly} does not seem to provide a lower bound on the peak location, and so we do not discuss this aspect in this paper. Using SageMath [15], for $5<n<15$ the actual peak location for $P_n$ appears to be $n-1$. We record this as a conjecture. \begin{conjecture} For a path $P_n$ on $n> 5$ vertices, if $\charpoly_{\DPn}(x) = \sum_{i=0}^{2n-3}a_ix^i$ and $|a_\ell| = \max\{|a_0|,|a_1|, \ldots, |a_{2n-4}|\}$, then $\ell =n-1$. \end{conjecture} We further note that $|a_{2n-4}|$ equals the trace of $\DPn$, and hence $|a_{2n-4}|=2(n-2) +(n-1) =3n-5$. One needs to find principal minors of a suitable size to estimate $a_{2n-5}$. Again, based on the SageMath data, we make the following conjecture that provides an estimate for $a_{2n-5}$. \begin{conjecture} For a path $P_n$ on $n> 5$ vertices, if $\charpoly_{\DPn}(x) = \sum_{i=0}^{2n-3}a_ix^i$, then $a_{2n-5} =- \frac{1}{6} (n - 1) (n - 2) (2n^2 + 6n - 15).$ \end{conjecture} % Bibliography \bibliographystyle{abbrv} \newcommand{\etalchar}[1]{$^{#1}$} \begin{thebibliography}{AAB{\etalchar{+}}18} [1] G.~Aalipour, A.~Abiad, Z.~Berikkyzy, L.~Hogben, F.~H.~J. Kenter, J.~C.-H. Lin, and M.~Tait. \newblock Proof of a {C}onjecture of {G}raham and {L}ov{\'a}sz concerning {U}nimodality of {C}oefficients of the {D}istance {C}haracteristic {P}olynomial of a {T}ree. \newblock {\em Electronic Journal of Linear Algebra}, 34:373--380, 2018. [2] A. Abiad, B. Brimkov, S. Hayat, A.~P. Khramova, and J.~H. Koolen. \newblock Extending a conjecture of {G}raham and {L}ov{\'a}sz on the distance characteristic polynomial. \newblock {\em Linear Algebra and its Applications}, to appear, 2023. doi:10.1016/j.laa.2023.03.027. [3] A. Azimi and S.~Sivasubramanian. \newblock The $2$-Steiner distance matrix of a tree. \newblock {\em Linear Algebra and its Applications}, 655:65--86, 2022. [4] R.~B. 
Bapat and S.~Sivasubramanian. \newblock Smith {N}ormal {F}orm of a distance matrix inspired by the four-point condition. \newblock {\em Linear Algebra and its Applications}, 603:301--312, 2020. [5] A.~E. Brouwer and W.~H. Haemers. \newblock {\em {Spectra of Graphs}}. \newblock Universitext, Springer, 2012. [6] P. Br{\"a}nd{\'e}n. \newblock Unimodality, {L}og-concavity, {R}eal-rootedness and {B}eyond. \newblock In M.~B{\'o}na, editor, {\em Handbook of Enumerative Combinatorics}, chapter~7. Chapman \& Hall/CRC Press, 2015. [7] P. Buneman. \newblock A {N}ote on the {M}etric {P}roperties of {T}rees. \newblock {\em Journal of Combinatorial Theory Series B}, 17:48--50, 1974. [8] K.~L. Collins. \newblock On a conjecture of {G}raham and {L}ov{\'a}sz about distance matrices. \newblock {\em Discrete Applied Mathematics}, 25:27--35, 1989. [9] M.~Deza and M.~Laurent. \newblock {\em Geometry of Cuts and Metrics}. \newblock Algorithms and Combinatorics, vol.~15, Springer, 1997. [10] M.~Edelberg, M.~R. Garey, and R.~L. Graham. \newblock On the distance matrix of a tree. \newblock {\em Discrete Mathematics}, 14:23--39, 1976. [11] R.~L. Graham and L.~Lov{\'a}sz. \newblock Distance matrix polynomials of trees. \newblock {\em Advances in Mathematics}, 29(1):60--88, 1978. [12] R.~A. Horn and C.~R. Johnson. \newblock {\em Matrix Analysis}. \newblock Cambridge University Press, 2012. [13] A. Pr{\'e}kopa. \newblock Logarithmic concave measures and related topics. \newblock In {\em Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics}, pages 513--525. University of California Press, 1971. [14] R.~P. Stanley. \newblock Log-concave and unimodal sequences in algebra, combinatorics, and geometry. \newblock {\em Annals of the New York Academy of Sciences}, 576:500--535, 1989. [15] The~Sage Developers. \newblock {\em SageMath, the {S}age {M}athematics {S}oftware {S}ystem ({V}ersion 9.4)}, 2021. \newblock {\tt http://www.sagemath.org}. \end{thebibliography} % \bibliography{bibdatabase} \end{document}
# Correlate-and-Excite: Real-Time Stereo Matching via Guided Cost Volume Excitation Antyanta Bangunharcana1, Jae Won Cho2, Seokju Lee2, In So Kweon2, Kyung-Soo Kim1, Soohyun Kim1 1A. Bangunharcana, K-S. Kim, and S. Kim are with the Mechatronics, Systems and Control Laboratory, KAIST, Daejeon, 34141, Republic of Korea {antabangun, kyungsoo<EMAIL_ADDRESS>2J. W. Cho, S. Lee, and I. S. Kweon are with the Robotics and Computer Vision Laboratory, KAIST, Daejeon, 34141, Republic of Korea {chojw, seokju91<EMAIL_ADDRESS> ###### Abstract Volumetric deep learning approaches to stereo matching aggregate a cost volume computed from the input left and right images using 3D convolutions. Recent works have shown that utilizing extracted image features and spatially varying cost volume aggregation complements 3D convolutions. However, existing methods with spatially varying operations are complex and incur considerable computation time and memory consumption. In this work, we construct Guided Cost volume Excitation (GCE) and show that simple channel excitation of the cost volume, guided by the image, can improve performance considerably. Moreover, we propose a novel method of using top-$k$ selection prior to soft-argmin disparity regression for computing the final disparity estimate. Combining our novel contributions, we present an end-to-end network that we call Correlate-and-Excite (CoEx). Extensive experiments on the SceneFlow, KITTI 2012, and KITTI 2015 datasets demonstrate the effectiveness and efficiency of our model, which outperforms other speed-based algorithms while remaining competitive with other state-of-the-art algorithms. Code will be made available at https://github.com/antabangun/coex. ## I INTRODUCTION Stereo matching aims to estimate depth from a pair of images [1, 2] and is an essential task in the fields of robotics, autonomous driving, and computer vision. 
This task has various challenging issues such as occlusions, textureless areas, areas with repeating textures, and thin or small objects. With the advancement of deep learning algorithms, the accuracy of stereo matching has improved significantly; however, many accurate state-of-the-art models do not have processing speeds fast enough for real-time applications [3, 4, 5, 6, 7]. Algorithms that focus on fast computation exist but often sacrifice accuracy to gain this advantage, which may be the main reason stereo cameras are not utilized more frequently in applications [8, 9] such as autonomous driving, where fast computation is essential. If the efficiency of stereo matching algorithms can be improved over the current standard, stereo camera based depth perception can be an alternative to the expensive LiDAR sensors currently used in many self-driving algorithms [10]. Figure 1: D1-all% error on the KITTI stereo 2015 leaderboard vs. frame rate. Our proposed method CoEx, shown as the red star, achieves competitive performance compared to other state-of-the-art models while also being real-time. Figure 2: An overall end-to-end stereo matching model with an hourglass architecture for cost aggregation. GCE modules are inserted between the 3D convolutions to utilize the image feature maps. Operations between image and cost volume features are broadcast operations. Top-$k$ regression is used to compute the final disparity estimate. This model can be extended to other volumetric CNN-based architectures, and the proposed modules can be incorporated in the same manner. A recent series of learning-based stereo matching algorithms [11, 12, 5] use the left and right input images to construct a cost volume by computing the cross-correlation or concatenation of the features from the two images. The correlation-based approach reduces the input images’ feature vectors to cosine similarity values, giving a model with lower memory usage and faster runtime. 
However, this reduces the representation power of the neural network and often results in poorer performance than a concatenation-based cost volume. In a volumetric approach, the computed cost volume is aggregated using 3D convolutional layers [13]. However, deep stacks of 3D convolutions are computationally expensive and memory inefficient [14]. Recent works have tried to improve the efficiency of the cost aggregation step using spatially varying aggregation [5, 3, 15]. While these works show improvements in accuracy, they incur a significant increase in computational cost and memory consumption, as well as additional implementation complexity. We propose an efficient and straightforward way of improving cost aggregation that utilizes extracted image features through attention-based approaches, which have been shown to improve image classification networks [16, 17]. Given a cost volume feature map, Guided Cost volume Excitation (GCE) excites the cost volume channels with weights computed from the reference image features. The computed weights are shared across the disparity dimension, so the operation is lightweight and easy to implement. This module lets the 3D neural network layers extract geometric features from the cost volume while the image-guided weights excite the relevant features. We empirically show that this operation improves performance significantly without significant additional computational cost. We show that this module allows a correlation-based cost volume to utilize image information and perform at an accuracy similar to the concatenation-based model, allowing us to construct a fast and accurate correlation-based stereo matching model. In volumetric stereo matching models, soft-argmin is the standard approach to compute the final disparity estimates, and little work has been done to improve the soft-argmin regression. 
The soft-argmin function computes the expected value of the disparity distribution at each pixel obtained from the cost volume aggregation. However, in many cases, the disparity distribution can have multiple peaks, e.g., on edge boundaries, or even an almost uniform distribution, e.g., in textureless regions. For this reason, taking the expected value when the distribution is not unimodal may not be the best choice to estimate the disparity. Instead, we propose to use only the top-$k$ values from the distribution to compute the disparity map. We show that this simple yet novel idea gives more accurate depth estimates and can be applied to any volumetric model. With our proposed ideas, we construct an end-to-end real-time stereo matching network that we call CoEx (Correlate-and-Excite). We sum up our contributions as follows: 1. We present Guided Cost volume Excitation (GCE), which utilizes the extracted image feature map as guidance for cost aggregation to improve performance. 2. We propose a new method of disparity regression in place of soft-argmax(argmin) that computes disparity from the top-$k$ matching cost values, and show that it reliably improves performance. 3. Through these methods, we build a real-time stereo matching network, CoEx, that outperforms other speed-oriented methods and is competitive with state-of-the-art models. ## II Related Works Recent works have focused on using deep Convolutional Neural Networks (CNNs) to improve stereo matching performance. In [18, 19, 20], CNNs are used to obtain feature representations of the left and right images for feature matching, but cost aggregation is still done using traditional means. DispNet [12] extended the idea to train an end-to-end deep model to predict depth from stereo images by introducing a correlation layer to construct the cost volume. 
Following this, many more end-to-end works have been proposed, which can mostly be divided into direct regression and volumetric approaches [21]. Direct regression based methods use 2D convolutions on the cost volume to directly compute the disparity map [22, 23, 24]. On the other hand, volumetric methods use 3D convolutions to aggregate the cost volume, taking into account the geometric constraints [11, 13, 14, 21] and stacking 3D convolutions in an hourglass architecture. Recently, more works have focused on improving the efficiency of 3D convolutions in the aggregation step. Two notable works, GANet [5] and CSPN [3], use spatially dependent filters to aggregate cost. These methods have achieved higher accuracy using spatially dependent 3D aggregation, but at the cost of higher computation time. Inspired by the strengths and drawbacks of these approaches, we base our model on spatially dependent 3D operations but focus on speed and efficiency. On the other hand, StereoNet [25] focused on building a real-time stereo matching model and, like many others, does so by sacrificing accuracy. Recently, the accuracy of works [26, 27] on real-time stereo matching is getting closer to that of the best performing models. The volumetric approaches mentioned above output a distribution of matching cost values at each disparity level for every pixel. The final disparity estimates are then computed by taking the expected value of the distribution using a soft-argmin operation. As a result, the network is only indirectly trained to produce a disparity distribution and can fail in ambiguous regions. There has been little work on improving the soft-argmin disparity regression. Recent studies AcfNet [28] and CDN [29] train the network to produce a better unimodal distribution by introducing novel loss functions. This work presents a new method that builds upon the soft-argmin operation itself and improves the overall disparity regression. 
## III Method A deep learning based end-to-end stereo matching network consists of matching cost computation, cost aggregation, and disparity regression. We present novel GCE and top-$k$ soft-argmin disparity regression modules that can be integrated into volumetric baseline stereo approaches, both without adding significant computation overhead to the baseline stereo matching model. A real-time end-to-end stereo model built with the proposed modules, shown in Fig. 2, achieves performance competitive with the state-of-the-art. We describe each of the components in detail in the following subsections. ### III-A Matching cost computation Given a left and right input stereo image pair of size $3\times H\times W$, feature maps are extracted from both images using a shared feature extraction module. We use MobileNetV2 [30] as our backbone feature extractor for its lightweight property and build a U-Net [31] style upsampling module with long skip connections at each scale level. From this feature extraction module, features at each scale are extracted for later use as a guiding signal for spatially varying cost aggregation. To construct the cost volume, feature maps extracted at the $1/4$ scale of the left and right images are used with a correlation layer [12] to output a $D/4\times H/4\times W/4$ cost volume, where $D=192$ is the maximum disparity set for our network. ### III-B Guided Cost volume Excitation (GCE) 3D convolutions are used in modern architectures to aggregate the constructed cost volume, allowing the neural network to capture geometric representations from the data. Recent works [32, 5] have used spatially varying modules to complement 3D convolutions and lead to better performance. Specifically, weights are computed from the reference image feature map to aggregate the 3D feature representation computed from the cost volume. 
The modules compute weights at each location for each pixel of interest and its surrounding neighbors to allow for neighborhood aggregation in a spatially dependent manner. We argue that the 3D convolutions in a volumetric cost aggregation already capture neighborhood information. A spatially varying update of the cost volume feature map without neighborhood aggregation is sufficient and is significantly more efficient. To formulate this, for a cost volume with $c$ feature channels, we pass an image feature map at the same scale through a guidance sub-network to output $c$ weights for each pixel. With this formulation, the 3D convolutions capture geometric information from the cost volume, and the guidance weights excite the relevant geometric features. At scale $(s)$ of the cost volume: $\begin{split}\alpha=\sigma(F^{2D}(I^{(s)}))\\ C_{o}^{(s)}=\alpha\times C_{i}^{(s)},\end{split}$ (1) where $F^{2D}$ is implemented using a 2D point-wise convolution, with $\sigma$ being the sigmoid function. The guidance weights are shared across the disparity dimension, and the multiplication in (1) is a broadcast multiplication. This flow is shown on the bottom left of Fig. 2. Since this module excites cost volume features using weights computed from the reference image feature map as guidance, we call it Guided Cost volume Excitation (GCE). This module is extremely simple and straightforward, with only a few operations added to the overall network; however, we show in Sec. IV-D1 that adding the GCE module can improve the accuracy of our model significantly. In our CoEx model, the cost aggregation architecture follows GC-Net [13], with an hourglass architecture of 3D convolutions but with a reduced number of channels and network depth to reduce computational cost. The proposed GCE module is then added at every scale of the cost volume (Fig. 2). The overall cost aggregation module with GCE is detailed in Table VI. 
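To make the operation in (1) concrete, the following is a minimal, framework-agnostic numpy sketch of GCE (shapes and names are illustrative; the actual model implements $F^{2D}$ as a learned point-wise 2D convolution layer):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gce(cost, image_feat, W):
    """Guided Cost volume Excitation, minimal numpy sketch.

    cost:       (C, D, H, Wd) cost-volume features at one scale
    image_feat: (Ci, H, Wd)   reference-image features at the same scale
    W:          (C, Ci)       weights of the point-wise (1x1) convolution F^2D
    The guidance weights alpha are shared across the disparity axis D.
    """
    # A 1x1 convolution is a per-pixel linear map over channels.
    alpha = sigmoid(np.einsum('ci,ihw->chw', W, image_feat))  # (C, H, Wd)
    # Broadcast over the disparity dimension: each disparity slice of a
    # channel is scaled by the same image-guided weight.
    return cost * alpha[:, None, :, :]                        # (C, D, H, Wd)
```

Because the guidance is a single per-pixel channel weighting (no neighborhood gathering), the cost of this step is negligible next to the 3D convolutions it modulates.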
The module outputs a 4D cost volume at $1/4$ of the original image resolution. ### III-C Top-$k$ disparity regression The 4D cost volume produced in the previous steps gives us matching confidence values at each disparity level for every pixel, which can be transformed into a probability distribution by taking a softmax across the disparity values. In previous works, the soft-argmax operation is used to compute disparity by taking the expected value over this distribution [13]: $\hat{d}=\sum_{d=0}^{D}d\times Softmax(c_{d})$ (2) where $d$ ranges over a predetermined set of disparity indices. A disparity distribution with only a single peak may give an adequate disparity estimate. However, in some instances, there can be multiple peaks or even a relatively uniform distribution. In these cases, the expected value of the matching cost distribution can diverge significantly from the actual ground truth value. To alleviate this issue, instead of taking the expected value of the whole distribution, we use only the top-$k$ values of the aggregated cost volume at every pixel. We call this regression strategy top-$k$ soft-argmax(argmin) disparity regression. Specifically, at every pixel, we use the top-$k$ weights to compute the expected disparity value. When $k$ equals the number of disparities of interest $D$, the top-$k$ regression is simply a soft-argmax operation [13]. When $D>k>1$, only the top-$k$ values in each pixel are used to compute the estimated disparity. This is done by masking the top-$k$ values and applying a softmax over them, yielding normalized weights that sum to 1. These weights are then multiplied with their corresponding disparity indices, while the remaining values are masked out. The sum of these values is the weighted average of the top-$k$ disparity candidates. This operation can be seen as similar to $k$-max pooling [33]. 
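The selection-and-normalization steps just described can be sketched as follows (numpy, with names of our own choosing; the network performs the equivalent masked softmax on GPU tensors):

```python
import numpy as np

def topk_soft_argmin(cost, k):
    """Top-k soft-argmax(argmin) disparity regression, numpy sketch.

    cost: (D, H, W) aggregated matching-cost volume (higher = better match).
    Only the k largest values per pixel enter the softmax; the remaining
    disparity candidates are masked out before normalization.
    """
    D = cost.shape[0]
    disp = np.arange(D, dtype=np.float64)
    # Indices of the k largest costs along the disparity axis (unordered).
    idx = np.argpartition(cost, D - k, axis=0)[D - k:]        # (k, H, W)
    vals = np.take_along_axis(cost, idx, axis=0)              # (k, H, W)
    # Softmax over the surviving k values only.
    w = np.exp(vals - vals.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # Weighted average of the top-k disparity candidates.
    return (w * disp[idx]).sum(axis=0)                        # (H, W)
```

With $k=D$ the masking is a no-op and the function reduces to the standard soft-argmax of (2).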
In the instance where $k$ equals $1$, the top-$k$ regression becomes an argmax, since the weight of the maximum index becomes a constant $1$. In this case the operation is not trainable, which is why previous works resorted to using soft-argmax. Though simple, we show through our experiments the effectiveness of the top-$k$ soft-argmax regression. Using the top-$k$ regression to compute the disparity map at the full resolution requires a large amount of additional computation time, as shown in Sec. IV-D. To mitigate this, we design our model to compute the disparity regression at $1/4$ of the input image resolution. Finally, the output disparity prediction is upsampled to the original input image resolution. Following the footsteps of [34], the final disparity estimate at each pixel in the upsampled resolution is obtained as a weighted average over the $3\times 3$ “superpixel” surrounding it. Another CNN branch predicts the weights for each superpixel. We train the network in a fully supervised end-to-end manner using the $\textit{smooth}_{L1}$ loss function. Our final loss function is as follows: $\mathcal{L}(d_{GT},\hat{d})=\frac{1}{N}\sum_{i=1}^{N}\textit{smooth}_{L_{1}}(d_{GT,i}-\hat{d}_{i}),$ (3) given $\textit{smooth}_{L_{1}}(x)=\begin{cases}0.5x^{2} & \text{if } |x|<1\\ |x|-0.5 & \text{otherwise}\end{cases}$ (4) where $N$ is the number of labeled pixels, and $d_{GT}$ and $\hat{d}$ are the ground truth and predicted disparities, respectively. 
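The training loss in (3)-(4) reduces to a few lines; a numpy sketch follows (`mask` selecting the $N$ labeled pixels and the function names are our own):

```python
import numpy as np

def smooth_l1(x):
    # Elementwise smooth-L1 (Huber with delta = 1): quadratic near zero,
    # linear for |x| >= 1, so large disparity errors are penalized less harshly.
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def disparity_loss(d_gt, d_pred, mask):
    # Mean smooth-L1 over the N labeled pixels selected by `mask`
    # (ground truth is sparse on KITTI, so unlabeled pixels are skipped).
    return smooth_l1(d_gt[mask] - d_pred[mask]).mean()
```

Note the two branches of `smooth_l1` agree at $|x|=1$ (both give $0.5$), so the loss is continuous and differentiable almost everywhere.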
## IV Experiments

| Methods | SceneFlow EPE | KITTI 2012 3px (%) | KITTI 2015 D1 (%) | Runtime (ms) |
| --- | --- | --- | --- | --- |
| *Accuracy* | | | | |
| GC-Net [13] | 2.51 | 2.30 | 2.87 | 900 |
| CRL [22] | 1.32 | – | 2.67 | 470 |
| SegStereo [23] | 1.45 | 2.03 | 2.25 | 600 |
| PSMNet [11] | 1.09 | 1.89 | 2.32 | 410 |
| PDS-Net [14] | 1.12 | 2.53 | 2.58 | 500 |
| GANet-deep [5] | 0.84 | 1.60 | 1.81 | 1,800 |
| CSPN [3] | 0.78 | – | 1.74 | 1,000 |
| HD3S [4] | 0.78 | 1.80 | 2.02 | 140 |
| CSN [7] | 0.65 | – | 2.00 | 600 |
| CDN [29] | 0.70 | – | 1.92 | 400 |
| LEAStereo [21] | 0.78 | 1.45 | 1.65 | 300 |
| *Speed* | | | | |
| DispNetCorr [12] | 1.68 | 4.65 | 4.34 | 60 |
| DeepPrunerFast [26] | 0.97 | – | 2.59 | 62 |
| StereoNet [25] | 1.101 | 6.02 | 4.83 | 15 |
| AANet [27] | 0.87 | 2.42 | 2.55 | 62 |
| AANet+ [27] | 0.72 | 2.04 | 2.03 | 60 |
| CoEx (Ours) | 0.69 | 1.93 | 2.13 | 27 |

TABLE I: Comparison with other state-of-the-art models. Bold: Best, Underscore: Second best.

In this section, we explain in detail the implementation and training of our Correlate-and-Excite (CoEx) network, show through extensive experiments and ablations the effectiveness of our approach, and include detailed discussions of our method. ### IV-A Datasets and Evaluation metrics To test the effectiveness of our approach, CoEx, we conduct experiments and evaluations on the following datasets: SceneFlow [12], KITTI Stereo 2012 [35], and KITTI Stereo 2015 [36]. SceneFlow is a synthetic dataset consisting of 35,454 training images and 4,370 testing images. The disparity range is 1 to 468, with all images having a size of $W=960$, $H=540$. We use the ‘finalpass’ version of the dataset. Only pixels with disparity values lower than our maximum disparity of 192 are used for training and evaluation. The end-point error (EPE), the average absolute difference between the predicted and ground truth disparities, is used as the reporting metric. The KITTI 2012 and 2015 datasets are real-world datasets with sparse ground truth obtained from a LiDAR sensor. We divide the training data into a 90% training and 10% validation set. KITTI 2012 uses ‘Out-All’, the percentage of erroneous pixels in total for an error threshold of 3 pixels, as its metric. 
For KITTI 2015, we show the ‘D1-all’ metric reported on the leaderboard, which is the percentage of stereo disparity outliers among all labeled pixels. ### IV-B Implementation details We use MobileNetV2 pre-trained on ImageNet [37], as noted in Sec. III-A, for our feature extractor backbone. The use of an ImageNet pre-trained model allows for faster convergence during training. We implement our model in PyTorch and use the Adam optimizer ($\beta_{1}=0.9$, $\beta_{2}=0.999$) together with Stochastic Weight Averaging (SWA) [38]. We randomly crop images to size $W=576$, $H=288$ for training. On the SceneFlow dataset, we train our model for 10 epochs with a learning rate of $1\times 10^{-3}$ for the first 7 epochs and $1\times 10^{-4}$ for the remaining 3 epochs, with a batch size of $8$. For our experiments on the KITTI datasets, we use a model pre-trained on SceneFlow and fine-tune it on KITTI for 800 epochs with an initial learning rate of $1\times 10^{-3}$, decayed by a factor of $0.5$ at epochs $30$, $50$, and $300$. An Nvidia RTX 2080Ti GPU is used for training and testing. Figure 3: Qualitative results on the KITTI 2015 test set. Errors in orange correspond to erroneous predictions. Figure 4: Disparity distributions of models trained with different choices of $k$ in top-$k$ regression. The dashed red line is the estimated disparity and the solid green line is the ground truth disparity. ### IV-C Performance of CoEx We compare our model to the existing state-of-the-art in Table I. Note that the KITTI results are all from the KITTI stereo matching leaderboard, and the SceneFlow EPE values, as well as the runtimes, are the values reported in each work. Among the speed-based models, StereoNet is the fastest with a runtime of 15 $ms$. However, StereoNet’s accuracy on SceneFlow and KITTI is considerably lower than CoEx’s, with differences of 0.411 EPE on SceneFlow and 2.7% D1 on KITTI 2015. 
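For reference, the two evaluation metrics used in this section can be sketched in a few lines (numpy; note that the official KITTI 2015 D1 metric additionally applies a relative threshold of 5% of the true disparity, which this simplified sketch omits):

```python
import numpy as np

def epe(d_pred, d_gt, mask):
    # End-point error: mean absolute disparity difference over labeled pixels.
    return np.abs(d_pred[mask] - d_gt[mask]).mean()

def threshold_error(d_pred, d_gt, mask, t=3.0):
    # Fraction of labeled pixels whose disparity error exceeds t pixels --
    # the basis of KITTI's 3px / D1-style outlier percentages.
    return (np.abs(d_pred[mask] - d_gt[mask]) > t).mean()
```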
| Methods | Feature Extraction | Cost Aggregation | Refinement | Total |
|---|---|---|---|---|
| LEAStereo [21] | 12 | 463 | – | 475 |
| AANet [27] | 22 | 32 | 32 | 88 |
| AANet+ [27] | 11 | 21 | 45 | 80 |
| CoEx (Ours) | 10 | 17 | – | 27 |

TABLE II: Time comparison in $ms$ with other state-of-the-art models on the same hardware. Bold: Best time.

As runtime comparisons on different hardware do not give a fair comparison, we compare the runtime breakdown of LEAStereo [21] and AANet [27] with our model tested on the same hardware (RTX 2080Ti) using the official open-source models in Table II. The cost aggregation part includes cost volume construction and disparity regression. Our model is $3.3\times$ faster than AANet while giving 0.18 lower EPE, 0.46% better KITTI 2012 3px Out-All, and 0.42% better KITTI 2015 D1-all. AANet+ puts more focus on disparity refinement to improve accuracy without sacrificing speed, at the cost of a high number of network parameters: $8.4M$ compared to our $2.7M$. Our model does not use any post-aggregation refinement and still gives similar accuracy while being $3\times$ faster than AANet+.

### IV-D Ablation study

| Base model | Cost volume | GCE | Top-$k$ reg $k$ | SceneFlow EPE | Time (ms) |
|---|---|---|---|---|---|
| PSMNet | Concat | – | 192 | 0.8291 | 292 |
| | Concat | – | 6 | 0.7437 | 405 |
| | Concat | One | 192 | 0.8176 | 297 |
| | Concat | One | 6 | 0.7321 | 405 |
| | Corr | – | 192 | 1.053 | 223 |
| | Corr | – | 6 | 0.8798 | 332 |
| | Corr | One | 192 | 0.8285 | 225 |
| | Corr | One | 192$\rightarrow$6* | 0.8088 | 333 |
| | Corr | One | 6 | 0.7653 | 333 |
| | Corr | One | 2 | 1.108 | 332 |
| CoEx | Corr | – | 48 | 0.8552 | 26 |
| | Corr | – | 6 | 0.8262 | 26 |
| | Corr | – | 3 | 0.7928 | 26 |
| | Corr | – | 2 | 0.7942 | 26 |
| | Corr | One | 48 | 0.8242 | 26 |
| | Corr | Full | 48 | 0.7426 | 26 |
| | Corr | Full | 48$\rightarrow$2* | 0.7782 | 27 |
| | Corr | Full | 6 | 0.7185 | 27 |
| | Corr | Full | 3 | 0.7115 | 27 |
| | Corr | Full | 2 | 0.6854 | 27 |

TABLE III: Ablation study of GCE and top-$k$ soft-argmin regression integrated into base models on SceneFlow ‘finalpass’ with the EPE metric (lower is better). ‘One’ means only a single GCE layer is added into the model, while ‘Full’ adds a GCE layer at every scale level (Fig. 2).
*$k_{1}\rightarrow k_{2}$: the model is trained using $k_{1}$ and tested using $k_{2}$ soft-argmin regression.

| Base model | GCE: Add | GCE: Excite | SceneFlow EPE | SceneFlow 3px |
|---|---|---|---|---|
| CoEx | | | 0.7426 | 4.308 |
| | ✓ | | 0.7310 | 4.159 |
| | | ✓ | 0.6854 | 4.021 |

TABLE IV: Comparison between the use of addition or excitation of cost volume features from reference-image features.

| Base model | GCE | Neighborhood | SceneFlow EPE | SceneFlow 3px | Time (ms) |
|---|---|---|---|---|---|
| CoEx | One | | 0.7684 | 4.409 | 26 |
| | | One | 0.7732 | 4.435 | 47 |

TABLE V: Comparison of GCE and spatially varying neighborhood aggregation.

We perform ablation studies on the SceneFlow dataset to study the influence of the proposed modules. We integrate GCE and top-$k$ soft-argmin regression into baseline stereo matching models. For this ablation study, we used the baseline PSMNet and CoEx models (Table III). Note that PSMNet uses a concatenation of the feature representations of the left and right images to construct its cost volume. Concatenation gives the neural network stronger representation power than correlation-based cost volume construction, which reduces the feature maps to a single cosine-similarity value for each match. Replacing the concatenation in PSMNet with correlation reduces the accuracy, as expected. However, adding only a single GCE layer into the correlation-based PSMNet, indicated by ‘One’ in Table III, brings the accuracy to a value similar to the concatenation-based PSMNet, indicating that GCE enables the network to utilize image feature representations that are missed by correlation. In addition, the use of correlation also reduces the computation time significantly. In PSMNet, the cost volume is upsampled to the original input image resolution and the maximum disparity value is $D=192$. We test top-$k$ soft-argmin regression in PSMNet with $k$ between $2$ and $192$. We found that reducing $k$ from the original value of $k=192$ generally improves performance up to a point. The accuracy degrades when $k$ is set too low, perhaps due to a lack of gradient flow in backpropagation.
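A minimal NumPy sketch of top-$k$ soft-argmin regression as we understand it (the real implementation operates on GPU tensors; names here are our own). Only the $k$ largest matching scores per pixel enter the softmax, and the disparity estimate is the expectation over the corresponding disparity indices:

```python
import numpy as np

def topk_soft_argmin(score, k):
    """score: matching-score volume of shape (D, H, W), larger = better match.
    Returns the expected disparity per pixel using only the top-k scores."""
    D = score.shape[0]
    # indices of the k largest scores along the disparity axis
    idx = np.argpartition(score, D - k, axis=0)[D - k:]   # (k, H, W)
    top = np.take_along_axis(score, idx, axis=0)          # (k, H, W)
    w = np.exp(top - top.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)                     # softmax over the k scores
    return (w * idx).sum(axis=0)                          # expected disparity
```

With $k=D$ this reduces to the usual full soft-argmin; with small $k$, probability mass far from the matching peak is discarded, which is the suppression effect discussed around Fig. 4.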
Moreover, performing sorting to obtain the top-$k$ values at the full cost volume resolution proves to be too computationally costly. This motivated us to compute the disparity regression in the CoEx model at $1/4$ the input image resolution and utilize the superpixel upsampling of Sec. III-C to obtain the disparity map at the original resolution. Note that in CoEx, $k=192/4=48$ is the maximum value of $k$. We show in Table III that adding top-$k$ soft-argmin regression to CoEx hardly increases the computation time and gives better accuracy when lower $k$ values are used. Table III also shows the performance gain when GCE is integrated at every scale level (Fig. 2), indicated by ‘Full’. Our best model is obtained when full GCE integration and top-$2$ soft-argmin regression are added to the base CoEx model. Notice that the two proposed modules add only $1ms$ of computation overhead over the base model but give $0.17$ lower test EPE.

#### IV-D1 GCE

We investigated two approaches to using the reference image as a guide for cost volume aggregation. The first is a simple addition between image features and cost volume features with a broadcast operation, which effectively acts like a U-Net-style skip-connection. The second is based on excitation and is the proposed GCE module. A test comparison between the two on the SceneFlow dataset is shown in Table IV. The addition-based skip-connection does give a slight accuracy improvement over the baseline. However, we found cost volume excitation a much more effective way of utilizing image features in cost aggregation. We also compare the GCE module, which performs spatially varying local aggregation, with a similar spatially varying operation that involves neighborhood aggregation. To do this, we formulate the neighborhood as a graph and use graph convolution to aggregate the nodes surrounding the center node of interest, where the graph edges are spatially varying and computed from the reference image feature map.
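The two guidance variants compared in Table IV can be sketched in NumPy as follows; this is our own simplified reading, with a pointwise convolution written as an `einsum` and a single weight matrix `W` standing in for the learned projection:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def excite(cost, img_feat, W):
    """GCE-style excitation: project reference-image features (C_I, H, W)
    to one gate per cost-volume channel and pixel, squash with a sigmoid,
    and multiply into the cost volume (C, D, H, W), broadcasting over D."""
    gate = sigmoid(np.einsum('ci,ihw->chw', W, img_feat))  # (C, H, W)
    return cost * gate[:, None, :, :]

def add_skip(cost, img_feat, W):
    """Addition-based baseline from Table IV: a broadcast skip-connection."""
    return cost + np.einsum('ci,ihw->chw', W, img_feat)[:, None, :, :]
```

The excitation variant rescales each cost-volume feature per pixel instead of shifting it, which is the behavior the ablation favors.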
The details of this graph-based aggregation are given in the Appendix. Table V shows that a simple excitation of the cost volume features using a GCE module performs better and is more efficient than the implemented spatially varying neighborhood aggregation.

#### IV-D2 Top-$2$ disparity regression

To further illustrate how top-$k$ regression improves on soft-argmin regression, we plot the disparity distributions, produced from the output of cost aggregation, of models trained with each $k$ value. Fig. 4 illustrates 3 cases where a lower $k$ value in top-$k$ regression outperforms the baseline soft-argmin method. In the leftmost plot, the candidate disparities have a unimodal distribution. The middle plot shows the case when there are two possible peaks, and the rightmost plot shows the case when the distribution is relatively flat. In all these cases, the model trained using the top-$2$ distribution uses only the peak matching values and suppresses values far away from the correct matching peak, resulting in a more accurate estimate. How well, then, would models trained with full soft-argmin perform if we replaced this regression module with top-$k$ soft-argmin at test time? We provide experimental results for this test in Table III and found no improvement in accuracy. The models need to learn to use top-$k$ soft-argmin regression during training.

## V Conclusion

This paper introduces a new real-time stereo matching model, CoEx, that leverages spatially dependent cost aggregation. We show that spatially varying aggregation can be performed in a lightweight and straightforward fashion to improve performance. We also show how a direct use of the top-$k$ values can improve soft-argmin disparity regression. We believe that the speed of our method, which is fast enough for real-time applications, can be a springboard for future real-time stereo matching research in real-world application settings.
## APPENDIX

### V-A Detailed architecture

The detailed cost aggregation module is shown in Table VI. $s$ and $p$ are the stride and padding sizes of the convolution kernels, respectively. $I^{(s)}$ is the feature map of the left image obtained in the feature extraction stage at scale $(s)$.

| No. | Layer | Setting | Input |
|---|---|---|---|
| $[1]$ | correlation layer | | $I^{(4)}$ (Left and Right) |
| $[2$-$1]$ | conv3d | $3\times 3\times 3,8$ | $[1]$ |
| $[2]$ | GCE | | $[2$-$1]$ and $I^{(4)}$ |
| $[3$-$1]$ | conv3d $\times 2$ | $3\times 3\times 3,16$, $s=2$; $3\times 3\times 3,16$ | $[2]$ |
| $[3]$ | GCE | | $[3$-$1]$ and $I^{(8)}$ |
| $[4$-$1]$ | conv3d $\times 2$ | $3\times 3\times 3,32$, $s=2$; $3\times 3\times 3,32$ | $[3]$ |
| $[4]$ | GCE | | $[4$-$1]$ and $I^{(16)}$ |
| $[5$-$1]$ | conv3d $\times 2$ | $3\times 3\times 3,48$, $s=2$; $3\times 3\times 3,48$ | $[4]$ |
| $[5]$ | GCE | | $[5$-$1]$ and $I^{(32)}$ |
| $[6$-$1]$ | deconv3d | $4\times 4\times 4,32$, $s=2,p=1$ | $[5]$ |
| $[6$-$2]$ | conv3d | $3\times 3\times 3,32$ | $[6$-$1]$ |
| $[6]$ | GCE | | $[6$-$2]$ and $I^{(16)}$ |
| $[7$-$1]$ | deconv3d | $4\times 4\times 4,16$, $s=2,p=1$ | $[6]$ |
| $[7$-$2]$ | conv3d | $3\times 3\times 3,16$ | $[7$-$1]$ |
| $[7]$ | GCE | | $[7$-$2]$ and $I^{(8)}$ |
| $[8]$ | deconv3d | $4\times 4\times 4,1$, $s=2,p=1$ | $[7]$ |

TABLE VI: Cost aggregation module.

### V-B Neighborhood aggregation

There are multiple previously proposed methods that perform spatially varying aggregation utilizing neighborhood information [15, 32, 5]. To compare GCE with a module that computes a spatially varying aggregation of the neighbors, we formulate here a module that performs image-guided neighborhood aggregation.
Given a voxel of interest at pixel location $i$ and its neighbors $j\in N(i)$ in a $1\times n\times n$ window, we compute the feature update of the cost volume at $i$ as follows:

$\begin{split}m_{i}^{(s,t+1)}&=\sum_{j\in N(i)}e_{ji}\odot C^{(s,t)}_{j},\\ C_{i}^{(s,t+1)}&=\xi(W_{1}C_{i}^{(s,t)}+W_{2}m_{i}^{(s,t+1)}+b),\end{split}$ (5)

where $\odot$ represents the element-wise product and $\xi$ is an activation function. $e_{ji}$ is the edge weight (or affinity, in [32]) of $j$ to $i$, and it is computed using an MLP on the image features at $i$ and $j$, together with an encoding of the relative position $p_{i}-p_{j}$ of the neighbors:

$\begin{split}\hat{e}_{ji}&=MLP([I_{i}^{(s)}\,\|\,I_{j}^{(s)}\,\|\,MLP(p_{i}-p_{j})]),\\ e_{ji}^{c}&=\exp{\hat{e}_{ji}^{c}}\Big{/}\sum_{j\in N(i)}\exp{\hat{e}_{ji}^{c}},\end{split}$ (6)

where we use a softmax (second line of the equation) to normalize the edge weights at each feature channel $c$. In this work, the Deep Graph Library (DGL) [39] is used to implement the neighborhood aggregation as a graph. For an image feature map with $c_{I}$ channels and a cost volume of size $c\times d\times h\times w$, GCE requires the following computation cost:

$(c_{I}\times c\times h\times w)+(c\times d\times h\times w),$ (7)

where the left term is the cost of obtaining the spatially varying weights and the right term is the self-update. In contrast, if we write down the cost of weight computation and update for neighborhood aggregation in a $1\times n\times n$ neighborhood, in the simplest form where the weights are computed by a point-wise convolution, it requires a computation cost of at least

$(c_{I}\times n\times n\times c\times h\times w)+(n\times n\times c\times d\times h\times w).$ (8)

Even in this simplest form, neighborhood aggregation requires $n\times n$ times more computation than GCE.

## References

* [1] H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information,” _IEEE Trans. on Patt. Anal.
and Mach. Intel._ , 2007. * [2] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” _International journal of computer vision_ , 2002. * [3] X. Cheng, P. Wang, and R. Yang, “Depth estimation via affinity learned with convolutional spatial propagation network,” in _ECCV_ , 2018. * [4] Z. Yin, T. Darrell, and F. Yu, “Hierarchical discrete distribution decomposition for match density estimation,” in _CVPR_ , 2019. * [5] F. Zhang, V. Prisacariu, R. Yang, and P. H. Torr, “Ga-net: Guided aggregation net for end-to-end stereo matching,” in _CVPR_ , 2019. * [6] S. Lee, S. Im, S. Lin, and I. S. Kweon, “Learning residual flow as dynamic motion from stereo videos,” in _IROS_ , 2019. * [7] X. Gu, Z. Fan, S. Zhu, Z. Dai, F. Tan, and P. Tan, “Cascade cost volume for high-resolution multi-view stereo and stereo matching,” in _CVPR_ , 2020\. * [8] S. Lee, S. Im, S. Lin, and I. S. Kweon, “Learning monocular depth in dynamic scenes via instance-aware projection consistency,” in _AAAI_ , 2021. * [9] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe, “Unsupervised learning of depth and ego-motion from video,” in _CVPR_ , 2017. * [10] S. Hwang, N. Kim, Y. Choi, S. Lee, and I. S. Kweon, “Fast multiple objects detection and tracking fusing color camera and 3d lidar for intelligent vehicles,” in _URAI_ , 2016. * [11] J.-R. Chang and Y.-S. Chen, “Pyramid stereo matching network,” in _CVPR_ , 2018. * [12] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in _CVPR_ , 2016. * [13] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, “End-to-end learning of geometry and context for deep stereo regression,” in _ICCV_ , 2017. * [14] S. Tulyakov, A. Ivanov, and F. 
Fleuret, “Practical deep stereo (pds): Toward applications-friendly deep stereo matching,” in _NeurIPS_ , 2018. * [15] C. Cai and P. Mordohai, “Do end-to-end stereo algorithms under-utilize information?” in _3DV_ , 2020. * [16] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in _ECCV_ , 2018. * [17] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in _CVPR_ , 2018. * [18] W. Luo, A. G. Schwing, and R. Urtasun, “Efficient deep learning for stereo matching,” in _CVPR_ , 2016. * [19] J. Zbontar and Y. LeCun, “Stereo matching by training a convolutional neural network to compare image patches,” _Journal of Machine Learning Research_ , 2016. * [20] S. Lee, J. Kim, T.-H. Oh, Y. Jeong, D. Yoo, S. Lin, and I. S. Kweon, “Visuomotor understanding for representation learning of driving scenes,” in _BMVC_ , 2019. * [21] X. Cheng, Y. Zhong, M. Harandi, Y. Dai, X. Chang, H. Li, T. Drummond, and Z. Ge, “Hierarchical neural architecture search for deep stereo matching,” in _NeurIPS_ , 2020. * [22] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, “Cascade residual learning: A two-stage convolutional neural network for stereo matching,” in _ICCVw_ , 2017. * [23] G. Yang, H. Zhao, J. Shi, Z. Deng, and J. Jia, “Segstereo: Exploiting semantic information for disparity estimation,” in _ECCV_ , 2018. * [24] X. Song, X. Zhao, L. Fang, H. Hu, and Y. Yu, “Edgestereo: An effective multi-task learning network for stereo matching and edge detection,” _International Journal of Computer Vision_ , pp. 1–21, 2020. * [25] S. Khamis, S. Fanello, C. Rhemann, A. Kowdle, J. Valentin, and S. Izadi, “Stereonet: Guided hierarchical refinement for real-time edge-aware depth prediction,” in _ECCV_ , 2018. * [26] S. Duggal, S. Wang, W.-C. Ma, R. Hu, and R. Urtasun, “Deeppruner: Learning efficient stereo matching via differentiable patchmatch,” in _ICCV_ , 2019\. * [27] H. Xu and J. 
Zhang, “Aanet: Adaptive aggregation network for efficient stereo matching,” in _CVPR_ , 2020. * [28] Y. Zhang, Y. Chen, X. Bai, S. Yu, K. Yu, Z. Li, and K. Yang, “Adaptive unimodal cost volume filtering for deep stereo matching.” in _AAAI_ , 2020\. * [29] D. Garg, Y. Wang, B. Hariharan, M. Campbell, K. Q. Weinberger, and W.-L. Chao, “Wasserstein distances for stereo disparity estimation,” _NeurIPS_ , 2020\. * [30] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in _CVPR_ , 2018. * [31] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _MICCAI_ , 2015. * [32] X. Cheng, P. Wang, and R. Yang, “Learning depth with convolutional spatial propagation network,” _arXiv preprint arXiv:1810.02695_ , 2018. * [33] N. Kalchbrenner, E. Grefenstette, and P. Blunsom, “A convolutional neural network for modelling sentences,” _arXiv preprint arXiv:1404.2188_ , 2014\. * [34] F. Yang, Q. Sun, H. Jin, and Z. Zhou, “Superpixel segmentation with fully convolutional networks,” in _CVPR_ , 2020. * [35] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in _CVPR_ , 2012. * [36] M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,” in _CVPR_ , 2015. * [37] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in _CVPR09_ , 2009. * [38] P. Izmailov, D. Podoprikhin, T. Garipov, D. Vetrov, and A. G. Wilson, “Averaging weights leads to wider optima and better generalization,” _arXiv preprint arXiv:1803.05407_ , 2018. * [39] M. Wang, D. Zheng, Z. Ye, Q. Gan, M. Li, X. Song, J. Zhou, C. Ma, L. Yu, Y. Gai, T. Xiao, T. He, G. Karypis, J. Li, and Z. Zhang, “Deep graph library: A graph-centric, highly-performant package for graph neural networks,” _arXiv preprint arXiv:1909.01315_ , 2019.
# Supercurrent renormalization of $\mathcal{N}=1$ supersymmetric Yang-Mills theory on the lattice

G. Bergner, M. Costa, H. Panagopoulos, S. Piemonte, I. Soler, G. Spanoudes

###### Abstract

Supersymmetry on the lattice is explicitly broken by the gluino mass and lattice artifacts. However, it can be restored in the continuum limit by fine-tuning the parameters based on the renormalized Ward identities. In the renormalization step, not only the mass but also the renormalization of the supercurrent needs to be addressed. Here we present a lattice investigation to obtain the renormalization factors of the supercurrent for $\mathcal{N}=1$ super-Yang-Mills theory in a gauge-invariant renormalization scheme. We also provide the conversion factors that are necessary to translate our results to the more standard $\overline{\rm MS}$ scheme.

## 1 Introduction

Even though the lattice discretization breaks supersymmetry (SUSY) explicitly, Curci and Veneziano [1] showed for $\mathcal{N}=1$ supersymmetric Yang-Mills (SYM) theory that chiral symmetry and supersymmetry can both be recovered in the continuum limit by tuning a single parameter, the gluino mass. This approach has been successfully applied in numerical simulations, also providing first insights into the particle spectrum of the theory [2, 3]. In supersymmetric theories containing matter supermultiplets, such as SQCD, the number of parameters that need fine-tuning is significantly larger, and a tuning based on Ward identities becomes challenging. In particular, the renormalization coefficients of the supercurrent need to be determined numerically in this approach. In this work we study the renormalization of the supercurrent and compare non-perturbative effects to perturbative estimates. As a first exploratory step, we study the renormalization of the supercurrent for $\mathcal{N}=1$ SYM on the lattice.
We will do so by computing, non-perturbatively, the renormalization factors in a gauge-invariant renormalization scheme (GIRS) [4]. Furthermore, we will compute the conversion factors from GIRS to the $\overline{\rm MS}$ scheme perturbatively in dimensional regularization. We will then be able to convert our lattice-regularized GIRS renormalization factors to the more standard $\overline{\rm MS}$ scheme.

## 2 The model

$\mathcal{N}=1$ SYM is the simplest four-dimensional supersymmetric gauge theory. This theory describes the strong interactions between the carriers of the gauge force, the gluons, and their superpartners, the gluinos, which are Majorana fermions transforming under the adjoint representation of the gauge group. The gluons are represented by the non-Abelian gauge field $u^{a}_{\mu}(x)$ and the gluinos by the fermionic field $\lambda^{a}(x)$, where $a=1,...,N_{c}^{2}-1$. The on-shell Lagrangian for $\mathcal{N}=1$ SYM in Euclidean space is

$\displaystyle\mathcal{L}=\frac{1}{4}u^{a}_{\mu\nu}u^{a}_{\mu\nu}+\frac{1}{2}\bar{\lambda}^{a}\gamma_{\mu}(D_{\mu}\lambda)^{a},$ (1)

where $u^{a}_{\mu\nu}$ is the non-Abelian field strength tensor and $D_{\mu}$ is the gauge covariant derivative acting as $(D_{\mu}\lambda)^{a}=\partial_{\mu}\lambda^{a}+gf_{abc}u_{\mu}^{b}\lambda^{c}$. The infinitesimal supersymmetry transformation leaving the action of the theory invariant is given by

$\displaystyle\delta u_{\mu}^{a}(x)=-i\bar{\xi}\gamma_{\mu}\lambda^{a}(x),\qquad\delta\lambda^{a}(x)=\frac{1}{2}\sigma_{\mu\nu}u_{\mu\nu}^{a}(x)\xi,$ (2)

where $\sigma_{\mu\nu}=\frac{1}{2}[\gamma_{\mu},\gamma_{\nu}]$ and $\xi$ is a Grassmann variable corresponding to the infinitesimal parameter of the transformation.
Applying Noether’s theorem to the classical theory, the symmetry transformation Eq. (2) leads to the conserved supercurrent

$S_{\mu}(x)\equiv-\sigma_{\nu\rho}\gamma_{\mu}{\rm tr}_{c}(\,u_{\nu\rho}(x)\lambda(x)).$ (3)

Defining the theory at the quantum level requires regularization and renormalization, which leads to important modifications. On the lattice, supersymmetry is broken by the addition of a gluino mass term and by the explicit breaking of Lorentz symmetry. Under renormalization, the mass term gets additively renormalized and the supercurrent mixes with another dimension-7/2 operator,

$T_{\mu}(x)\equiv 2\,\gamma_{\nu}{\rm tr}_{c}(\,u_{\mu\nu}(x)\lambda(x)).$ (4)

The corresponding Ward identity for the supercurrent after such modification reads

$\displaystyle Z_{SS}\big{<}\nabla_{\mu}S_{\mu}(x)Q(y)\big{>}+Z_{ST}\big{<}\nabla_{\mu}T_{\mu}(x)Q(y)\big{>}=m_{S}\big{<}\chi(x)Q(y)\big{>}+O(a),$ (5)

where $Z_{SS}$ and $Z_{ST}$ are the renormalization coefficients of the supercurrent $S^{R}_{\mu}=Z_{SS}S_{\mu}+Z_{ST}T_{\mu}$ and $m_{S}$ is the renormalized gluino mass; $Q(y)$ can be any operator localized at a point $y\neq x$. In this work we explore the renormalization of the supercurrent operators $S_{\mu},T_{\mu}$ on the lattice, both numerically by Monte Carlo simulations and in perturbation theory.

## 3 Renormalization and GIRS scheme

Our main goal is to compute the renormalization of the supercurrent both perturbatively and non-perturbatively. Therefore, the first step is to decide on a proper renormalization scheme that is applicable in both situations. In this work we use the GIRS scheme [4], which is reminiscent of the X-space renormalization scheme.
The GIRS is defined through the renormalization conditions

$\displaystyle\langle\mathcal{O}^{\rm{GIRS}}_{X}(x)\mathcal{O}^{\rm{GIRS}}_{Y}(y)\rangle|_{{}_{x-y=\bar{z}}}\equiv Z_{X}^{\rm{GIRS}}Z_{Y}^{\rm{GIRS}}\langle\mathcal{O}^{\rm{B}}_{X}(x)\mathcal{O}^{\rm{B}}_{Y}(y)\rangle|_{{}_{x-y=\bar{z}}}=\langle\mathcal{O}_{X}(x)\mathcal{O}_{Y}(y)\rangle^{\rm{tree}}|_{{}_{x-y=\bar{z}}},$

where $X,Y$ are two operators of interest and $x\neq y$ in order to avoid potential contact terms; the superscript B denotes bare quantities. This scheme is appealing because the two-point Green’s functions $\langle\mathcal{O}^{\rm{B}}_{X}(x)\mathcal{O}^{\rm{B}}_{Y}(y)\rangle|_{{}_{x-y=\bar{z}}}$ can be computed both non-perturbatively and in perturbation theory. Even more importantly, when choosing gauge-invariant operators, only the mixing between these operators and other gauge-invariant operators is relevant, which makes this scheme particularly suitable for lattice computations. Considering the gauge-invariant operators $\mathcal{O}_{X},\mathcal{O}_{Y}=S_{\mu},T_{\mu}$, the resulting mixing matrix relating the bare and renormalized supercurrent operators is

$\displaystyle\begin{pmatrix}S^{R}_{\mu}\\ T^{R}_{\mu}\end{pmatrix}=\begin{pmatrix}Z_{SS}&Z_{ST}\\ Z_{TS}&Z_{TT}\end{pmatrix}\begin{pmatrix}S^{B}_{\mu}\\ T^{B}_{\mu}\end{pmatrix}.$ (6)

To determine the 4 elements of the mixing matrix $Z$ we need 4 conditions:

* • Three conditions can be imposed by considering expectation values between the two mixing operators (a bar on $S_{\mu},T_{\mu}$ denotes the corresponding charge conjugate):

$\displaystyle G^{S\,S}_{\mu\nu}(x,y)\equiv\langle S_{\mu}(x)\ \overline{S}_{\nu}(y)\rangle\,,\quad G^{T\,T}_{\mu\nu}(x,y)\equiv\langle T_{\mu}(x)\ \overline{T}_{\nu}(y)\rangle\,,\quad G^{S\,T}_{\mu\nu}(x,y)\equiv\langle S_{\mu}(x)\ \overline{T}_{\nu}(y)\rangle.$ (7)

* • A fourth condition can be imposed on
two-point Green’s functions involving products of $S_{\mu}$ (or $T_{\mu}$) with other gauge-invariant operators of equal or lower dimension. The only such operator with compatible behaviour under the Lorentz group is the Gluino-Glue operator ${\cal O}(x)\equiv\sigma_{\mu\nu}\,{\rm{tr}}_{c}(\,u_{\mu\nu}(x)\lambda(x))$, and a corresponding Green’s function is

$G^{{\cal O}\,S}_{\mu}(x,y)\equiv\langle{\cal O}(x)\ \overline{S}_{\mu}(y)\rangle.$ (8)

There are various ways to impose the GIRS renormalization conditions. Especially suitable for numerical lattice investigations is the following form, where we integrate over the spatial components of $z=y-x=(\vec{z},t)$ for the sake of improving the signal:

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{S\,S}_{\mu\nu}(x,y)\right]}^{{\rm GIRS}}P_{\nu\mu}\}$ $\displaystyle=$ $\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{S\,S}_{\mu\nu}(x,y)\right]}^{\rm tree}P_{\nu\mu}\},$ (9)

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{T\,T}_{\mu\nu}(x,y)\right]}^{{\rm GIRS}}P_{\nu\mu}\}$ $\displaystyle=$ $\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{T\,T}_{\mu\nu}(x,y)\right]}^{\rm tree}P_{\nu\mu}\},$ (10)

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{S\,T}_{\mu\nu}(x,y)\right]}^{{\rm GIRS}}P_{\nu\mu}\}$ $\displaystyle=$ $\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{S\,T}_{\mu\nu}(x,y)\right]}^{\rm tree}P_{\nu\mu}\},$ (11)

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{S\,{\cal O}}_{\mu}(x,y)\right]}^{{\rm GIRS}}P_{\mu}\}$ $\displaystyle=$ $\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\{{\left[G^{S\,{\cal O}}_{\mu}(x,y)\right]}^{\rm tree}P_{\mu}\}.$ (12)

$P_{\nu\mu}=\gamma_{\mu}\gamma_{4}\gamma_{\nu}$ and $P_{\mu}=\gamma_{\mu}\gamma_{4}$ are projectors acting on the Dirac space that project onto states transforming properly under parity, time reversal and charge conjugation.
The repeated indices $\mu,\nu$ are not summed over, and there is freedom in which components $\mu,\nu$ to choose. However, the operator components need to be the same in all GIRS conditions, Eqs. (9)–(12), as in principle, in this scheme, different components could give different renormalization factors. The tree-level values on the right-hand side of Eqs. (9)–(12) after spatial integration are

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\left[G^{SS,\,{\rm tree}}_{\mu\nu}(x,y)\ P_{\nu\mu}\right]$ $\displaystyle=-\frac{(N_{c}^{2}-1)\ t}{\pi^{2}|t|^{5}}(1-\delta_{\mu 4}-\delta_{\nu 4}-3\,\delta_{\mu\nu}+4\,\delta_{\mu 4}\,\delta_{\nu 4}),$ (13)

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\left[G^{TT,\,{\rm tree}}_{\mu\nu}(x,y)\ P_{\nu\mu}\right]$ $\displaystyle=\frac{(N_{c}^{2}-1)\ t}{4\pi^{2}|t|^{5}}(2+\delta_{\mu 4}+\delta_{\nu 4}+3\,\delta_{\mu\nu}-4\,\delta_{\mu 4}\,\delta_{\nu 4}),$ (14)

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\left[G^{ST,\,{\rm tree}}_{\mu\nu}(x,y)\ P_{\nu\mu}\right]$ $\displaystyle=-\frac{(N_{c}^{2}-1)\ t}{2\pi^{2}|t|^{5}}(1-\delta_{\mu 4}-\delta_{\nu 4}-3\,\delta_{\mu\nu}+4\,\delta_{\mu 4}\,\delta_{\nu 4}),$ (15)

$\displaystyle\int d^{3}\vec{z}\ {\rm Tr}\left[G^{S\mathcal{O},\,{\rm tree}}_{\mu}(x,y)\ P_{\mu}\right]\ $ $\displaystyle=0.$ (16)

It is instructive to write out the full set of GIRS conditions in terms of the bare correlators.
They lead to a set of quadratic equations for the renormalization factors:

$\displaystyle Z_{SS}^{2}\ {\rm Tr}\left[G^{SS}_{\mu\nu}P_{\nu\mu}\right]+Z_{SS}\,Z_{ST}\left({\rm Tr}\left[G^{ST}_{\mu\nu}P_{\nu\mu}\right]+{\rm Tr}\left[G^{TS}_{\mu\nu}P_{\nu\mu}\right]\right)+Z_{ST}^{2}\ {\rm Tr}\left[G^{TT}_{\mu\nu}P_{\nu\mu}\right]={\rm Tr}\left[G^{SS,{\rm tree}}_{\mu\nu}P_{\nu\mu}\right],$

$\displaystyle Z_{TS}^{2}\ {\rm Tr}\left[G^{SS}_{\mu\nu}P_{\nu\mu}\right]+Z_{TS}\,Z_{TT}\left({\rm Tr}\left[G^{ST}_{\mu\nu}P_{\nu\mu}\right]+{\rm Tr}\left[G^{TS}_{\mu\nu}P_{\nu\mu}\right]\right)+Z_{TT}^{2}\ {\rm Tr}\left[G^{TT}_{\mu\nu}P_{\nu\mu}\right]={\rm Tr}\left[G^{TT,{\rm tree}}_{\mu\nu}P_{\nu\mu}\right],$

$\displaystyle Z_{SS}\left(Z_{TS}\ {\rm Tr}\left[G^{SS}_{\mu\nu}P_{\nu\mu}\right]+Z_{TT}\ {\rm Tr}\left[G^{ST}_{\mu\nu}P_{\nu\mu}\right]\right)+Z_{ST}\left(Z_{TS}\ {\rm Tr}\left[G^{TS}_{\mu\nu}P_{\nu\mu}\right]+Z_{TT}\ {\rm Tr}\left[G^{TT}_{\mu\nu}P_{\nu\mu}\right]\right)={\rm Tr}\left[G^{ST,{\rm tree}}_{\mu\nu}P_{\nu\mu}\right],$

$\displaystyle Z_{O}\left(Z_{SS}\ {\rm Tr}\left[G^{SO}_{\mu}P_{\mu}\right]+Z_{ST}\ {\rm Tr}\left[G^{TO}_{\mu}P_{\mu}\right]\right)={\rm Tr}\left[G^{SO,{\rm tree}}_{\mu}P_{\mu}\right]=0,$ (17)

where in the last equation we used the renormalization of the Gluino-Glue operator $\mathcal{O}^{R}=Z_{\mathcal{O}}\mathcal{O}^{B}$ and Eq. (16). From Eqs. (13)–(15) one can see that choosing a temporal index for either $\mu$ or $\nu$ leads to a vanishing tree-level value in the first equation of (17). This, in combination with the last equation above, would result in a vanishing or indeterminate value for $Z_{SS}$ or $Z_{ST}$. Therefore, the only allowed components $\mu$, $\nu$ are the spatial ones.

## 4 Perturbative results in dimensional regularization and conversion factors to $\overline{\rm MS}$

Instead of comparing perturbative and non-perturbative results in the GIRS scheme, we will follow a different approach.
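In practice, given measured correlator traces, the quadratic system (17) can be solved for the mixing matrix with a standard root finder. The sketch below is our own illustration (the function names and the use of `scipy.optimize.fsolve` are assumptions, not from the paper), seeded at the tree-level value $Z_{ij}=\delta_{ij}$:

```python
import numpy as np
from scipy.optimize import fsolve

def girs_residuals(z, G, G_tree, G_SO, G_TO):
    """Residuals of the four GIRS conditions in Eq. (17) for
    Z = [[Z_SS, Z_ST], [Z_TS, Z_TT]].  G holds the bare traces
    Tr[G^{XY} P] as a 2x2 array and G_tree their tree-level values;
    G_SO, G_TO are the bare traces with the Gluino-Glue operator."""
    Z = z.reshape(2, 2)
    R = Z @ G @ Z.T  # renormalized two-point traces
    return np.array([
        R[0, 0] - G_tree[0, 0],            # <S S> condition
        R[1, 1] - G_tree[1, 1],            # <T T> condition
        R[0, 1] - G_tree[0, 1],            # <S T> condition
        Z[0, 0] * G_SO + Z[0, 1] * G_TO,   # <S O> tree value vanishes
    ])

def solve_mixing_matrix(G, G_tree, G_SO, G_TO):
    z0 = np.eye(2).ravel()  # tree-level start: Z_ij = delta_ij
    sol = fsolve(girs_residuals, z0, args=(G, G_tree, G_SO, G_TO))
    return sol.reshape(2, 2)
```

Since the conditions are quadratic, $-Z$ solves the system whenever $Z$ does; starting near the identity selects the branch that is perturbatively connected to the tree level.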
We will convert our results from the GIRS scheme to $\overline{\rm MS}$ using the conversion factors $C^{{\rm GIRS},\overline{\rm MS}}$. This comes with the advantage that $\overline{\rm MS}$ is more amenable to perturbation theory, and the fitting of the $Z$ factors on the numerical side is easier (see next section). The conversion factors $C^{{\rm GIRS},\overline{\rm MS}}$ are the factors relating $\overline{\rm MS}$-renormalized and GIRS-renormalized operators:

$\displaystyle\begin{pmatrix}C_{SS}^{{\rm GIRS},\ \overline{\rm MS}}&C_{ST}^{{\rm GIRS},\ \overline{\rm MS}}\\ C_{TS}^{{\rm GIRS},\ \overline{\rm MS}}&C_{TT}^{{\rm GIRS},\ \overline{\rm MS}}\end{pmatrix}\cdot\begin{pmatrix}Z_{SS}^{{\rm R},\ {\rm GIRS}}&Z_{ST}^{{\rm R},\ {\rm GIRS}}\\ Z_{TS}^{{\rm R},\ {\rm GIRS}}&Z_{TT}^{{\rm R},\ {\rm GIRS}}\end{pmatrix}=\begin{pmatrix}Z_{SS}^{{\rm R},\ \overline{\rm MS}}&Z_{ST}^{{\rm R},\ \overline{\rm MS}}\\ Z_{TS}^{{\rm R},\ \overline{\rm MS}}&Z_{TT}^{{\rm R},\ \overline{\rm MS}}\end{pmatrix},$ (18)

where R stands for a chosen regularization scheme; the conversion factors themselves are regularization independent. We first obtained the conversion factors by computing the renormalization constants to one loop in perturbation theory in dimensional regularization, both for the $\overline{\rm MS}$ and the GIRS scheme. From the action defined by the Lagrangian Eq. (1), after applying the conventional Faddeev-Popov method, one can obtain the corresponding bare Green’s functions at tree level and to one-loop order in dimensional regularization (DR), involving, respectively, the one-loop and two-loop Feynman diagrams of Fig. 1.

Figure 1: One-loop and two-loop Feynman diagrams contributing to the tree-level and one-loop two-point Green’s functions of Eqs. (7) and (8). A wavy (solid, dashed) line represents gluons (gluinos, ghosts).
The two crosses denote the insertions of operators $S_{\mu},T_{\nu},\mathcal{O}$ appearing in the definition of each two-point function. As is standard practice, the pole terms ($1/\varepsilon^{n}$, $n\in\mathbb{Z}^{+}$) are removed by defining the $\overline{\rm MS}$ mixing matrix elements to have only negative integer powers of $\varepsilon$, i.e., $Z_{ij}^{{\rm DR},\overline{\rm MS}}=\delta_{ij}+g^{2}(z_{ij}/\varepsilon)+\mathcal{O}(g^{4})$, where $i,j=S,T$ and $Z_{\mathcal{O}}^{{\rm DR},\overline{\rm MS}}=1+g^{2}(z_{\mathcal{O}}/\varepsilon)+\mathcal{O}(g^{4})$. Our results for $Z_{ij}^{{\rm DR},\overline{\rm MS}}$, $Z_{\mathcal{O}}^{{\rm DR},\overline{\rm MS}}$ read $\displaystyle Z_{SS}^{{\rm DR},\overline{\rm MS}}$ $\displaystyle=$ $\displaystyle 1+\mathcal{O}(g^{4}),$ (19) $\displaystyle Z_{ST}^{{\rm DR},\overline{\rm MS}}$ $\displaystyle=$ $\displaystyle\mathcal{O}(g^{4}),$ (20) $\displaystyle Z_{TS}^{{\rm DR},\overline{\rm MS}}$ $\displaystyle=$ $\displaystyle\frac{g^{2}}{16\,\pi^{2}}\;\frac{3N_{c}}{2\varepsilon}+\mathcal{O}(g^{4}),$ (21) $\displaystyle Z_{TT}^{{\rm DR},\overline{\rm MS}}$ $\displaystyle=$ $\displaystyle 1-\frac{g^{2}}{16\,\pi^{2}}\;\frac{3N_{c}}{\varepsilon}+\mathcal{O}(g^{4}),$ (22) $\displaystyle Z_{\mathcal{O}}^{{\rm DR},\overline{\rm MS}}$ $\displaystyle=$ $\displaystyle 1-\frac{g^{2}}{16\,\pi^{2}}\;\frac{3N_{c}}{\varepsilon}+\mathcal{O}(g^{4}),$ (23) which agree with our recent one-loop calculations in Refs. [6, 5]. By combining our one-loop results for the mixing matrix in ${\rm GIRS}$ Eqs. (17) and in $\overline{\rm MS}$ Eqs. 
(19 – 22), we extract the one-loop conversion factors $\displaystyle C_{SS}^{{\rm GIRS},\ \overline{\rm MS}}$ $\displaystyle=1-\frac{g^{2}_{\overline{\rm MS}}}{16\pi^{2}}\frac{17N_{c}}{6}+\mathcal{O}(g^{4}_{\overline{\rm MS}}),$ (24) $\displaystyle C_{ST}^{{\rm GIRS},\ \overline{\rm MS}}$ $\displaystyle=\frac{g^{2}_{\overline{\rm MS}}}{16\pi^{2}}4N_{c}+\mathcal{O}(g^{4}_{\overline{\rm MS}}),$ (25) $\displaystyle C_{TS}^{{\rm GIRS},\ \overline{\rm MS}}$ $\displaystyle=-\frac{g^{2}_{\overline{\rm MS}}}{16\pi^{2}}\frac{3N_{c}}{2}\left(\frac{2}{3}+2\gamma_{E}+\ln(\bar{\mu}^{2}a\ t^{2})\right)+\mathcal{O}(g^{4}_{\overline{\rm MS}}),$ (26) $\displaystyle C_{TT}^{{\rm GIRS},\overline{\rm MS}}$ $\displaystyle=1+\frac{g^{2}_{\overline{\rm MS}}}{16\pi^{2}}N_{c}\left(\frac{7}{6}+6\gamma_{E}+3\ln(\bar{\mu}^{2}a\ t^{2})\right)+\mathcal{O}(g^{4}_{\overline{\rm MS}}).$ (27) ## 5 Non-perturbative results For the lattice discretization of $\mathcal{N}=1$ SYM we employed a tree-level Symanzik-improved gauge action and Wilson fermions for the gluino fields. The action reads (${\rm tr}_{c}$ denotes the trace over color matrices) $\displaystyle{\cal S}^{L}_{\rm SYM}=\sum_{x}\Bigg\{$ $\displaystyle\frac{2a^{4}}{g^{2}}\left[\frac{5}{3}\sum_{\rm plaq.}{\rm Re}\ {\rm tr}_{c}(1-U_{\rm plaq.})-\frac{1}{12}\sum_{\rm rect.}{\rm Re}\ {\rm tr}_{c}(1-U_{\rm rect.})\right]+\sum_{y}\frac{a^{3}}{2\kappa}\bar{\lambda}(x)D_{W}\lambda(y)\Bigg\},$ (28) where $U_{\rm plaq.}$ ($U_{\rm rect.}$) denotes the $1{\times}1$ plaquette ($2{\times}1$ rectangular) Wilson loops and the lattice Wilson operator is expressed in terms of the hopping parameter $\kappa\equiv 1/(2m_{0}+8)$ as $D_{W}=1-\kappa\big[(1-\gamma_{\mu})V_{\mu}(x)\delta_{x+\mu,y}+(1+\gamma_{\mu})V^{\dagger}_{\mu}(x-\mu)\delta_{x-\mu,y}\big].$ (29) One level of stout smearing was used on the links $V_{\mu}(x)$, which in the adjoint representation are given by $V_{\mu}^{ab}=2\,{\rm tr_{c}}[U^{\dagger}_{\mu}(x)T^{a}U_{\mu}(x)T^{b}]$.
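The adjoint-representation links in the last formula can be illustrated numerically. The sketch below, assuming plain numpy and fundamental $SU(2)$ generators $T^{a}=\sigma^{a}/2$, builds $V^{ab}=2\,{\rm tr}_{c}[U^{\dagger}T^{a}UT^{b}]$ from a fundamental link $U$; it is an illustration of the group-theoretic map only, not the production lattice code (and it omits the stout smearing applied in the simulations).

```python
import numpy as np

# Pauli matrices; the fundamental SU(2) generators are T^a = sigma^a / 2.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

def adjoint_link(U):
    """Map a fundamental SU(2) link U to its adjoint representation,
    V^{ab} = 2 tr_c[U^dagger T^a U T^b], a real 3x3 SO(3) matrix."""
    n = len(T)
    V = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            V[a, b] = 2 * np.trace(U.conj().T @ T[a] @ U @ T[b]).real
    return V
```

For $U=\mathbb{1}$ the map returns the $3\times 3$ identity, and for any $U\in SU(2)$ the result is orthogonal with unit determinant, as expected of an $SO(3)$ rotation.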
We considered configurations from earlier works [3, 7], with ensembles based on two different gauge groups, $SU(2)$ and $SU(3)$. In the case of the gauge group $SU(3)$ the lattice action is different; see [7] for details. The ensembles were generated at two different lattice sizes $V=L^{3}\times T$, namely $V_{1}=24^{3}\times 48$ and $V_{2}=32^{3}\times 64$, along with different values of the coupling constant $\beta$ and different mass parameters $\kappa$. As the behaviour of the renormalization constants did not change qualitatively across the different ensembles, we present here the results based on the $SU(2)$ group with a lattice size of $V_{1}=24^{3}\times 48$, two different mass parameters $\kappa=0.14925,\ 0.14920$, and a gauge coupling of $\beta=1.75$. The complete set of results for all ensembles is collected in [8]. For comparison of the results, the lattice spacing can be estimated using the QCD Sommer scale value $r_{0}=0.5\text{ fm}$, which leads to a lattice spacing of $a=0.0554(11)\text{ fm}$. The supercurrent and $\mathcal{O}$ operators are represented on the lattice using clover plaquettes $\hat{F}_{\mu\nu}^{\alpha\beta}(x,t)$ and gluino fields $\lambda(x)$. The correlators between $S_{\mu}$, $T_{\mu}$, and $\mathcal{O}$ take the following generic form, omitting Lorentz, spinor, and color indices: $\langle A(t)\overline{B}(0)\rangle\equiv\sum_{\vec{x},\vec{y}}\langle A(\vec{x},t)\overline{B}(\vec{y},0)\rangle=\sum_{\vec{x},\vec{y}}\langle{\rm Tr}[\Gamma\hat{F}(\vec{x},t)D^{-1}(\vec{x},t|\vec{y},0)\hat{F}(\vec{y},0)\Gamma^{\prime}]\rangle_{G}=C^{\alpha\beta},$ (30) for $A,B=S_{\mu},T_{\mu},\mathcal{O}$. The expectation value $\langle\cdot\rangle_{G}$ indicates that the fermion fields have been integrated out. $\Gamma$ and $\Gamma^{\prime}$ collect the combinations of gamma matrices of each operator, and the inverse of the Dirac operator $D^{-1}(x|y)$ propagates a gluino from $x$ to $y$. In order to use the set of four GIRS conditions Eq.
(9–12), we are constrained to use spatial projectors $P_{i}=\gamma_{4}\gamma_{i}$ and $P_{ij}=\gamma_{i}\gamma_{4}\gamma_{j}$ with $i=1,2,3$. We chose $i=j$, which has the advantage of giving a better signal-to-noise ratio. As an example, we present two of the correlators in Fig. 2. Figure 2: Correlators $\text{Tr}\langle\mathcal{O}(t)S_{i}(0)P_{i}\rangle$ and $\text{Tr}\langle\mathcal{O}(t)T_{i}(0)P_{i}\rangle$ as functions of $t$, computed numerically on the ensemble with $\kappa=0.14925$ and $\beta=1.75$ of the $V=24^{3}\times 48$ lattice. Due to the gauge nature of these operators, one expects a substantial amount of noise in the signal. To improve the signal we used isotropy, time reversal, and charge conjugation to average equivalent correlators. We used wall sources for the operators and averaged over the spatial positions of the sink. This amounts to summing over all $\vec{x}$ and $\vec{y}$ contributions to the correlator.
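The time-reversal part of this averaging admits a minimal sketch. In the snippet below, the function name and the per-correlator `parity` argument are illustrative assumptions: the actual sign with which each correlator maps under $t\to T-t$ follows from the discrete symmetries of its operators and is supplied by the user.

```python
import numpy as np

def symmetry_average(corr, parity=+1):
    """Average a correlator C(t), t = 0..T-1 (periodic in T), with its
    time-reversed copy C(T - t).  parity = +1 (-1) for correlators that
    are even (odd) under t -> T - t.  Purely a noise-reduction step:
    for an exactly (anti)symmetric signal it returns the input."""
    reflected = np.roll(corr[::-1], 1)  # C((T - t) mod T)
    return 0.5 * (corr + parity * reflected)
```

Isotropy and charge-conjugation averages work the same way, with the reflection replaced by the corresponding index permutation and sign.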
Using the GIRS conditions, which on the lattice take the following form: $\displaystyle\frac{1}{3L^{3}}\sum_{\vec{x},\vec{y}}\sum_{i}\ {\rm Tr}\left[G^{SS,\,{\rm GIRS}}_{ii}((\vec{x},t),(\vec{y},0))\ \gamma_{i}\gamma_{4}\gamma_{i}\right]$ $\displaystyle=$ $\displaystyle\frac{2(N_{c}^{2}-1)t}{\pi^{2}{|t|}^{5}},$ (31) $\displaystyle\frac{1}{3L^{3}}\sum_{\vec{x},\vec{y}}\sum_{i}\ {\rm Tr}\left[G^{TT,\,{\rm GIRS}}_{ii}((\vec{x},t),(\vec{y},0))\ \gamma_{i}\gamma_{4}\gamma_{i}\right]$ $\displaystyle=$ $\displaystyle\frac{5(N_{c}^{2}-1)t}{4\pi^{2}{|t|}^{5}},$ (32) $\displaystyle\frac{1}{3L^{3}}\sum_{\vec{x},\vec{y}}\sum_{i}\ {\rm Tr}\left[G^{ST,\,{\rm GIRS}}_{ii}((\vec{x},t),(\vec{y},0))\ \gamma_{i}\gamma_{4}\gamma_{i}\right]$ $\displaystyle=$ $\displaystyle\frac{(N_{c}^{2}-1)t}{\pi^{2}{|t|}^{5}},$ (33) $\displaystyle\frac{1}{3L^{3}}\sum_{\vec{x},\vec{y}}\sum_{i}\ {\rm Tr}\left[G^{S{\cal O},\,{\rm GIRS}}_{i}((\vec{x},t),(\vec{y},0))\ \gamma_{4}\gamma_{i}\right]$ $\displaystyle=$ $\displaystyle 0.$ (34) Again, these conditions lead to a set of second-order equations for the renormalization factors $Z$, similar to Eq. (17) but in their lattice-discretized versions. In GIRS, the time separation $t$ represents an energy scale for the renormalization constants. The short-distance part is dominated by contact terms and lattice artifacts and has to be neglected. We use the conversion factors explained in the previous sections to convert from the GIRS to the $\overline{\rm MS}$ scheme. This is expected to replace the dependence on the GIRS scale with the dependence on the $\bar{\mu}$ energy scale of the $\overline{\rm MS}$ scheme. After the conversion, a plateau-like behaviour is expected at larger distances, and the dependence on the time separation is replaced by a dependence on the energy scale. More importantly, converting to the $\overline{\rm MS}$ scheme allows us to compare directly with other results in perturbation theory.
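At each time separation $t$, the four conditions above amount to a quadratic system for the entries of the $2\times 2$ mixing matrix $Z$: the renormalized correlator matrix $Z\,G^{\rm bare}\,Z^{T}$ must equal its tree-level value, and the renormalized $S$–$\mathcal{O}$ correlator must vanish. The sketch below solves such a system on synthetic inputs with a plain Newton iteration; the function names, the synthetic data in the usage, and the absorption of $Z_{\mathcal{O}}$ into the mixing correlator are illustrative assumptions, not the paper's analysis code.

```python
import numpy as np

def girs_residuals(z_flat, G_bare, G_bare_O, g_tree):
    """Residuals of four GIRS-type conditions at fixed t: Z mixes (S, T),
    Z G_bare Z^T must match the tree-level matrix g_tree (cf. Eqs. 31-33),
    and the renormalized S-O correlator must vanish (cf. Eq. 34)."""
    Z = z_flat.reshape(2, 2)
    GR = Z @ G_bare @ Z.T
    mix = Z @ G_bare_O          # renormalized (S, T)-O correlators, up to Z_O
    return np.array([GR[0, 0] - g_tree[0, 0],
                     GR[1, 1] - g_tree[1, 1],
                     GR[0, 1] - g_tree[0, 1],
                     mix[0]])

def solve_Z(G_bare, G_bare_O, g_tree, iters=50, h=1e-7):
    """Newton iteration with a finite-difference Jacobian, started near
    the identity (the physically expected branch of the quadratic system)."""
    z = np.array([1.0, 0.0, 0.0, 1.0])
    for _ in range(iters):
        r = girs_residuals(z, G_bare, G_bare_O, g_tree)
        if np.max(np.abs(r)) < 1e-12:
            break
        J = np.empty((4, 4))
        for k in range(4):
            dz = np.zeros(4)
            dz[k] = h
            J[:, k] = (girs_residuals(z + dz, G_bare, G_bare_O, g_tree) - r) / h
        z = z - np.linalg.solve(J, r)
    return z.reshape(2, 2)
```

As a self-consistency check one can fabricate bare data from a known mixing matrix and verify that the solver reproduces correlators satisfying all four conditions.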
The result after applying the conversion factors to the $Z_{ST}/Z_{SS}$ factor is shown in Fig. 3. We fitted the data points in the time interval $t\in[5,11]$, where the contact terms have decayed and the noise has not yet overcome the signal. The value obtained is $Z_{ST}/Z_{SS}=-0.0418(84)$, while the perturbative one found in [5], albeit without clover improvement, is $Z_{ST}/Z_{SS}=0.10080$. ## 6 Summary and conclusions We studied the renormalization of the supercurrent for $\mathcal{N}=1$ SYM. We extracted the renormalization factors for the $S_{\mu}$ and $T_{\mu}$ operators in the GIRS scheme from bare correlators computed numerically on the lattice. By finding the conversion factors $C^{{\rm GIRS},\overline{\rm MS}}$ perturbatively, we translated the non-perturbative results to the $\overline{\rm MS}$ scheme, where we could compare to perturbation theory. We observed a significant disagreement between the perturbative and non-perturbative determinations of the $Z$ factors. Yet there is room for improvement: simulating closer to the perturbative regime, including two-loop terms in the perturbative computation, or including smearing could improve the agreement. These are feasible, albeit complicated, tasks, without any conceptual hindrances. It is worth reiterating at this point that the determination of $Z_{ST}^{{\rm L},\overline{\rm MS}}/Z_{SS}^{{\rm L},\overline{\rm MS}}$ via GIRS, despite the present discrepancy with perturbative estimates, stands to be very useful in the study of more complicated theories, such as supersymmetric QCD, for the purpose of reducing the number of undetermined parameters in Ward identities. Acknowledgements: M.C., H.P. and G.S. acknowledge financial support from the European Regional Development Fund and the Cyprus Research and Innovation Foundation (Projects: EXCELLENCE/0918/0066 and EXCELLENCE/0421/0025). M.C. also acknowledges partial support from the Cyprus University of Technology under the "POSTDOCTORAL" program.
G.S. acknowledges financial support from the H2020 project PRACE-6IP (Grant agreement ID: 823767). G.B. and I.S. acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG) Grants No. 432299911 and No. 431842497. Figure 3: $Z_{ST}/Z_{SS}$ as a function of $t$ in the GIRS scheme (left panel) and after conversion to the $\overline{\rm MS}$ scheme (right panel), on the $V=24^{3}\times 48$ lattice with $\kappa=0.14925$ (blue dots) and $\kappa=0.14920$ (orange squares). The points of the $\kappa=0.14920$ ensemble are shifted in $t$ by $+0.33$ for visibility. The error bars were estimated by a jackknife analysis, and measurements were done every 8th step in Monte Carlo time to reduce autocorrelations. ## References * [1] G. Curci and G. Veneziano, _Supersymmetry and the Lattice: A Reconciliation?_ , _Nucl. Phys. B_ 292 (1987) 555-572. * [2] S. Ali et al., _Analysis of Ward identities in supersymmetric Yang-Mills theory_ , _Eur. Phys. J. C_ 78 (2018) 404 [hep-lat/1802.07067]. * [3] G. Bergner, P. Giudice, G. Münster, I. Montvay and S. Piemonte, _The light bound states of supersymmetric $SU(2)$ Yang-Mills theory_, _JHEP_ 03 (2016) 080 [hep-lat/1512.07014]. * [4] M. Costa et al., _Gauge-invariant renormalization scheme in QCD: Application to fermion bilinears and the energy-momentum tensor_ , _Phys. Rev. D_ 103 (2021) 094509 [hep-lat/2102.00858]. * [5] G. Bergner, M. Costa, H. Panagopoulos, I. Soler and G. Spanoudes, _Perturbative renormalization of the supercurrent operator in lattice N=1 supersymmetric Yang-Mills theory_ , _Phys. Rev. D_ 106 (2022) 034502 [hep-lat/2205.02012]. * [6] M. Costa, H. Herodotou, P. Philippides and H. Panagopoulos, _Renormalization and mixing of the Gluino-Glue operator on the lattice_ , _Eur. Phys. J. C_ 81 (2021), 401 [hep-lat/2010.02683]. * [7] S. Ali, G. Bergner, H. Gerber, I. Montvay, G. Münster, S. Piemonte and P. Scior, _Numerical results for the lightest bound states in $\mathcal{N}=1$ supersymmetric SU(3) Yang-Mills theory_, _Phys. Rev. Lett._ 122 (2019) 221601 [hep-lat/1902.11127]. * [8] G. Bergner, M. Costa, H. 
Panagopoulos, S. Piemonte, I. Soler and G. Spanoudes, _Nonperturbative renormalization of the supercurrent in $\mathcal{N}=1$ Supersymmetric Yang-Mills Theory_, [hep-lat/2209.13934].
$\sigma^{-1}(V(Z_{p,q}))$ belongs to $X_{p}\cup X_{q}$ and the required inclusion follows as $V(P_{p,q})\subseteq\sigma^{-1}(V(Z_{p,q}))$. This completes the proof that ${\cal S}$ is a $\Lambda$-state configuration of $G$. We now prove that ${\cal S}$ is ${\cal A}$-normal. Recall that ${\cal A}$ is a $c$-diameter partition of $G$. Let $C$ be an ${\cal A}$-cloud and let $C^{\prime}=\sigma(C)$ be the corresponding subset of $V({\Gamma})$. As $C$ has diameter at most $c$, from Observation 4, $C^{\prime}$ also has diameter at most $c$.
Notice that if $C$ intersects some member $W$ of ${\cal W}$, then $C^{\prime}=\sigma(C)$ also intersects $\sigma(W)$; therefore $C^{\prime}$ intersects some element of $Q_{\rm in}\cup Q_{\rm out}$. Assume $C^{\prime}$ contains $p\in Q_{\rm in}\cup Q_{\rm out}$; then $C^{\prime}\subseteq N_{p}$. From Observation 4, $C\subseteq X_{p}=\alpha(W_{p})$; therefore $C$ satisfies Condition (A). By construction, the distance in ${\Gamma}$ between two elements of $Q_{\rm in}$ is either $2c+1$ or at least $4c+2$.
The distance in ${\Gamma}$ between an element of $Q_{\rm in}$ and any element of $Q_{\rm out}$ is a multiple of $2c+1$. This implies that if $p,q\in Q$, $p\not=q$, $N_{p}\cap C^{\prime}\not=\emptyset$, and $N_{q}\cap C^{\prime}\not=\emptyset$, then $p$ and $q$ are linked.
By construction, if $p$ and $q$ are linked, then for every $r\in Q$ and every $u\in Z_{p,q}$, ${\bf dist}_{\Gamma}(r,u)\geq\min({\bf dist}_{\Gamma}(r,p),{\bf dist}_{\Gamma}(r,q))$, where for every $x\in Q_{\rm in}$, the quantity ${\bf dist}_{{\Gamma}}(x,b_{\rm out})$ is interpreted as $\min\{{\bf dist}_{\Gamma}(x,q^{\prime})\mid{q^{\prime}\in Q_{\rm out}}\}$. This implies that if $C^{\prime}$ intersects $Z_{p,q}$ for some $p,q\in Q$, then for every $r\in Q\setminus\{p,q\}$, $C^{\prime}$ does not intersect $N_{r}$. We will use this fact in the next paragraph towards completing the proof of Condition (B).
We now claim that if $C^{\prime}$ intersects two distinct paths in $\{Z_{p,q}\mid(p,q)\in Q^{2},p\neq q\}$, then $C^{\prime}$ intersects at most one of the sets in $\{N_{q^{\prime}}\mid q^{\prime}\in Q\}$. Let $Z_{p,q}$ and $Z_{p^{\prime},q^{\prime}}$ be two distinct paths intersected by $C^{\prime}$. We argue first that $p,q,p^{\prime},q^{\prime}$ cannot all be different. Indeed, if this were the case, as $C^{\prime}$ intersects $Z_{p,q}$, then $C^{\prime}$ cannot intersect $N_{p^{\prime}}$ or $N_{q^{\prime}}$, as $p^{\prime},q^{\prime}\not\in\{p,q\}$. As $Z_{p^{\prime},q^{\prime}}\subseteq N_{q^{\prime}}\cup N_{p^{\prime}}$, we have a contradiction.
Assume now that $p=p^{\prime}$ and $q\not=q^{\prime}$. As $C^{\prime}$ intersects $Z_{p,q}$, it does not intersect $N_{r}$ for any $r\in Q\setminus\{p,q\}$, and as it intersects $Z_{p,q^{\prime}}$, it does not intersect $N_{r}$ for any $r\in Q\setminus\{p,q^{\prime}\}$. We obtain that $C^{\prime}$ intersects at most one of the sets in $\{N_{r}\mid r\in Q\}$, namely $N_{p}$. By the definition of the states, we obtain that $C$ shadows at most one state, namely $X_{p}$. That completes the proof of Condition (B). ∎ We define below three ways to transform a $\Lambda$-state configuration of $G$.
In each of them, ${\cal S}=({\cal X},\alpha,{\cal R},\beta)$ is an ${\cal A}$-normal $\Lambda$-state configuration of $G$ and $C$ is an ${\cal A}$-cloud in ${\sf front}_{\cal A}({\cal S})$. * 1. The expansion procedure applies when $C$ intersects at least two freeways of ${\cal S}$. Let $X$ be the state of ${\cal S}$ shadowed by $C$ (this state is unique because of property (B) of ${\cal A}$-normality).
We define $({\cal X}^{\prime},\alpha^{\prime},{\cal R}^{\prime},\beta^{\prime})={\sf expand}({\cal S},C)$ such that * – ${\cal X}^{\prime}={\cal X}\setminus\{X\}\cup\{X\cup C\}$, * – for each $W\in{\cal W}$, $\alpha^{\prime}(W)=X^{\prime}$ where $X^{\prime}$ is the unique set of ${\cal X}^{\prime}$ such that $W\subseteq X^{\prime}$, * – ${\cal R}^{\prime}={\cal R}$, and $\beta^{\prime}=\beta$. * 2. The clash procedure applies when $C$ intersects exactly one freeway $P$ of ${\cal S}$. Let $X_{1},X_{2}$ be the two states of ${\cal S}$ that intersect this freeway.
Notice that $P=\beta(\alpha^{-1}(X_{1}),\alpha^{-1}(X_{2}))$, as it is the only freeway with vertices in $X_{1}$ and $X_{2}$. Assume that $(C\cap V(P))\cap X_{1}\not=\emptyset$ (if not, swap the roles of $X_{1}$ and $X_{2}$). We define $({\cal X}^{\prime},\alpha^{\prime},{\cal R}^{\prime},\beta^{\prime})={\sf clash}({\cal S},C)$ as follows: * – ${\cal X}^{\prime}=\{X_{1}\cup C\}\cup\bigcup_{X\in{\cal X}\setminus\{X_{1}\}}\{{\sf cc}_{G}(X\setminus C,\alpha^{-1}(X))\}$ (notice that $\alpha^{-1}(X)\subseteq X\setminus C$, for every $X\in{\cal X}$, because of property (A) of ${\cal A}$-normality), * – for each $W\in{\cal W}$, $\alpha^{\prime}(W)=X^{\prime}$ where $X^{\prime}$ is the unique set of ${\cal X}^{\prime}$ such that
$W\subseteq X^{\prime}$, * – ${\cal R}^{\prime}={\cal R}\setminus\{P\}\cup\{P^{\prime}\}$, where $P^{\prime}=P_{1}\cup P^{*}\cup P_{2}$ is defined as follows: let $s_{i}$ be the first vertex of $C$ that we meet while traversing $P$ when starting from its endpoint that belongs in $W_{i}$, and let $P_{i}$ be the subpath of $P$ that we traversed that way, for $i\in\{1,2\}$. We define $P^{*}$ by taking any path between $s_{1}$ and $s_{2}$ inside $G[C]$, and * – $\beta^{\prime}=\beta\setminus\{(\{W_{1},W_{2}\},P)\}\cup\{(\{W_{1},W_{2}\},P^{\prime})\}$.
* 3. The annex procedure applies when $C$ intersects no freeway of ${\cal S}$ and touches some country $X\in{\cal X}$. We define $({\cal X}^{\prime},\alpha^{\prime},{\cal R}^{\prime},\beta^{\prime})={\sf anex}({\cal S},C)$ such that * – ${\cal X}^{\prime}=\{X\cup C\}\cup\bigcup_{Y\in{\cal X}\setminus\{X\}}\{{\sf cc}_{G}(Y\setminus C,\alpha^{-1}(Y))\}$ (notice that $\alpha^{-1}(Y)\subseteq Y\setminus C$, for every $Y\in{\cal X}$, because of property (A) of ${\cal A}$-normality), * – for each $W\in{\cal W}$, $\alpha^{\prime}(W)=X^{\prime}$ where $X^{\prime}$ is the unique set of ${\cal X}^{\prime}$ such that $W\subseteq X^{\prime}$, * – ${\cal R}^{\prime}={\cal R}$, and $\beta^{\prime}=\beta$. ###### Claim . 
Let ${\cal S}=({\cal X},\alpha,{\cal R},\beta)$ be an ${\cal A}$-normal $\Lambda$-state configuration of $G$, and $C\in{\sf front}_{\cal A}({\cal S})$. Let ${\cal S}^{\prime}={\sf action}({\cal S},C)$ where ${\sf action}\in\{{\sf expand},{\sf clash},{\sf anex}\}$. Then ${\cal S}^{\prime}$ is an ${\cal A}$-normal $\Lambda$-state configuration of $G$ where ${\sf cost}({\cal S^{\prime}},{\cal A})\leq{\sf cost}({\cal S},{\cal A})$.
Moreover, if ${\sf cov}_{\cal S}(C)\geq 1$, then ${\sf cost}({\cal S^{\prime}},{\cal A})<{\sf cost}({\cal S},{\cal A})$, and if ${\sf cov}_{\cal S}(C)=0$ (which may be the case only when ${\sf action}={\sf anex}$), then $|{\sf indep}({\cal S^{\prime}})|<|{\sf indep}({\cal S})|$. ###### Proof of Claim 4.2. We first show that ${\cal S}^{\prime}$ is an ${\cal A}$-normal $\Lambda$-state configuration of $G$. In each case, the construction of ${\cal S}^{\prime}$ ensures that ${\cal X^{\prime}}$ is a connected packing of $G$ and that the countries are updated in a way that their capitals remain inside them. Moreover, the highways are updated so as to remain internally disjoint and inside the corresponding updated countries. We next prove that ${\cal S}^{\prime}$ is ${\cal A}$-normal.
Condition (A) is invariant, as the cloud under consideration cannot intersect any $W\in{\cal W}$, and a cloud intersecting some capital $W\in{\cal W}$ cannot be disconnected from $W$. It remains to prove Condition (B). By Condition 4 of the definition of a $\Lambda$-state configuration, if a cloud $C$ intersects a freeway, then it shadows at least one state. Now assume that a cloud $C$ intersects two freeways in $\mathcal{S}^{\prime}$; then, by the construction of $\mathcal{S}^{\prime}$, it also intersects at least the same two freeways in $\mathcal{S}$. This, together with the fact that $\mathcal{S}$ satisfies Condition (B), implies that $\mathcal{S}^{\prime}$ satisfies Condition (B) as well, as required. 
Notice that, for any cloud $C^{*}\in{\cal A}\setminus\{C\}$, if $C^{*}$ does not intersect a state $X$ in ${\cal S}$, then the corresponding state $X^{\prime}$ in ${\cal S}^{\prime}$, i.e., the state $X^{\prime}=\alpha^{\prime}(\alpha^{-1}(X))$, also does not intersect $C^{*}$. This means that ${\sf cost}({\cal S}^{\prime},{\cal A})\leq{\sf cost}({\cal S},{\cal A})$. Notice now that, by the construction of ${\cal S}^{\prime}$, $C$ is not in ${\sf front}_{\cal A}({\cal S}^{\prime})$. In the case where ${\sf cov}_{\cal S}(C)\geq 1$, we have that ${\sf cost}({\cal S}^{\prime},{\cal A})<{\sf cost}({\cal S},{\cal A})$. 
Notice that the case where ${\sf cov}_{\cal S}(C)=0$ happens only when ${\sf action}={\sf anex}$ and there is an edge with one endpoint in $C$ and one in some country $X^{*}$ of ${\cal S}$ that does not intersect $C$. Moreover, ${\sf cc}_{G}(X\setminus C,\alpha^{-1}(X))=X$ for every state $X$ of ${\cal S}$. This implies that ${\sf indep}(\mathcal{S}^{\prime})\subseteq{\sf indep}(\mathcal{S})$. As $C\subseteq{\sf indep}(\mathcal{S})$ and $C\cap{\sf indep}(\mathcal{S}^{\prime})=\emptyset$, we conclude that $|{\sf indep}(\mathcal{S}^{\prime})|<|{\sf indep}(\mathcal{S})|$, as required. 
∎ To continue with the proof of Subsection 4.2, we explain how to transform the ${\cal A}$-normal $\Lambda$-state configuration ${\cal S}$ of $G$ into a complete one. This is done in two phases. First, as long as there is an ${\cal A}$-cloud $C\in{\sf front}({\cal S})$ with ${\sf cov}_{\cal S}(C)\geq 1$, we apply one of the above three procedures, depending on the number of freeways intersected by $C$. We again use ${\cal S}$ to denote the ${\cal A}$-normal $\Lambda$-state configuration of $G$ obtained at the end of this first phase. Notice that, as there is no ${\cal A}$-cloud with ${\sf cov}_{\cal S}(C)\geq 1$, we have ${\sf cost}_{\cal A}({\cal S})=0$. 
The second phase is the application of ${\sf anex}({\cal S},C)$ as long as some $C\in{\sf front}_{\cal A}({\cal S})$ touches one of the countries of ${\cal S}$. We claim that this procedure can be applied as long as there are vertices in ${\sf indep}(\mathcal{S})$. Indeed, if this is the case, the set ${\sf front}_{\cal A}({\cal S})$ is non-empty and, by the connectivity of $G$, there is always a $C\in{\sf front}_{\cal A}({\cal S})$ touching some country of ${\cal S}$. Therefore, as ${\sf cost}_{\cal A}({\cal S})=0$ (by Claim 4.2), the procedure ${\sf anex}({\cal S},C)$ can be applied again. By Claim 4.2, $|{\sf indep}(\mathcal{S})|$ is strictly decreasing during the second phase. We again use ${\cal S}$ for the final outcome of this second phase. 
We have that ${\sf indep}(\mathcal{S})=\emptyset$, and we conclude that ${\cal S}$ is a complete ${\cal A}$-normal $\Lambda$-state configuration of $G$ such that $|{\sf front}_{\cal A}({\cal S})|=0$. We now create a graph isomorphic to $\Lambda$ using only contractions in $G$. For this we use $\mathcal{S}$, the complete ${\cal A}$-normal $\Lambda$-state configuration of $G$ with $|{\sf front}_{\cal A}({\cal S})|=0$ obtained as described above. We contract in $G$ every country of $\mathcal{S}$ into a single vertex. This can be done because the countries of $\mathcal{S}$ are connected. Let $G^{\prime}$ be the resulting graph. 
By the construction of $\mathcal{S}$, $G^{\prime}$ is a contraction of $H$. By Condition 4 of the definition of a $\Lambda$-state configuration, every freeway of $\mathcal{S}$ becomes an edge in $G^{\prime}$. This implies that there is a graph isomorphic to $\Lambda$ that is a subgraph of $G^{\prime}$. Hence $\hat{\Gamma}_{k^{\prime}}$ is isomorphic to a subgraph of $G^{\prime}$ with the same number of vertices. Let us view $\hat{\Gamma}_{k^{\prime}}$ as a subgraph of $G^{\prime}$, and let $e$ be an edge of $G^{\prime}$ that is not an edge of $\hat{\Gamma}_{k^{\prime}}$. 
As $e$ is an edge of $G^{\prime}$, this implies that in $G$ there are two states of $\mathcal{S}$ with an edge between them but no freeway between them. This is impossible by the construction of $\mathcal{S}$. We deduce that $G^{\prime}$ is isomorphic to $\hat{\Gamma}_{k^{\prime}}$. Moreover, as $|{\sf front}_{\cal A}({\cal S})|=0$, every cloud is a subset of a country. This implies that $G^{\prime}$ is also a contraction of $H$. By contracting in $G^{\prime}$ the edge corresponding to $\{a,(k^{\prime}-1,k^{\prime}-1)\}$ in $\hat{\Gamma}_{k^{\prime}}$, we obtain that $\Gamma_{k^{\prime}}$ is a contraction of $H$. Subsection 4.2 follows. ∎ ###### Proof of Subsection 1.4. 
Let $\lambda$, $c$, $c_{1}$, and $c_{2}$ be integers. It is enough to prove that there exists an integer $\lambda^{\prime}=\mathcal{O}(\lambda\cdot c_{1}\cdot(c_{2})^{c})$ such that, for every graph class $\mathcal{G}\in\mbox{\rm SQGC}(c)$, $$\big(\forall G\in\mathcal{G}\ \ {\sf tw}(G)\leq\lambda\cdot({\bf bcg}(G))^{c}\big)\ \Rightarrow\ \big(\forall F\in\mathcal{G}^{(c_{1},c_{2})}\ \ {\sf tw}(F)\leq\lambda^{\prime}\cdot({\bf bcg}(F))^{c}\big).$$ Let $\mathcal{G}\in\mbox{\rm SQGC}(c)$ be a class of graphs such that $\forall G\in\mathcal{G}\ \ {\sf tw}(G)\leq\lambda\cdot({\bf bcg}(G))^{c}$. 
Let $H\in\mathcal{G}^{(c_{1},c_{2})}$ and let $G$ and $J$ be two graphs such that $G\in\mathcal{G}$, $G\leq^{(c_{1})}J$, and $H\leq^{c_{2}}J$. Such $G$ and $J$ exist by the definition of $\mathcal{G}^{(c_{1},c_{2})}$. * • By the definition of $H$ and $J$, ${\sf tw}(H)\leq{\sf tw}(J)$. * • By Section 2, ${\sf tw}(J)\leq(c_{1}+1)({\sf tw}(G)+1)-1$. * • By the definition of $\mathcal{G}$, ${\sf tw}(G)\leq\lambda\cdot{\bf bcg}(G)^{c}$. * • By Subsection 4.2, ${\bf bcg}(G)\leq(2c_{2}+1)({\bf bcg}(H)+2)+1$. 
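Spelled out, the four bounds above chain as follows (each step substitutes the next inequality into the previous one, using that $x\mapsto\lambda\cdot x^{c}$ is non-decreasing):

```latex
\begin{align*}
{\sf tw}(H) &\leq {\sf tw}(J)\\
            &\leq (c_{1}+1)\,({\sf tw}(G)+1)-1\\
            &\leq (c_{1}+1)\,\big(\lambda\cdot{\bf bcg}(G)^{c}+1\big)-1\\
            &\leq (c_{1}+1)\,\big(\lambda\cdot\big[(2c_{2}+1)({\bf bcg}(H)+2)+1\big]^{c}+1\big)-1.
\end{align*}
```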
If we combine these four statements, we obtain that ${\sf tw}(H)\leq(c_{1}+1)(\lambda\cdot[(2c_{2}+1)({\bf bcg}(H)+2)+1]^{c}+1)-1.$ As this bound is independent of the graph class, Subsection 1.4 follows.∎ ## 5 Conclusions, extensions, and open problems The main combinatorial result of this paper is that, for every $d$ and every apex-minor-free graph class ${\cal G}$, the intersection class ${\sf inter}_{d}({\cal G})$ has the SQGC property for $c=1$. Certainly, the main general question is to detect even wider graph classes with the SQGM/SQGC property. In this direction, some pressing open issues are the following: * • Is the bound on the (multi-)degree necessary? Are there classes of intersection graphs with unbounded or “almost bounded” maximum degree that have the SQGM/SQGC property? * • All results known so far classify graph classes in SQGM(1) or SQGC(1). Are there (interesting) graph classes in $\mbox{\rm SQGM}(c)$ or $\mbox{\rm SQGC}(c)$ for some $1<c<2$ that do not belong to $\mbox{\rm SQGM}(1)$ or $\mbox{\rm SQGC}(1)$, respectively? 
An easy (but trivial) example of such a class is the class ${\cal Q}_{q}$ of the $q$-dimensional grids, i.e., the cartesian products of $q\geq 2$ equal-length paths. It is easy to see that the maximum $k$ for which an $n$-vertex graph $G\in{\cal Q}_{q}$ contains a $(k\times k)$-grid as a minor is $k=\Theta(n^{\frac{1}{2}})$. On the other hand, it can also be proven that ${\sf tw}(G)=\Theta(n^{\frac{q-1}{q}})$. These two facts together imply that ${\cal Q}_{q}\in\mbox{\rm SQGM}(2-\frac{2}{q})$ while ${\cal Q}_{q}\not\in\mbox{\rm SQGM}(2-\frac{2}{q}-\epsilon)$ for every $\epsilon>0$. * • Usually the graph classes in $\mbox{\rm SQGC}(1)$ are characterised by some “flatness” property. For instance, see the results in [31, 34] for $H$-minor-free graphs, where $H$ is an apex graph. 
Can $\color[rgb]{0.1,0.1,0.41}\definecolor[named]{pgfstrokecolor}{rgb}{0.1,0.1,0.41}\mbox{\rm SQGC}(1)$ be useful as an intuitive definition of the “flatness” concept? Does this have some geometric interpretation? ## References * [1] Stefan Arnborg, Jens Lagergren, and Detlef Seese. Easy problems for tree-decomposable graphs. Journal of Algorithms, 12:308–340, 1991. * [2] Julien Baste and Dimitrios M. Thilikos. Contraction-bidimensionality of geometric intersection graphs. In Daniel Lokshtanov and Naomi Nishimura, editors, 12th International Symposium on Parameterized and Exact Computation, IPEC 2017, September 6-8, 2017, Vienna, Austria, volume 89 of LIPIcs, pages 5:1–5:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.IPEC.2017.5. * [3] Hans L. Bodlaender, Marek Cygan, Stefan Kratsch, and Jesper Nederlof. Deterministic single exponential time algorithms for connectivity problems parameterized by treewidth. Information and Computation, 243:86–111, 2015. * [4] Julia Chuzhoy and Zihan Tan. Towards tight(er) bounds for the excluded grid theorem. J. Comb. Theory, Ser. B, 146:219–265, 2021. doi:10.1016/j.jctb.2020.09.010. * [5] Bruno Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation, 85(1):12–75, 1990. * [6] Bruno Courcelle. The expression of graph properties and graph transformations in monadic second-order logic. Handbook of Graph Grammars, pages 313–400, 1997. * [7] Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michal Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single exponential time. In Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS 2011), pages 150–159, 2011. * [8] Erik D. Demaine. Algorithmic Graph Minors and Bidimensionality. 
In Proceedings of the 36th International Conference on Graph-theoretic Concepts in Computer Science (WG 2010), pages 2–2, 2010. * [9] Erik D. Demaine, Fedor V. Fomin, MohammadTaghi Hajiaghayi, and Dimitrios M. Thilikos. Bidimensional parameters and local treewidth. SIAM Journal on Discrete Mathematics, 18(3):501–511, 2005. * [10] Erik D. Demaine, Fedor V. Fomin, MohammadTaghi Hajiaghayi, and Dimitrios M. Thilikos. Subexponential parameterized algorithms on bounded-genus graphs and H-minor-free graphs. Journal of the ACM, 52(6):866–893, 2005. * [11] Erik D. Demaine, Fedor V. Fomin, MohammadTaghi Hajiaghayi, and Dimitrios M. Thilikos. Bidimensional structures: Algorithms, combinatorics and logic. Dagstuhl Reports, 3(3):51–74, 2013. * [12] Erik D. Demaine and MohammadTaghi Hajiaghayi. Fast algorithms for hard graph problems: Bidimensionality, minors, and local treewidth. In Proceedings of the 12th International Symposium on Graph Drawing (GD 2004), volume 3383 of LNCS, pages 517–533, 2004. * [13] Erik D. Demaine and MohammadTaghi Hajiaghayi. Bidimensionality: New connections between FPT algorithms and PTASs. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2005), pages 590–601, 2005. * [14] Erik D. Demaine and MohammadTaghi Hajiaghayi. The bidimensionality theory and its algorithmic applications. The Computer Journal, 51(3):292–302, 2008. * [15] Erik D. Demaine and MohammadTaghi Hajiaghayi. Linearity of grid minors in treewidth with applications through bidimensionality. Combinatorica, 28(1):19–36, 2008. * [16] Erik D. Demaine, MohammadTaghi Hajiaghayi, and Dimitrios M. Thilikos. The bidimensional theory of bounded-genus graphs. SIAM Journal on Discrete Mathematics, 20(2):357–371, 2006. * [17] Reinhard Diestel, Tommy R. Jensen, Konstantin Yu. Gorbunov, and Carsten Thomassen. Highly connected sets and the excluded grid theorem. Journal of Combinatorial Theory. Series B, 75(1):61–73, 1999. * [18] Frederic Dorn, Fedor V. 
Fomin, and Dimitrios M. Thilikos. Fast subexponential algorithm for non-local problems on graphs of bounded genus. In Proceedings of the 10th Scandinavian Workshop on Algorithm Theory (SWAT 2006), volume 4059 of LNCS, pages 172–183, 2006. * [19] Frederic Dorn, Fedor V. Fomin, and Dimitrios M. Thilikos. Catalan structures and dynamic programming in $H$-minor-free graphs. In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2008), pages 631–640, 2008. * [20] Fedor V. Fomin, Erik D. Demaine, and MohammadTaghi Hajiaghayi. Bidimensionality. In Encyclopedia of Algorithms. Springer, 2015. * [21] Fedor V. Fomin, Petr A. Golovach, and Dimitrios M. Thilikos. Contraction obstructions for treewidth. Journal of Combinatorial Theory. Series B, 101(5):302–314, 2011. * [22] Fedor V. Fomin, Daniel Lokshtanov, Venkatesh Raman, and Saket Saurabh. Bidimensionality and EPTAS. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2011), pages 748–759, 2011. * [23] Fedor V. Fomin, Daniel Lokshtanov, and Saket Saurabh. Bidimensionality and geometric graphs. In Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2012), pages 1563–1575, 2012. * [24] Fedor V. Fomin, Daniel Lokshtanov, and Saket Saurabh. Efficient computation of representative sets with applications in parameterized and exact algorithms. In Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2014), pages 142–151, 2014. * [25] Fedor V. Fomin, Daniel Lokshtanov, and Saket Saurabh. Excluded grid minors and efficient polynomial-time approximation schemes. J. ACM, 65(2):10:1–10:44, 2018. doi:10.1145/3154833. * [26] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Dimitrios M. Thilikos. Bidimensionality and kernels. In Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2010), pages 503–510, 2010. * [27] Fedor V. 
Fomin, Daniel Lokshtanov, Saket Saurabh, and Dimitrios M. Thilikos. Bidimensionality and kernels. CoRR, abs/1606.05689, 2016. * [28] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Approximation schemes via width/weight trade-offs on minor-free graphs. In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 2299–2318. SIAM, 2020. doi:10.1137/1.9781611975994.141. * [29] Fedor V. Fomin, Petr Golovach, and Dimitrios M. Thilikos. Contraction bidimensionality: the accurate picture. In 17th Annual European Symposium on Algorithms, volume 5757 of LNCS, pages 706–717. Springer, 2009. * [30] Fanica Gavril. The intersection graphs of subtrees in trees are exactly the chordal graphs. Journal of Combinatorial Theory, Series B, 16(1):47–56, 1974. * [31] Archontia C. Giannopoulou and Dimitrios M. Thilikos. Optimizing the graph minors weak structure theorem. SIAM Journal on Discrete Mathematics, 27(3):1209–1227, 2013. * [32] Alexander Grigoriev, Athanassios Koutsonas, and Dimitrios M. Thilikos. Bidimensionality of Geometric Intersection Graphs. In Proceedings of the 40th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM 2014), volume 8327 of LNCS, pages 293–305, 2014. * [33] Ken-ichi Kawarabayashi and Yusuke Kobayashi. Linear min-max relation between the treewidth of H-minor-free graphs and its largest grid. In Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012), volume 14 of LIPIcs, pages 278–289, 2012. * [34] Ken-ichi Kawarabayashi, Robin Thomas, and Paul Wollan. New proof of the flat wall theorem. Journal of Combinatorial Theory, Series B, 2017. To appear. * [35] Alexander Leaf and Paul D. Seymour. Tree-width and planar minors. Journal of Combinatorial Theory. Series B, 111:38–53, 2015. * [36] Jiří Matoušek. String graphs and separators. 
In Geometry, Structure and Randomness in Combinatorics, pages 61–97. Springer, 2014. * [37] Michal Pilipczuk. Problems parameterized by treewidth tractable in single exponential time: A logical approach. In Proceedings of the 36th International Conference on Mathematical Foundations of Computer Science (MFCS 2011), pages 520–531, 2011. * [38] Neil Robertson and Paul D. Seymour. Graph Minors. II. Algorithmic aspects of tree-width. Journal of Algorithms, 7:309–322, 1986. * [39] Neil Robertson and Paul D. Seymour. Graph minors. V. Excluding a planar graph. Journal of Combinatorial Theory. Series B, 41(1):92–114, 1986. * [40] Neil Robertson, Paul D. Seymour, and Robin Thomas. Quickly excluding a planar graph. Journal of Combinatorial Theory. Series B, 62(2):323–348, 1994. * [41] Juanjo Rué, Ignasi Sau, and Dimitrios M. Thilikos. Dynamic programming for H-minor-free graphs. In Proceedings of Computing and Combinatorics - 18th Annual International Conference, (COCOON 2012), pages 86–97, 2012. * [42] Juanjo Rué, Ignasi Sau, and Dimitrios M. Thilikos. Dynamic programming for graphs on surfaces. ACM Transactions on Algorithms, 10(2):1–8, 2014. * [43] Dimitrios M. Thilikos. Graph minors and parameterized algorithm design. In The Multivariate Algorithmic Revolution and Beyond - Essays Dedicated to Michael R. Fellows on the Occasion of His 60th Birthday, pages 228–256, 2012. * [44] Dimitrios M. Thilikos. Bidimensionality and parameterized algorithms (invited talk). In 10th International Symposium on Parameterized and Exact Computation, IPEC 2015, September 16-18, 2015, Patras, Greece, pages 1–16, 2015.
1 Cornell University, 11email<EMAIL_ADDRESS>2 Weill Cornell Medical College 3 University of Pennsylvania # NeRD: Neural Representation of Distribution for Medical Image Segmentation Hang Zhang 1,2, Rongguang Wang 3, Jinwei Zhang 1,2, Chao Li 1,2, Gufeng Yang 1, Pascal Spincemaille 2, Thanh D. Nguyen 2, Yi Wang 1,2 ###### Abstract We introduce the Neural Representation of Distribution (NeRD) technique, a module for convolutional neural networks (CNNs) that can estimate the feature distribution by optimizing an underlying function mapping image coordinates to the feature distribution. Using NeRD, we propose an end-to-end deep learning model for medical image segmentation that can compensate for the negative impact of the feature distribution shifting issue caused by commonly used network operations such as padding and pooling. An implicit function is used to represent the parameter space of the feature distribution by querying the image coordinate. With NeRD, the impact of issues such as over-segmentation and missed lesions has been reduced, and experimental results on the challenging white matter lesion segmentation and left atrial segmentation tasks verify the effectiveness of the proposed method. The code is available via https://github.com/tinymilky/NeRD. ###### Keywords: Image Segmentation $\cdot$ Neural Representation $\cdot$ Convolutional Neural Networks ## 1 Introduction Deep convolutional neural networks (CNNs) have been the dominant approach across various tasks of computer vision and image processing. The efficient convolutional operation with spatially-invariant filters is one of the main factors behind the success of CNNs. The input feature map to the convolutional layers shares these filters across all spatial positions, thereby reducing network parameters and improving generalization ability. 
Many medical image applications have benefited from this, for example, multiple sclerosis (MS) lesion segmentation [28, 27], white matter (WM) lesion segmentation [16], and quantitative susceptibility mapping [30, 31]. However, we still observe severe failure cases when applying the widely used U-Net [23] to brain lesion segmentation. For example, we can see from Fig. 4 and Fig. 5 that brain lesions close to the brain boundary (meaning that they are close to the image boundary) or close to the ventricles (meaning that they are close to the center of the image) are prone to be misclassified by the U-Net. Analyzing the network architecture, we summarize the causes as follows: 1) the padding operation in convolutional layers can lead to artefacts in feature maps [18, 1] (see Fig. 2); 2) padding can also shift the feature distribution across different spatial positions [10]; 3) the down-sampling operations such as max-pooling and strided convolution ignore the basic sampling theorem, breaking the spatial invariance required for the segmentation task [32, 12]. Figure 1: Visualization of a failure case on brain lesion segmentation. Green boxes indicate regions of failure. (A) A T2-FLAIR image example of a patient with heavy lesion burden. (B) Lesions labeled by a human expert (marked in red). (C) Segmentation result of U-Net. (D) Segmentation result of our proposed CIF-based network. Various methods, such as cube padding [5], circular convolution [24], explicit boundary-based filters [9], and max-blurring-pooling [32], have been investigated to tackle the above issues. All of these methods can lessen the negative impact to some degree, but most of them are ad-hoc and none can handle all three issues collectively. Therefore, we argue that it is imperative to develop a unified framework that solves these problems and facilitates medical image segmentation. 
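The boundary effect of padding is easy to reproduce in isolation. The following toy sketch (our illustration, not code from the paper; plain Python, no deep learning framework) applies a zero-padded "same" convolution to a perfectly uniform 1D signal and shows that the responses at the two ends differ from the interior — exactly the kind of position-dependent feature statistics discussed above.

```python
# Toy demonstration: zero padding alone changes the feature statistics
# near the boundary, even for a perfectly uniform input.

def conv1d_same(signal, kernel):
    """'Same' 1D convolution with zero padding, as in typical CNN layers."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

out = conv1d_same([1.0] * 8, [0.25, 0.5, 0.25])  # uniform input, averaging filter
# interior responses are 1.0, but both boundary responses drop to 0.75:
# the zero padding has shifted the "feature distribution" at the edges
```

Stacking many such layers compounds the effect, which is why deep networks shift feature statistics well into the image interior.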
The key problem is the feature distribution shift caused by commonly used network operations such as padding and pooling. A deep network usually stacks many convolutional layers; thus, as the network depth increases, the consecutive padding and pooling operations can gradually shift the feature distribution from the boundary to the center of the image. Suppose $\Omega\subset\mathbb{N}^{2}$ is the spatial domain of an input image, $\mathbf{v}=(i,j)\in\Omega$ is the spatial position vector, the final feature map before the pixel-wise classifier is $\mathbf{X}\in\mathbb{R}^{H\times W\times C}$, $\mathbf{x}_{\mathbf{v}}\in\mathbb{R}^{C}$ is the feature vector at position $\mathbf{v}$, and $\phi(\theta)$ is the feature vector distribution; then we can define the following mapping: $\displaystyle\theta=f(\mathbf{v}),$ (1) $\displaystyle\mathbf{x}_{\mathbf{v}}\sim\phi(\theta),$ (2) where $\theta$ is the parameter of the distribution $\phi$. In essence, these equations state that the distribution of a feature is determined by its spatial location. If the convolutional neural network were strictly spatially invariant, $\theta$ would be identical for features at all locations. However, as mentioned above, certain operations in the network alter the distribution, so $\theta$ varies with location. Figure 2: Visual example of the feature shifting. The image in the left lower panel is cropped from the image in the left upper panel. The feature maps on the right of both images are obtained with a U-Net. Orange boxes indicate regions of feature shifting. Current segmentation networks assume that $\theta$ is identical for all feature vectors, which causes problems for brain lesion segmentation. Thus, in this paper, we propose the Neural Representation of Distribution (NeRD) technique to approximate the mapping function $f$ in Eq. (1) and resolve this issue. 
The idea of NeRD is inspired by the recently developed neural implicit representation (NIR) technique [21, 19], where the NIR parameterizes a signal as a continuous function that maps the domain of the signal, such as its coordinates, to values at those coordinates. In our study, we map the coordinates to the feature distributions. In summary: * • We propose a Neural Representation of Distribution (NeRD) technique to restore spatial invariance by approximating the feature distribution based on pixel coordinates. * • We validate the proposed method on two challenging medical image segmentation tasks, where both quantitative and qualitative results demonstrate the effectiveness of our method. ### 1.1 Related Works #### 1.1.1 Brain Lesion Segmentation MS lesion segmentation and WM lesion segmentation are among the most important and difficult tasks in brain lesion analysis, as these lesions vary greatly in shape, size and location. Although numerous automated approaches have been proposed, a clinically reliable technique is not yet available. The 2.5D stacked DenseNet [29] is proposed to capture broader brain structure information. The folded attention network [28] applies a light-weight self-attention method for richer contextual information. The geometric loss [27] is developed to regularize CNN training, which helps segment small lesions. The boundary loss [13] uses distance transformation mapping to tackle the data imbalance problem. All these methods have achieved reasonably good results, but none of them takes feature distribution shift into consideration. #### 1.1.2 Neural Implicit Representation NIR is a recently developed technique, frequently used for representation of geometry and appearance [21, 19] in graphics. DeepSDF [21] learns a set of continuous signed distance functions for shape representation. Later, NeRF [19] provided a more flexible way of synthesizing novel views of complex scenes. 
Other vision applications, such as image super-resolution [4] and image synthesis [2], also benefit from the NIR technique. Although NIR is blooming in many areas, we have not seen any method using NIR for feature distribution mapping. #### 1.1.3 Meta Learning We use the proposed NeRD technique to predict network weights based on image coordinates, which is one of the meta-learning strategies [17]: the weights of certain modules of the network are predicted by another network module instead of being learned directly. [20] proposes a dynamic parameter layer for image question answering. [7] uses box weights to predict mask weights for image segmentation. [8] achieves super-resolution of arbitrary scale factor by predicting the weights of the up-sampling module based on the scale factor. In our work, we use image coordinates coupled with training data to dynamically estimate the feature distribution. ## 2 Methodology Recently, NIR techniques have been investigated to model continuous 3D shapes as level sets, which can efficiently map coordinates to a signed distance function [11, 21] for shape reconstruction. In this work, our goal is to map the image coordinates to the feature distribution to resolve the spatial invariance issue caused by basic operations such as padding and pooling. ### 2.1 Neural Representation of Distribution The practical implementation of a segmentation network consists of two parts: an encoder-decoder structure for feature extraction and a multi-layer perceptron (MLP) for pixel-wise classification. 
Let $\mathbf{X}\in\mathbb{R}^{H\times W\times C}$ ($H$ and $W$ are the spatial size of the image, and $C$ is the number of channels) be the output from the encoder-decoder structure, and let $\mathbf{v}=(d_{t},d_{r},d_{b},d_{l})$ be the position vector for a pixel ($d_{t},d_{r},d_{b}$, and $d_{l}$ denote the distances of the pixel to the top, right, bottom and left of the image). We can use a Gaussian distribution to approximate the feature distribution at a given position $\mathbf{v}$ as follows: $\mathbf{x}_{\mathbf{v}}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma}),$ (3) where $\mathbf{\mu}$ is the mean vector at location $\mathbf{v}$, and $\mathbf{\Sigma}$ is the corresponding co-variance matrix. Since the spatial invariance of CNNs is broken by certain operations in the network, the MLP assumption that the data are drawn from the same distribution no longer holds. A simple MLP classifier is therefore prone to fail in classifying pixels close to the boundary or the center of the image, as there exists a non-trivial discrepancy between their feature distributions (as verified in our experiments; see Fig. 4 and Fig. 5). In this work, we use the proposed NeRD technique to resolve this issue. Figure 3: Visual illustration of the proposed method. We represent a continuous parameter space of the distribution function as a 4D vector-valued function, whose input is a 4D position vector $\mathbf{v}=(d_{t},d_{r},d_{b},d_{l})$, and whose output is the mean vector $\mathbf{\mu}$ and the co-variance matrix $\mathbf{\Sigma}$ of the distribution at this position. In practice, we use another MLP $f$ to approximate this continuous 4D representation of the distribution, and optimize the weight $\mathbf{\Theta}$ of $f$ along with the other network weights to map each input position vector to the corresponding mean vector and co-variance matrix. 
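For concreteness, the 4D position vectors $\mathbf{v}=(d_{t},d_{r},d_{b},d_{l})$ can be generated as in the following sketch (our illustration; whether the distances are measured in raw pixels or normalised units is not specified in the text, so raw pixel counts are assumed here):

```python
def offset_grid(H, W):
    """Per-pixel 4D position vectors v = (d_t, d_r, d_b, d_l): distances of
    each pixel to the top, right, bottom, and left borders of the image."""
    return [[(i, W - 1 - j, H - 1 - i, j) for j in range(W)]
            for i in range(H)]

grid = offset_grid(3, 4)
# e.g. the top-left pixel is 0 rows from the top, 3 columns from the right,
# 2 rows from the bottom and 0 columns from the left: grid[0][0] == (0, 3, 2, 0)
```

In a framework implementation this grid would be precomputed once per image size and fed as a tensor to the distribution-estimating MLP.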
It has been shown that CNNs can linearize [3, 25] the manifold of images into a Euclidean subspace of deep features, which indicates that the elements of the output feature vector are independent of each other. Thus, we further simplify the model and reduce the co-variance matrix to a vector $\mathbf{\sigma}$. With the estimates of $\mathbf{\mu}$ and $\mathbf{\sigma}$ from the function $f$, we can normalize and classify the output feature vector at position $\mathbf{v}$ as follows: $s_{\mathbf{v}}=\mathbf{w}^{\top}\dfrac{\mathbf{x}_{\mathbf{v}}-\mathbf{\mu}}{\mathbf{\sigma}},$ (4) where $\mathbf{\mu}$ and $\mathbf{\sigma}$ are obtained with the MLP $f_{\mathbf{\Theta}}:(\mathbf{v})\rightarrow(\mathbf{\mu},\mathbf{\sigma})$, $s_{\mathbf{v}}$ is the final output of the network, and $\mathbf{w}$ is the weight of the MLP classifier (note that, as we use the z-score [33] to normalize the input image, there is no bias term in the classifier). For numerical stability, in practice we estimate $1/\mathbf{\sigma}$ and $-\mathbf{\mu}/\mathbf{\sigma}$ instead, and the final equation for normalization and classification becomes: $s_{\mathbf{v}}=\mathbf{w}^{\top}\left(\mathbf{x}_{\mathbf{v}}\dfrac{1}{\mathbf{\sigma}}+\left(-\dfrac{\mathbf{\mu}}{\mathbf{\sigma}}\right)\right).$ (5) ### 2.2 Pixel-aligned Classifier and The Overall Framework The overall framework of our proposed method is shown in Fig. 3. The input image goes through a U-Net to obtain the final feature map for pixel-wise classification; in the meantime, the offset generator provides the pixel-wise position vectors, followed by an MLP acting as the distribution calibrator to generate the estimates of $1/\mathbf{\sigma}$ and $-\mathbf{\mu}/\mathbf{\sigma}$. With the distribution estimate at every pixel position, we apply Eq. (5) to normalize and classify the feature vector of each position. The final segmentation can be obtained with another MLP and a Sigmoid function. 
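A minimal per-pixel sketch of Eqs. (4) and (5) may help (our illustration; the function names are ours, and the per-channel loops stand in for the batched tensor operations used in practice). The two forms compute the same score; the second uses the quantities the MLP actually predicts.

```python
def nerdm_score(x, w, mu, sigma):
    """Eq. (4): s_v = w^T ((x_v - mu) / sigma), looping over channels."""
    return sum(wc * (xc - mc) / sc for wc, xc, mc, sc in zip(w, x, mu, sigma))

def nerdm_score_stable(x, w, inv_sigma, neg_mu_over_sigma):
    """Eq. (5): the same score written in terms of the predicted quantities
    1/sigma and -mu/sigma, for numerical stability."""
    return sum(wc * (xc * isc + bc)
               for wc, xc, isc, bc in zip(w, x, inv_sigma, neg_mu_over_sigma))

x, w = [1.0, 2.0], [0.5, -1.0]
mu, sigma = [0.2, 0.4], [2.0, 0.5]
inv_sigma = [1.0 / s for s in sigma]
neg_mu_over_sigma = [-m / s for m, s in zip(mu, sigma)]
# both forms agree: the stable form just rearranges the algebra
assert abs(nerdm_score(x, w, mu, sigma)
           - nerdm_score_stable(x, w, inv_sigma, neg_mu_over_sigma)) < 1e-12
```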
We refer to the framework using the proposed feature estimation technique as NeRD. NeRD is not a single concrete method but rather an idea that can improve the performance of a CNN by estimating the pixel-wise feature distribution; many variants of the NeRD idea can therefore be expected. Here, we describe one that we deem interesting. Rather than estimating the parameters of the feature distribution, we estimate a pixel-aligned classifier. That is, we estimate a unique linear classifier for every single pixel position, which can be described as $s_{\mathbf{v}}=\mathbf{x}_{\mathbf{v}}^{\top}\mathbf{w}_{\mathbf{v}}$, where $\mathbf{w}_{\mathbf{v}}$ is the weight of a linear classifier estimated by an MLP $f_{\mathbf{\Theta}}:(\mathbf{v})\rightarrow(\mathbf{w}_{\mathbf{v}})$. We call this NeRD classifier NeRDc and the former mean-variance estimator NeRDm. Both NeRDc and NeRDm can be efficiently implemented using tensor operations on modern GPU architectures. ## 3 Experimental Results ### 3.1 Datasets White matter hyperintensities111https://wmh.isi.uu.nl (WMH) [15] is a publicly available dataset which contains 60 3D scans with 2 modalities (T1 and FLAIR weighted) acquired with multiple vendors and scanners in three different institutes. The spatial resolution ranges from $0.95\times 0.95\times 3\ mm^{3}$ to $1.21\times 1\times 3\ mm^{3}$ per volume. Manual annotations of WMH are provided for the 60 scans. In the experiments, we split this dataset into training, validation, and testing sets containing 42, 6 and 12 samples, respectively. We also use the left atrial222http://atriaseg2018.cardiacatlas.org (LA) segmentation challenge [26] dataset for evaluation. A total of 154 independently acquired 3D LGE-MRIs from 60 deidentified patients with atrial fibrillation were used in this challenge. The clinical images were acquired with either a 1.5T Avanto or 3.0T Verio whole-body scanner. 
The spatial resolution of one 3D LGE-MRI scan was $0.625\times 0.625\times 0.625\ mm^{3}$ with spatial dimensions of either $576\times 576\times 88$ or $640\times 640\times 88$ pixels. From the whole set, 108 scans were used for training, 15 for validation, and the remaining 31 for testing. ### 3.2 Implementation Details #### 3.2.1 Data pre-processing. We slice the original images into a stack of independent 2D images. Each scan is center-cropped to size $160\times 224$ pixels for WMH and $290\times 240$ pixels for LA, and normalized to real values between 0 and 1. Since two modalities (T1 and FLAIR) are available for WMH, both are concatenated along the channel dimension before being used as input to the network. #### 3.2.2 Network and training. We employ U-Net [23] as the backbone architecture in our experiments. To train our model, we use the Adam [14] optimizer, with an initial learning rate of $1\text{e}-3$ (weight decay of $1\text{e}-6$) and a batch size of 14. The learning rate is halved at 50%, 70% and 90% of the total number of training epochs (90) for optimal convergence. We use PyTorch [22] for implementation, and run the experiments on a machine equipped with an NVIDIA RTX 2080 Ti GPU with 11 GB of memory. #### 3.2.3 Evaluation metrics. To quantify the performance of WMH segmentation, we use the Dice similarity coefficient [6], lesion-wise Dice (LDice), lesion-wise true positive rate (LTPR), and lesion-wise positive predictive value (LPPV) as metrics. LDice, LTPR and LPPV are defined as $\text{LDice}=\frac{\text{TPR}}{\text{GL}+\text{PL}}$, $\text{LTPR}=\frac{\text{TPR}}{\text{GL}}$, and $\text{LPPV}=\frac{\text{TPR}}{\text{PL}}$, where TPR denotes the number of lesions in the ground-truth segmentation that overlap with a lesion in the produced segmentation, and GL and PL are the numbers of lesions in the ground-truth and produced segmentations, respectively. Dice quantifies the voxel-wise overlap between the output and the ground-truth. 
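The lesion-wise metrics above can be sketched as follows (our illustration; it follows the formulas as stated, with TPR counting ground-truth lesions that overlap any predicted pixel, and 4-connectivity for grouping pixels into lesions is an assumption on our part):

```python
def label_lesions(mask):
    """Group foreground pixels of a binary 2D mask into lesions
    (4-connected components); returns a list of pixel-coordinate sets."""
    H, W = len(mask), len(mask[0])
    seen, lesions = set(), []
    for i in range(H):
        for j in range(W):
            if mask[i][j] and (i, j) not in seen:
                stack, comp = [(i, j)], set()
                seen.add((i, j))
                while stack:
                    r, c = stack.pop()
                    comp.add((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < H and 0 <= nc < W and mask[nr][nc]
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                lesions.append(comp)
    return lesions

def lesion_metrics(gt, pred):
    """LDice = TPR/(GL+PL), LTPR = TPR/GL, LPPV = TPR/PL, with TPR the number
    of ground-truth lesions overlapping the predicted segmentation."""
    g, p = label_lesions(gt), label_lesions(pred)
    pred_pixels = set().union(*p) if p else set()
    tpr = sum(1 for lesion in g if lesion & pred_pixels)
    GL, PL = len(g), len(p)
    return (tpr / (GL + PL) if GL + PL else 0.0,
            tpr / GL if GL else 0.0,
            tpr / PL if PL else 0.0)
```

On a toy pair of masks with two ground-truth lesions, one of which is hit by a single predicted lesion, this returns (1/3, 0.5, 1.0).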
Complementarily, LDice, LTPR and LPPV are more sensitive measures of lesion-wise detection accuracy. For LA segmentation, two region-based metrics, Dice and Jaccard, are used to measure the region mismatch, and three boundary-based metrics, average surface distance (ASD), Hausdorff distance (HD), and 95% Hausdorff distance (95HD), are used to evaluate errors in the boundary. ### 3.3 WMH Segmentation Table 1: Quantitative comparison with average (standard deviation over three independent runs) on white matter hyperintensities (WMH) segmentation.
Model | Filter | Dice (%) $\uparrow$ | LDice (%) $\uparrow$ | LFPR (%) $\downarrow$ | LTPR (%) $\uparrow$
---|---|---|---|---|---
U-Net | 256 | 76.7 (1.96) | 66.9 (5.37) | 30.8 (4.95) | 73.9 (4.15)
NeRDc | 256 | 78.4 (0.41) | 69.7 (1.84) | 27.8 (3.52) | 72.2 (0.94)
NeRDm | 256 | 77.4 (0.71) | 66.9 (2.82) | 33.5 (3.71) | 73.9 (0.86)
U-Net | 512 | 78.4 (0.82) | 69.8 (0.55) | 29.2 (3.14) | 76.2 (3.07)
NeRDc | 512 | 79.2 (0.05) | 72.0 (1.07) | 27.1 (0.44) | 77.7 (1.83)
NeRDm | 512 | 78.6 (0.37) | 72.3 (0.32) | 27.7 (1.11) | 77.6 (2.43)
Figure 4: WMH examples. Yellow boxes indicate regions of missing and green boxes indicate regions of over-segmenting. (A) FLAIR images; (B) ground-truth (marked in red); (C) U-Net mask; (D) NeRDc mask; (E) NeRDm mask. #### 3.3.1 Quantitative results. We compare our proposed NeRDc and NeRDm methods with the baseline U-Net [23] on lesion-specific metrics, as shown in Table 1. We use two variants of the U-Net backbone containing different sets of kernel channel numbers, representing low- and high-capacity networks. For the low-capacity network (denoted as 256 under the "Filter" column in the table), we use [16, 32, 64, 128, 256] as the numbers of convolution filters in the different layers, and we double the channel numbers for the high-capacity network (denoted as 512). 
In the low-capacity group, we observe that NeRDc outperforms both U-Net and NeRDm on most metrics; in particular, there is a 2.8% increase in LDice and a 3% reduction in LFPR compared to U-Net. NeRDc also shows substantial improvements in Dice (0.8%), LFPR (2.1%), and LTPR (1.5%) in the high-capacity group compared to U-Net. Importantly, our proposed NeRDc with the low-capacity backbone achieves performance similar to that of the U-Net with the high-capacity backbone. For example, the Dice score of NeRDc in the low-capacity group is the same as that of U-Net in the high-capacity group. #### 3.3.2 Qualitative results. As shown in Fig. 4, we present WMH segmentation results for a missing case and an over-segmenting case produced by the vanilla U-Net. We can see from the first row of Fig. 4 that U-Net missed a lesion close to the ventricle/center of the image, while our NeRDm accurately located this lesion, demonstrating the benefit of our feature distribution estimation technique. Although the end of the cortex (yellow box position) exhibits similarly high intensity values to lesions and is close to the boundary, both our NeRDc and NeRDm made no mistakes in this region, while U-Net over-segmented several pixels. ### 3.4 LA Segmentation Table 2: Quantitative comparison with average (standard deviation over three independent runs) on left atrial (LA) segmentation. 
Model | Filter | Dice (%) $\uparrow$ | Jaccard (%) $\uparrow$ | HD $\downarrow$ | 95HD $\downarrow$ | ASD $\downarrow$
---|---|---|---|---|---|---
U-Net | 256 | 90.3 (0.51) | 82.5 (0.81) | 34.1 (3.09) | 6.5 (0.52) | 2.1 (0.13)
NeRDc | 256 | 90.6 (0.26) | 82.9 (0.39) | 32.1 (3.55) | 6.4 (0.55) | 2.1 (0.12)
NeRDm | 256 | 90.5 (0.16) | 82.8 (0.25) | 29.6 (2.62) | 6.3 (0.03) | 2.1 (0.12)
U-Net | 512 | 90.1 (0.14) | 82.2 (0.24) | 34.3 (0.36) | 6.9 (0.10) | 2.3 (0.08)
NeRDc | 512 | 90.7 (0.14) | 83.1 (0.23) | 30.6 (0.67) | 6.3 (0.21) | 2.1 (0.07)
NeRDm | 512 | 90.3 (0.21) | 82.5 (0.37) | 35.2 (2.49) | 6.5 (0.31) | 2.2 (0.06)
Figure 5: LA examples. Yellow boxes indicate regions of missing and green boxes indicate regions of over-segmenting. (A) LGE-MR images; (B) ground-truth (marked in red); (C) U-Net mask; (D) NeRDc mask; (E) NeRDm mask. #### 3.4.1 Quantitative results. We report the LA segmentation results of the baseline U-Net [23], NeRDc, and NeRDm in Table 2. We evaluate the performance of each model using boundary-based metrics such as the Hausdorff distance (HD) and average surface distance (ASD). As in WMH segmentation, we employ CNN backbones with both low and high capacity for a detailed performance investigation. In the low-capacity group, both NeRDc and NeRDm show improved performance on all metrics compared to U-Net; in particular, NeRDm achieves a 4.5 reduction in HD with reduced variance. On the other hand, NeRDc outperforms both U-Net and NeRDm on all metrics in the high-capacity group, with a significant margin in HD. Notably, NeRDc in the low-capacity group shows substantial improvement over U-Net in the high-capacity group on all metrics. This phenomenon was observed in both WMH and LA segmentation, which indicates that even with fewer network parameters and lower computational resource requirements, our proposed NeRD consistently achieves comparable or better performance than the counterpart without the NeRD module. 
#### 3.4.2 Qualitative results. Fig. 5 shows two examples of LA segmentation results. As can be seen in the first row of the figure, similar to WMH segmentation, the U-Net equipped with the NeRD module did not miss the pixels close to the boundary, while the plain U-Net did. Similarly, our proposed methods did not over-segment the areas close to the center of the image, while the U-Net did. ## 4 Conclusions We presented a novel neural representation learning technique (NeRD) to estimate the pixel-wise feature distribution. We instantiated two variants of NeRD: NeRDm, which estimates the mean and variance of the feature distribution, and NeRDc, which estimates a pixel-wise linear classifier. Both variants showed performance improvements over the counterpart without NeRD modules on two challenging medical image segmentation tasks: WMH segmentation, which contains multiple lesions spanning the whole brain, and LA segmentation, which contains a single large object. We believe that the proposed NeRD technique can contribute to more medical image applications. ## References * [1] Alsallakh, B., Kokhlikyan, N., Miglani, V., Yuan, J., Reblitz-Richardson, O.: Mind the pad–cnns can develop blind spots. arXiv preprint arXiv:2010.02178 (2020) * [2] Anokhin, I., Demochkin, K., Khakhulin, T., Sterkin, G., Lempitsky, V., Korzhenkov, D.: Image generators with conditionally-independent pixel synthesis. arXiv preprint arXiv:2011.13775 (2020) * [3] Bengio, Y., Mesnil, G., Dauphin, Y., Rifai, S.: Better mixing via deep representations. In: International conference on machine learning. pp. 552–560. PMLR (2013) * [4] Chen, Y., Liu, S., Wang, X.: Learning continuous image representation with local implicit image function. arXiv preprint arXiv:2012.09161 (2020) * [5] Cheng, H.T., Chao, C.H., Dong, J.D., Wen, H.K., Liu, T.L., Sun, M.: Cube padding for weakly-supervised saliency prediction in 360 videos. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 
1420–1429 (2018) * [6] Dice, L.R.: Measures of the amount of ecologic association between species. Ecology 26(3), 297–302 (1945) * [7] Hu, R., Dollár, P., He, K., Darrell, T., Girshick, R.: Learning to segment every thing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4233–4241 (2018) * [8] Hu, X., Mu, H., Zhang, X., Wang, Z., Tan, T., Sun, J.: Meta-sr: A magnification-arbitrary network for super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1575–1584 (2019) * [9] Innamorati, C., Ritschel, T., Weyrich, T., Mitra, N.J.: Learning on the edge: Investigating boundary filters in cnns. International Journal of Computer Vision pp. 1–10 (2019) * [10] Islam, M.A., Jia, S., Bruce, N.D.: How much position information do convolutional neural networks encode? In: International Conference on Learning Representations (2019) * [11] Jiang, C., Sud, A., Makadia, A., Huang, J., Nießner, M., Funkhouser, T., et al.: Local implicit grid representations for 3d scenes. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6001–6010 (2020) * [12] Kayhan, O.S., Gemert, J.C.v.: On translation invariance in cnns: Convolutional layers can exploit absolute spatial location. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14274–14285 (2020) * [13] Kervadec, H., Bouchtiba, J., Desrosiers, C., Granger, E., Dolz, J., Ayed, I.B.: Boundary loss for highly unbalanced segmentation. In: International conference on medical imaging with deep learning. pp. 285–296. PMLR (2019) * [14] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980 (2014) * [15] Kuijf, H.J., Biesbroek, J.M., De Bresser, J., Heinen, R., Andermatt, S., Bento, M., Berseth, M., Belyaev, M., Cardoso, M.J., Casamitjana, A., et al.: Standardized assessment of automatic segmentation of white matter hyperintensities and results of the wmh segmentation challenge. IEEE transactions on medical imaging 38(11), 2556–2568 (2019) * [16] La Rosa, F., Abdulkadir, A., Fartaria, M.J., Rahmanzadeh, R., Lu, P.J., Galbusera, R., Barakovic, M., Thiran, J.P., Granziera, C., Cuadra, M.B.: Multiple sclerosis cortical and wm lesion segmentation at 3t mri: a deep learning method based on flair and mp2rage. NeuroImage: Clinical 27, 102335 (2020) * [17] Lemke, C., Budka, M., Gabrys, B.: Metalearning: a survey of trends and technologies. Artificial intelligence review 44(1), 117–130 (2015) * [18] Liu, R., Jia, J.: Reducing boundary artifacts in image deconvolution. In: 2008 15th IEEE International Conference on Image Processing. pp. 505–508. IEEE (2008) * [19] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: Nerf: Representing scenes as neural radiance fields for view synthesis. arXiv preprint arXiv:2003.08934 (2020) * [20] Noh, H., Hongsuck Seo, P., Han, B.: Image question answering using convolutional neural network with dynamic parameter prediction. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 30–38 (2016) * [21] Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: Deepsdf: Learning continuous signed distance functions for shape representation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 165–174 (2019) * [22] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems. pp. 
8024–8035 (2019) * [23] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015) * [24] Schubert, S., Neubert, P., Pöschmann, J., Pretzel, P.: Circular convolutional neural networks for panoramic images and laser data. In: 2019 IEEE Intelligent Vehicles Symposium (IV). pp. 653–660. IEEE (2019) * [25] Upchurch, P., Gardner, J., Pleiss, G., Pless, R., Snavely, N., Bala, K., Weinberger, K.: Deep feature interpolation for image content changes. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7064–7073 (2017) * [26] Xiong, Z., Xia, Q., Hu, Z., Huang, N., Bian, C., Zheng, Y., Vesal, S., Ravikumar, N., Maier, A., Yang, X., et al.: A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging. Medical Image Analysis 67, 101832 (2021) * [27] Zhang, H., Zhang, J., Wang, R., Zhang, Q., Gauthier, S.A., Spincemaille, P., Nguyen, T.D., Wang, Y.: Geometric loss for deep multiple sclerosis lesion segmentation. arXiv preprint arXiv:2009.13755 (2020) * [28] Zhang, H., Zhang, J., Wang, R., Zhang, Q., Spincemaille, P., Nguyen, T.D., Wang, Y.: Efficient folded attention for 3d medical image reconstruction and segmentation. arXiv preprint arXiv:2009.05576 (2020) * [29] Zhang, H., Valcarcel, A.M., Bakshi, R., Chu, R., Bagnato, F., Shinohara, R.T., Hett, K., Oguz, I.: Multiple sclerosis lesion segmentation with tiramisu and 2.5 d stacked slices. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 338–346. Springer (2019) * [30] Zhang, J., Liu, Z., Zhang, S., Zhang, H., Spincemaille, P., Nguyen, T.D., Sabuncu, M.R., Wang, Y.: Fidelity imposed network edit (fine) for solving ill-posed image reconstruction. 
NeuroImage 211, 116579 (2020) * [31] Zhang, J., Zhang, H., Sabuncu, M., Spincemaille, P., Nguyen, T., Wang, Y.: Bayesian learning of probabilistic dipole inversion for quantitative susceptibility mapping. In: Medical Imaging with Deep Learning. pp. 892–902. PMLR (2020) * [32] Zhang, R.: Making convolutional networks shift-invariant again. In: International Conference on Machine Learning. pp. 7324–7334 (2019) * [33] Zill, D.G.: Advanced engineering mathematics. Jones & Bartlett Publishers (2020)
Non-Unitary Quantum Many-Body Dynamics using the Faber Polynomial Method Rafael D. Soares1,2*, M. Schirò2 1 Université Paris-Saclay, CNRS, LPTMS, 91405 Orsay, France 2 JEIP, UAR 3573 CNRS, Collège de France, PSL Research University, 11 Place Marcelin Berthelot, 75321 Paris Cedex 05, France <EMAIL_ADDRESS> ## Abstract Efficient numerical methods are still lacking to probe the unconventional dynamics of quantum many-body systems under non-unitary evolution. In this work, we use Faber polynomials to numerically simulate both the dynamics of non-Hermitian systems and the quantum jumps unravelling of the Lindblad dynamics. We apply the method to the non-interacting and interacting Hatano-Nelson models evolving from two different setups: i) a Néel state, and ii) a domain wall. In the first case, we study how interactions preserve the initial magnetic order against the skin effect. In the second example, we present numerical evidence of the existence of an effective hydrodynamic description for the domain-wall melting problem in the non-interacting limit. Additionally, we investigate both the conditional and unconditional dynamics of the quantum jump unravelling in two quantum spin chains, which exhibit either the non-Hermitian or the Liouvillian skin effect. This numerical method inherently generalises the well-established method based on Chebyshev polynomials to accommodate non-Hermitian scenarios. ###### Contents 1. 1 Introduction 2. 2 Non-Unitary Dynamics of Open Quantum Systems 3. 3 Faber Polynomial Method 1. 3.1 Warm-Up: Unitary Evolution 2. 3.2 Non-Unitary Evolution 1. 3.2.1 Convergence 4. 4 Application to Non-Hermitian Gaussian Systems 1. 4.1 Benchmark: Dynamics of the Hatano-Nelson model 2. 4.2 Domain Wall Melting for Hatano-Nelson 5. 5 Application to Non-Hermitian Many-Body Systems 1. 5.1 Magnetisation Dynamics in the Interacting Hatano-Nelson Model 2. 5.2 Effect of interaction in the Domain-Wall melting for Hatano-Nelson 6. 
6 Application to Quantum Jumps Unravelling 1. 6.1 Magnetisation and Entanglement Dynamics in Monitored Spin Chains 7. 7 Conclusions 8. A Further Details on Faber Polynomials 9. B General Features on the Hatano-Nelson Model 10. C Comparison with MPS based Methods ## 1 Introduction In recent years, the scientific community has shown a growing interest in elucidating the distinctive characteristics of many-body quantum systems subjected to effective non-unitary dynamics. In quantum mechanics, non-unitary dynamics typically arises when a closed quantum system interacts with an external environment, leading to dissipation, decoherence, or wave function collapse that disrupts the usual unitary Schrödinger evolution. While a fully microscopic description of both environment and system is a daunting task, in many cases of experimental and theoretical interest one can assume the dynamics of the environment to be sufficiently fast, allowing for the derivation of a local-in-time, Markovian, non-unitary evolution for the system of interest [1, 2]. Different types of Markovian open quantum system dynamics have been considered in the literature. A first relevant example is provided by systems that evolve according to the Lindblad master equation [3, 4, 5]. Here, the evolution of the system density matrix is generated by a non-unitary (super)operator, the Lindbladian. Many-body versions of Lindblad master equations have been studied in a number of contexts and with different objectives, from dissipative phase transitions [6] to quantum transport [7]. Another class of non-unitary dynamics arises for continuously monitored quantum systems [2, 8, 9], whose stochastic evolution - a so-called quantum trajectory [10] - is described by a non-unitary unravelling of the Lindblad master equation [11]. In particular, under the quantum jump unravelling, a deterministic non-unitary evolution is driven by a non-Hermitian Hamiltonian between stochastic measurements [12, 13, 14]. 
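The quantum-jump unravelling just described can be illustrated with a toy two-level example (ours, not one of the models studied in this paper): between clicks the unnormalised state evolves deterministically under the non-Hermitian effective Hamiltonian, and a jump is applied when the squared norm drops below a random threshold.

```python
import math

def jump_trajectory(gamma, dt, t_max, rng):
    """Quantum-jump unravelling for a single decaying two-level atom with
    zero Hamiltonian. Between clicks the unnormalised state evolves under
    H_eff = -(i/2) * gamma * |e><e|, so the excited amplitude decays as
    exp(-gamma t / 2); when the squared norm drops below a random threshold
    the jump operator projects the atom onto the ground state."""
    ce, cg = 1.0, 0.0            # amplitudes of |e> and |g>; start excited
    t, jump_time = 0.0, None
    r = rng.random()             # threshold for the next jump
    while t < t_max:
        ce *= math.exp(-0.5 * gamma * dt)   # deterministic no-click evolution
        t += dt
        if jump_time is None and ce * ce + cg * cg < r:
            ce, cg = 0.0, 1.0               # quantum jump: photon emitted
            jump_time = t
            r = rng.random()
    return jump_time
```

With threshold $r$, the jump fires when $e^{-\gamma t}=r$, i.e. at $t=-\ln(r)/\gamma$; averaging over uniformly distributed $r$ recovers the Lindblad decay law.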
Recent works have raised interest in possible phase transitions in the entanglement structure of these quantum many-body trajectories [15, 16, 17, 18, 19, 20, 21]. Finally, as a last example of non-unitary Markovian dynamics, we can consider the one generated by a purely non-Hermitian Hamiltonian. Non-Hermitian physics emerges intrinsically in various domains beyond quantum physics, encompassing photonics [22], hydrodynamics [23] and active matter [24, 25]. In the context of open quantum systems, a non-Hermitian evolution can be obtained by post-selecting the quantum trajectories corresponding to no quantum jumps, i.e. the no-click limit [26]. Non-Hermitian quantum systems show anomalous static and dynamical properties, which are attracting widespread interest. Among these, we mention the unconventional propagation of quantum correlations [27, 28, 29], the distinct entanglement transitions generated by time evolution [30, 31, 32, 33, 34] and their extraordinary sensitivity to boundary conditions, also known as the skin effect [35, 36, 37, 38, 39, 40]. The latter manifests itself through the unusual localisation of all single-particle eigenstates at the system’s edges under open boundary conditions [41, 42, 43, 44, 45]. Furthermore, it has unique signatures in the dynamics, leading to non-reciprocal transport [46]. Despite these theoretical developments, the toolbox of computational many-body physics is far more limited for non-unitary dynamics than for its Hermitian counterpart. In this work, we introduce a new method to tackle the quantum dynamics of non-unitary systems. Our approach is based on expanding the evolution operator in Faber polynomials [47, 48].
This numerical approach is a natural generalisation of its Hermitian counterpart, the Chebyshev polynomial method for time evolution [49], which has been the primary choice for efficiently simulating nonequilibrium transport phenomena in both interacting [50] and non-interacting systems [51, 52, 53]. Although time evolution integrators based on Faber polynomials have already been proposed in some works [54, 55, 56], for example in the simulation of electromagnetic wave propagation through passive media [57, 58], their full potential remains largely unexplored. In particular, in the simulation of non-Hermitian quantum systems they could have a significant impact, owing to their numerical stability and adjustable accuracy compared to other methods such as integrators based on Runge-Kutta [59] or Trotterization techniques [60]. In this manuscript, we test, benchmark and apply the Faber polynomial method to investigate the time evolution of particle density, charge current, and entanglement in several setups involving the interacting and non-interacting Hatano-Nelson model [61, 62], a paradigmatic non-Hermitian model showing non-reciprocity [63] and the skin effect at the single-particle level, as well as non-Hermitian quantum spin chains. In addition, we merge the Faber polynomial method with quantum jumps in order to simulate the dynamics of the full Lindblad master equation through a suitable unravelling, together with the conditional dynamics encoded in the entanglement of quantum trajectories. The manuscript is structured as follows. In Sec. 2 we set the stage and define the classes of non-unitary dynamics that we will focus on throughout this manuscript. In Sec. 3 we describe the Faber polynomial method for non-unitary dynamics and discuss its convergence. In Sec. 4 we present our first application to quadratic (Gaussian) non-Hermitian systems. In particular, we explore the melting of a domain-wall state under the Hatano-Nelson Hamiltonian. Furthermore, in Sec.
5, we focus on the spin version of the many-body interacting Hatano-Nelson chain, examining the evolution of both an initial Néel state and a domain wall under the influence of non-reciprocal hopping and interactions. Finally, in Sec. 6, we apply the method to the stochastic quantum jump dynamics obtained by the unravelling of a Lindblad master equation. Sec. 7 summarises our conclusions and discusses potential future research directions. ## 2 Non-Unitary Dynamics of Open Quantum Systems In this work, we focus on the dynamics of open quantum many-body systems described by a Hamiltonian $\mathcal{H}$ and a set of independent environments. A typical example is a quantum spin chain coupled on each site to an external bath. In practice, we will always assume a Markovian description of the environment, which can also be identified as a measurement apparatus that monitors a certain physical property of the system, for example the particle density [17, 19, 20, 21]. However, our primary focus is on the non-unitary dynamics of the system, obtained by tracing out the environment. This is naturally modelled by the Lindblad master equation and its unravelling [9]. In this setting, we consider two types of quantum dynamics: (i) the stochastic evolution of the system conditioned on a given set of measurement outcomes, and (ii) the dynamics of the averaged state.
In the first case, the system evolves according to the stochastic Schrödinger equation, $\displaystyle d\ket{\psi(\xi_{t},t)}$ $\displaystyle=-idt\left[\mathcal{H}-\frac{i}{2}\sum_{\mu}\left(L^{\dagger}_{\mu}L_{\mu}-\langle L^{\dagger}_{\mu}L_{\mu}\rangle_{t}\right)\right]\ket{\psi(\xi_{t},t)}$ (1) $\displaystyle\quad+\sum_{\mu}\left(\frac{L_{\mu}}{\sqrt{\langle L^{\dagger}_{\mu}L_{\mu}\rangle}}-1\right)d\xi_{\mu,t}\ket{\psi(\xi_{t},t)},$ where $\langle\circ\rangle_{t}\equiv\langle\psi(\xi_{t},t)|\circ|\psi(\xi_{t},t)\rangle$, and $\xi_{t}=\left\\{\xi_{\mu,t}\right\\}$ are a set of statistically independent Poisson processes ${d\xi_{\mu,t}\in\\{0,1\\}}$ with average value $\overline{d\xi_{\mu,t}}=dt\langle L^{\dagger}_{\mu}L_{\mu}\rangle_{t}$. The above dynamics breaks down into two steps: a deterministic non-unitary evolution driven by a non-Hermitian Hamiltonian $\displaystyle\mathcal{H}_{\rm nH}=\mathcal{H}-\frac{i}{2}\sum_{\mu}L^{\dagger}_{\mu}L_{\mu},$ (2) and a series of stochastic quantum jumps at random times, at which the wave function changes discontinuously (see second line of Eq. 1). We note that the non-Hermitian evolution is normalised and state dependent. This is encoded in the last term in the first line of Eq. 1. If one post-selects the quantum trajectories over the records of no click, the dynamics is deterministic and driven by $\mathcal{H}_{\rm nH}$ [33, 32, 31]. Otherwise, if one considers all the trajectories and averages over the measurement outcomes, the conditional density matrix $\rho_{c}(\xi_{t},t)=|\psi(\xi_{t},t)\rangle\langle\psi(\xi_{t},t)|$, i.e. $\rho(t)=\overline{\rho_{c}(\xi_{t},t)},$ (3) evolves according to the Lindblad master equation with jump operators $L_{\mu}$, i.e. 
$d\rho(t)=-idt\left[\mathcal{H},\rho\right]+dt\sum_{\mu}\left(L_{\mu}\rho(t)L^{\dagger}_{\mu}-\frac{1}{2}\left\\{L^{\dagger}_{\mu}L_{\mu},\rho\right\\}\right).$ (4) In both cases the basic building block of the non-unitary dynamics is the evolution driven by a non-Hermitian Hamiltonian. In the next section, we introduce the Faber polynomial method to accurately solve the time evolution governed by a non-unitary Schrödinger equation. ## 3 Faber Polynomial Method The knowledge of the time evolution operator, $\mathcal{U}(t)$, allows a comprehensive description of the physical properties of a system when it is far from equilibrium. This operator is necessary to propagate a given initial state, $\ket{\Psi(t)}=\mathcal{U}(t)\ket{\Psi_{0}}$, allowing the calculation of observables that characterise the nonequilibrium state. In principle, an exact expression for the state is necessary as soon as one moves beyond the scope of linear response theory. Obtaining it exactly is equivalent to solving for the spectrum and eigenstates of the Hamiltonian since, for a time-independent Hamiltonian, $\mathcal{U}(t)=\exp\left(-i\mathcal{H}t\right).$ (5) The idea behind both the Chebyshev (unitary evolution) and Faber (non-unitary) polynomial methods is to perform an expansion of the time evolution operator in the respective polynomial basis, $\mathcal{U}(t)=\sum_{n=0}^{+\infty}c_{n}\left(t\right)\mathcal{P}_{n}\left(\mathcal{H}\right),$ (6) where $c_{n}\left(t\right)$ is the $n^{\rm th}$ coefficient of the series expansion and $\mathcal{P}_{n}$ is the $n^{\rm th}$ polynomial, which corresponds to a Chebyshev polynomial of the first kind or to a Faber polynomial, depending on the situation.
Then the state after the time step, $\delta t$, can be approximated by truncating the series expansion at the order $N_{p}$, $\ket{\Psi(t_{0}+\delta t)}\simeq\sum_{n=0}^{N_{p}-1}c_{n}(\delta t)\ket{\Psi_{n}},$ (7) where we define $\ket{\Psi_{n}}=\mathcal{P}_{n}\left(\mathcal{H}\right)\ket{\Psi\left(t_{0}\right)}$. As will be demonstrated, the coefficients $c_{n}(\delta t)$ decrease as the order $n$ increases. Moreover, the states $\ket{\Psi_{n}}$ are efficiently computed through the recurrence relations that the polynomials satisfy. To compute the subsequent level of the expansion, the main computational task is the application of the system’s Hamiltonian onto a particular state. Consequently, the most demanding operation involves only multiplying the Hamiltonian by a small set of vectors, leading to a resource usage that increases linearly with $\dim\left(\mathcal{H}\right)$ for sparse matrices or quadratically with $\dim\left(\mathcal{H}\right)$ for dense matrices. Linear scaling is expected for Hamiltonians describing a system with short-range interactions or hoppings. These principles are exactly those underpinning Kernel Polynomial Methods [64], which have become an essential computational resource in condensed matter physics, particularly for calculating various spectral quantities [65, 66, 67, 68, 69]. ### 3.1 Warm-Up: Unitary Evolution When the time evolution is generated by a Hermitian Hamiltonian, one can expand Eq. 5 using Chebyshev polynomials of the first kind (for further details, see Tal-Ezer and Kosloff [49]), $\mathcal{U}\left(t\right)=\sum_{n=0}^{\infty}c_{n}(t)T_{n}\left(\tilde{\mathcal{H}}\right),\quad c_{n}(t)=\frac{2}{1+\delta_{n,0}}(-i)^{n}J_{n}(\lambda t),$ (8) where $\tilde{\mathcal{H}}=\mathcal{H}/\lambda$ is the rescaled Hamiltonian, $J_{n}(x)$ is the $n^{\text{th}}$ Bessel function of the first kind, and $T_{n}$ is the $n^{\text{th}}$ Chebyshev polynomial of the first kind.
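To make the warm-up concrete, the expansion of Eq. 8, combined with the standard Chebyshev recurrence (Eq. 9 below), translates into only a few lines of code. The sketch below is our own illustrative Python implementation (function and variable names are ours); it assumes the Hamiltonian has already been rescaled to $\tilde{\mathcal{H}}=\mathcal{H}/\lambda$ with spectrum in $(-1,1)$.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind J_n


def chebyshev_step(h_tilde, psi, lam, dt, n_poly):
    """Apply exp(-i H dt) to |psi> for Hermitian H = lam * h_tilde.

    h_tilde : rescaled Hermitian matrix with spectrum in (-1, 1)
    lam     : rescaling factor lambda of Eq. (8)
    n_poly  : truncation order N_p of the Chebyshev series (>= 2)
    """
    # Coefficients c_n(dt) = 2/(1 + delta_{n,0}) (-i)^n J_n(lam * dt), Eq. (8)
    n = np.arange(n_poly)
    c = 2.0 * (-1j) ** n * jv(n, lam * dt)
    c[0] *= 0.5

    psi0 = psi                         # |Psi_0> = |Psi(t_0)>
    psi1 = h_tilde @ psi0              # |Psi_1> = H~ |Psi_0>
    out = c[0] * psi0 + c[1] * psi1
    for k in range(2, n_poly):         # |Psi_{n+1}> = 2 H~ |Psi_n> - |Psi_{n-1}>
        psi0, psi1 = psi1, 2.0 * (h_tilde @ psi1) - psi0
        out += c[k] * psi1
    return out
```

Only matrix-vector products with `h_tilde` are needed, so a sparse-matrix representation keeps each step linear in the Hilbert-space dimension.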
It is necessary to rescale the Hamiltonian so that its eigenvalues fall within the domain of definition of the polynomials, the open interval $\left(-1,1\right)$. This is always possible, since the Hamiltonian of a finite-size lattice model is bounded. Using the recursion relation of the Chebyshev polynomials, the states $\ket{\Psi_{n}}$ in Eq. 7 are computed on the fly using, $\displaystyle\ket{\Psi_{0}}$ $\displaystyle=\ket{\Psi(t_{0})},$ (9) $\displaystyle\ket{\Psi_{1}}$ $\displaystyle=\tilde{\mathcal{H}}\ket{\Psi_{0}},$ $\displaystyle\ket{\Psi_{n+1}}$ $\displaystyle=2\tilde{\mathcal{H}}\ket{\Psi_{n}}-\ket{\Psi_{n-1}},\quad n\geq 1,$ with $|\Psi_{n}\rangle=T_{n}(\tilde{\mathcal{H}})|\Psi(t_{0})\rangle$. This expansion is feasible solely because the Hamiltonian is Hermitian. Therefore, in the context of open quantum systems, where a non-Hermitian operator dictates the dynamics, a different set of polynomials is necessary. Under these circumstances, the spectrum is defined in the complex plane and the operator has right and left eigenvectors, which can be distinct. ### 3.2 Non-Unitary Evolution For a non-Hermitian Hamiltonian, the propagator Eq. 5 is expanded using Faber polynomials [70, 71] instead. These are a familiar tool in complex analysis, used as a polynomial basis to represent a complex-valued function within the domain $\mathcal{D}$ in which it is analytic. The Faber polynomials are generated by a conformal mapping, which maps the complement of a closed disk of radius $\rho$ to the complement of the region containing the entire spectrum of the Hamiltonian (the domain $\mathcal{D}$). In our work, we assume $\mathcal{D}$ to be an elliptic region containing all the eigenvalues of $\mathcal{H}$. This fits our purposes, as the conformal mapping associated with this shape generates a class of Faber polynomials with a minimal recurrence relation (see Appendix A for further details). The expansion of Eq.
5 for a non-Hermitian Hamiltonian with this choice reduces to $\mathcal{U}\left(t\right)=\sum_{n=0}^{+\infty}c_{n}\left(t\right)F_{n}\left(\tilde{\mathcal{H}}\right),\quad c_{n}(t)=e^{-i\lambda t\gamma_{0}}\left(\frac{-i}{\sqrt{\gamma_{1}}}\right)^{n}\;J_{n}\left(2\sqrt{\gamma_{1}}\lambda t\right),$ (10) where $F_{n}$ is the $n^{\text{th}}$ Faber polynomial. The parameter $\lambda$ is used to rescale the Hamiltonian so that the norm of $F_{n}\left(z\right)$ is bounded [57], and it is obtained from the bounds of the real and imaginary parts of the spectrum. $\gamma_{0}$ and $\gamma_{1}$ encode the details of the chosen elliptic contour: $\gamma_{0}$ is the centre of the ellipse and $\gamma_{1}=1-b$, where $b$ is the semi-major axis (recall that the equation of an ellipse is $\frac{(x-x_{0})^{2}}{a^{2}}+\frac{(y-y_{0})^{2}}{b^{2}}=1$). The ellipse must be constructed to be as close as possible to the eigenvalues of $\mathcal{H}$, thereby reducing the magnitude of $\lambda$. One can show [58] that this is achieved using $\begin{aligned} \lambda&=\frac{\left(\ell^{2/3}+p^{2/3}\right)^{3/2}}{2},\\\ \gamma_{1}&=\frac{\left(\tilde{p}^{2/3}+\tilde{\ell}^{2/3}\right)\left(\tilde{p}^{4/3}-\tilde{\ell}^{4/3}\right)}{4\lambda}\end{aligned},$ (11) with $\ell=\left[\max\left[\text{Im}\left(E\right)\right]-\min\left[\text{Im}\left(E\right)\right]\right]/2$, $p=\left[\max\left[\text{Re}\left(E\right)\right]-\min\left[\text{Re}\left(E\right)\right]\right]/2$, $\tilde{\ell}=\ell/\lambda$ and $\tilde{p}=p/\lambda$.
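For concreteness, Eq. 11 can be transcribed directly into code. The helper below is our own (names and conventions are ours, and the formulas are transcribed as printed); it estimates $\lambda$, $\gamma_{0}$ and $\gamma_{1}$ from a set of (approximate) eigenvalues, taking $\gamma_{0}$ as the centre of the rescaled spectrum, which is our reading of how it enters the recursion of Eq. 12.

```python
import numpy as np


def faber_params(energies):
    """Rescaling parameters of Eq. (11) from (estimates of) the eigenvalues of H.

    Returns lam, gamma0 (ellipse centre in units of lam, as used in Eq. 12)
    and gamma1, transcribed as printed in Eq. (11)."""
    re, im = np.real(energies), np.imag(energies)
    p = (re.max() - re.min()) / 2.0              # half-width of Re(E)
    l = (im.max() - im.min()) / 2.0              # half-width of Im(E)
    lam = (l ** (2.0 / 3.0) + p ** (2.0 / 3.0)) ** 1.5 / 2.0
    pt, lt = p / lam, l / lam                    # p~ and l~ of Eq. (11)
    gamma1 = (pt ** (2.0 / 3.0) + lt ** (2.0 / 3.0)) \
        * (pt ** (4.0 / 3.0) - lt ** (4.0 / 3.0)) / (4.0 * lam)
    gamma0 = ((re.max() + re.min()) + 1j * (im.max() + im.min())) / (2.0 * lam)
    return lam, gamma0, gamma1
```

For a purely real spectrum in $[-1,1]$, for instance, this transcription returns $\lambda=1/2$. Since the coefficients of Eq. 10 and the recursion of Eq. 12 are mutually consistent for any admissible contour, a suboptimal choice of $\gamma_{0},\gamma_{1}$ affects the efficiency of the expansion rather than its correctness.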
Using the recursion relation of the Faber polynomials (consult Appendix A), the states $\ket{\Psi_{n}}=F_{n}\left(\tilde{\mathcal{H}}\right)\ket{\Psi}$ are given by, $\displaystyle\ket{\Psi_{0}}$ $\displaystyle=\ket{\Psi\left(t_{0}\right)}$ (12) $\displaystyle\ket{\Psi_{1}}$ $\displaystyle=\left(\tilde{\mathcal{H}}-\gamma_{0}\right)\ket{\Psi_{0}}$ $\displaystyle\ket{\Psi_{2}}$ $\displaystyle=\left(\tilde{\mathcal{H}}-\gamma_{0}\right)\ket{\Psi_{1}}-2\gamma_{1}\ket{\Psi_{0}}$ $\displaystyle\ket{\Psi_{n+1}}$ $\displaystyle=\left(\tilde{\mathcal{H}}-\gamma_{0}\right)\ket{\Psi_{n}}-\gamma_{1}\ket{\Psi_{n-1}},\quad n\geq 2$ In order to perform one time step of evolution, one simply truncates Eq. 10 at the desired order $N_{p}$ and calculates the associated states $\ket{\Psi_{n}}$. Through the relations in Eq. 12, one never has to build the matrices $F_{n}\left(\tilde{\mathcal{H}}\right)$ explicitly; at most the previous two states, $\ket{\Psi_{n-1}}$ and $\ket{\Psi_{n}}$, need to be stored in memory to compute the following term of the expansion, $\ket{\Psi_{n+1}}$. Furthermore, the expansion coefficients can be computed once at the beginning of the algorithm, as they depend only on the chosen time step. Given this, the computation of the state of the system after a time step scales linearly in the number of polynomials and linearly in the Hilbert space dimension (assuming that the Hamiltonian has a sparse representation in the chosen basis). This scaling can be improved by using parallelisation techniques [68] and by exploiting the underlying symmetries of the Hamiltonian [72]. An additional procedure is required when addressing purely non-Hermitian dynamics: ensuring the normalisation of the quantum state throughout the time evolution. In principle, this normalisation needs to be performed only prior to the computation of an observable. However, the algorithm may exhibit instability if the coefficients fluctuate within the limits of machine precision.
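Putting the coefficients of Eq. 10 and the recursion of Eq. 12 together, a single time step can be sketched as follows (an illustrative Python implementation of our own; the truncation simply stops once the coefficients drop below a tolerance, and the returned state is deliberately left unnormalised):

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind J_n


def faber_step(h_tilde, psi, lam, gamma0, gamma1, dt, tol=1e-16, n_max=512):
    """Apply exp(-i H dt) to |psi> for (possibly non-Hermitian) H = lam * h_tilde,
    using the coefficients of Eq. (10) and the recursion of Eq. (12)."""
    sq = np.sqrt(gamma1 + 0j)
    phase = np.exp(-1j * lam * dt * gamma0)
    psi_prev, psi_cur = None, psi                    # |Psi_0> = |Psi(t_0)>
    out = np.zeros_like(psi, dtype=complex)
    for n in range(n_max):
        # c_n(dt) = e^{-i lam dt gamma0} (-i/sqrt(gamma1))^n J_n(2 sqrt(gamma1) lam dt)
        c = phase * (-1j / sq) ** n * jv(n, 2.0 * sq * lam * dt)
        out = out + c * psi_cur
        if n > 0 and abs(c) < tol:                   # series has converged
            break
        nxt = h_tilde @ psi_cur - gamma0 * psi_cur   # (H~ - gamma0)|Psi_n>
        if n == 1:
            nxt = nxt - 2.0 * gamma1 * psi_prev      # |Psi_2> carries a factor 2
        elif n >= 2:
            nxt = nxt - gamma1 * psi_prev
        psi_prev, psi_cur = psi_cur, nxt
    return out
```

Only the previous two states are ever kept in memory, and `h_tilde` may be any object supporting matrix-vector products, such as a sparse matrix.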
Consequently, it is prudent to normalise the state following each time step, $\ket{\Psi\left(t+\delta t\right)}=\dfrac{\mathcal{U}\left(\delta t\right)\ket{\Psi\left(t\right)}}{\left\|\mathcal{U}\left(\delta t\right)\ket{\Psi\left(t\right)}\right\|}.$ (13) In the following section, we review how to do this when dealing with fermionic Gaussian states. #### 3.2.1 Convergence In this section, we discuss the convergence properties of our algorithm, illustrating in Fig. 1 how the absolute value of the coefficient associated with the $n^{th}$ Faber polynomial varies for different rescaled time steps, $\lambda\delta t$; a larger time step naturally requires more polynomials to achieve convergence. For numerical purposes, the Faber series is an exact representation of the time-evolution operator if the coefficient of the last polynomial used is below machine precision. This argument holds because of the Hamiltonian rescaling, as demonstrated in [73], which guarantees that $\forall_{m}\max_{z\in\mathcal{D}}\left|F_{m}(z)\right|\leq 2$. Using the asymptotic properties of the Bessel functions, we see that the weight of the coefficients decreases with $n$ according to [74], $\left|c_{n}\right|\sim\dfrac{\left(\lambda\delta t\right)^{n}}{n!},$ (14) whenever $\lambda\delta t\ll n$. In the remainder of this work, the number of polynomials is chosen such that the absolute value of the last coefficient of the expansion is of the order $\left|c_{n}\right|\sim 10^{-16}$. Figure 1: Absolute value of the coefficients associated with the Faber expansion of the time-evolution operator. We show the absolute value of the coefficient of the $n^{th}$ Faber polynomial for different rescaled time steps. ## 4 Application to Non-Hermitian Gaussian Systems In this Section, we use the Faber polynomial method to study the dynamics of a fermionic Gaussian non-Hermitian system.
Here, the many-body wave function can be expressed using a single-particle basis, and the Faber polynomials can be employed to represent the single-particle propagator. In the case of Hamiltonians with $U(1)$ symmetry, associated with particle number conservation, for an initial Gaussian state with a well-defined particle number $M$, the many-body state can always be represented in the form $\ket{\Psi\left(t\right)}=\prod_{n=0}^{\text{M}-1}\left[\sum_{\ell=0}^{\text{L}-1}\text{U}_{\ell n}\left(t\right)c_{\ell}^{\dagger}\right]\ket{\text{vac}},$ (15) with $M$ the total number of particles and $L$ the total number of sites. The time evolution is given by $i\frac{d}{dt}\textbf{\text{U}}_{n}=\textbf{\text{h}}\;\textbf{\text{U}}_{n},$ (16) where $\textbf{\text{U}}_{n}$ is the $n^{th}$ column of U and h is the single-particle Hamiltonian, an $\rm L\times L$ matrix. This translates the evolution of the many-body state into the evolution of the $\rm M$ single-particle states, each represented by a column in the matrix U. This is expected, as the dynamics and characteristics of a non-interacting system can be reduced to those of the single-particle Hamiltonian, assuming the appropriate quantum statistics. For a typical tight-binding Hamiltonian with a finite number of hopping terms, the complexity of our algorithm for a single time step of evolution scales as $\mathcal{O}(N_{\text{p}}\cdot\rm L\cdot M)$. Following the time step, it is essential to restore proper normalisation and particle statistics. In the case of Gaussian states, this is achieved through a QR decomposition [17], which guarantees that the U matrix is an isometry, specifically $\text{U}^{\dagger}\text{U}=\mathcal{I}_{M\times M}$, $\text{U}(t+\delta t)=\text{QR},$ (17) with Q an $\rm L\times M$ matrix with orthonormal columns. The properly normalised many-body state is then obtained by assigning $\text{U}(t+\delta t)=Q$.
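The renormalisation of Eq. 17 amounts to a reduced QR decomposition; the sketch below (our own helper) restores the isometry condition $\text{U}^{\dagger}\text{U}=\mathcal{I}_{M\times M}$ after a non-unitary step.

```python
import numpy as np


def renormalise_gaussian(U):
    """Restore U†U = I after a non-unitary time step (Eq. 17).

    U is the L x M matrix of single-particle orbitals of Eq. (15); the
    reduced QR returns an L x M matrix Q with orthonormal columns spanning
    the same subspace, i.e. representing the same Gaussian state."""
    Q, _ = np.linalg.qr(U)  # mode='reduced' is the NumPy default
    return Q
```

Since the Faber step tolerates large $\delta t$, this comparatively expensive $\mathcal{O}(LM^{2})$ operation is needed far less often than with small-step integrators.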
Even though in this document we only study particle-number-conserving models, the method is easily applied to non-particle-conserving Hamiltonians as well, using similar techniques [75, 20]. Typically, this normalisation step is the most computationally intensive part of the time-evolution process. However, unlike in other methods, this step can be executed less frequently, since the time step does not need to be small. Thus, this computationally intensive procedure can be minimised while still ensuring high accuracy in the time integration. Directly evolving the state presents a more cost-effective alternative to approaches that compute the equations of motion for all two-point functions. The latter typically employ conventional ordinary differential equation solvers, such as the fourth-order Runge-Kutta method. Firstly, they necessitate the selection of a short time step to ensure accurate integration, a requirement not imposed by the Faber algorithm. Secondly, they involve the evolution of the full correlation matrix, which in practice means evolving $\rm L\cdot\left(L-1\right)/2$ elements. In contrast, the evolution of U requires the evolution of $\rm L\times M$ elements with $\rm M\leq L$. Lastly, the Faber algorithm requires the storage of only two vectors of size $\rm L$ in memory to evolve a given column, while the integration of the correlation matrix demands the storage of four vectors of size $\rm L$ to evolve each of the $\rm L$ columns. ### 4.1 Benchmark: Dynamics of the Hatano-Nelson model The Hatano-Nelson (HN) model [61, 62] is a paradigmatic lattice model for non-Hermitian phenomena.
It corresponds to a chain of spinless fermions with an asymmetric hopping, $\mathcal{H}_{\text{HN}}=-\dfrac{1}{2}\sum_{\ell=0}^{L-1}\left(\left(J+\gamma\right)c_{\ell}^{\dagger}c_{\ell+1}+\left(J-\gamma\right)c_{\ell+1}^{\dagger}c_{\ell}\right),$ (18) where $J$ is the nearest-neighbour hopping amplitude, $\gamma\in\mathbb{R}$ parameterises the left-right imbalance in charge hopping, also called non-reciprocity, and $c^{\dagger}_{\ell}\left(c_{\ell}\right)$ is the creation (annihilation) operator which creates (destroys) a fermion on site $\ell$. Under open boundary conditions (OBC), this non-reciprocal hopping gives rise to a unique phenomenon of non-Hermitian systems, the skin effect [35, 36, 37, 38, 39, 40]. This corresponds to the localisation of the single-particle eigenstates at the edges of the system. In addition, the model has an extreme sensitivity to the boundary conditions: under periodic boundary conditions (PBC), the single-particle spectrum encircles $E=0$ in the complex plane, with the eigenstates manifesting as delocalised plane waves. In contrast, with OBC, the spectrum is real for $\left|\gamma\right|<J$ and purely imaginary for $\left|\gamma\right|>J$. Additionally, all right single-particle eigenstates show an exponential localisation at the left (right) boundary for positive (negative) $\gamma$ (see Appendix B for further details). Figure 2: Left panel: Comparison between the results obtained with the Faber polynomial method and those reported in [34] for the entanglement entropy of half of the chain for a total chain of size $L=100$. Right panel: Entanglement entropy for half of the chain, with $\gamma=-0.05J$, using a different number of polynomials and a time step of $0.1J^{-1}$. We note that we use a symmetric definition of $\gamma$ with respect to [34]. In the following, we benchmark the results obtained with the Faber polynomial method against those reported by Kawabata et al. [34].
That is, we investigate the dynamics of the entanglement [76] associated with a segment of the chain, denoted $\ell$. This is rigorously derived from the von Neumann entropy of the reduced density matrix $\rho_{\ell}$ [77], $S_{\ell}(t)=-\text{Tr}\left(\rho_{\ell}\ln\rho_{\ell}\right).$ (19) $\rho_{\ell}$ is determined by tracing out the degrees of freedom complementary to the subregion $\ell$. However, the Gaussianity of the state allows us to use standard techniques [78, 79, 80] to perform this computation using the one-particle density matrix restricted to the lattice sites belonging to the region $\ell$, $\mathcal{C}_{n,m\in\ell}=\left\langle c^{\dagger}_{n}c_{m}\right\rangle$. Thus, Eq. 19 for a free fermionic system simplifies to $S_{\ell}(t)=-\text{Tr}\left(\left.\mathcal{C}\right|_{\ell}\ln\left.\mathcal{C}\right|_{\ell}+\left(\mathcal{I}_{\ell\times\ell}-\left.\mathcal{C}\right|_{\ell}\right)\ln\left(\mathcal{I}_{\ell\times\ell}-\left.\mathcal{C}\right|_{\ell}\right)\right),$ (20) where $\mathcal{I}_{\ell\times\ell}$ is the $\ell\times\ell$ identity matrix. Similarly to the authors of Ref. [34], we prepare our system in a charge-density-wave state with open boundaries, $\ket{\Psi_{0}}=\left(\prod_{l=1}^{L/2}c_{2l}^{\dagger}\right)\ket{\text{vac}}.$ (21) In Fig. 2 (left) we benchmark the dynamics of the half-chain entanglement entropy $S_{L/2}(t)$ against the results of Ref. [34] for two values of $\gamma$, finding perfect agreement. In the right panel, we demonstrate convergence with respect to the number of polynomials $N_{p}$, for a given time step $\delta t=0.1J^{-1}$. We confirm the decrease of the entanglement due to the presence of the non-Hermitian skin effect [34]. In addition to the entanglement entropy, we have also validated our results against other metrics presented in Ref. [34].
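Eq. 20 translates directly into a few lines of code. The helper below is our own sketch: it builds the one-particle density matrix from the orbital matrix U of Eq. 15 (a transpose ambiguity in the convention for $\mathcal{C}_{n,m}$ leaves the spectrum of the restricted block, and hence the entropy, unchanged) and evaluates the entropy from its eigenvalues.

```python
import numpy as np


def entanglement_entropy(U, sites):
    """Von Neumann entropy of a subregion for a Gaussian state, Eq. (20).

    U     : L x M isometry defining the state (Eq. 15)
    sites : lattice sites belonging to the subregion ell
    """
    C = U @ U.conj().T                    # one-particle density matrix
    Cb = C[np.ix_(sites, sites)]          # restriction to the subregion
    # Eigenvalues of the restricted block lie in [0, 1]; clip to avoid log(0)
    nu = np.linalg.eigvalsh(Cb).clip(1e-12, 1.0 - 1e-12)
    return float(-np.sum(nu * np.log(nu) + (1.0 - nu) * np.log(1.0 - nu)))
```

As a sanity check, a single fermion shared equally between two sites gives $S=\ln 2$ for a cut between them, while any product state of occupied and empty sites gives $S=0$.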
For instance, we also observe the initial charge-density-wave state rapidly evolving into a state with charge accumulation at one boundary due to the non-reciprocal hopping. In Fig. 3 (top panels) we plot the space-time dynamics of the particle density as well as a cut at long times, describing the steady-state density profile along the chain for different system sizes. We see that for short systems, as compared to the single-particle wave-function localisation length, the accumulation takes the form of a domain wall, while upon increasing system size a finite slope emerges, which we have checked to vanish exponentially with $L$. In Fig. 3 (bottom panels) we plot the dynamics of the local current [34] defined as $I_{\ell}=\dfrac{Ji}{2}\left\langle c_{\ell+1}^{\dagger}c_{\ell}-c_{\ell}^{\dagger}c_{\ell+1}\right\rangle.$ (22) We see that, consistently with the density plot, a finite (negative) current flows in the bulk of the chain for sufficiently long systems and long times, while at the boundaries the current vanishes due to the charge localised in the domain walls. This current is a feature of this non-equilibrium steady state, given that it is not present in the ground state of the Hatano-Nelson model. Besides, in contrast to the Hermitian scenario, the density profiles exhibit a spatial gradient within the bulk of the chain. This non-trivial spatial distribution of density arises from the single-particle skin effect. Additionally, in contrast to the Hermitian scenario, non-Hermiticity permits spatial variations in the local charge current while the density profile remains time-independent.
This phenomenon is facilitated by the fact that the continuity equation associated with charge conservation in non-Hermitian systems takes the form $\partial_{t}\left\langle c^{\dagger}_{\ell}c_{\ell}\right\rangle+\left(I_{\ell}-I_{\ell-1}\right)=\mathcal{T}_{\ell},$ (23) where the additional term $\mathcal{T}_{\ell}$ corresponds to the sink/source of particles due to the coupling with the environment, $\mathcal{T}_{\ell}=-\dfrac{\gamma}{J}\sum_{n=0}^{L-2}\left(\left\langle\left\\{c^{\dagger}_{\ell}c_{\ell},I_{n}\right\\}\right\rangle-2\left\langle I_{n}\right\rangle\left\langle c^{\dagger}_{\ell}c_{\ell}\right\rangle\right).$ (24) This term is unique to systems evolving under non-unitary dynamics generated by a non-Hermitian Hamiltonian and has important consequences for the transport properties of these systems [34, 81], as we will further discuss below. Figure 3: Left - Time and spatial dependence of the particle density (top plot) and charge current (bottom) profile for a system size of 100 sites. Right - Spatial dependence of the particle density (top) and charge current (bottom) in the steady state for different system sizes. Other parameters: $\gamma=-0.8J$. Figure 4: Time evolution of the magnetisation profile of the Hatano-Nelson model for different values of the non-Hermitian parameter $\gamma$. The total system length corresponds to $L=256$. The dashed white lines correspond to the effective penetration length. ### 4.2 Domain Wall Melting for Hatano-Nelson In this Section, our focus is on exploring the impact of non-Hermiticity on the temporal evolution of the particle density, current, and entanglement profiles in an HN system initialised in a domain-wall state, $\ket{\rm DW}=\ket{111\cdots 10\cdots 000}$ (via the Jordan-Wigner transformation [82, 83], the Hatano-Nelson model can be viewed in the spin-$1/2$ language as an XX chain with a non-reciprocal XX exchange term).
This configuration has been the subject of extensive investigations in the unitary case [84, 85, 86, 87], as it exemplifies the distinctive characteristics of non-equilibrium dynamics. In addition, it has inspired the development of generalised hydrodynamics (GHD) [88, 89, 90], which facilitates precise calculations of charge and current profiles using a hydrodynamic description. Quantum correlations and entanglement entropy can also be calculated within an extension of this framework, quantum GHD [91]. In the non-Hermitian case, the dynamics of an initial domain wall has been less studied, even in the simple non-interacting HN model. We start by considering the time evolution of the particle density profile under the HN dynamics, which we plot in Fig. 4 for increasing values of the non-reciprocal coupling $\gamma$. In the unitary case ($\gamma=0$) a clear light cone is visible, corresponding to ballistically propagating quasiparticles. In the non-Hermitian case a light cone is still visible at short times, at least for $\left|\gamma\right|<J$, when the particle density satisfies a scaling form $n\left(\ell,t\right)=f\left(\ell/(v_{\text{eff}}\,t)\right)$, where $v_{\rm eff}$ is an effective velocity. As the non-reciprocal coupling increases, the light cone shrinks more and more, up to $\gamma=J$, corresponding to the exceptional point of the HN model, at which the domain-wall state remains stable. We now focus on the short-time dynamics and discuss the origin of the velocity renormalisation. From the numerical data we obtain (see Fig. 5 (top left)) Figure 5: Top - Quasi-particle effective velocity as a function of the non-reciprocal coupling (left plot). Particle density profile as a function of space for a fixed time (right plot). Bottom - Spatial profile of the charge current at different times for $\gamma=0.2J$ (right plot). The plot in the bottom left shows the current profile as a function of space and time for $\gamma=0.8J$.
Outside the ballistic region, the renormalised GHD equations are no longer valid. Other parameters: $L=256$. $v_{\text{eff}}=J-\gamma.$ (25) According to this, the domain wall spreads less for higher values of the non-Hermiticity parameter, and the renormalised velocity vanishes at $\gamma=J$. We note that this simple formula is distinct from those derived for the propagation of wave packets through a non-Hermitian medium [92, 93], and from the one obtained from the energy dispersion relation of the model, which would be proportional to $\sqrt{J^{2}-\gamma^{2}}$ (see Fig. 5 (top left) and Appendix B for the explicit derivation). Moreover, it depends on the chosen initial conditions: had the system been initialised in the state $\ket{\cdots 000111\cdots}$ instead, the formula would be replaced by $v_{\text{eff}}=J+\gamma$. The reduction in propagation velocity results in a suppression of correlations relative to the Hermitian scenario, a phenomenon previously observed in other non-Hermitian systems [33, 34, 94]. We can provide a simple argument to justify the renormalisation of the velocity due to non-Hermiticity. The renormalised velocity can be obtained from the local continuity equation for the particle density (Eq. 23) by working out the term $\mathcal{T}_{\ell}$. In the non-interacting limit, this term can be expanded using Wick’s theorem into a local and a non-local term, $\begin{aligned} \mathcal{T}_{\ell}=&-\dfrac{\gamma}{J}\left(I_{\ell}+I_{\ell-1}\right)\left(1-2\left\langle c_{\ell}^{\dagger}c_{\ell}\right\rangle\right)+\\\ &-i\gamma\sum_{n\neq\\{\ell,\ell-1\\}}^{L-2}\left[\left\langle c_{\ell}^{\dagger}c_{n+1}\right\rangle\left\langle c_{n}^{\dagger}c_{\ell}\right\rangle-\left\langle c_{n+1}^{\dagger}c_{\ell}\right\rangle\left\langle c_{\ell}^{\dagger}c_{n}\right\rangle\right]\end{aligned}.$ (26) The second term of this equation is highly non-local, and it vanishes outside the light-cone region.
We proceed by writing the local continuity equation for a site $\ell$ outside the light-cone region, at a time before the arrival of the magnetisation wavefront. Under these conditions $I_{\ell}=0$ and $\langle c^{\dagger}_{\ell}c_{\ell}\rangle=0$ for $\ell>0$, or $I_{\ell-1}=0$ and $\langle c^{\dagger}_{\ell}c_{\ell}\rangle=1$ for $\ell<0$. Furthermore, the non-local correlator vanishes, and so the continuity equation is approximately given by $\displaystyle\partial_{t}\left\langle n_{\ell}\right\rangle-\left(1-\dfrac{\gamma}{J}\right)I_{\ell-1}=0,\quad\ell>0,$ (27) $\displaystyle\partial_{t}\left\langle n_{\ell}\right\rangle+\left(1-\dfrac{\gamma}{J}\right)I_{\ell}=0,\quad\ell<0.$ The $\pm$ sign reflects the different directions of propagation. These equations hold only before the particle-density wavefront reaches the site, since afterwards off-diagonal correlations develop within the light-cone region. Since at short times we can still identify a sharp light cone, it is tempting to formulate a hydrodynamic description for the HN model. Although some advances have been made within the framework of Lindblad dynamics [95, 96], formulating a hydrodynamic description for the non-Hermitian variant of the XXZ model remains extremely challenging, even in the absence of interactions. The nonlinearity of the equations of motion results in the loss of most local conservation laws. Surprisingly, as we show for the Hatano-Nelson model in Fig. 5, the hydrodynamic equations still hold at short times. To make a concrete comparison between the numerics and the analytical predictions, we proceed in a somewhat phenomenological way and incorporate the renormalised velocity into the hydrodynamic expressions derived for the Hermitian case [97].
With this in mind, the particle density in the spatial interval $-t\leq\ell\leq t$ is given by the following expression, $n\left(\ell,t\right)=\dfrac{1}{\pi}\arccos\left(\dfrac{\ell}{v_{\text{eff}}\;t}\right),$ (28) which perfectly matches the results of the full numerical calculations (see Fig. 5 top right). Figure 6: Entanglement entropy for the subsystem $\left[-L/2,\ell\right]$ at different times. From right to left, the non-Hermiticity is $\gamma=0,0.2J$ and $0.4J$. This is also applicable to the charge current, $I\left(\ell,t\right)=\dfrac{1}{\pi}\sqrt{1-\left(\dfrac{\ell}{v_{\text{eff}}\;t}\right)^{2}},$ (29) as we show in Fig. 5 (bottom left). Finally, using the quantum GHD formalism [91, 97], the description can likewise be extended to the entanglement entropy of the region $\left[-L/2,\ell\right]$, $S\left(\ell,t\right)=\dfrac{1}{6}\ln\left[v_{\text{eff}}\;t\left(1-\left(\dfrac{\ell}{v_{\text{eff}}\;t}\right)^{2}\right)^{3/2}\right]+c_{1},$ (30) where $c_{1}\simeq 0.4785$ [98], as shown in Fig. 6. Similarly to the unitary case, the entanglement increases as a result of the development of quantum correlations within a well-defined spatial region [99]. Ballistic transport of the charge current prevails, leading to a region that expands linearly over time without significant entropy production. This phenomenon is characterised by $\ln(t)$ growth, which is emblematic of a local quantum quench protocol [100, 101]. In the unitary regime, ballistic propagation ceases on time scales commensurate with the system size, for any finite system. The non-reciprocal hopping stabilises the domain wall, thus preventing it from melting. With a finite magnitude of the non-reciprocal coupling, the particle density wavefront is constrained to penetrate only to a prescribed depth. This maximum penetration depth corresponds to the total system size, $L$, for $\gamma=0$ and tends to zero at the exceptional point $\gamma=J$.
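The renormalised light cone and the profile of Eq. 28 can be reproduced with a minimal free-fermion sketch (our own illustration, not the paper's code: we evolve the occupied orbitals of the domain-wall Slater determinant under the non-Hermitian single-particle matrix and re-orthonormalise them to account for the normalisation of the state; system size and time are illustrative, smaller than the $L=256$ of Fig. 5):

```python
import numpy as np
from scipy.linalg import expm

L, J, t = 120, 1.0, 20.0
N = L // 2                       # left half filled: |...111000...>
site = np.arange(L)

def density_profile(gamma):
    # Single-particle Hatano-Nelson matrix: amplitude (J + gamma)/2 for a
    # particle to hop leftwards, (J - gamma)/2 to hop rightwards
    h = np.zeros((L, L), dtype=complex)
    h[site[:-1], site[1:]] = -(J + gamma) / 2
    h[site[1:], site[:-1]] = -(J - gamma) / 2
    # Quadratic (even non-Hermitian) evolution maps Slater determinants to
    # Slater determinants: evolve the N occupied orbitals...
    U = expm(-1j * h * t)[:, :N]
    # ...and orthonormalise them (QR); the density is then the diagonal of
    # the correlation matrix C = Q Q^dagger of the normalised state
    Q = np.linalg.qr(U)[0]
    return np.sum(np.abs(Q) ** 2, axis=1)

def arccos_profile(v_eff):
    # Hydrodynamic prediction of Eq. (28), clipped outside the light cone
    x = np.clip((site - (L - 1) / 2) / (v_eff * t), -1.0, 1.0)
    return np.arccos(x) / np.pi

n_herm = density_profile(0.0)    # Hermitian reference, v_eff = J
n_nh = density_profile(0.5)      # non-reciprocal case, v_eff = J - gamma
```

The QR step implements the normalised correlation matrix $C=U\left(U^{\dagger}U\right)^{-1}U^{\dagger}$ of the evolved, normalised Slater determinant; the shrinking of the light cone with $\gamma$ is then directly visible in the two profiles.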
It is as if the system length is renormalised to an effective one given by $L^{\ast}\sim\dfrac{J-\gamma}{J+\gamma}L.$ (31) This length is marked by the white dashed lines in Fig. 4. We stress that this reasoning is only valid for $\gamma>0$, since for $\gamma<0$ the magnetisation wavefront necessarily reaches the system boundary. The expression suggests that faster quasiparticles with velocity $v^{\ast}=J+\gamma$ arrest the ballistic propagation of the magnetisation wavefront once they hit the physical boundary. Indeed, the maximum propagation velocity allowed in the system increases compared to the Hermitian case, since particles can hop from right to left with velocity $J+\gamma$ (assuming $\gamma>0$). Following the initial propagation of the magnetisation wavefront, the system reaches a steady state characterised by the presence of a charge current traversing the two domains. This current emerges due to a flux of particles sourced from the environment, which are subsequently annihilated, as depicted in Fig. 5. However, contrary to the setting described by Kawabata [34], non-Hermiticity leads to the spatial suppression of this current.

## 5 Application to Non-Hermitian Many-Body Systems

We now consider an application of the Faber polynomial method to a full non-Hermitian many-body problem. In this case, the method simplifies the evaluation of the evolution operator, while the cost of storing the state is still exponential in system size since, as we stress again, the Faber polynomial method makes no approximation on the state, which is fully represented in the many-body basis. Figure 7: Temporal and spatial dependence of the magnetisation profile, $\langle S_{\ell}^{z}\rangle$, for different values of $\Delta$ in the interacting Hatano-Nelson model. Other parameters: $\gamma=0.8J$ and $L=24$. Figure 8: Long-time behaviour of the magnetisation profile shown in Fig. 7, $\langle S_{\ell}^{z}\rangle_{\infty}$, for different values of $\Delta$.
We see that at $\Delta=0$ an emergent domain wall is formed, which is stable for small $\Delta$ (left panel). Increasing $\Delta$, the system develops a potential drop in the middle of the chain, akin to diffusive dynamics, which then further develops an oscillating pattern as $\Delta$ increases (right panel). Other parameters: $\gamma=0.8J$.

### 5.1 Magnetisation Dynamics in the Interacting Hatano-Nelson Model

In this section, we study the effects of interactions on the dynamics of the magnetisation profile and spin current in a non-Hermitian XXZ chain with a non-reciprocal XX exchange term, $\mathcal{H}=-\sum_{\ell=0}^{L-2}\left[\frac{\left(J+\gamma\right)}{2}S_{\ell}^{+}S_{\ell+1}^{-}+\frac{\left(J-\gamma\right)}{2}S_{\ell+1}^{+}S_{\ell}^{-}+\Delta S_{\ell}^{z}S_{\ell+1}^{z}\right],$ (32) where $J$ is an XX exchange between neighbouring spins, $\gamma$ induces an imbalance between the propagation of left/right magnetic excitations and $\Delta$ is an Ising-like exchange. This system can also be viewed as an interacting Hatano-Nelson model by performing the Jordan-Wigner transformation [82, 83]. The system is prepared at time zero in an unentangled Néel state, $\ket{\Psi(t=0)}=\ket{\uparrow,\downarrow,\cdots,\downarrow,\uparrow},$ (33) which is an eigenstate of the Ising part of the model, so one can expect that when $\Delta\gg J,\gamma$ the Néel order is preserved. For $\Delta=0$, when the model reduces to the non-reciprocal XX chain (or Hatano-Nelson model in fermionic language), it is known that the initial Néel state gives rise to a domain-wall state at long times, as all magnetic excitations are transported to one of the edges of the system [34], just as the Hatano-Nelson model exhibits charge accumulation at a boundary. The extent of this accumulation is governed by the degree of non-Hermiticity, which is regulated by the parameter $\gamma$.
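For concreteness, the Hamiltonian of Eq. 32 can be assembled explicitly for small chains by exact diagonalisation; the sketch below (our own construction, with illustrative parameters) also checks the properties used in the text: the model is non-Hermitian for $\gamma\neq 0$, conserves total $S^{z}$, and the Néel state is an eigenstate of its Ising part.

```python
import numpy as np
from functools import reduce

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S^+
sm = sp.conj().T                                 # S^-
sz = np.diag([0.5, -0.5]).astype(complex)
id2 = np.eye(2, dtype=complex)

def embed(o, site, L):
    """Single-site operator `o` acting on `site` of an L-site chain."""
    return reduce(np.kron, [o if n == site else id2 for n in range(L)])

def hamiltonian(L, J, gamma, Delta):
    """Interacting Hatano-Nelson (non-reciprocal XXZ) chain, Eq. (32)."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for l in range(L - 1):
        H -= (J + gamma) / 2 * embed(sp, l, L) @ embed(sm, l + 1, L)
        H -= (J - gamma) / 2 * embed(sp, l + 1, L) @ embed(sm, l, L)
        H -= Delta * embed(sz, l, L) @ embed(sz, l + 1, L)
    return H
```

The same dense matrices can then be fed to any propagation scheme; for the system sizes of Fig. 7 ($L=24$) one would of course switch to sparse storage and the Faber propagator.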
Here, we are interested in understanding the role of interactions in the magnetisation dynamics and the stability of this emergent domain wall. In Fig. 7 we plot the spatio-temporal dynamics of the magnetisation $\langle S^{z}_{\ell}(t)\rangle$ for increasing values of $\Delta$. We see that at short times there is a rapid reshuffling of magnetic excitations, driven by the non-reciprocity, towards a boundary accumulation. Interactions compete with this process and tend to preserve the initial antiferromagnetic pattern at the expense of boundary accumulation, as is clearly visible in the right panel of Fig. 7. To better characterise the long-time dynamics, we compute the average magnetisation profile $\left\langle S^{z}_{\ell}\right\rangle_{\infty}=\lim_{T\rightarrow{\infty}}\frac{1}{T}\int_{\tau^{\ast}}^{\tau^{\ast}+T}dt\;\langle S^{z}_{\ell}(t)\rangle,$ (34) where $\tau^{\ast}$ is a time after which the reshuffling of magnetic excitations has settled into a stable boundary accumulation. We plot in Fig. 8 $\left\langle S^{z}_{\ell}\right\rangle_{\infty}$ for different values of $\Delta$. We see that the emergent domain-wall state generated by the non-reciprocal exchange is stable at weak interactions (left panel). However, as $\Delta$ increases, a novel region emerges that interpolates between the two magnetic domains (right panel). In particular, we see that the system develops a _magnetisation drop_, similar to a potential drop in systems that show diffusive transport. Upon further increasing $\Delta$, we see the emergence of antiferromagnetic correlations on top of this magnetisation slope, which, as expected, are frozen by the large interaction and do not decay away.
The result above is particularly intriguing, as it suggests that non-reciprocal coupling and interactions cooperate to establish a current-carrying steady state in the centre of the system: the former drives the formation of a domain wall that acts as source and drain, while the latter provides the scattering necessary to dissipate and establish a finite average current. In Fig. 9 we plot the spatio-temporal dynamics of the local current profile as a function of interaction and non-reciprocity. Comparable findings have been reported [102] for the ground state of the interacting Hatano-Nelson model with nearest-neighbour repulsion. In that study, it was also noted that the initial magnetisation profile (referred to as the real-space Fermi surface) is disrupted by interactions that induce a Néel order. In this work, we demonstrate that such a phenomenon can also be dynamically generated by the non-unitary time evolution. Similarly to the non-interacting case, there is a current in this interpolating region, which satisfies the same continuity equation as in Eq. 27. As clearly seen in Fig. 9, once again there is a competition between non-reciprocity and the interaction parameter $\Delta$ in setting the size of this intermediate region, where particles can enter from the environment and give rise to this current. Since we can only access small system sizes, a stable steady current (as in the non-interacting case) cannot form. As a result, the direction of the central current oscillates, and current flows in both directions. However, in the extreme case where $\gamma=J$, this oscillatory behaviour ceases to exist, and the current has a fixed direction. When $\Delta$ is much greater than $J$, the current disappears in the bulk region, where the Néel order is maintained. Figure 9: Temporal and spatial dependence of the local current profile, $\langle I_{n}\rangle$. Other parameters: $\gamma=0.8J$ and $L=24$.
### 5.2 Effect of Interactions on Domain-Wall Melting in the Hatano-Nelson Model

Finally, we conclude this section by discussing the effect of interactions on the non-Hermitian problem discussed in Sec. 4.2, namely an initial domain-wall state. In the previous section, we focused on domain-wall melting in the non-interacting non-reciprocal Hatano-Nelson model, or, in the spin language, the non-reciprocal XX chain ($\Delta=0$). For a conventional Hermitian XXZ spin chain, the domain wall is only stabilised when the Ising exchange is greater than the XX exchange, $\left|\Delta\right|>J$. In contrast, for $\left|\Delta\right|<J$ the domain wall melts, with a ballistic propagation of the magnetisation wavefront [103]. The Heisenberg point, $\Delta=J$, is special given the existence of the extra spin SU(2) symmetry, which allows for superdiffusive behaviour of the spin current [86, 104]. Nevertheless, there is no Heisenberg point in the spin version of the Hatano-Nelson model, as the non-Hermiticity explicitly breaks the spin SU(2) symmetry. We observe that the interactions contribute to preventing the domain wall from melting, as shown in Fig. 10. In a certain sense, non-Hermiticity and interactions help to preserve the initial magnetic order, which would otherwise be eroded by the dynamics. We have benchmarked this result with matrix product state (MPS) calculations presented in Appendix C. Figure 10: Spatial and temporal dependence of the magnetisation profile for different values of the Ising exchange parameter. Other parameters: $\gamma=0.2J$ and $L=24$. Figure 11: Magnetisation profile as a function of the lattice site and time. Left - Dynamics with the jump operators $\sqrt{\left|\gamma\right|}\left(S^{-}_{\ell}-i\text{sgn}\left(\gamma\right)S^{-}_{\ell+1}\right)$. Right - Dynamics with the jump operators $\sqrt{\gamma}S^{+}_{\ell}S^{-}_{\ell+1}$. The system was initially prepared in a Néel state (Eq. 33). Other parameters: $\Delta=0$, $\gamma=0.8J$ and $L=20$.
## 6 Application to Quantum Jumps Unravelling

In this section, we combine the Faber polynomial technique with a high-order Monte Carlo wave function algorithm [11, 9] to investigate the quantum jumps unravelling of the Lindblad master equation, discussed in Sec. 2. In particular, we address the stochastic Schrödinger equation by propagating the initial state with the Faber polynomial method up to the time instant $\tau$ at which a quantum jump occurs. Within the time interval $t\in\left[t_{0},\tau\right]$, the state thus evolves purely non-unitarily according to $\ket{\psi(t)}=\dfrac{e^{-i\mathcal{H}\left(t-t_{0}\right)}\ket{\psi(t_{0})}}{\left\|e^{-i\mathcal{H}\left(t-t_{0}\right)}\ket{\psi(t_{0})}\right\|}.$ (35) The time $\tau$ is obtained via the standard higher-order Monte Carlo wave function technique: it corresponds to the time at which the squared norm of the state has decayed to the value $1-r$, with $r$ a random variable drawn from a uniform distribution over the interval $\left[0,1\right]$. Consequently, $\tau$ is implicitly defined by the following equation, $r=1-\bra{\psi(t_{0})}e^{i\mathcal{H}^{\dagger}\left(\tau-t_{0}\right)}e^{-i\mathcal{H}\left(\tau-t_{0}\right)}\ket{\psi(t_{0})}.$ (36) The quantum jump is applied by first selecting the jump channel in accordance with its probability, $p_{\mu}=\dfrac{\left\langle L^{\dagger}_{\mu}L_{\mu}\right\rangle}{\sum_{\mu}\left\langle L^{\dagger}_{\mu}L_{\mu}\right\rangle},$ (37) where the average is taken with respect to the state $\ket{\psi\left(\tau\right)}$. Then, the post-jump state, $\ket{\psi\left(\tau^{+}\right)}$, is obtained by applying the chosen jump operator, $L_{\alpha}$, $\ket{\psi\left(\tau^{+}\right)}=\dfrac{L_{\alpha}\ket{\psi\left(\tau\right)}}{\sqrt{\bra{\psi\left(\tau\right)}L^{\dagger}_{\alpha}L_{\alpha}\ket{\psi\left(\tau\right)}}}.$ (38) This algorithm gives access to the full Lindbladian dynamics by averaging over the propagated quantum trajectories.
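A minimal first-order version of this quantum-jump algorithm can be sketched as follows (our own simplification: a dense matrix exponential stands in for the Faber propagator, and the jump time is resolved only to within the step $\delta t$ rather than with the higher-order scheme used in the text):

```python
import numpy as np
from scipy.linalg import expm

def mcwf_trajectory(H, jumps, psi0, tmax, dt, rng):
    """One quantum trajectory of the jump unravelling (first order in dt).
    `H` is the Hermitian Hamiltonian, `jumps` the list of jump operators."""
    # Effective non-Hermitian Hamiltonian H - (i/2) sum_l L_l^dag L_l
    Heff = H - 0.5j * sum(L.conj().T @ L for L in jumps)
    U = expm(-1j * Heff * dt)            # short-time non-unitary propagator
    psi, t, r = psi0.astype(complex), 0.0, rng.random()
    traj = [psi / np.linalg.norm(psi)]
    while t < tmax:
        psi = U @ psi                    # deterministic non-unitary stretch
        t += dt
        if np.vdot(psi, psi).real < r:   # squared norm crossed the random
            # threshold: a jump occurs (cf. Eq. 36); pick a channel, Eq. (37)
            probs = np.array([np.vdot(L @ psi, L @ psi).real for L in jumps])
            mu = rng.choice(len(jumps), p=probs / probs.sum())
            psi = jumps[mu] @ psi        # apply the selected jump, Eq. (38)
            psi /= np.linalg.norm(psi)
            r = rng.random()             # redraw the threshold
        traj.append(psi / np.linalg.norm(psi))
    return np.array(traj)
```

Averaged over trajectories, this reproduces the Lindblad dynamics; e.g., for a single spin with $\mathcal{H}=0$ and $L=\sqrt{\gamma}\,S^{-}$ the mean excited-state population decays as $e^{-\gamma t}$.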
Further, it also provides access to the dynamics under continuous monitoring [20, 9]. This is achieved by following the many-body quantum trajectory and computing, for example, non-linear functions of the state. One such function is the entanglement entropy of quantum trajectories, as we shall discuss below.

### 6.1 Magnetisation and Entanglement Dynamics in Monitored Spin Chains

As an application, we consider a quantum spin chain described by the Hermitian XXZ model with Hamiltonian $\mathcal{H}=-\sum_{\ell=0}^{L-2}\left[\frac{J}{2}S_{\ell}^{+}S_{\ell+1}^{-}+\frac{J}{2}S_{\ell+1}^{+}S_{\ell}^{-}+\Delta S_{\ell}^{z}S_{\ell+1}^{z}\right],$ (39) where $J$ is the XX exchange term between neighbouring spins and $\Delta$ an Ising-like exchange. We compare the dissipative dynamics generated by two different types of jump operators. The first set of jumps that we consider describes nearest-neighbour decoherence of spin excitations along the chain and takes the form $\displaystyle L_{0}=\sqrt{\left|\gamma\right|}S^{-}_{0},$ (40) $\displaystyle L_{1+\ell}=\sqrt{\left|\gamma\right|}\left(S^{-}_{\ell}-i\text{sgn}\left(\gamma\right)S^{-}_{\ell+1}\right),\;\ell\in\{0,\cdots,L-2\},$ $\displaystyle L_{L}=\sqrt{\left|\gamma\right|}S^{-}_{L-1}.$ Interestingly, with this choice of jump operators the non-Hermitian Hamiltonian associated with the no-click limit turns out to be given by a spin version of the many-body Hatano-Nelson model. Indeed, a straightforward calculation yields the non-Hermitian Hamiltonian $\begin{aligned}\mathcal{H}_{\text{eff}}&=\mathcal{H}-\frac{i}{2}\sum_{\ell}L_{\ell}^{\dagger}L_{\ell}\\ &=\mathcal{H}-\frac{\gamma}{2}\sum_{\ell=0}^{L-2}\left(S_{\ell}^{+}S^{-}_{\ell+1}-S_{\ell+1}^{+}S^{-}_{\ell}\right)-i\left|\gamma\right|\sum_{\ell=0}^{L-1}S_{\ell}^{+}S^{-}_{\ell}\end{aligned}.$ (41) The last term does not affect the dynamics, as it is just an overall background decay.
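The algebra leading to Eq. 41 is easily verified numerically by assembling $\mathcal{H}-\frac{i}{2}\sum_{\ell}L^{\dagger}_{\ell}L_{\ell}$ for a small chain (a sketch with illustrative parameters; operator names are ours):

```python
import numpy as np
from functools import reduce

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S^+
sm = sp.conj().T                                 # S^-
sz = np.diag([0.5, -0.5]).astype(complex)
id2 = np.eye(2, dtype=complex)
L, J, gamma, Delta = 4, 1.0, 0.7, 0.3

def emb(o, n):
    return reduce(np.kron, [o if k == n else id2 for k in range(L)])

# Hermitian XXZ Hamiltonian, Eq. (39)
H = sum(-J / 2 * (emb(sp, l) @ emb(sm, l + 1) + emb(sp, l + 1) @ emb(sm, l))
        - Delta * emb(sz, l) @ emb(sz, l + 1) for l in range(L - 1))

# Jump operators of Eq. (40), boundary terms included
g = np.sqrt(abs(gamma))
jumps = ([g * emb(sm, 0), g * emb(sm, L - 1)]
         + [g * (emb(sm, l) - 1j * np.sign(gamma) * emb(sm, l + 1))
            for l in range(L - 1)])

# H_eff = H - (i/2) sum_l L_l^dag L_l versus the closed form of Eq. (41)
Heff = H - 0.5j * sum(Lk.conj().T @ Lk for Lk in jumps)
claim = (H
         - gamma / 2 * sum(emb(sp, l) @ emb(sm, l + 1)
                           - emb(sp, l + 1) @ emb(sm, l) for l in range(L - 1))
         - 1j * abs(gamma) * sum(emb(sp, l) @ emb(sm, l) for l in range(L)))
```

The two matrices agree to machine precision, confirming that the no-click limit of this monitored XXZ chain is the spin Hatano-Nelson model up to the uniform background decay.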
Although this model can be connected to the fermionic version of the Hatano-Nelson model, the Lindblad equation is not quadratic due to the Jordan-Wigner strings in terms of the form $S^{-}\rho S^{+}$. We compare the dynamics generated by the quantum jumps above with the one induced by a different set of jump operators, which create a spin-flip excitation from site $\ell+1$ to site $\ell$. These read $L_{\ell}=\sqrt{\gamma}S_{\ell}^{+}S_{\ell+1}^{-},$ (42) with $\ell\in\left\{0,\cdots,L-2\right\}$. Previous studies have shown that these operators induce a phenomenon known as the Liouvillian skin effect [24, 105, 106, 107]: there is an exponential localisation of the Liouvillian modes at the boundaries of the system. This becomes evident when the Lindbladian is projected onto the one-particle sector, where, at $J=0$ and $\Delta=0$, it simplifies to an effective Hatano-Nelson Hamiltonian tuned to its exceptional point [105], $\mathcal{H}=\gamma\sum_{\ell=0}^{L-2}S^{+}_{\ell}S^{-}_{\ell+1}.$ (43) The no-click Hamiltonian differs from the Hatano-Nelson model, representing an XXZ chain with an imaginary Ising exchange term and imaginary boundary magnetic fields, $\begin{aligned}\mathcal{H}_{\text{eff}}&=-\frac{J}{2}\sum_{\ell=0}^{L-2}\left[S_{\ell}^{+}S_{\ell+1}^{-}+\text{h.c}\right]-\left[\Delta-i\dfrac{\gamma}{2}\right]\sum_{\ell=0}^{L-2}S_{\ell}^{z}S_{\ell+1}^{z}\\ &\quad+\frac{i\gamma}{4}\left(S_{0}^{z}-S_{L-1}^{z}\right)-\frac{i\gamma}{8}\left(L-1\right)\end{aligned}.$ (44) In Fig. 11, we plot the magnetisation dynamics starting from a product state and evolving under the two types of dissipative evolution. In the left panel, we plot the dynamics generated by Eq. 40. Under this set of jump operators, manifestations of non-reciprocity and charge accumulation are still present in the transient dynamics [108, 109].
We note that these are the same jump operators considered in [109], with the phase $\phi$ defined therein tuned to the non-reciprocal regime. However, in the long-time limit, a generic state converges to a configuration with zero excitations, $\ket{\downarrow\cdots\downarrow}$ [109] (see the left plot of Fig. 11). This is attributed to the recycling terms of the form $c_{n}\rho c^{\dagger}_{n}$, which effectively remove particles from the system. Additionally, the system has another dark state within the one-magnon sector, which only plays a role in decelerating the relaxation dynamics towards the fully polarised down state [109]. The final state obtained under this Lindbladian does not at all resemble the magnetisation profile that one would have obtained in the no-click limit. We then focus on the jump operators in Eq. 42 and plot in Fig. 11 (right panel) the resulting magnetisation dynamics. For the sake of simplicity, we consider the dynamics with $\Delta=0$. The dynamics driven by this model facilitates the accumulation of up spins at the left edge of the chain. The imaginary Ising exchange term diminishes the state’s norm when adjacent spins are antialigned, precipitating a quantum jump that propagates a spin towards the left edge. This phenomenon is depicted in the spatial and temporal evolution of the magnetisation profile shown in Fig. 11. In contrast to the Hatano-Nelson model, this model does not converge to a steady state characterised by zero excitations. This distinction is attributable to a difference in symmetry: the Hatano-Nelson model exhibits only a weak U(1) symmetry, whereas the present model possesses a strong U(1) symmetry, thereby ensuring that the state is kept within the initial magnetisation sector at all times. Figure 12: Top - Time dependence of the conditional entanglement entropy for different system sizes with $\gamma=0.8J$.
On the left, the dynamics corresponds to the jump operators $\sqrt{\left|\gamma\right|}\left(S^{-}_{\ell}-i\text{sgn}\left(\gamma\right)S^{-}_{\ell+1}\right)$, while on the right to $\sqrt{\gamma}S^{+}_{\ell}S^{-}_{\ell+1}$. Bottom left - Time evolution of the entanglement entropy for $\gamma=0.1J$. The inset corresponds to the fit of the steady-state entanglement entropy to the law $S_{L/2}\left(\infty\right)=a_{0}+a_{1}L$, with $a_{0}=\left(0.145\pm 0.002\right)$ and $a_{1}=\left(0.18\pm 0.03\right)$. Bottom right - Steady-state entanglement entropy for different system sizes as a function of the non-Hermitian parameter. These systems were studied under the jump operators $\sqrt{\gamma}S^{+}_{\ell}S^{-}_{\ell+1}$. Other parameter: $\Delta=0$. This disparity in symmetry radically changes the temporal dynamics of the conditionally averaged entanglement entropy, defined as the mean entanglement entropy across all possible quantum trajectories, $\bar{S}_{\ell}\left(t\right)=\int\mathcal{D}\xi_{t}\;\mathcal{P}\left(\xi_{t}\right)S_{\ell}\left(\xi_{t}\right),$ (45) where $S_{\ell}\left(\xi_{t}\right)$ is the entanglement entropy of a given quantum trajectory that evolves according to Eq. 1. At early times, the entanglement entropy in both models increases linearly. Without non-Hermiticity, the system evolves under standard Hermitian dynamics, leading to the entanglement entropy saturating at a value proportional to the system’s volume as it locally thermalises [110, 111]. However, in the presence of non-Hermiticity and quantum jumps, this behaviour can change drastically. In the Hatano-Nelson model with jumps, the entanglement entropy drops to zero after an initial period determined by the measurement rate $\gamma$. Each jump causes a spin flip, driving the system to a state devoid of magnetic excitations. This occurs regardless of the system’s total size, resulting in a trivial entanglement area law for any nonzero value of $\gamma$.
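The per-trajectory entropy $S_{\ell}\left(\xi_{t}\right)$ entering Eq. 45 can be computed directly from the state vector through its Schmidt values; a minimal helper (our own naming):

```python
import numpy as np

def entanglement_entropy(psi, L, ell):
    """Von Neumann entropy of the first `ell` sites of an L-site spin-1/2
    chain, from the Schmidt (singular) values of the reshaped state."""
    M = psi.reshape(2**ell, 2**(L - ell))
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2                          # Schmidt weights
    p = p[p > 1e-14]                  # drop numerically vanishing weights
    return float(-np.sum(p * np.log(p)))
```

Averaging this quantity over the states returned by the Monte Carlo unravelling yields the conditional entropy $\bar{S}_{\ell}(t)$ of Eq. 45.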
The no-click limit of the Hatano-Nelson model also has this area-law scaling of the entanglement entropy; however, there it is driven by the single-particle skin effect [34]. In contrast, the model described by Eq. 44 does not relax to a zero-excitation state, as seen in Fig. 12. The strong U(1) symmetry confines the dynamics to the magnetisation sector of the initial state (the total number of excitations is conserved, since the initial state is an eigenstate of $\sum_{n=0}^{L-1}S^{z}_{n}$ and the Hamiltonian in Eq. 44 has U(1) symmetry). Initially, the system shows a linear growth of the entanglement entropy, reminiscent of the unitary evolution. However, similarly to the Hatano-Nelson model, the steady state supports an area-law entanglement for a positive value of $\gamma$, as seen in the left plot of Fig. 12. This can be understood through the spectral properties of the no-click Hamiltonian (Eq. 44). The spectrum is always complex-valued, and so, in the no-click limit, the steady state corresponds to the right eigenstate with the slowest-decaying mode in the zero-magnetisation sector. In particular, for certain values of $\gamma/J$, the imaginary component of the spectrum is gapped. Thus, the entanglement entropy inevitably follows an area law in the long-time limit, similar to the ground state of a gapped Hamiltonian [112, 32]. The imaginary gap ($\Delta_{\rm Im}$) in the zero-magnetisation sector can be analytically obtained in the limit of $\gamma\gg J$, $\Delta_{\rm Im}=-i\dfrac{\gamma}{2J}+\mathcal{O}\left(\dfrac{J}{\gamma}\right).$ (46) The measurement apparatus effectively disentangles the system, pushing excitations towards the left boundary and preventing the formation of long-range correlations. It is harder to fully confirm this for smaller values of $\gamma$, where the unitary evolution dominates over both the non-Hermitian and stochastic terms.
This mainly affects smaller systems, where finite-size effects are substantial, since terms proportional to $L^{-n}$ with $n>0$ in the entanglement entropy cannot be ignored. For example, for $\gamma=0.1J$, one cannot extrapolate the true entanglement scaling, as the observed linear growth might be an artefact of small system sizes. Nevertheless, the remaining data points clearly indicate the collapse of all system sizes to the same value, thus revealing the area-law nature of the entanglement entropy.

## 7 Conclusions

Throughout this study, we have successfully used the Faber polynomial method to characterise the non-unitary dynamics of both non-interacting and interacting Hatano-Nelson models. Additionally, we have integrated this approach with a high-order Monte Carlo wave function algorithm to examine both Lindblad and continuous monitoring dynamics. In the non-interacting problem, we provided the first numerical evidence supporting the existence of a valid hydrodynamic description for the melting dynamics of a domain wall in the presence of non-Hermitian terms. This finding encourages further developments to properly formalise a theory of generalised hydrodynamics applicable to non-Hermitian systems. Our study also reveals an intriguing competition between the Ising exchange term, which tends to preserve the initial Néel order, and the non-reciprocal XX coupling, which tends to form a domain-wall ordering. For comparable values of $\Delta$ and $J$, the interaction allows the formation of an intermediate region that interpolates between the two magnetic domains, allowing the flow of current. However, we could not reach sufficiently large system sizes to determine whether this region could support a non-equilibrium steady-state current flowing in only one direction, as seen in the non-interacting case.
It is clear, however, that this cannot be the case for $\Delta\gg J$, as the dynamics preserve the initial magnetic ordering, and the current can only exist in the interpolating region between the Néel-ordered domain and the ferromagnetic one generated by the non-reciprocal coupling. Conversely, our work shows that interactions and non-reciprocity help preserve the initial magnetic order when the system is initialised in a domain-wall setup, a result consistent with both the non-interacting non-Hermitian problem and the interacting Hermitian case. This study offers additional insights into the entanglement transition in quantum spin chains exhibiting non-Hermitian or Liouvillian skin effects. In the Hatano-Nelson model with quantum jumps, we found that the area-law behaviour of the entanglement entropy persists for any nonzero $\gamma$, similar to the no-click limit. However, the origins differ: in the no-click limit, the area law stems from the single-particle skin effect, while in the monitored stochastic trajectories it is due to the quantum jumps that relax the system into the fully polarised down spin state, a product state. On the other hand, the dynamics with two-body jump operators, $L_{\ell}\propto S^{+}_{\ell}S^{-}_{\ell+1}$, allows for a non-equilibrium steady state with a magnetisation profile resembling the no-click limit of the Hatano-Nelson model. Furthermore, the average entanglement entropy still follows an area law for finite values of the ratio $\gamma/J$ in the conditional dynamics. The measurement apparatus effectively disentangles the state, suppressing the volume law that would otherwise be generated by unitary dynamics. This work extends previous studies that focused on the one-particle sector [105, 34]. Overall, our results support the utility and applicability of Faber polynomials in various research domains.
This encompasses investigations into measurement-induced phase transitions, open quantum systems, and the exploration of purely non-unitary dynamics governed by non-Hermitian Hamiltonians. Faber polynomials can also be used for computing general spectral properties of non-Hermitian Hamiltonians, potentially replicating the role of Chebyshev polynomials within the kernel polynomial method [64]. This would complement the existing non-Hermitian kernel polynomial method [113, 114], which still uses Chebyshev polynomials but relies on hermitianisation techniques, at the cost of working in a vector space with twice the original dimension. Furthermore, the Faber polynomial approach could potentially be combined with MPS, similar to the developments already made in the Hermitian case [115, 116, 117, 118].

## Acknowledgements

##### Funding information

R.D.S acknowledges funding from Erasmus$+$ (Erasmus Mundus programme of the European Union) and from the Institute Quantum-Saclay under the project _QuanTEdu-France_. We acknowledge the Collège de France IPH cluster, where the numerical calculations were carried out.

## Appendix A Further Details on Faber Polynomials

As discussed previously, Faber polynomials serve as a polynomial basis to represent complex-valued functions that are analytic within a domain $\mathcal{D}$. They are generated by a conformal mapping $\xi(w)$ that maps the complement of a closed disk of radius $\rho$ to the complement of $\mathcal{D}$, $\dfrac{\xi^{\prime}(w)}{\xi(w)-z}=\sum_{n=0}^{\infty}\dfrac{1}{w^{n+1}}F_{n}\left(z\right),$ (47) with $F_{n}(z)$ the $n^{th}$ Faber polynomial generated by the conformal mapping $\xi(w)$, $z\in\mathcal{D}$, and $w$ such that $|w|>\rho$. The existence of such a map, which also satisfies the condition $\xi(w)/w\rightarrow 1$ in the limit $\left|w\right|\rightarrow\infty$, is guaranteed by the Riemann mapping theorem [57, 119].
Furthermore, $\xi(w)$ admits a Laurent expansion at $w=\infty$ of the form $\xi(w)=w+\sum_{m\geq 0}\gamma_{m}w^{-m},$ (48) where $\gamma_{m}\in\mathbb{C}$. Using the Laurent expansion and integrating along the contour defined around the disk of radius $\rho$, it is straightforward to check that the Faber polynomials satisfy the following recurrence relation, $F_{n+1}(z)=zF_{n}(z)-\sum_{j=0}^{n}\gamma_{j}F_{n-j}(z)-n\gamma_{n},\quad n>0,$ (49) where $F_{0}(z)=1$. For our purposes, we are interested in using the Faber polynomials to approximate a given function of our non-Hermitian Hamiltonian, $f\left(\mathcal{H}\right)$. The domain $\mathcal{D}$ is defined by the spectrum of the Hamiltonian. Using Eq. 47, the expansion of $f$ in a Faber series is given by $f(z)=\sum_{k=0}^{+\infty}c_{k}F_{k}(z),$ (50) with the coefficients given by the contour integral $c_{n}=\frac{1}{2\pi i}\int_{|w|=\rho}\frac{f\left(\xi\left(w\right)\right)}{w^{n+1}}dw.$ (51) As stated in the main text, we perform this integral with $\rho=1$ by properly rescaling the Hamiltonian. Furthermore, we compute the Faber coefficients for an elliptic contour. For this contour, the conformal mapping reduces to $\xi(w)=w+\gamma_{0}+\gamma_{1}w^{-1}$, where $\gamma_{0}$ is the centre of the ellipse and $\gamma_{1}=1-b$, with $b$ the minor semiaxis. This maximises the memory efficiency of the algorithm, as the recurrence relation in Eq. 49 only depends on the two previous polynomials, $\begin{aligned}F_{0}(z)&=1,\\ F_{1}(z)&=z-\gamma_{0},\\ F_{2}(z)&=\left(z-\gamma_{0}\right)F_{1}(z)-2\gamma_{1}F_{0}(z),\\ F_{n+1}(z)&=\left(z-\gamma_{0}\right)F_{n}(z)-\gamma_{1}F_{n-1}(z),\quad n\geq 2.\end{aligned}$ (52) The coefficients $c_{n}$ of the Faber series presented in the main text (Eq. 10) are straightforwardly computed by performing the contour integral in Eq. 51 for the function $f(z)=e^{-i\delta t_{s}z}$.
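The whole construction — contour coefficients (Eq. 51) plus the three-term recurrence (Eq. 52) — can be exercised on a scalar test function; the sketch below (our own illustration, with an arbitrary ellipse and $f(z)=e^{-iz}$) approximates the contour integral by an FFT over the unit circle:

```python
import numpy as np

# Elliptic conformal map xi(w) = w + g0 + g1 / w (Laurent series of Eq. 48
# truncated at two terms); g0, g1 are illustrative values
g0, g1 = 0.3, 0.5
f = lambda z: np.exp(-1j * z)        # propagator-like test function

# Faber coefficients via the contour integral of Eq. (51) with rho = 1,
# evaluated as Fourier coefficients over the unit circle (one FFT)
M = 256
w = np.exp(2j * np.pi * np.arange(M) / M)
c = np.fft.fft(f(w + g0 + g1 / w)) / M

def faber_sum(z, N):
    """Evaluate sum_{n=0}^{N} c_n F_n(z) via the recurrence of Eq. (52)."""
    Fprev, Fcur = 1.0 + 0.0j, z - g0          # F_0 and F_1
    total = c[0] * Fprev + c[1] * Fcur
    for n in range(1, N):
        Fnext = (z - g0) * Fcur - g1 * Fprev
        if n == 1:                             # F_2 has the extra -g1 F_0 term
            Fnext -= g1 * Fprev
        total += c[n + 1] * Fnext
        Fprev, Fcur = Fcur, Fnext
    return total
```

For any $z$ inside the ellipse the truncated series converges rapidly to $f(z)$, mirroring how the operator series converges once the spectrum of the rescaled Hamiltonian lies inside the contour.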
This is done through the use of the identity [74], $\exp\left(\dfrac{z}{2}\left[s+\dfrac{a}{s}\right]\right)=\sum_{n=-\infty}^{+\infty}\left(\dfrac{s}{i\sqrt{a}}\right)^{n}J_{n}\left(i\sqrt{a}z\right).$ (53) The Faber polynomials have two interesting limits: when $\gamma_{1}=0$ the conformal mapping corresponds to a circle, and the Faber polynomials reduce to the Taylor polynomials centred around $\gamma_{0}$, $F_{n}(z)=\left(z-\gamma_{0}\right)^{n},$ (54) whereas in the limit $\gamma_{1}=1$ the image of the unit circle degenerates to a segment of the real line, and the Faber polynomials can be related to the Chebyshev polynomials by $\begin{aligned} F_{1}(z)&=T_{1}\left(\dfrac{z-\gamma_{0}}{2}\right),\\\ F_{2}(z)&=T_{2}\left(\dfrac{z-\gamma_{0}}{2}\right)-1,\\\ F_{n}(z)&=T_{n}\left(\dfrac{z-\gamma_{0}}{2}\right),n\geq 2\end{aligned}.$ (55) ## Appendix B General Features on the Hatano-Nelson Model In this Appendix, we review the essential characteristics of the non-interacting Hatano-Nelson model, highlighting the importance of boundary conditions on the resulting physical properties. Specifically, we examine the diagonalisation of the non-interacting Hatano-Nelson model (Eq. 18) under both open and periodic boundary conditions. For OBCs, the Hamiltonian can be diagonalised through a similarity transformation. This is done using the $\text{GL}\left(1\right)$ gauge transformation [34], $\hat{p}_{\ell}^{\dagger}:=e^{\ell\theta}\hat{c}_{\ell}^{\dagger},\quad\hat{q}_{\ell}:=e^{-\ell\theta}\hat{c}_{\ell},$ (56) with $\theta\in\mathbb{R}$, and with $p_{\ell}$ and $q_{\ell}$ two fermionic operators satisfying the unusual anticommutation relations: $\left\\{p^{\dagger}_{\ell},q_{n}\right\\}=\delta_{\ell,n}$, $\left\\{p^{\dagger}_{\ell},p_{n}\right\\}=e^{2\ell\theta}\delta_{\ell,n}$, $\left\\{q^{\dagger}_{\ell},q_{n}\right\\}=e^{-2\ell\theta}\delta_{\ell,n}$ and $\left\\{p_{\ell},q_{n}\right\\}=\left\\{p^{\dagger}_{\ell},q^{\dagger}_{n}\right\\}=0$. 
This indicates the biorthogonality of the Hamiltonian’s eigenbasis. The Hamiltonian can subsequently be expressed in the form $\mathcal{H}_{\text{HN}}=-\sum_{n=0}^{L-2}\left[\frac{e^{\theta}\left(J+\gamma\right)}{2}p_{n}^{\dagger}q_{n+1}+\frac{e^{-\theta}\left(J-\gamma\right)}{2}p_{n+1}^{\dagger}q_{n}\right].$ (57) The $\theta$ parameter is chosen so that the final Hamiltonian becomes Hermitian, $\theta=\frac{1}{2}\log\left(\frac{J-\gamma}{J+\gamma}\right).$ (58) This reduces the Hamiltonian to $\mathcal{H}_{\text{HN}}=-\frac{\sqrt{J^{2}-\gamma^{2}}}{2}\sum_{n=0}^{L-2}\left[p_{n}^{\dagger}q_{n+1}+p_{n+1}^{\dagger}q_{n}\right].$ (59) The Hamiltonian can then be diagonalised through a straightforward Fourier transformation, $\mathcal{H}_{\text{HN}}=-\sqrt{J^{2}-\gamma^{2}}\sum_{k}\cos\left(k\right)p_{k}^{\dagger}q_{k},$ (60) where $k=\frac{\pi}{L+1}\;n,\;n\in\left\\{1,\cdots,L\right\\}$. The quasiparticles generated by $p^{\dagger}_{k}$ and $q^{\dagger}_{k}$ are nonorthogonal, as shown by their anticommutation relations. Furthermore, they correspond to states exponentially localised at the left and right edges of the chain, $\displaystyle p_{k}^{\dagger}\ket{\text{vac}}$ $\displaystyle=\sqrt{\frac{2}{L+1}}\sum_{\ell=0}^{L-1}e^{\ell\theta}\sin\left(k\cdot\left(\ell+1\right)\right)c_{\ell}^{\dagger}\ket{\text{vac}},$ (61) $\displaystyle q_{k}^{\dagger}\ket{\text{vac}}$ $\displaystyle=\sqrt{\frac{2}{L+1}}\sum_{\ell=0}^{L-1}e^{-\ell\theta}\sin\left(k\cdot\left(\ell+1\right)\right)c_{\ell}^{\dagger}\ket{\text{vac}}.$ The parameter $\theta$ controls wave-function localisation, inducing a characteristic length scale $l$, $l\sim\left(\log\left(\frac{J-\gamma}{J+\gamma}\right)\right)^{-1}.$ (62) This characteristic length scale emerges due to the skin effect and is exclusive to the open-boundary-condition scenario. 
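As a quick numerical cross-check of the OBC result above, one can build the single-particle Hatano-Nelson matrix, with amplitude $-(J+\gamma)/2$ for rightward and $-(J-\gamma)/2$ for leftward hops, and confirm that its open-boundary spectrum is purely real and matches Eq. 60. This is a sketch with illustrative parameters ($J=1$, $\gamma=0.2$, $L=20$) not taken from the text:

```python
import numpy as np

# Illustrative parameters (not from the text): hopping J, asymmetry gamma, length L.
J, gamma, L = 1.0, 0.2, 20

# Single-particle Hatano-Nelson matrix with open boundary conditions.
H = np.zeros((L, L))
idx = np.arange(L - 1)
H[idx, idx + 1] = -(J + gamma) / 2   # rightward hopping
H[idx + 1, idx] = -(J - gamma) / 2   # leftward hopping

evals = np.linalg.eigvals(H)

# Eq. 60 predicts a purely real OBC spectrum,
# E = -sqrt(J^2 - gamma^2) cos(k), with k = pi n / (L + 1), n = 1..L.
k = np.pi * np.arange(1, L + 1) / (L + 1)
predicted = np.sort(-np.sqrt(J**2 - gamma**2) * np.cos(k))
```

Despite the asymmetric (non-Hermitian) matrix, the eigenvalues coincide with those of the similarity-transformed Hermitian chain, while the eigenvectors acquire the exponential envelopes of Eq. 61.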
Conversely, under PBCs, the Hamiltonian can be diagonalised using a Fourier transform without employing the GL gauge transformation, $\mathcal{H}_{\text{HN}}=-\sum_{k\in\text{FbZ}}\left[J\cos\left(ka\right)+i\gamma\sin\left(ka\right)\right]c_{k}^{\dagger}c_{k}.$ (63) Unlike the previous case, where the eigenstates were confined to the chain ends, under periodic boundary conditions the right and left eigenstates are indistinguishable and delocalised. Furthermore, the spectrum is complex for any non-zero value of $\gamma$. Figure 13: Comparison between the Faber polynomial expansion and the TEBD (MPO) algorithm, shown on the left (right). The calculations on the left were performed with $\gamma=0.0J$, $\Delta=0.5J$ and $L=18$, while on the right we used $\gamma=0.2J$, $\Delta=0.4J$ and $L=18$. ## Appendix C Comparison with MPS based Methods In this Appendix, we compare the Faber polynomial technique with MPS calculations for the interacting domain-wall melting problem. First, using a second-order time-evolution block decimation (TEBD) algorithm [120], and second, a first-order matrix product operator (MPO) representation of the time-evolution operator [121]. These methods were implemented using the ITensor library [122, 123]. The TEBD algorithm consists of performing a Suzuki-Trotter break-up of the time evolution operator, $e^{-i\delta t\left(\mathcal{H}_{\rm even}+\mathcal{H}_{\rm odd}\right)}\simeq e^{-i\delta t\mathcal{H}_{\rm even}}e^{-i\delta t\mathcal{H}_{\rm odd}},$ (64) where the Hamiltonian in Eq. 32 is decomposed as a sum of two-body terms acting on odd or even bonds, $\mathcal{H}=\mathcal{H}_{\rm odd}+\mathcal{H}_{\rm even}$. In the first-order MPO approach, we instead use an Euler expansion of the time-evolution operator, $e^{-i\delta t\mathcal{H}}\simeq 1-i\delta t\mathcal{H}$, and represent the Hamiltonian through an MPO. 
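The first-order break-up in Eq. 64 neglects the commutator $[\mathcal{H}_{\rm odd},\mathcal{H}_{\rm even}]$, so the one-step error scales as $\delta t^{2}$. A minimal sketch makes this scaling visible; random $4\times 4$ Hermitian matrices serve as stand-ins for the two non-commuting pieces (purely illustrative, and Hermitian only to keep the exact propagator trivial to build):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    # Random Hermitian matrix as a stand-in for H_odd / H_even.
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def u(H, dt):
    # Exact propagator exp(-i dt H) via eigendecomposition of Hermitian H.
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * dt * w)) @ V.conj().T

A, B = rand_herm(4), rand_herm(4)

def step_error(dt):
    # Spectral-norm difference between exp(-i dt (A+B)) and the split product.
    return np.linalg.norm(u(A + B, dt) - u(A, dt) @ u(B, dt), 2)

e_coarse, e_fine = step_error(0.02), step_error(0.01)
ratio = e_coarse / e_fine  # ~4, since the one-step error is O(dt^2)
```

Halving the time step reduces the single-step error by roughly a factor of four, consistent with the $\delta t^{2}$ error of the first-order break-up.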
MPS calculations could be effective for this problem, as the entanglement entropy increases logarithmically over time and the domain-wall order is favoured by non-Hermiticity. However, the TEBD and MPO algorithms face errors that the Faber algorithm avoids. Firstly, there is a restriction on the maximum allowed time step. In the TEBD algorithm, the Hamiltonian is decomposed into smaller two-body terms that can be exactly exponentiated, as shown in Eq. 64. The approximation neglects the nonzero commutator term $\left[\mathcal{H}_{\rm odd},\mathcal{H}_{\rm even}\right]$, causing an error of order $\delta t^{2}$. To minimise this error, the time step must be small relative to the problem’s energy scale. This is evident in the left panel of Fig. 13, where longer integration times with a larger time step deviate from results obtained with Faber polynomials. A comparable error is present in the MPO algorithm due to the use of a first-order expansion. The Faber algorithm circumvents this issue by representing the time-evolution operator to any desired accuracy for any time step, at the cost of computing a correspondingly larger number of polynomials. Another limitation of the TEBD algorithm is the handling of long-range interactions with the Suzuki-Trotter decomposition, which requires the use of advanced techniques such as swap gates [124]. The TEBD and MPO techniques also face inaccuracies because of the truncation of the MPS bond dimension, a problem that the Faber algorithm avoids at the cost of working with a state vector spanning the full Hilbert space. Of course, using the Faber polynomial method limits the analysis to smaller system sizes. However, it can be advantageous in situations where MPS calculations fail, such as when the entanglement entropy scales proportionally to the system size. ## References * [1] H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ , Oxford University Press (Oxford, England, 2002). * [2] H. M. 
Wiseman and G. J. Milburn, _Quantum Measurement and Control_ , Cambridge University Press (Cambridge, England, 2009). * [3] G. Lindblad, _On the generators of quantum dynamical semigroups_ , Communications in Mathematical Physics 48(2), 119 (1976), 10.1007/BF01608499. * [4] V. Gorini, A. Kossakowski and E. C. G. Sudarshan, _Completely positive dynamical semigroups of N-level systems_ , Journal of Mathematical Physics 17(5), 821 (1976), 10.1063/1.522979, https://pubs.aip.org/aip/jmp/article-pdf/17/5/821/19090720/821_1_online.pdf. * [5] D. Manzano, _A short introduction to the Lindblad master equation_ , AIP Advances 10(2), 025106 (2020), 10.1063/1.5115323, https://pubs.aip.org/aip/adv/article-pdf/doi/10.1063/1.5115323/12881278/025106_1_online.pdf. * [6] L. M. Sieberer, M. Buchhold, J. Marino and S. Diehl, _Universality in driven open quantum matter_ (2023), 2312.03073. * [7] G. T. Landi, D. Poletti and G. Schaller, _Nonequilibrium boundary-driven quantum systems: Models, methods, and properties_ , Rev. Mod. Phys. 94, 045006 (2022), 10.1103/RevModPhys.94.045006. * [8] K. Jacobs, _Quantum Measurement Theory and its Applications_ , Cambridge University Press (Cambridge, England, 2014). * [9] G. T. Landi, M. J. Kewming, M. T. Mitchison and P. P. Potts, _Current fluctuations in open quantum systems: Bridging the gap between quantum continuous measurements and full counting statistics_ , PRX Quantum 5(2), 020201 (2024), 10.1103/prxquantum.5.020201. * [10] H. Carmichael, _Statistical Methods in Quantum Optics 1_ , Springer Science & Business Media (Berlin, Germany, 1999). * [11] A. J. Daley, _Quantum trajectories and open many-body quantum systems_ , Advances in Physics 63(2), 77 (2014), 10.1080/00018732.2014.933502. * [12] J. Dalibard, Y. Castin and K. Mølmer, _Wave-function approach to dissipative processes in quantum optics_ , Physical Review Letters 68(5), 580 (1992), 10.1103/physrevlett.68.580. * [13] M. B. Plenio and P. L. 
Knight, _The quantum-jump approach to dissipative dynamics in quantum optics_ , Reviews of Modern Physics 70(1), 101 (1998), 10.1103/revmodphys.70.101. * [14] C. Gneiting, A. V. Rozhkov and F. Nori, _Jump-time unraveling of markovian open quantum systems_ , Physical Review A 104(6), 062212 (2021), 10.1103/physreva.104.062212. * [15] Y. Li, X. Chen and M. P. A. Fisher, _Quantum zeno effect and the many-body entanglement transition_ , Phys. Rev. B 98, 205136 (2018), 10.1103/PhysRevB.98.205136. * [16] B. Skinner, J. Ruhman and A. Nahum, _Measurement-induced phase transitions in the dynamics of entanglement_ , Phys. Rev. X 9, 031009 (2019), 10.1103/PhysRevX.9.031009. * [17] X. Cao, A. Tilloy and A. De Luca, _Entanglement in a fermion chain under continuous monitoring_ , SciPost Physics 7(2) (2019), 10.21468/scipostphys.7.2.024. * [18] Y. Fuji and Y. Ashida, _Measurement-induced quantum criticality under continuous monitoring_ , Phys. Rev. B 102, 054302 (2020), 10.1103/PhysRevB.102.054302. * [19] O. Alberton, M. Buchhold and S. Diehl, _Entanglement transition in a monitored free-fermion chain: From extended criticality to area law_ , Physical Review Letters 126(17), 170602 (2021), 10.1103/physrevlett.126.170602. * [20] X. Turkeshi, A. Biella, R. Fazio, M. Dalmonte and M. Schiró, _Measurement-induced entanglement transitions in the quantum ising chain: From infinite to zero clicks_ , Phys. Rev. B 103, 224210 (2021), 10.1103/PhysRevB.103.224210. * [21] Y. L. Gal, X. Turkeshi and M. Schirò, _Entanglement dynamics in monitored systems and the role of quantum jumps_ , 10.48550/ARXIV.2312.13419 (2023). * [22] R. El-Ganainy, M. Khajavikhan, D. N. Christodoulides and S. K. Ozdemir, _The dawn of non-hermitian optics_ , Communications Physics 2(1) (2019), 10.1038/s42005-019-0130-z. * [23] S. Chandramouli, N. Ossi, Z. H. Musslimani and K. G. 
Makris, _Dispersive hydrodynamics in non-hermitian nonlinear schrödinger equation with complex external potential_ , Nonlinearity 36(12), 6798 (2023), 10.1088/1361-6544/ad065d. * [24] K. Sone and Y. Ashida, _Anomalous topological active matter_ , Physical Review Letters 123(20), 205502 (2019), 10.1103/physrevlett.123.205502. * [25] K. Sone, Y. Ashida and T. Sagawa, _Exceptional non-hermitian topological edge mode and its application to active matter_ , Nature Communications 11(1) (2020), 10.1038/s41467-020-19488-0. * [26] Y. Ashida, Z. Gong and M. Ueda, _Non-hermitian physics_ , Advances in Physics 69(3), 249 (2020), 10.1080/00018732.2021.1876991. * [27] Y. Ashida and M. Ueda, _Full-counting many-particle dynamics: Nonlocal and chiral propagation of correlations_ , Physical Review Letters 120(18), 185301 (2018), 10.1103/physrevlett.120.185301. * [28] A. Bácsi and B. Dóra, _Dynamics of entanglement after exceptional quantum quench_ , Physical Review B 103(8), 085137 (2021), 10.1103/physrevb.103.085137. * [29] B. Dóra, M. A. Werner and C. P. Moca, _Quantum quench dynamics in the luttinger liquid phase of the hatano-nelson model_ , Physical Review B 108(3), 035104 (2023), 10.1103/physrevb.108.035104. * [30] S. Gopalakrishnan and M. J. Gullans, _Entanglement and purification transitions in non-hermitian quantum mechanics_ , Physical Review Letters 126(17), 170503 (2021), 10.1103/physrevlett.126.170503. * [31] Y. Le Gal, X. Turkeshi and M. Schirò, _Volume-to-area law entanglement transition in a non-hermitian free fermionic chain_ , SciPost Physics 14(5) (2023), 10.21468/scipostphys.14.5.138. * [32] C. Zerba and A. Silva, _Measurement phase transitions in the no-click limit as quantum phase transitions of a non-hermitean vacuum_ , SciPost Physics Core 6(3) (2023), 10.21468/scipostphyscore.6.3.051. * [33] X. Turkeshi and M. Schiró, _Entanglement and correlation spreading in non-hermitian spin chains_ , Phys. Rev. 
B 107, L020403 (2023), 10.1103/PhysRevB.107.L020403. * [34] K. Kawabata, T. Numasawa and S. Ryu, _Entanglement phase transition induced by the non-hermitian skin effect_ , Physical Review X 13(2), 021007 (2023), 10.1103/physrevx.13.021007. * [35] T. E. Lee, _Anomalous edge state in a non-hermitian lattice_ , Physical Review Letters 116(13), 133903 (2016), 10.1103/physrevlett.116.133903. * [36] S. Yao and Z. Wang, _Edge states and topological invariants of non-hermitian systems_ , Phys. Rev. Lett. 121, 086803 (2018), 10.1103/PhysRevLett.121.086803. * [37] F. K. Kunst, E. Edvardsson, J. C. Budich and E. J. Bergholtz, _Biorthogonal bulk-boundary correspondence in non-hermitian systems_ , Physical Review Letters 121(2), 026808 (2018), 10.1103/physrevlett.121.026808. * [38] X. Zhang, T. Zhang, M.-H. Lu and Y.-F. Chen, _A review on non-hermitian skin effect_ , Advances in Physics: X 7(1) (2022), 10.1080/23746149.2022.2109431. * [39] K. Zhang, Z. Yang and C. Fang, _Universal non-hermitian skin effect in two and higher dimensions_ , Nature Communications 13(1) (2022), 10.1038/s41467-022-30161-6. * [40] N. Okuma and M. Sato, _Non-hermitian topological phenomena: A review_ , Annual Review of Condensed Matter Physics 14(1), 83 (2023), 10.1146/annurev-conmatphys-040521-033133. * [41] J. Claes and T. L. Hughes, _Skin effect and winding number in disordered non-hermitian systems_ , Physical Review B 103(14), l140201 (2021), 10.1103/physrevb.103.l140201. * [42] L. Li, C. H. Lee, S. Mu and J. Gong, _Critical non-hermitian skin effect_ , Nature Communications 11(1) (2020), 10.1038/s41467-020-18917-4. * [43] K. Kawabata, M. Sato and K. Shiozaki, _Higher-order non-hermitian skin effect_ , Physical Review B 102(20), 205118 (2020), 10.1103/physrevb.102.205118. * [44] X. Zhu, H. Wang, S. K. Gupta, H. Zhang, B. Xie, M. Lu and Y. 
Chen, _Photonic non-hermitian skin effect and non-bloch bulk-boundary correspondence_ , Physical Review Research 2(1), 013280 (2020), 10.1103/physrevresearch.2.013280. * [45] Y. Ma and T. L. Hughes, _Quantum skin hall effect_ , Physical Review B 108(10), l100301 (2023), 10.1103/physrevb.108.l100301. * [46] H. Geng, J. Y. Wei, M. H. Zou, L. Sheng, W. Chen and D. Y. Xing, _Nonreciprocal charge and spin transport induced by non-hermitian skin effect in mesoscopic heterojunctions_ , Phys. Rev. B 107, 035306 (2023), 10.1103/PhysRevB.107.035306. * [47] P. K. Suetin, _Fundamental properties of faber polynomials_ , Russian Mathematical Surveys 19(4), 121 (1964), 10.1070/rm1964v019n04abeh001155. * [48] J. H. Curtiss, _Faber polynomials and the faber series_ , The American Mathematical Monthly 78(6), 577 (1971), 10.1080/00029890.1971.11992813. * [49] H. Tal-Ezer and R. Kosloff, _An accurate and efficient scheme for propagating the time dependent schrödinger equation_ , The Journal of Chemical Physics 81(9), 3967 (1984), 10.1063/1.448136. * [50] A. Suresh, R. D. Soares, P. Mondal, J. P. S. Pires, J. M. V. P. Lopes, A. Ferreira, A. E. Feiguin, P. Plecháč and B. K. Nikolić, _Electron-mediated entanglement of two distant macroscopic ferromagnets within a nonequilibrium spintronic device_ , Physical Review A 109(2), 022414 (2024), 10.1103/physreva.109.022414. * [51] J. P. Santos Pires, B. Amorim and J. M. Viana Parente Lopes, _Landauer transport as a quasisteady state on finite chains under unitary quantum dynamics_ , Physical Review B 101(10), 104203 (2020), 10.1103/physrevb.101.104203. * [52] J. M. A. Pinho, J. P. S. Pires, S. M. João, B. Amorim and J. M. V. P. Lopes, _From bloch oscillations to a steady-state current in strongly biased mesoscopic devices_ , Physical Review B 108(7), 075402 (2023), 10.1103/physrevb.108.075402. * [53] H. P. Veiga, S. M. João, J. M. A. Pinho, J. P. S. Pires and J. M. V. P. 
Lopes, _Unambiguous simulation of diffusive charge transport in disordered nanoribbons_ , 10.48550/ARXIV.2311.03983 (2023). * [54] Y. Huang, D. J. Kouri and D. K. Hoffman, _General, energy-separable faber polynomial representation of operator functions: Theory and application in quantum scattering_ , The Journal of Chemical Physics 101(12), 10493 (1994), 10.1063/1.468481. * [55] W. Huisinga, L. Pesce, R. Kosloff and P. Saalfrank, _Faber and newton polynomial integrators for open-system density matrix propagation_ , The Journal of Chemical Physics 110(12), 5538 (1999), 10.1063/1.478451. * [56] L. Schulz, B. Inci, M. Pech and D. Schulz, _Subdomain-based exponential integrators for quantum liouville-type equations_ , Journal of Computational Electronics 20(6), 2070 (2021), 10.1007/s10825-021-01797-2. * [57] A. G. Borisov and S. V. Shabanov, _Wave packet propagation by the faber polynomial approximation in electrodynamics of passive media_ , Journal of Computational Physics 216(1), 391 (2006), 10.1016/j.jcp.2005.12.011. * [58] H. Fahs, _Investigation on polynomial integrators for time-domain electromagnetics using a high-order discontinuous galerkin method_ , Applied Mathematical Modelling 36(11), 5466 (2012), 10.1016/j.apm.2011.12.055. * [59] R. H. Landau, M. J. Páez and C. C. Bordeianu, _Computational physics_ , Wiley-VCH, Weinheim, 2nd rev. and enl. edn., ISBN 9783527406265 (2011). * [60] N. Hatano and M. Suzuki, _Finding Exponential Product Formulas of Higher Orders_ , pp. 37–68, Springer Berlin Heidelberg, ISBN 9783540315155, 10.1007/11526216_2 (2005). * [61] N. Hatano and D. R. Nelson, _Localization transitions in non-hermitian quantum mechanics_ , Physical Review Letters 77(3), 570 (1996), 10.1103/physrevlett.77.570. * [62] N. Hatano and D. R. Nelson, _Vortex pinning and non-hermitian quantum mechanics_ , Physical Review B 56(14), 8651 (1997), 10.1103/physrevb.56.8651. * [63] M. 
Fruchart, R. Hanai, P. B. Littlewood and V. Vitelli, _Non-reciprocal phase transitions_ , Nature 592(7854), 363 (2021), 10.1038/s41586-021-03375-9. * [64] A. Weiße, G. Wellein, A. Alvermann and H. Fehske, _The kernel polynomial method_ , Reviews of Modern Physics 78(1), 275 (2006), 10.1103/revmodphys.78.275. * [65] A. Braun and P. Schmitteckert, _Numerical evaluation of green’s functions based on the chebyshev expansion_ , Physical Review B 90(16), 165112 (2014), 10.1103/physrevb.90.165112. * [66] A. Ferreira and E. R. Mucciolo, _Critical delocalization of chiral zero energy modes in graphene_ , Phys. Rev. Lett. 115, 106601 (2015), 10.1103/PhysRevLett.115.106601. * [67] S. M. João and J. M. Viana Parente Lopes, _Basis-independent spectral methods for non-linear optical response in arbitrary tight-binding models_ , Journal of Physics: Condensed Matter 32(12), 125901 (2019), 10.1088/1361-648x/ab59ec. * [68] S. M. João, M. Anđelković, L. Covaci, T. G. Rappoport, J. M. V. P. Lopes and A. Ferreira, _Kite: high-performance accurate modelling of electronic structure and response functions of large molecules, disordered crystals and heterostructures_ , Royal Society Open Science 7(2), 191809 (2020), 10.1098/rsos.191809. * [69] S. M. João, J. M. Viana Parente Lopes and A. Ferreira, _High-resolution real-space evaluation of the self-energy operator of disordered lattices: Gade singularity, spin–orbit effects and p-wave superconductivity_ , Journal of Physics: Materials 5(4), 045002 (2022), 10.1088/2515-7639/ac91f9. * [70] G. Faber, _Ueber polynomische entwicklungen_ , Mathematische Annalen 57, 389 (1903). * [71] G. Faber, _Uber polynomische entwicklungen ii_ , Mathematische Annalen 64(1), 116 (1907), 10.1007/bf01449884. * [72] A. W. Sandvik, A. Avella and F. Mancini, _Computational studies of quantum spin systems_ , In _AIP Conference Proceedings_. AIP, 10.1063/1.3518900 (2010). * [73] S. W. 
Ellacott, _Computation of faber series with application to numerical polynomial approximation in the complex plane_ , Mathematics of Computation 40(162), 575 (1983), 10.1090/s0025-5718-1983-0689474-7. * [74] I. S. Gradštejn, I. M. Ryzhik, D. Zwillinger and V. H. Moll, eds., _Table of integrals, series, and products_ , Academic Press, Waltham, MA, eighth edition edn., ISBN 0123849349 (2015). * [75] S. Bravyi, _Lagrangian representation for fermionic linear optics_ , Quantum Info. Comput. 5(3), 216–238 (2005). * [76] N. Laflorencie, _Quantum entanglement in condensed matter systems_ , Physics Reports 646, 1 (2016), 10.1016/j.physrep.2016.06.008. * [77] L. Amico, R. Fazio, A. Osterloh and V. Vedral, _Entanglement in many-body systems_ , Rev. Mod. Phys. 80, 517 (2008), 10.1103/RevModPhys.80.517. * [78] I. Peschel, _Calculation of reduced density matrices from correlation functions_ , Journal of Physics A: Mathematical and General 36(14), L205 (2003), 10.1088/0305-4470/36/14/101. * [79] S.-A. Cheong and C. L. Henley, _Many-body density matrices for free fermions_ , Physical Review B 69(7), 075111 (2004), 10.1103/physrevb.69.075111. * [80] I. Peschel and V. Eisler, _Reduced density matrices and entanglement entropy in free lattice models_ , Journal of Physics A: Mathematical and Theoretical 42(50), 504003 (2009), 10.1088/1751-8113/42/50/504003. * [81] X. Turkeshi, L. Piroli and M. Schirò, _Density and current statistics in boundary-driven monitored fermionic chains_ , Phys. Rev. B 109, 144306 (2024), 10.1103/PhysRevB.109.144306. * [82] P. Jordan and E. Wigner, _Über das paulische Äquivalenzverbot_ , Zeitschrift für Physik 47(9-10), 631 (1928), 10.1007/bf01331938. * [83] S. Sachdev, _Quantum Phase Transitions_ , Cambridge University Press, ISBN 9780521582544 (1999). * [84] T. Antal, Z. Rácz, A. Rákos and G. M. Schütz, _Transport in the $\mathrm{XX}$ chain at zero temperature: Emergence of flat magnetization profiles_, Phys. Rev. 
E 59, 4912 (1999), 10.1103/PhysRevE.59.4912. * [85] J. Lancaster, E. Gull and A. Mitra, _Quenched dynamics in interacting one-dimensional systems: Appearance of current-carrying steady states from initial domain wall density profiles_ , Phys. Rev. B 82, 235124 (2010), 10.1103/PhysRevB.82.235124. * [86] G. Misguich, K. Mallick and P. L. Krapivsky, _Dynamics of the spin-1/2 heisenberg chain initialized in a domain-wall state_ , Physical Review B 96(19), 195151 (2017), 10.1103/physrevb.96.195151. * [87] V. Eisler and F. Maislinger, _Hydrodynamical phase transition for domain-wall melting in the xy chain_ , Phys. Rev. B 98, 161117 (2018), 10.1103/PhysRevB.98.161117. * [88] B. Bertini, M. Collura, J. De Nardis and M. Fagotti, _Transport in out-of-equilibrium XXZ chains: Exact profiles of charges and currents_ , Physical Review Letters 117(20), 207201 (2016), 10.1103/physrevlett.117.207201. * [89] O. A. Castro-Alvaredo, B. Doyon and T. Yoshimura, _Emergent hydrodynamics in integrable quantum systems out of equilibrium_ , Physical Review X 6(4), 041065 (2016), 10.1103/physrevx.6.041065. * [90] F. H. Essler, _A short introduction to generalized hydrodynamics_ , Physica A: Statistical Mechanics and its Applications 631, 127572 (2023), 10.1016/j.physa.2022.127572. * [91] P. Ruggiero, P. Calabrese, B. Doyon and J. Dubail, _Quantum generalized hydrodynamics_ , Physical Review Letters 124(14), 140603 (2020), 10.1103/physrevlett.124.140603. * [92] T. Orito and K.-I. Imura, _Unusual wave-packet spreading and entanglement dynamics in non-hermitian disordered many-body systems_ , Physical Review B 105(2), 024303 (2022), 10.1103/physrevb.105.024303. * [93] H. Spring, V. Könye, F. A. Gerritsma, I. C. Fulga and A. R. Akhmerov, _Phase transitions of wave packet dynamics in disordered non-hermitian systems_ , SciPost Physics 16(5) (2024), 10.21468/scipostphys.16.5.120. * [94] B. 
Barch, _Locality, correlations, information, and non-hermitian quantum systems_ , 10.48550/ARXIV.2405.16842 (2024). * [95] V. Alba and F. Carollo, _Spreading of correlations in markovian open quantum systems_ , Physical Review B 103(2), l020302 (2021), 10.1103/physrevb.103.l020302. * [96] F. Carollo and V. Alba, _Dissipative quasiparticle picture for quadratic markovian open quantum systems_ , Physical Review B 105(14), 144305 (2022), 10.1103/physrevb.105.144305. * [97] F. Rottoli, S. Scopa and P. Calabrese, _Entanglement hamiltonian during a domain wall melting in the free fermi chain_ , Journal of Statistical Mechanics: Theory and Experiment 2022(6), 063103 (2022), 10.1088/1742-5468/ac72a1. * [98] B.-Q. Jin and V. E. Korepin, _Quantum spin chain, toeplitz determinants and the fisher–hartwig conjecture_ , Journal of Statistical Physics 116(1–4), 79 (2004), 10.1023/b:joss.0000037230.37166.42. * [99] P. Calabrese and J. Cardy, _Time dependence of correlation functions following a quantum quench_ , Phys. Rev. Lett. 96, 136801 (2006), 10.1103/PhysRevLett.96.136801. * [100] P. Calabrese and J. Cardy, _Entanglement entropy and conformal field theory_ , Journal of Physics A: Mathematical and Theoretical 42(50), 504005 (2009), 10.1088/1751-8113/42/50/504005. * [101] J.-M. Stéphan and J. Dubail, _Local quantum quenches in critical one-dimensional systems: entanglement, the loschmidt echo, and light-cone effects_ , Journal of Statistical Mechanics: Theory and Experiment 2011(08), P08019 (2011), 10.1088/1742-5468/2011/08/p08019. * [102] S. Mu, C. H. Lee, L. Li and J. Gong, _Emergent fermi surface in a many-body non-hermitian fermionic chain_ , Physical Review B 102(8), 081115 (2020), 10.1103/physrevb.102.081115. * [103] J. Sirker, _Transport in one-dimensional integrable quantum systems_ , SciPost Physics Lecture Notes (2020), 10.21468/scipostphyslectnotes.17. * [104] V. B. Bulchandani, S. Gopalakrishnan and E. 
Ilievski, _Superdiffusion in spin chains_ , Journal of Statistical Mechanics: Theory and Experiment 2021(8), 084001 (2021), 10.1088/1742-5468/ac12c7. * [105] T. Haga, M. Nakagawa, R. Hamazaki and M. Ueda, _Liouvillian skin effect: Slowing down of relaxation processes without gap closing_ , Physical Review Letters 127(7), 070402 (2021), 10.1103/physrevlett.127.070402. * [106] F. Yang, Q.-D. Jiang and E. J. Bergholtz, _Liouvillian skin effect in an exactly solvable model_ , Physical Review Research 4(2), 023160 (2022), 10.1103/physrevresearch.4.023160. * [107] Z. Wang, Y. Lu, Y. Peng, R. Qi, Y. Wang and J. Jie, _Accelerating relaxation dynamics in open quantum systems with liouvillian skin effect_ , Physical Review B 108(5), 054313 (2023), 10.1103/physrevb.108.054313. * [108] X. Li, M. A. Begaowe, S. Zhang and B. Flebus, _Reciprocal reservoir induced non-hermitian skin effect_ , 10.48550/ARXIV.2307.15792 (2023). * [109] S. E. Begg and R. Hanai, _Quantum criticality in open quantum spin chains with nonreciprocity_ , Physical Review Letters 132(12), 120401 (2024), 10.1103/physrevlett.132.120401. * [110] P. Calabrese and J. Cardy, _Evolution of entanglement entropy in one-dimensional systems_ , Journal of Statistical Mechanics: Theory and Experiment 2005(04), P04010 (2005), 10.1088/1742-5468/2005/04/p04010. * [111] M. Fagotti and P. Calabrese, _Evolution of entanglement entropy following a quantum quench: Analytic results for the XY chain in a transverse magnetic field_ , Physical Review A 78(1), 010306 (2008), 10.1103/physreva.78.010306. * [112] M. B. Hastings, _An area law for one-dimensional quantum systems_ , Journal of Statistical Mechanics: Theory and Experiment 2007(08), P08024 (2007), 10.1088/1742-5468/2007/08/P08024. * [113] N. Hatano and J. Feinberg, _Chebyshev-polynomial expansion of the localization length of hermitian and non-hermitian random chains_ , Physical Review E 94(6), 063305 (2016), 10.1103/physreve.94.063305. * [114] G. Chen, F. Song and J. L. 
Lado, _Topological spin excitations in non-hermitian spin chains with a generalized kernel polynomial algorithm_ , Physical Review Letters 130(10), 100401 (2023), 10.1103/physrevlett.130.100401. * [115] A. Holzner, A. Weichselbaum, I. P. McCulloch, U. Schollwöck and J. von Delft, _Chebyshev matrix product state approach for spectral functions_ , Physical Review B 83(19), 195115 (2011), 10.1103/physrevb.83.195115. * [116] F. A. Wolf, J. A. Justiniano, I. P. McCulloch and U. Schollwöck, _Spectral functions and time evolution from the chebyshev recursion_ , Physical Review B 91(11), 115144 (2015), 10.1103/physrevb.91.115144. * [117] J. C. Halimeh, F. Kolley and I. P. McCulloch, _Chebyshev matrix product state approach for time evolution_ , Physical Review B 92(11), 115130 (2015), 10.1103/physrevb.92.115130. * [118] H. D. Xie, R. Z. Huang, X. J. Han, X. Yan, H. H. Zhao, Z. Y. Xie, H. J. Liao and T. Xiang, _Reorthonormalization of chebyshev matrix product states for dynamical correlation functions_ , Physical Review B 97(7), 075111 (2018), 10.1103/physrevb.97.075111. * [119] J. Bak and D. J. Newman, _Complex Analysis_ , Springer New York, ISBN 9781441972880, 10.1007/978-1-4419-7288-0 (2010). * [120] G. Vidal, _Efficient simulation of one-dimensional quantum many-body systems_ , Physical Review Letters 93(4), 040502 (2004), 10.1103/physrevlett.93.040502. * [121] S. Paeckel, T. Köhler, A. Swoboda, S. R. Manmana, U. Schollwöck and C. Hubig, _Time-evolution methods for matrix-product states_ , Annals of Physics 411, 167998 (2019), https://doi.org/10.1016/j.aop.2019.167998. * [122] M. Fishman, S. White and E. Stoudenmire, _The itensor software library for tensor network calculations_ , SciPost Physics Codebases (2022), 10.21468/scipostphyscodeb.4. * [123] M. Fishman, S. White and E. Stoudenmire, _Codebase release 0.3 for itensor_ , SciPost Physics Codebases (2022), 10.21468/scipostphyscodeb.4-r0.3. * [124] E. M. Stoudenmire and S. R. 
White, _Minimally entangled typical thermal state algorithms_ , New Journal of Physics 12(5), 055026 (2010), 10.1088/1367-2630/12/5/055026.
# Designing with Non-Finite Output Dimension via Fourier Coefficients of Neural Waveforms Jonathan S. Kent University of Illinois <EMAIL_ADDRESS> ###### Abstract > Ordinary Deep Learning models require having the dimension of their outputs > determined by a human practitioner prior to training and operation. For > design tasks, this places a hard limit on the maximum complexity of any > designs produced by a neural network, which is disadvantageous if a greater > allowance for complexity would result in better designs. > > In this paper, we introduce a methodology for taking outputs of non-finite > dimension from neural networks, by learning a “neural waveform,” and then > taking as outputs the coefficients of its Fourier series representation. We > then present experimental evidence that neural networks can learn in this > setting on a toy problem. ## Introduction It is taken as read that Deep Learning and neural networks possess incredible power to solve problems that are otherwise intractable. Recent advances have led to them being used in chip design (?), vehicle design (?), and manufacturing (?). But in certain cases, their capabilities remain limited by their architectures. Among these limitations, as will be addressed in this paper, is that neural networks are designed with a finite number of outputs. Given a choice from 1 to 9, a neural network can never choose 10, even if that might be optimal, for example as a number of batteries or axes of motion. Despite an enormous amount of effort in automatic neural network architecture optimization (?; ?; ?; ?; ?), certain traits of these networks still need to be pre-determined. And yet, the actual number of hyperparameters necessary to specify by hand has been decreasing. It is now possible to automatically learn the width of kernels in CNNs (?; ?; ?) and the effective $\Delta t$ in neural ODEs (?). 
It is also possible, over time, to programmatically adjust network depth (?), and most famously the effective learning rate in gradient descent (?). Additionally, it is possible to use RNNs to output sequences of a length determined by the model itself (?; ?), meaning that the output dimension of the network is learned over time. However, optimizing an RNN for the later dimensions of the output space would require significantly more computation, as well as extra requirements for modeling long-term dependencies, a classic weakness of recurrent architectures. Attempts to allow models to operate in an infinite-dimensional space have included the use of Reproducing Kernel Hilbert Spaces (?) and quantum computation (?). These approaches, however, depart substantially from ordinary deep learning practice, which makes them poorly suited to design tasks for which ordinary neural networks are entirely appropriate. In this paper, we will introduce an approach using neural networks as they exist currently, fully capable of being accelerated by modern frameworks, for taking a non-finite number of dimensions as the output of a learned function, enabling models to make design decisions that were not thought of by their human operators.

## Method

Figure 1: The proposed methodology; turning an input $x$ into a “neural waveform” $s_{x}$, and taking its Fourier coefficients $a_{x\omega}$.

This method consists of two components: generating a neural waveform, and calculating its Fourier coefficients.

### Neural Waveform

What we’re calling a “neural waveform” is a periodic function $s_{x}$, which is itself the output of a neural network $\mathcal{S}$, given by $s_{x}(t)=\mathcal{S}(\theta;x,t)$. Here, $\theta$ is the vector of learned parameters for the network, $x$ is the model input, and $t$ is an analogue for time.¹

¹ This is confusing notation, as $\theta,\ x,$ and $t$ all have multiple, overlapping traditional meanings between the contexts of Machine Learning and Harmonic Analysis.
However, it has been chosen in an attempt to maximize the overall legibility of this manuscript.

Computationally, $s_{x}$ takes the form of a set of time-value pairs, sampled using the following method. Over the interval $[-\pi,\pi]$, and with an appropriately large integer $N$, we get a time-step $\Delta t=\frac{2\pi}{N}$, and from there $t_{n}=-\pi+n\Delta t$, for integers $0\leq n\leq N$. We can now compute $s_{x}=\{(t_{n},\mathcal{S}(\theta;x,t_{n}))\mid 0\leq n\leq N\}$, written in a functional form as $s_{x}(t)=v$ for $(t,v)\in s_{x}$. As an additional note, in order to ensure that $s_{x}$ is periodic, i.e. that $s_{x}(t)=s_{x}(t+2n\pi)$, during implementation, $\mathcal{S}$ may instead take $\sin(t)$ and $\cos(t)$ as a pair of inputs, rather than $t$ itself.

### Fourier Coefficients

For a given positive integer angular frequency $\omega$, the Fourier cosine coefficient $a_{x\omega}$ of the waveform $s_{x}$ is given by $a_{x\omega}=\frac{1}{\pi}\int_{-\pi}^{\pi}s_{x}(t)\cos(\omega t)\,dt$ and the sine coefficient $b_{x\omega}$ by $b_{x\omega}=\frac{1}{\pi}\int_{-\pi}^{\pi}s_{x}(t)\sin(\omega t)\,dt$ (?). Because an integral is equal to its mean value times its width, calculating these coefficients is computationally easy. For example,

$a_{x\omega}=\frac{1}{\pi}\int_{-\pi}^{\pi}s_{x}(t)\cos(\omega t)\,dt=\frac{1}{\pi}\sum_{n=0}^{N}s_{x}(t_{n})\cos(\omega t_{n})\,\Delta t=\frac{2}{N}\sum_{n=0}^{N}s_{x}(t_{n})\cos(\omega t_{n}),$

with a similar procedure for $b_{x\omega}$. These Fourier coefficients are then taken as the outputs of $\mathcal{S}(\theta;x,\cdot)$ for the input $x$. However, because the formulae for $a_{x\omega}$ and $b_{x\omega}$ are valid for any value of $\omega$, the model $\mathcal{S}$ can be queried along any number of output dimensions, regardless of the limitations of its architecture.
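The sampling and coefficient formulas above can be checked numerically. The sketch below is a minimal illustration, not the authors' implementation: a fixed test waveform $\cos(3t)$ stands in for a trained network $\mathcal{S}$, and the discretized sum recovers the expected coefficients up to $O(1/N)$ error.

```python
import numpy as np

def sample_waveform(s, N=2048):
    """Sample a periodic waveform s(t) at t_n = -pi + n * dt for n = 0..N,
    with dt = 2*pi / N, as in the text."""
    t = -np.pi + np.arange(N + 1) * (2.0 * np.pi / N)
    return t, s(t)

def cosine_coeff(t, values, omega, N):
    """Discretized cosine coefficient a_omega = (2/N) * sum_n s(t_n) cos(omega t_n)."""
    return (2.0 / N) * np.sum(values * np.cos(omega * t))

# Stand-in for a trained neural waveform S(theta; x, .): here s_x(t) = cos(3t),
# so the omega = 3 coefficient should be ~1 and the other coefficients ~0.
N = 2048
t, v = sample_waveform(lambda t: np.cos(3.0 * t), N)
coeffs = [cosine_coeff(t, v, w, N) for w in range(1, 6)]
```

Replacing the lambda with a batched forward pass of $\mathcal{S}$ over the sampling grid yields the same computation for a learned waveform.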
## Experiments

A toy problem was created to test this kind of model architecture; specifically, whether or not it can learn to produce waveforms satisfying the conditions placed on them by a loss function that uses the Fourier coefficients as model outputs. The toy problem was simple: given an integer input $x$, produce a waveform $s_{x}$ such that $a_{x\omega}=1$ for $x=\omega$ and $a_{x\omega}=0$ for $x\neq\omega$, measured using Mean Squared Error, and sampling both $x$ and $\omega$ from $0\leq x,\omega\leq 15$. A complete implementation of this experiment, including the configuration and hyperparameters, will be included in a Colaboratory notebook. See Figure 2 for the waveforms output by a trained model given $x\in[0,4]$, and Figure 3 for those given $x\in[11,15]$. It is clear that the model is absolutely capable of learning to produce waveforms of some kind. Figure 4 further makes it clear that these waveforms satisfy the conditions placed on them nearly flawlessly: where $x=\omega,\ a_{x\omega}\approx 1$, and where $x\neq\omega,\ a_{x\omega}\approx 0$.

Figure 2: Neural waveforms on the toy problem, given $x\in[0,4]$.

Figure 3: Neural waveforms on the toy problem, given $x\in[11,15]$.

Figure 4: Fourier coefficients on the toy problem, for all inputs and frequencies. Error from the identity matrix is not great enough to appear visually.

## Analyses, Conclusions, and Future Work

This methodology more or less involves taking the inner products between $s_{x}(t)$ and the sinusoidal functions $\sin(\omega t)$ and $\cos(\omega t)$. Attention mechanisms in Transformers and the like (?) involve taking the inner products between the outputs of attention heads and hidden states. As a result, this method is analogous to taking attention weights, where what is being attended to are sine waves. This provides an inroad for applying more results from the work on Attention mechanisms to the problem of taking non-finite outputs from neural networks.
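The toy objective described in the Experiments section can be written compactly: collect the coefficients $a_{x\omega}$ into a matrix and penalize its squared distance from the identity. A minimal sketch (illustrative only; the actual training loop is in the authors' notebook):

```python
import numpy as np

def toy_loss(coeff_matrix):
    """Mean Squared Error between the coefficient matrix A[x, w] = a_{xw}
    and the identity target: a_{xw} should be 1 when x == w, else 0."""
    target = np.eye(coeff_matrix.shape[0])
    return np.mean((coeff_matrix - target) ** 2)

# A perfectly trained model produces the identity matrix, giving zero loss;
# a uniform offset of 0.1 in every entry gives a loss of 0.1**2 = 0.01.
perfect = np.eye(16)  # 16 values, since x and omega are sampled from 0..15
noisy = perfect + 0.1
```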
This method, as it stands, presents an interesting capability which may see use in AI/ML-aided design programs. This will require that it be coupled with some adaptive mechanism to check particular regions of frequencies, so that the frequencies the model is attempting to output in can be sampled with a finite amount of compute. Future work will, of course, involve integration into Computer-Aided Design programs and particular domain areas, like circuit and mechanical design, as well as significant training and testing experimentation on both the neural waveform method, and the coupled adaptive frequency selection mechanism.

## References

* [Carvalho, Ramos, and Chaves 2011] Carvalho, A. R.; Ramos, F. M.; and Chaves, A. A. 2011. Metaheuristics for the feedforward artificial neural network (ann) architecture optimization problem. Neural Computing and Applications 20(8):1273–1284.
* [Chang et al. 2017] Chang, B.; Meng, L.; Haber, E.; Tung, F.; and Begert, D. 2017. Multi-level residual networks from dynamical systems view. arXiv preprint arXiv:1710.10348.
* [Dai et al. 2017] Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, 764–773.
* [Dorf and Tallarida 2018] Dorf, R. C., and Tallarida, R. J. 2018. Pocket book of electrical engineering formulas. CRC Press.
* [Hasani et al. 2020] Hasani, R.; Lechner, M.; Amini, A.; Rus, D.; and Grosu, R. 2020. Liquid time-constant networks. arXiv preprint arXiv:2006.04439.
* [Idrissi et al. 2016] Idrissi, M. A. J.; Ramchoun, H.; Ghanou, Y.; and Ettaouil, M. 2016. Genetic algorithm for neural network architecture optimization. In 2016 3rd International Conference on Logistics Operations Management (GOL), 1–4. IEEE.
* [Khailany et al. 2020] Khailany, B.; Ren, H.; Dai, S.; Godil, S.; Keller, B.; Kirby, R.; Klinefelter, A.; Venkatesan, R.; Zhang, Y.; Catanzaro, B.; et al. 2020.
Accelerating chip design with machine learning. IEEE Micro 40(6):23–32.
* [Kingma and Ba 2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* [Kužnar et al. 2012] Kužnar, D.; Možina, M.; Giordanino, M.; and Bratko, I. 2012. Improving vehicle aeroacoustics using machine learning. Engineering Applications of Artificial Intelligence 25(5):1053–1061.
* [Laforgue et al. 2020] Laforgue, P.; Lambert, A.; Brogat-Motte, L.; and d’Alché Buc, F. 2020. Duality in rkhss with infinite dimensional outputs: Application to robust losses. In International Conference on Machine Learning, 5598–5607. PMLR.
* [Lau et al. 2017] Lau, H.-K.; Pooser, R.; Siopsis, G.; and Weedbrook, C. 2017. Quantum machine learning over infinite dimensions. Physical review letters 118(8):080501.
* [Luo et al. 2018] Luo, R.; Tian, F.; Qin, T.; Chen, E.; and Liu, T.-Y. 2018. Neural architecture optimization. arXiv preprint arXiv:1808.07233.
* [Miikkulainen et al. 2019] Miikkulainen, R.; Liang, J.; Meyerson, E.; Rawal, A.; Fink, D.; Francon, O.; Raju, B.; Shahrzad, H.; Navruzyan, A.; Duffy, N.; et al. 2019. Evolving deep neural networks. In Artificial intelligence in the age of neural networks and brain computing. Elsevier. 293–312.
* [Mikolov et al. 2010] Mikolov, T.; Karafiát, M.; Burget, L.; Cernockỳ, J.; and Khudanpur, S. 2010. Recurrent neural network based language model. In Interspeech, volume 2, 1045–1048. Makuhari.
* [Pintea et al. 2021] Pintea, S. L.; Tomen, N.; Goes, S. F.; Loog, M.; and van Gemert, J. C. 2021. Resolution learning in deep convolutional networks using scale-space theory. arXiv preprint arXiv:2106.03412.
* [Ramchoun et al. 2016] Ramchoun, H.; Idrissi, M. A. J.; Ghanou, Y.; and Ettaouil, M. 2016. Multilayer perceptron: Architecture optimization and training. Int. J. Interact. Multim. Artif. Intell. 4(1):26–30.
* [Romero et al. 2021] Romero, D. W.; Bruintjes, R.-J.; Tomczak, J. M.; Bekkers, E.
J.; Hoogendoorn, M.; and van Gemert, J. C. 2021. Flexconv: Continuous kernel convolutions with differentiable kernel sizes. arXiv preprint arXiv:2110.08059.
* [Sundermeyer, Schlüter, and Ney 2012] Sundermeyer, M.; Schlüter, R.; and Ney, H. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association.
* [Vaswani et al. 2017] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in neural information processing systems, 5998–6008.
* [Wuest et al. 2016] Wuest, T.; Weimer, D.; Irgens, C.; and Thoben, K.-D. 2016. Machine learning in manufacturing: advantages, challenges, and applications. Production & Manufacturing Research 4(1):23–45.
# TransRUPNet for Improved Out-of-Distribution Generalization in Polyp Segmentation

Debesh Jha, Nikhil Kumar Tomar, Ulas Bagci Machine and Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA

###### Abstract

Out-of-distribution (OOD) generalization is a critical challenge in deep learning. It is especially important when the test samples are drawn from a different distribution than the training data. We develop TransRUPNet, a novel real-time deep learning-based architecture based on a Transformer and a residual upsampling network for colorectal polyp segmentation, to improve OOD generalization. The proposed architecture, TransRUPNet, is an encoder-decoder network that consists of three encoder blocks, three decoder blocks, and some additional upsampling blocks at the end of the network. With an image size of $256\times 256$, the proposed method achieves an excellent real-time operation speed of 47.07 frames per second with an average mean dice coefficient score of 0.7786 and a mean Intersection over Union of 0.7210 on the out-of-distribution polyp datasets. The results on the publicly available PolypGen dataset (the OOD dataset in our case) suggest that TransRUPNet can give real-time feedback while retaining high accuracy on the in-distribution dataset. Furthermore, we demonstrate the generalizability of the proposed method by showing that it significantly improves performance on OOD datasets compared to the existing methods.

###### Index Terms: Computer aided diagnosis, out-of-distribution polyp segmentation, transformer, colonoscopy, residual network

## I Introduction

Colonoscopy is widely considered the gold standard for the diagnosis of colon cancer. Early detection of polyps is important, as even a small increase in adenoma detection rate can significantly decrease interval colorectal cancer incidence [1]. Studies suggest a polyp miss rate of around 22–28% [2].
There are several reasons for the polyp miss rate in colonoscopy, for example, the skill of endoscopists, bowel preparation quality, fast withdrawal time, visibility, and differences in polyp characteristics. Deep learning-based algorithms have emerged as a promising approach to improve diagnostic performance by highlighting the presence of precancerous tissue in the colon and reducing the clinical burden. OOD detection and generalization are essential for developing computer-aided diagnostic support systems in colonoscopy, and critical to ensuring the reliability and safety of deep learning systems. In this study, we introduce the novel deep learning architecture TransRUPNet to address the critical need for clinical integration of the polyp segmentation routine, which is real-time and retains high accuracy. Most existing deep learning models are trained under a closed-world assumption, where the test dataset is assumed to be drawn from the same distribution as the training data, known as in-distribution (ID). However, when models are deployed in an open-world scenario, test samples can be out-of-distribution (OOD) and should therefore be handled with caution. The distributional shifts can be caused by semantic shift (e.g., OOD samples are drawn from different classes) or covariate shift (e.g., OOD samples from a different domain). The detection of semantic distribution shift (e.g., due to the occurrence of new classes) is the focal point of OOD detection tasks, where the label space can differ between ID and OOD data and hence the model should not make any prediction. In addition to OOD detection and generalization, several problems adopt the “open-world” assumption and have a similar goal of identifying OOD examples for developing robust systems. The main contributions are as follows:

1. We propose TransRUPNet, an encoder-decoder architecture specifically designed for accurate, real-time, and improved out-of-distribution polyp segmentation.
2.
We compared the performance of TransRUPNet with existing state-of-the-art (SOTA) methods on four different polyp datasets (one in-distribution and three OOD datasets) to show the method’s superiority.
3. Our proposed architecture shows strong generalization performance when compared with eight SOTA methods.

Figure 1: Overall architecture of the TransRUPNet

## II Related Work

Recently, there has been significant advancement in the development of models for polyp segmentation. While U-Net based architectures have been widely used, several other approaches have also been proposed that focus on capturing boundary details and leveraging the camouflage property of polyps. One such architecture is PraNet [3], which uses reverse attention modules to incorporate boundary cues; it combines a global feature map obtained using a parallel partial decoder. Another approach, proposed by [4], introduces a boundary constraint network that utilizes a bilateral boundary extraction module to analyze polyp and non-polyp regions. Polyp-PVT [5] takes a different approach by introducing a camouflage identification module with a pyramid vision transformer (PVT) encoder. This module aims to capture polyp cues that are concealed in low-level features. The success of transformer-based approaches in polyp segmentation has led to the development of more similar works in the field. ColonFormer [6] proposes a hierarchical transformer combined with a hierarchical pyramid network, incorporating a residual axial attention module for efficient polyp segmentation. Overall, this research demonstrates a wide range of architectural variations and techniques used for polyp segmentation, inspiring further work on computer-aided diagnosis systems for colon polyp segmentation.

## III Method

Figure 1 shows the block diagram of the proposed TransRUPNet architecture. It is an encoder-decoder network that begins with a Pyramid Vision Transformer (PVT) [7] as a pre-trained encoder.
We extract three different feature maps from the encoder and pass them through a series of $1\times 1$ Conv, Batch Normalization, and ReLU activation layers to reduce the number of feature channels to $64$. The reduced feature maps are then passed to the up block and the decoder blocks. Within the up block, the input feature map is first passed through a bilinear upsampling layer to upscale the feature map’s height and width to those of the original input image. Next, the upsampled feature map is passed through a residual block to learn a more robust representation. In Figure 2, each component of TransRUPNet is illustrated. The decoder block also begins with a bilinear upsampling layer to increase the spatial dimensions by a factor of $2$; its output is then concatenated with the reduced feature from the encoder. Next, the concatenated feature map is passed through a residual block to learn more robust semantic features, which help to generate a fine-quality segmentation mask. The output from the first decoder block is passed to the next decoder block, which is further followed by an up block. We concatenate the output from all four up blocks into a single feature representation. After that, the concatenated feature map is followed by a residual block, a $1\times 1$ convolution, and a sigmoid activation to generate the final segmentation mask.

Figure 2: Components of the TransRUPNet

TABLE I: Quantitative results on the Kvasir-SEG test dataset.
Method | Backbone | mIoU | mDSC | Recall | Precision | F2 | FPS
---|---|---|---|---|---|---|---
U-Net [8] | - | 0.7472 | 0.8264 | 0.8504 | 0.8703 | 0.8353 | 106.88
U-Net++ [9] | - | 0.7420 | 0.8228 | 0.8437 | 0.8607 | 0.8295 | 81.34
ResU-Net++ [10] | - | 0.5341 | 0.6453 | 0.6964 | 0.7080 | 0.6576 | 43.11
HarDNet-MSEG [11] | HardNet68 | 0.7459 | 0.8260 | 0.8485 | 0.8652 | 0.8358 | 34.80
ColonSegNet [12] | - | 0.6980 | 0.7920 | 0.8193 | 0.8432 | 0.7999 | 73.95
DeepLabV3+ [13] | ResNet50 | 0.8172 | 0.8837 | 0.9014 | 0.9028 | 0.8904 | 67.88
PraNet [3] | Res2Net | 0.8296 | 0.8942 | 0.9060 | 0.9126 | 0.8976 | 31.89
TGANet [14] | ResNet50 | 0.8330 | 0.8982 | 0.9132 | 0.9123 | 0.9029 | 36.58
TransRUPNet (Ours) | PVT | 0.8445 | 0.9005 | 0.9195 | 0.9170 | 0.9048 | 47.07

TABLE II: Quantitative results on the out-of-distribution datasets (all models trained on Kvasir-SEG).

Training dataset: Kvasir-SEG – Test dataset: PolypGen (C6)

Method | Backbone | mIoU | mDSC | Recall | Precision | F2
---|---|---|---|---|---|---
U-Net [8] | - | 0.5384 | 0.6126 | 0.7054 | 0.7508 | 0.6362
U-Net++ [9] | - | 0.5355 | 0.6163 | 0.7340 | 0.7230 | 0.6564
ResU-Net++ [10] | - | 0.2816 | 0.3684 | 0.6220 | 0.3526 | 0.4326
HarDNet-MSEG [11] | HardNet68 | 0.5548 | 0.6341 | 0.7197 | 0.7722 | 0.6487
ColonSegNet [12] | - | 0.4410 | 0.5290 | 0.6199 | 0.6403 | 0.5424
DeepLabV3+ [13] | ResNet50 | 0.7031 | 0.7629 | 0.7773 | 0.8693 | 0.7674
PraNet [3] | Res2Net | 0.6691 | 0.7307 | 0.7612 | 0.8755 | 0.7378
TGANet | ResNet50 | 0.6750 | 0.7382 | 0.7692 | 0.8887 | 0.7391
TransRUPNet (Ours) | PVT | 0.7210 | 0.7786 | 0.8522 | 0.8175 | 0.7929

Training dataset: Kvasir-SEG – Test dataset: CVC-ClinicDB

Method | Backbone | mIoU | mDSC | Recall | Precision | F2
---|---|---|---|---|---|---
U-Net [8] | - | 0.5433 | 0.6336 | 0.6982 | 0.7891 | 0.6563
U-Net++ [9] | - | 0.5475 | 0.6350 | 0.6933 | 0.7967 | 0.6556
ResU-Net++ [10] | - | 0.3585 | 0.4642 | 0.5880 | 0.5770 | 0.5084
HarDNet-MSEG [11] | HardNet68 | 0.6058 | 0.6960 | 0.7173 | 0.8528 | 0.7010
ColonSegNet [12] | - | 0.5090 | 0.6126 | 0.6564 | 0.7521 | 0.6246
DeepLabV3+ [13] | ResNet50 | 0.7388 | 0.8142 | 0.8331 | 0.8735 | 0.8198
PraNet [3] | Res2Net | 0.7286 | 0.8046 | 0.8188 | 0.8968 | 0.8077
TGANet | ResNet50 | 0.7444 | 0.8196 | 0.8290 | 0.8879 | 0.8207
TransRUPNet (Ours) | PVT | 0.7765 | 0.8539 | 0.8736 | 0.8870 | 0.8590

Training dataset: Kvasir-SEG – Test dataset: BKAI-IGH

Method | Backbone | mIoU | mDSC | Recall | Precision | F2
---|---|---|---|---|---|---
U-Net [8] | - | 0.5686 | 0.6347 | 0.6986 | 0.7882 | 0.6591
U-Net++ [9] | - | 0.5592 | 0.6269 | 0.6900 | 0.7968 | 0.6493
ResU-Net++ [10] | - | 0.3204 | 0.4166 | 0.6979 | 0.3922 | 0.5019
HarDNet-MSEG [11] | HardNet68 | 0.5711 | 0.6502 | 0.7420 | 0.7469 | 0.6830
ColonSegNet [12] | - | 0.4910 | 0.5765 | 0.7191 | 0.6644 | 0.6225
DeepLabV3+ [13] | ResNet50 | 0.6589 | 0.7286 | 0.7919 | 0.8123 | 0.7493
PraNet [3] | Res2Net | 0.6609 | 0.7298 | 0.8007 | 0.8240 | 0.7484
TGANet | ResNet50 | 0.6612 | 0.7289 | 0.7740 | 0.8184 | 0.7412
TransRUPNet (Ours) | PVT | 0.7218 | 0.7945 | 0.8497 | 0.8337 | 0.8072

Figure 3: Qualitative example showing polyp segmentation

## IV Experiment

### IV-A Dataset

We use four publicly available colonoscopy polyp segmentation datasets. We consider Kvasir-SEG [15] as the in-distribution dataset and the other datasets, PolypGen [16], BKAI-IGH [17], and CVC-ClinicDB [18], as the OOD datasets. The PolypGen dataset is collected from 6 medical centers in Norway, Italy, France, the United Kingdom, and Egypt, incorporating more than 300 patients. This dataset is complex, as it contains diverse samples from different cohort populations in different countries. Therefore, we use Kvasir-SEG for in-distribution testing and PolypGen, BKAI-IGH, and CVC-ClinicDB for OOD generalization.

### IV-B Experiment setup and configuration

We select the Kvasir-SEG [15] dataset for training all the models. It contains 1000 image and mask pairs. We use 880 images and masks for training our method and the rest for validation and testing. In addition, we perform extensive data augmentation to increase the number of training samples. All the experiments are implemented using the PyTorch framework.
We run all the experiments on an NVIDIA RTX 3090 GPU system. We use the Adam optimizer with a learning rate of 1e-4 and a batch size of 8. Additionally, we use a combined binary cross-entropy and dice loss for training our models.

## V Result

Comparison with SOTA on in-distribution data: Table I shows the results of TransRUPNet. It obtained a mean dice coefficient of 0.9005, mIoU of 0.8445, recall of 0.9195, precision of 0.9170, and F2-score of 0.9048. With an image resolution of $256\times 256$, TransRUPNet obtained a real-time processing speed of 47.07 frames per second (FPS). The most competitive network to TransRUPNet was TGANet, which our architecture outperformed by 1.15% in mIoU and 0.23% in DSC. The processing speed of our network is almost 1.5 times that of TGANet.

Comparison with SOTA on OOD data: We evaluated the performance of TransRUPNet on three OOD datasets. For this, we train the different models on the Kvasir-SEG dataset and test them on PolypGen (Center 6). Note that this is the experimental setup for the EndoCV 2021 Challenge [19]. We obtained an improvement of 4.6% in mIoU and 4.04% in mDSC. Similarly, we obtained an improvement of 3.21% in mIoU and 3.43% in mDSC on the CVC-ClinicDB dataset. Additionally, we obtained an improvement of 6.06% in mIoU and 6.56% in mDSC for TransRUPNet when tested on the BKAI-IGH dataset, as compared to the SOTA TGANet [14]. Figure 3 shows the effectiveness of TransRUPNet in qualitative results. As evidenced by the figure, TransRUPNet avoids issues such as over-segmentation or under-segmentation, which are observed in the case of the SOTA TGANet and PraNet. Additionally, TransRUPNet accurately segments one or more polyps within the frames, even under challenging conditions. This highlights the robustness of TransRUPNet in handling complex scenarios and its ability to delineate the boundaries of polyps correctly.
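The mDSC and mIoU scores reported here average the standard per-image overlap metrics between predicted and ground-truth masks. The sketch below gives the standard per-image definitions on binary masks (standard formulas, not code from the paper; the smoothing constant `eps` is an illustrative choice):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient: 2|P ∩ G| / (|P| + |G|), on binary masks."""
    inter = np.sum(pred * gt)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(gt) + eps)

def iou(pred, gt, eps=1e-7):
    """Intersection over Union: |P ∩ G| / |P ∪ G|, on binary masks."""
    inter = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - inter
    return (inter + eps) / (union + eps)

# Tiny example: the prediction covers two pixels, the ground truth one of them.
pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 0, 0])
```

Note that the reported improvements (e.g., 1.15% in mIoU over TGANet) correspond to absolute differences between table entries: 0.8445 - 0.8330 = 0.0115.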
The performance drop of TransRUPNet relative to the in-distribution dataset is observed because some datasets, such as PolypGen (C6), contain insufficiently cleaned images that show elongated black regions on the left side, leading to distorted resizing and decreased OOD performance. Additionally, there are large variations between the training dataset and the OOD datasets. For instance, BKAI-IGH also contains images from FICE (Flexible spectral Imaging Color Enhancement), BLI (Blue Light Imaging), and LCI (Linked Color Imaging), in addition to WLI (White Light Imaging), which are not present in the training dataset. In the case of CVC-ClinicDB, it is a video sequence dataset, whereas our model is trained on still frames, which might have affected the performance. However, the performance on all the datasets is satisfactory considering the OOD nature of the experiment.

## VI Conclusion

In this study, we proposed the TransRUPNet architecture, leveraging a pre-trained Pyramid Vision Transformer (PVT) as an encoder and incorporating a simple residual block, for accurate polyp segmentation. The experimental results on various in-distribution and OOD datasets demonstrate that TransRUPNet can provide real-time feedback with high accuracy and performs significantly well on OOD datasets compared to the existing methods. By addressing the challenge of OOD generalization and providing reliable polyp segmentation results, TransRUPNet can serve as a strong benchmark for developing computer-aided diagnostic support systems in colonoscopy.

## References

* [1] G. Urban, P. Tripathi, T. Alkayali, M. Mittal, F. Jalali, W. Karnes, and P. Baldi, “Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy,” _Gastroenterology_ , vol. 155, no. 4, pp. 1069–1078, 2018.
* [2] A. Leufkens, M. Van Oijen, F. Vleggaar, and P. Siersema, “Factors influencing the miss rate of polyps in a back-to-back colonoscopy study,” _Endoscopy_ , vol. 44, no. 05, pp.
470–475, 2012. * [3] D.-P. Fan, G.-P. Ji, T. Zhou, G. Chen, H. Fu, J. Shen, and L. Shao, “Pranet: Parallel reverse attention network for polyp segmentation,” in _Proceedings of the International conference on medical image computing and computer-assisted intervention (MICCAI)_ , 2020, pp. 263–273. * [4] G. Yue, W. Han, B. Jiang, T. Zhou, R. Cong, and T. Wang, “Boundary constraint network with cross layer feature integration for polyp segmentation,” _IEEE Journal of Biomedical and Health Informatics_ , 2022. * [5] B. Dong, W. Wang, D.-P. Fan, J. Li, H. Fu, and L. Shao, “Polyp-PVT: polyp segmentation with pyramid vision transformers,” _arXiv preprint arXiv:2108.06932_ , 2021. * [6] N. T. Duc, N. T. Oanh, N. T. Thuy, T. M. Triet, and V. S. Dinh, “Colonformer: An efficient transformer based method for colon polyp segmentation,” _IEEE Access_ , vol. 10, pp. 80 575–80 586, 2022. * [7] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, “Pyramid vision transformer: A versatile backbone for dense prediction without convolutions,” in _Proceedings of the IEEE/CVF international conference on computer vision (ICCV)_ , 2021, pp. 568–578. * [8] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in _Proceedings of the International Conference on Medical image computing and computer-assisted intervention (MICCAI)_ , 2015, pp. 234–241. * [9] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: a nested u-net architecture for medical image segmentation,” in _Deep learning in medical image analysis and multimodal learning for clinical decision support_ , 2018, pp. 3–11. * [10] D. Jha, P. H. Smedsrud, M. A. Riegler, D. Johansen, T. De Lange, P. Halvorsen, and H. D. Johansen, “Resunet++: An advanced architecture for medical image segmentation,” in _Proceedings of the International Symposium on Multimedia (ISM)_ , 2019, pp. 225–2255. * [11] C.-H. Huang, H.-Y. Wu, and Y.-L. 
Lin, “HarDNet-MSEG A Simple Encoder-Decoder Polyp Segmentation Neural Network that Achieves over 0.9 Mean Dice and 86 FPS,” _arXiv preprint arXiv:2101.07172_ , 2021. * [12] D. Jha, S. Ali, N. K. Tomar, H. D. Johansen, D. Johansen, J. Rittscher, M. A. Riegler, and P. Halvorsen, “Real-time polyp detection, localization and segmentation in colonoscopy using deep learning,” _IEEE Access_ , vol. 9, pp. 40 496–40 510, 2021. * [13] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 801–818. * [14] N. K. Tomar, D. Jha, U. Bagci, and S. Ali, “Tganet: text-guided attention for improved polyp segmentation,” in _Proceedings of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022)_ , 2022, pp. 151–160. * [15] D. Jha, P. H. Smedsrud, M. A. Riegler, P. Halvorsen, T. d. Lange, D. Johansen, and H. D. Johansen, “Kvasir-SEG: a segmented polyp dataset,” in _Proceedings of the International Conference on Multimedia Modeling (MMM)_ , 2020, pp. 451–462. * [16] S. Ali, D. Jha, N. Ghatwary, S. Realdon, R. Cannizzaro, O. E. Salem, D. Lamarque, C. Daul, M. A. Riegler, K. V. Anonsen _et al._ , “A multi-centre polyp detection and segmentation dataset for generalisability assessment,” _Scientific Data_ , vol. 10, no. 1, p. 75, 2023. * [17] P. N. Lan, N. S. An, D. V. Hang, D. Van Long, T. Q. Trung, N. T. Thuy, and D. V. Sang, “NeoUNet: towards accurate colon polyp segmentation and neoplasm detection,” _arXiv preprint arXiv:2107.05023_ , 2021. * [18] J. Bernal, F. J. Sánchez, G. Fernández-Esparrach, D. Gil, C. Rodríguez, and F. Vilariño, “WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians,” _Computerized Medical Imaging and Graphics_ , vol. 43, pp. 99–111, 2015. * [19] S. Ali, N. Ghatwary, D. Jha, E. Isik-Polat, G. 
Polat, C. Yang, W. Li, A. Galdran, M.-Á. G. Ballester, V. Thambawita _et al._ , “Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge,” _arXiv preprint arXiv:2202.12031_ , 2022.
# Collective Intelligence in Human-AI Teams: A Bayesian Theory of Mind Approach

Samuel Westby1, Christoph Riedl2

###### Abstract

We develop a network of Bayesian agents that collectively model the mental states of teammates from their observed communication. Using a generative computational approach to cognition, we make two contributions. First, we show that our agent could generate interventions that improve the collective intelligence of a human-AI team beyond what humans alone would achieve. Second, we develop a real-time measure of humans’ theory of mind ability and test theories about human cognition. We use data collected from an online experiment in which 145 individuals in 29 human-only teams of five communicate through a chat-based system to solve a cognitive task. We find that humans (a) struggle to fully integrate information from teammates into their decisions, especially when communication load is high, and (b) have cognitive biases which lead them to underweight certain useful, but ambiguous, information. Our theory of mind ability measure predicts both individual- and team-level performance. Observing teams’ first 25% of messages explains about 8% of the variation in final team performance, a 170% improvement compared to the current state of the art.

## 1 Introduction

The reliance on teamwork in organizations (Wuchty, Jones, and Uzzi 2007), coupled with remarkable recent progress in artificial intelligence, has supercharged the vision to develop collaborative Human-AI teams (Malone and Bernstein 2015; O’Neill et al. 2020). Human-AI teams promise to overcome human biases and information processing limitations, reaching performance higher than human-only teams could (Brynjolfsson, Rock, and Syverson 2018). Despite some recent advances (e.g., Bansal et al. 2019b; Pynadath et al. 2022; Seraj et al.
2022), there remain significant difficulties in developing agents that interact with multiple, heterogeneous humans working on cognitive tasks and engaged in cooperative communication in an ad-hoc team. Here, we draw on research on cognitive processes to develop Human-AI teams and explain collaborative decision making. To communicate efficiently, humans infer the beliefs, opinions, knowledge, and related states of mind of other people (Nickerson 1999; Call and Tomasello 2008). This is referred to as social perceptiveness or theory of mind (ToM; Premack and Woodruff 1978). Recent research on collective intelligence has provided a wide range of empirical evidence suggesting that ToM (and related processes governing collective memory, attention, and reasoning) is a significant predictor of human collective intelligence (Woolley et al. 2010; Riedl et al. 2021; Woolley et al. 2022; Engel et al. 2014). Indeed, ToM is especially beneficial for interdependent cognitive tasks that benefit when teams leverage their members’ expertise (Lewis 2003). Some work suggests that ToM is the mechanism that allows collectives to use more performance-relevant information from their environment than a single individual without such connections could, for example, by facilitating a balance between diversity and cognitive efficiency (Riedl and Woolley 2017; Hong and Page 2004). As human civilization shifts further toward knowledge work (Autor 2014), where the most value is realized if members fully use and integrate their unique expertise, this ability is increasingly important.

Figure 1: Framework of human-AI teaming with Theory of Mind (ToM). a) Nested layers of ToM agents. Agents model ego networks of the individual they shadow. b) Every human team member is paired with an AI agent. Humans send messages to others through a shared environment. The ToM agent infers beliefs for both own ideas (Ego Model), and ideas of others (one Alter Model per network neighbor).
Ego Model is updated with initial information and new knowledge generated by the human through self actualization. Alter Models are updated based on incoming messages from teammates through partner actualization. Agents combine information from the ego and alter models with weighting determined by $\alpha$ denoting ToM ability. Recent work has started to develop a formal account of collective intelligence to explain the relationship between individual interaction and collective performance using (approximate or variational) Bayesian inference (or free energy; Friston 2010, 2013; Heins et al. 2022). The free energy principle is a mathematical framework for multiscale behavioral processes that suggests a system of self-similar agents self-organizes by minimizing variational free energy in its exchanges with the environment (Fig. 1a; Friston, Kilner, and Harrison 2006). Recent extensions have applied the framework to explain human communication (Vasil et al. 2020). A key advantage of this approach is that free energy minimization can be translated into a generative, agent-based process theory (Friston et al. 2017; Kaufmann, Gupta, and Taylor 2021). This generative theory provides a computational approach to cognition (Tenenbaum et al. 2011; Griffiths 2015) that allows us to simultaneously (a) build agents for Human-AI teams that are highly explainable but also (b) test theories about human cognitive processes and measure human theory of mind ability in real time. This promises to advance our understanding of a key process of human collective intelligence. The current state of the art to measure theory of mind—the Reading the Mind in the Eyes test (Baron-Cohen et al. 2001)—is a static, indirect, survey-based instrument, which typically explains about 3% of the variation (Riedl et al. 2021). In this paper, we develop a Bayesian theory of mind agent that can form ad-hoc mental models about its teammates, based exclusively on observations drawn from human communication (Fig.
1b). We use data collected from a large, IRB approved human-subject experiment in which 145 individuals in 29 teams of five, randomly assigned to different social networks controlling the team’s communication topology, communicate through a chat-based system to solve a Hidden Profile task (Stasser and Titus 1985). We then simulate artificial teams in which each human is shadowed by an agent. The agent observes the same incoming and outgoing messages as the human did in the experiment. Modeling human behavior with our generative AI model allows us to test whether people do indeed form mental models of what their teammates know, how effectively they do so, and whether this ability is predictive of team performance. In a final step, we perform a counterfactual simulation to demonstrate how our Bayesian agent could trigger effective interventions that would increase Human-AI team performance over the observed human-only teams. Our work provides a framework that expands theory of mind (and collective intelligence more broadly) from a static construct to a dynamical one that may vary according to situational factors, for example, due to changes in arousal, anxiety, and motivation with dynamically changing task requirements, time pressure, and recognition (Qin et al. 2022; Balietti and Riedl 2021). We contribute to a body of research that has so far mostly used toy models—often using only a single agent—with an application to real data from multi-human communication (Vasil et al. 2020; Kaufmann, Gupta, and Taylor 2021; Albarracin et al. 2022; Heins et al. 2022). Our work generates important cognitive insights into how humans communicate and reason to uncover hidden profiles. In summary, we make four main contributions. 1. We develop a networked Bayesian agent that models beliefs using human-human communication.
We apply the agent to data from a human team experiment and demonstrate how the agent can monitor theory of mind in real time, predict both correct and human answers, and intervene to raise human performance. 2. We find the model accurately captures the decisions made by humans, varying in predictable ways with experimental stimuli like network position and task difficulty. Comparing model fits with simpler “lesioned” ToM models shows the value contributed by each component. 3. We develop two real-time measures for human theory of mind ability. The first, based on observed human communication and decisions, explains 51% of the variation in final team performance. The second, based on communication alone, explains 8% of the variation in final team performance after observing just the first quarter of communication, a 170% improvement compared to the current state of the art, the Reading the Mind in the Eyes test. Simulations of artificial human-AI teams suggest a significant 4% performance increase from AI-triggered interventions. 4. We contribute to cognitive theory by presenting empirical evidence that cognitive biases explain the shortfall of human performance, such as a tendency to under-weight ambiguous information and failure to fully integrate information provided by others. We explain temporal patterns showing that high-functioning teams send the most useful information early before converging on common beliefs. ## 2 Related Work ### Human-Agent teaming. A long history of Human-Agent teaming has evolved alongside technological developments. Early examples such as Vannevar Bush’s Memex (Bush et al. 1945) demonstrate a longstanding fascination with augmenting human performance. Recently, work has specialized in many sub-fields including understanding mental models and related constructs like team situational awareness (Chen and Barnes 2014; Converse, Cannon-Bowers, and Salas 1993; Glikson and Woolley 2020).
For example, effective Human-AI interaction has been shown to rely critically on the ability to form mental models about what AI teammates are doing (Bansal et al. 2019b; Paleja et al. 2021; Bansal et al. 2019a; Gero et al. 2020; Alipour et al. 2021). Significantly less work has focused on how AI can form “mental models” of humans. Fügener et al. (2022) highlight this disparity by identifying situations where humans having mental models of the AI are not helpful, while AI having “mental models” of humans is. Given the challenges of designing multi-agent systems, human-AI teaming work has often focused on studying pairs of one agent and one human (e.g., Bansal et al. 2019a, b; Baker, Saxe, and Tenenbaum 2011; Fügener et al. 2022; Alipour et al. 2021). Furthermore, past work has often side-stepped challenges posed by language-based communication by constraining the scope to spatial or highly stylized tasks (Kaufmann, Gupta, and Taylor 2021; Baker et al. 2017; Khalvati et al. 2019). Others use Wizard of Oz techniques (Schelble et al. 2022; Hohenstein et al. 2022) to facilitate communication-based human-AI teaming interaction. To build an autonomous AI teammate that improves human-only team performance, one must build agents that overcome these obstacles. This becomes more challenging in a team of unfamiliar teammates, without a priori knowledge, while learning dynamically from language-based communication (Stone et al. 2010). ### Multi-agent Bayesian models. Multi-agent Bayesian models have been used to study coordination (Khalvati et al. 2019; Wu et al. 2021), opinion dynamics (Albarracin et al. 2022), efficient information fusion (Pavlin et al. 2010), and theory of mind (Baker et al. 2017). This can be modeled as a partially observable Markov decision process (POMDP) for each agent where the states are the set of other agents’ beliefs and observations are dependent on other agents’ actions (Smith, Friston, and Whyte 2022).
## 3 Hidden Profile & Human Subject Data A primary advantage of teams over lone individuals when solving complex problems is their ability to expand the pool of available information, thereby enabling teams to reach higher quality solutions (Mesmer-Magnus and DeChurch 2009). The Hidden Profile task (Stasser and Titus 1985) is a research task designed to mimic this decision-making scenario in which individuals hold private knowledge (Stone et al. 2010). In the task, some information is commonly held among all team members while each individual is also endowed with unique private information. Subjects do not know what information is shared or private. Information sharing is the central process through which teammates collectively solve the task (Mesmer-Magnus and DeChurch 2009); conversely, failing to share all available information causes them to come to incorrect conclusions. Despite the importance of information sharing for team performance, past research has shown teams often deviate from the optimal use of information (Stasser and Titus 1985). Discussions tend to reinforce information held in common, rather than share information held uniquely by one team member (Nickerson 1999). One reason for this is that individuals impute their own knowledge on others and hence assume that private information is already shared (Nickerson 1999). This gives rise to the “hidden profile” and directly points to avenues in which AI may improve team performance: identifying which information is uniquely held by each teammate and encouraging them to share it. Specifically, an agent may detect if their own mental model diverges from the inferred mental model of another (“I know something that I believe you don’t know”) indicating a window of opportunity for an effective intervention. It also provides the basis for our measure of theory of mind ability. 
Individuals who form more precise mental models of their teammates (and who impute less of their own knowledge on others) will be more efficient communicators who share more useful information in a more targeted manner. We use data from an IRB approved online experiment conducted on the Volunteer Science platform (Radford et al. 2016) in which 145 individuals in 29 human-only teams of five solved a Hidden Profile task (data and code available at https://github.com/riedlc/HumanAITeamsAndCI). The task is framed as a crime investigation: the team needs to pool clues to answer questions about the target, culprit, and time of an art heist. There are six clues for each question. When the clues are combined, the correct answer out of five possible options is obvious. One randomly selected clue is given to every teammate (the public clue) and each individual receives one of the remaining five clues (the private clue, also randomly selected). Teams were randomly assigned to a communication topology using all 21 possible connected five-node graphs (e.g., star, ring, chain; see Fig. 1a for one example). Teams then communicate via text-based chat where each message is sent to all neighboring teammates. After a five minute discussion phase, each individual submits answers for the culprit’s identity, the target, and the day of the art heist. Subjects were recruited from Amazon Mechanical Turk (Paolacci, Chandler, and Ipeirotis 2010) and paid a $0.75 flat fee for participation as well as a $0.25 performance bonus for each correct answer. The entire task took about seven minutes to complete. Subjects are blind to the network topology they have been assigned to as well as the total number of individuals on their team. For simplicity of exposition and analysis, we rely only on the culprit dimension of the task.
We compute individual performance as $\{0,1\}$ depending on whether the culprit guess is correct and team performance as the average individual performance (majority voting does not substantively change results). On average, individuals received $3.1$ (SD $1.9$) chat messages from each partner. To make initial clues and communication machine interpretable, we manually code the content as strong no (SN), maybe no (MN), maybe yes (MY), and strong yes (SY) for each of the five answer options (the inferred states). This creates a set of 20 possible observations. We translate messages into likelihoods for the inferred states using fixed values, either estimated from the data using maximum likelihood estimation (MLE) from a grid search or using untrained intuitive values. For example, the message, “it might be #4”, would be coded as maybe yes with a likelihood of $1.4$ for $\#4$, leaving the likelihoods for the states not mentioned in the message unaffected. Ambiguous statements and messages related to team coordination were coded as “neutral” and dropped from the analysis. Notice that agents form beliefs solely based on the observed human communication, even if humans make certain statements about wrong facts (e.g., “strong yes #4” when the correct answer is #3), or ambiguous statements about correct facts. Agents can thus form wrong beliefs (Albarracin et al. 2022). ## 4 Bayesian Multi-Agent Model We create a networked Bayesian agent for each individual. Each agent “shadows” one human, observing the same messages (both are inside the same Markov blanket; Fig. 1a), and infers human beliefs to derive the answer to the Hidden Profile task. That is, our Bayesian system is a model of beliefs about hidden states of its environment. Ideally, the state inferred after the communication phase is identical to the correct answer to the Hidden Profile task.
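As a concrete illustration of this coding scheme, the sketch below maps a coded message onto a likelihood vector over the five answer options. The weight values are illustrative placeholders in the spirit of the fitted weights reported later, not the exact parameters from the paper.

```python
import numpy as np

# Placeholder information weights for the four message codes
# (the paper fits these by MLE; these values are illustrative only).
WEIGHTS = {"SN": 0.1, "MN": 1.0, "MY": 1.45, "SY": 2.0}

N_OPTIONS = 5  # five possible answers per question

def message_likelihood(code, option):
    """Translate a coded message into a likelihood vector over the states.
    E.g., "it might be #4" becomes ("MY", 3) with 0-indexed options.
    Options not mentioned in the message are left unaffected (weight 1)."""
    likelihood = np.ones(N_OPTIONS)
    likelihood[option] = WEIGHTS[code]
    return likelihood
```

A maybe-yes message about one option thus scales only that option's likelihood, leaving the other four states untouched, exactly as described above.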
The resulting model has five parameters: four information weights SN, MN, MY, SY determining the likelihood distribution of observations under inferred beliefs, and the theory of mind ability $\alpha_{D}$, which modulates the relative weighting of self vs. partner beliefs and which we describe in more detail below. Figure 2: Bayesian model of Theory of Mind. Agent 1, shadowing Player 1, models all teammates in Player 1’s ego network (Markov blanket). In Agent 1’s generative model, Alter$_{3}$ corresponds to Agent 1’s beliefs of Player 3’s beliefs. At time $t$ Player 3 says, “It might be #4”, which is coded as a MY for answer 4. Given this new observation, Agent 1 uses Equation 1 to update its beliefs about Player 3’s beliefs. ### Mental Models. We use Bayesian inference to infer a posterior distribution $p(\textbf{s}\mid\textbf{o})$ over states s (five answer options), from a set of observations o (messages sent between players in the recorded chat). Since there are five discrete states, we can compute posteriors directly without the need to approximate them. More complicated environments may require the use of approximate inference methods like free energy/active inference (Friston, Kilner, and Harrison 2006). Agents are composed of one Ego Model and one Alter Model for each neighbor (Fig. 1b). That is, the model follows a multi-agent paradigm with independent mental models nested within an agent. All models hold a posterior distribution of inferred states, but differ in how they are initialized and updated. Ego Models are initialized with priors derived from the public and private clues assigned to the player (paper stack icons in Fig. 1b) and updated with outgoing messages from a player (self actualization). Alter Models are initialized with uniform priors and updated with incoming messages from the corresponding partner (partner actualization). Mental models are updated by accounting for the surprise of the observation.
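A minimal sketch of this structure, assuming the surprise-weighted update (Equation 1) and the $\alpha_{D}$-weighted aggregation (Equation 2); the class and method names are our own and the five-state posteriors are plain NumPy arrays.

```python
import numpy as np

class MentalModel:
    """Posterior over the 5 answer options, updated with surprise weighting."""
    def __init__(self, prior=None):
        self.posterior = np.full(5, 0.2) if prior is None else np.asarray(prior, dtype=float)

    def update(self, likelihood):
        # Surprise-weighted Bayes rule (Eq. 1): the likelihood is raised to
        # the negative log of the previous time step's posterior.
        surprise = -np.log(self.posterior)
        post = self.posterior * np.asarray(likelihood, dtype=float) ** surprise
        self.posterior = post / post.sum()

class Agent:
    """One Ego Model plus one Alter Model per network neighbor."""
    def __init__(self, ego_prior, neighbors, alpha_d=0.95):
        self.ego = MentalModel(ego_prior)                    # initialized from clues
        self.alters = {j: MentalModel() for j in neighbors}  # uniform priors
        self.alpha_d = alpha_d                               # ToM ability

    def observe(self, sender, likelihood):
        if sender == "self":
            self.ego.update(likelihood)             # self actualization
        else:
            self.alters[sender].update(likelihood)  # partner actualization

    def predict(self):
        # Eq. 2: combine Ego and Alter posteriors, weighting Alters by alpha_d.
        combined = self.ego.posterior.copy()
        for alter in self.alters.values():
            combined *= alter.posterior ** self.alpha_d
        return combined / combined.sum()
```

With `alpha_d=0` the Alter Models drop out and the prediction rests on the Ego Model alone; with `alpha_d=1` they count as much as the Ego Model, mirroring the description below.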
Surprise-weighting thus encodes a preference for posteriors with low surprise. That is, the effect of new observations is diminished relative to the Ego or Alter model’s existing posterior using $p_{i}(s\mid o^{i}_{1:t})\propto p_{i}(s\mid o^{i}_{1:t-1})p_{i}(o^{i}_{t}|s)^{\overbrace{\scriptstyle-\log{p_{i}(s\mid o^{i}_{1:t-1})}}^{\text{surprise}}}$ (1) where $s$ is a state and $o^{i}_{t}$ is an observation (message) sent by player $i$ at time $t$. The likelihood is raised to the negative log of the previous time step’s posterior.

| Model | Performance (% Correct) | Human-Agent Agreement | LogLik | Model Comparison (Likelihood Ratio test) | $\alpha_{D}$ |
|---|---|---|---|---|---|
| Human | 66.2% | | | | |
| Random | 19.6 $\pm$ 3.1% | 20.0 $\pm$ 3.0% | -231.759 | | |
| Prior only | 48.8 $\pm$ 3.1% | 46.6 $\pm$ 2.9% | -215.906 | vs. Random $p<0.0001$ | |
| ToM (self-actualization only, MLE) | 63.6 $\pm$ 1.8% | 73.0 $\pm$ 1.9% | -146.372 | vs. Prior-only $p<0.0001$ | 0 |
| ToM (partner-actualization only, MLE) | 76.7 $\pm$ 1.0% | 66.3 $\pm$ 1.0% | -133.653 | vs. Self-only $p<0.0001$ | 1 |
| ToM (MLE) | 72.8 $\pm$ 0.8% | 75.1 $\pm$ 1.0% | -106.640 | vs. Partner-only $p<0.0001$ | 0.95 |
| ToM (max performance) | 77.2 $\pm$ 0.8% | 71.3 $\pm$ 0.9% | -109.826 | vs. MLE $p=0.012$ | 0.95 |
| ToM (max agreement) | 71.8 $\pm$ 0.7% | 79.7 $\pm$ 0.8% | -118.464 | vs. MLE $p<0.0001$ | 0.45 |
| With random intervention | 79.0 $\pm$ 1.8% | 70.8 $\pm$ 1.6% | | | 0.95 |
| With intervention | 82.1 $\pm$ 0.7% | 70.0 $\pm$ 0.6% | | vs. Rand. int. $p<0.0001$ | 0.95 |

Table 1: Model evaluation results and comparison with human behavior. P-values based on likelihood ratio test. We calculate standard deviations over 100 trials. Information weights (SN, MN, SY, MY) are learned from the data through a grid search: Prior (0.05, 0.05, 2, 2), self-act. (0.05, 0.05, 1.5, 2), partner-act. (0.15, 1, 1.55, 2), MLE (0.1, 1, 1.45, 2), max perf. (0.35, 0.85, 1.95, 2), max agg. (0.05, 0.75, 1.25, 1.95).
Interventions use the same parameters as max performance; comparisons use a $t$-test. ### Agent. The agent is the hierarchical coordinator and aggregator of its mental models. The ToM ability parameter $\alpha_{D}$ modulates the relative weight with which the agent combines its Ego and Alter models. We conceptualize $\alpha_{D}$ as the ability to accurately infer beliefs of other agents and pay attention to them (Apperly and Butterfill 2009). It represents the relative weighting between the agent’s own Ego Model and its Alter Models. When $\alpha_{D}=0$, the Alter posterior is uniform and has no effect and the final prediction is based only on the Ego Model. When $\alpha_{D}=1$, the Alter Models are weighted equally to the Ego Model. Agent $i$ aggregates its mental models into a final posterior distribution using $p_{i}(s\mid\textbf{M}_{i})\propto p_{i}\left(s\mid\textbf{o}^{i}\right)\prod_{\begin{subarray}{c}m\in\textbf{M}_{i}\\ m\neq i\end{subarray}}p_{m}\left(s\mid\textbf{o}^{m}\right)^{\alpha_{D}}$ (2) where $\textbf{M}_{i}$ is Agent $i$’s set of mental models and $p_{m}\left(s\mid\textbf{o}^{m}\right)$ is the posterior of state $s$ for the mental model of player $m$ over $\textbf{o}^{m}$, the set of $i$’s observations of $m$. ### Alternative Models. To test whether the full representational capacity of theory of mind with both self-actualization and partner-actualization loops is necessary to understand human mental states, we formulate two alternative models that “lesion” one or both of the updating loops. This allows us to test whether it is possible to explain human inferences about their teammates without appealing to a fully developed theory of mind. We compute $p$-values from likelihood ratio tests comparing the models. ## 5 Results Figure 3: Human performance varies with task difficulty and number of communication partners. From left to right: a) Human performance decreases with task difficulty, and humans are outperformed by the AI agent in most cases.
b) Agent improves over human performance especially when communicating with many teammates. c) Agents with many communication partners benefit most from high ToM ability $\alpha_{D}$. (parameters: $\alpha_{D}=0.95$, $SN=0.35;MN=0.85;MY=1.95;SY=2$). ### Model Evaluation. We find strong support for the hypothesis that humans use Bayesian inference to model the minds of their teammates and communicate and make decisions according to those models (Table 1). Compared to a model using only prior information (the clues distributed in the experiment), a model capturing humans’ ability to update their own beliefs (self-actualization only) fits the data significantly better. A model allowing humans to update beliefs about their teammates (partner-actualization only) fits significantly better still. Finally, a model including the capability to update both own and partner beliefs has the highest fit. Higher values for $\alpha_{D}$ generally lead to more peaked posterior distributions. This explains why the parameter values that produce the highest likelihood differ slightly from those of the highest accuracy ($\alpha_{D}^{\mathit{MLE}}=0.95$ vs. $\alpha_{D}^{\mathit{maxacc}}=0.45$). In summary, the comparative fit analysis provides reliable evidence for the mutually inferred alignment of attention (cf., mental states) among teammates. Our model accurately captures the judgments of human participants, varying in predictable ways with random experimental manipulation of task difficulty and the number of communication partners. We measure the task difficulty faced by each individual based on how much information the individual can draw about the correct answer from the two clues they initially received. This captures how difficult it is for an individual to guess the correct answer before communicating with other players (this is a somewhat noisy measure as it ignores team-level effects of the clue distributions). 
Not surprisingly, human performance decreases with task difficulty, suggesting that humans suffer from cognitive overload (Fig. 3a). Our agent achieves high accuracy predicting humans’ incorrect answers under high task difficulty (high true-negative rate). Human performance varies with the number of communication partners (Fig. 3b). Given the nature of the task, access to more communication partners should be beneficial as this guarantees access to more information. Humans, however, perform worse with more communication partners while our ToM agent achieves its highest performance when placed in the most central network position (the agent is 20% better than the human with four partners). This suggests that humans struggle to integrate information when communicating with many teammates. This picture becomes even clearer when contrasting this with ToM ability $\alpha_{D}$ (Fig. 3c). Higher levels of ToM ability $\alpha_{D}$ have the highest benefit on performance in central network positions, yet $\alpha_{D}$ hardly matters when connected to just a single teammate. ### Analysis of Human Decision Biases. The ToM model predicts with high accuracy instances in which humans provide the correct answer as well as those in which they provide the wrong answer (48% true-negative accuracy). Comparing the information weighting parameters for optimal performance with the MLE estimates that best fit the data from the human subject experiment, we can directly see why human performance falls short. Humans do not pay enough attention to information ruling out alternatives (optimal information weighting for strong no $0.25$ vs. MLE fit $0.05$). The difference is even more pronounced for ambiguous information (optimal information weighting for maybe no $0.9$ vs. MLE fit $0.05$): humans undervalue information that is ambiguous, yet crucial in arriving at the correct answer.
Because this information is ambiguous, humans may attempt to make sense of it by imputing their own understanding (i.e., resorting to their own prior) instead of updating their beliefs in the direction of the ambiguous message. A similar weighting difference for maybe yes statements suggests that humans communicate strong yes information in vague ways (maybe-ing their statements) and could significantly improve their performance by placing higher weight on such statements (or communicating them more forcefully). ### Measuring Theory of Mind. We propose two measures of human theory of mind ability: $\alpha_{D}$ and $\alpha_{C}$. The first, $\alpha_{D}$, is based on an individual’s ability to form and integrate accurate mental models of others when making decisions and corresponds directly to our model parameter that governs the relative weighting of the Ego vs. Alter Models. The second, $\alpha_{C}$, captures an individual’s ability to communicate the most useful information. We perform maximum likelihood estimation using a grid search over the relevant parameter space (Balietti, Klein, and Riedl 2021). Then, we fix the maximum likelihood estimate of the nuisance parameters for information weighting (SN, MN, MY, SY) but consider the marginal of all values of $\alpha_{D}$. Instead of then picking the global best fitting value for the entire data set, we pick the maximum likelihood estimate of $\alpha_{D}$ separately for each individual. That is, we use the model’s inner ToM working to estimate which value of individual $i$’s $\alpha_{D}$ produces the highest likelihood of the observed decision. For the second measure $\alpha_{C}$, we consider outgoing messages sent by each individual and compute the expected surprise that this message should produce for the recipient, relative to ego’s Alter Model of the recipient. Notice that we compute this internally within the Markov blanket of an agent.
We do not use information about how surprising the message is for the recipient but rather how useful the sender thinks it should be relative to what they think the recipient knows. Intuitively, individuals who possess a high theory of mind ability will be better at sending the right message to the right person compared to those with lower ToM ability. Both measures capture social perceptiveness: how much attention an individual pays to what others in the team know. We find that individual-level ToM ability $\alpha_{D}$ is a strong predictor of individual-level performance ($\beta=0.59;p<0.001;R^{2}=0.26$). Aggregating to the team level, we find that average ToM ability $\alpha_{D}^{\mathit{team}}$ is a strong predictor of final team performance (Fig. 4a). We find that the effect of ToM ability is moderated by average betweenness centrality, suggesting team performance increases most when high-ToM-ability $\alpha_{D}$ individuals occupy high betweenness network positions ($\beta=0.39;p=0.04$). The amount of communication sent within a team, notably, is not a significant predictor of team performance ($\beta=-0.00;p=0.265$). Figure 4: Theory of Mind ability predicts team performance. a) Team average ToM $\alpha_{D}^{\mathit{team}}$ is a strong predictor of the final team performance. b) Communication ToM $\alpha_{C}^{\mathit{team}}$ serves as a real-time measure of collective intelligence. Only about the first 25% of team messages are necessary to make significant predictions of final team performance. c) High- and low-performing teams have markedly different temporal patterns of ToM $\alpha_{C}$. Turning to our analysis of theory of mind communication ability $\alpha_{C}^{\mathit{team}}$, we find that it is a strong predictor of team-level performance ($\beta=0.47;p=0.019$). Given that we can measure $\alpha_{C}$ on the message level, it can serve as a real-time measure of theory of mind.
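The text describes "expected surprise" only informally; one natural operationalization, shown below as our assumption rather than the authors' exact formula, is the Shannon surprisal of the outgoing message under the sender's Alter Model of the recipient.

```python
import numpy as np

def expected_surprise(alter_posterior, message_likelihood):
    """Surprisal of an outgoing message under ego's Alter Model of the
    recipient: -log of the message's marginal likelihood. High values mean
    the sender believes the message is informative for that recipient.
    (This specific formula is our assumption; the paper describes the
    quantity only informally.)"""
    alter_posterior = np.asarray(alter_posterior, dtype=float)
    lik = np.asarray(message_likelihood, dtype=float)
    # Normalize the likelihood weights so they act like p(o | s),
    # then marginalize over the Alter Model's posterior.
    p_message = float(alter_posterior @ (lik / lik.sum()))
    return -np.log(p_message)
```

Under this sketch, a message consistent with what the Alter Model already believes yields low surprisal, while a message about an option the recipient is not believed to know about yields high surprisal, matching the intuition above.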
We find that after observing only the first 25% of a team’s messages, $\alpha_{C}^{\mathit{team}}$ is a significant predictor of final team performance (Fig. 4b). We analyze the temporal pattern in which high- vs. low-performing teams communicate (Fig. 4c). High-performing teams send messages with high information content (high surprise) early during the team task but then send consolidating, low information content messages at the end to facilitate convergence (low surprise). We illustrate this in the example below (∗’s indicate high-surprise messages with novel content).

> Human 1: It will not happen on Tuesday∗
> Human 3: No Wednesday or Friday∗
> Human 2: Monday or Thursday∗
> Human 2: Did you get a no Thursday info?
> Human 5: Yeah I got no Thursday∗
> Human 3: I got no Thursday
> Human 5: So it must be Monday

This suggests that team cognition is not static but instead emerges dynamically and that high-performing teams have the collective capacity to modulate shared cognition dynamically to achieve both efficient information transfer and convergence during different periods of the task. Low-performing teams, on the other hand, fail to send high information content messages and also fail to achieve convergence, sending more surprising messages late during the task. This pattern illustrates that high information content alone is not desirable. Instead, convergence and joint attention to team consensus are crucial (Woolley et al. 2022). The current standard to predict social perceptiveness is the Reading the Mind in the Eyes (RME) test (Baron-Cohen et al. 2001; Almaatouq et al. 2021). Using data from a large meta analysis (Riedl et al. 2021) of 5,279 individuals in 1,356 groups, we find RME explains between 0% and 3% of the variation in team performance (depending on the task). Our $\alpha_{C}^{\mathit{team}}$ measure explains 8% after observing only 25% of team communication, an improvement of about 170%.
Our proposed measure captures social perceptiveness passively and in real time, which can be used to interpret a team’s current status and determine opportunities for interventions. Furthermore, RME captures social perceptiveness of a single individual, while our measure is group based. Our work also extends previous measures of “information diversity” (Riedl and Woolley 2017). It thus captures aspects of collective attention and memory (Gupta and Woolley 2020). ### Human-Agent Team Performance. So far, our AI agent has only passively shadowed its assigned human, reasoning about the mental states of that human and its connected teammates. In this section, we extend this reasoning to allow the agent to trigger interventions that could be deployed in human-AI teams and quantify what performance improvement this might yield. We perform a counterfactual simulation in which we allow each AI agent to identify and send one message to each network neighbor. Each agent compares its Ego Model against its Alter Models to identify divergence in inferred beliefs. Each agent then draws from the set of messages it received during the team discussion and chooses a message to send to each neighbor. To do this, the agent calculates the effect of sharing one of its available messages on the Alter Models and shares the message that results in the lowest KL divergence, defined as $D_{\text{KL}}(Q\,||\,P)=\sum_{i}Q(i)\ln\frac{Q(i)}{P(i)}$, between the Ego and Alter$_{i}$ posteriors over all five possible answer options. If no message lowers the KL divergence, the agent shares no message.
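The selection rule just described can be sketched as follows; the hypothetical Alter update reuses the surprise-weighted rule of Equation 1, and all function names are illustrative.

```python
import numpy as np

def kl(q, p):
    """D_KL(Q || P) = sum_i Q(i) ln(Q(i) / P(i))."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

def choose_intervention(ego_posterior, alter_posterior, candidate_likelihoods):
    """Pick the observed message whose hypothetical effect on the Alter
    Model minimizes KL divergence between Ego and Alter posteriors.
    Returns None if no message lowers the current divergence."""
    def updated(post, lik):
        # Surprise-weighted update (Eq. 1) applied hypothetically.
        post = np.asarray(post, dtype=float)
        new = post * np.asarray(lik, dtype=float) ** (-np.log(post))
        return new / new.sum()

    best_msg, best_div = None, kl(ego_posterior, alter_posterior)
    for msg, lik in candidate_likelihoods.items():
        div = kl(ego_posterior, updated(alter_posterior, lik))
        if div < best_div:
            best_msg, best_div = msg, div
    return best_msg
```

Sharing nothing when no candidate helps mirrors the "no message" option in the baseline comparison that follows.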
This is summarized as taking the action $a_{ij}$ for each agent $i$ and neighbor $j$ where $a_{ij}=\operatorname*{arg\,min}_{m\in\textbf{o}}\,D_{\text{KL}}\left(p_{\text{ego}_{i}}(\textbf{s}\mid\textbf{o})\,||\,p_{\text{alter}_{j}}(\textbf{s}\mid\textbf{o}^{\prime},m)\right)$ (3) In Eq. 3, $m$ is selected from the set of messages $\textbf{o}$ that agent $i$ sent or received, $\textbf{s}$ is a vector of the five possible answers, and $\textbf{o}^{\prime}$ is the set of messages agent $i$ received from agent $j$. To establish a baseline intervention, we let $a_{ij}$ be a random message in $\textbf{o}\cup\{\text{no message}\}$. Here, performance improves to 79.0 $\pm$ 1.8% averaged over 100 trials. For the targeted intervention, performance improves by 4.9% to 82.1 $\pm$ 0.7%, which is significantly higher ($t$-test $p<0.0001$) than the random intervention. Notice that this intervention would not be possible without our ToM-based multi-agent model. Without it, we could not determine which message to send to which alter. ## 6 Discussion We develop a framework that combines theory of mind, Bayesian inference, and collective intelligence into a generative computational model. Our model accurately captures the decisions made by human participants in a cognitive decision-making experiment, varying in predictable ways with experimental manipulation of task difficulty and network position. Our results suggest that humans use Bayesian inference and Theory of Mind to model their own beliefs and those of their teammates and communicate and make decisions according to those models. We provide empirical evidence that humans do not do this perfectly but suffer from cognitive biases. Nonetheless, our Bayesian agent is robust and achieves high performance even when fed biased and incorrect information, providing a pathway to implement high-performing human-AI teams. Notably, our agent works in ad hoc teams with heterogeneous partners without any pretraining.
In such human-AI teams, our AI could augment humans’ limited cognitive memory, attention, and reasoning abilities to increase collective intelligence. We show empirical evidence that the collective dynamics of Bayesian agents updating probabilities of hypotheses using observations collectively predict performance at the team level. This provides the basis for a real-time measure of theory of mind ability, and perhaps even collective intelligence more broadly (Heins et al. 2022). The better the mental models of the team members align—the less surprising observations drawn from communication become—the higher the team’s collective intelligence. Our implementation of direct surprise weighting could be extended with a fuller implementation of the free energy principle that would allow agents to learn asymmetric beliefs about the reliability of their partners’ signals. Taken together, this is a framework to capture the emergence of collective memory, attention, and reasoning in real time (Luria 1973; Gupta and Woolley 2020). ## Acknowledgements This work was supported by the Army Research Laboratory [Grant W911NF-19-2-0135]. ## References * Albarracin et al. (2022) Albarracin, M.; Demekas, D.; Ramstead, M. J.; and Heins, C. 2022. Epistemic communities under active inference. _Entropy_ , 24(4): 476. * Alipour et al. (2021) Alipour, K.; Ray, A.; Lin, X.; Cogswell, M.; Schulze, J. P.; Yao, Y.; and Burachas, G. T. 2021. Improving users’ mental model with attention-directed counterfactual edits. _Applied AI Letters_ , 2(4): e47. * Almaatouq et al. (2021) Almaatouq, A.; Alsobay, M.; Yin, M.; and Watts, D. J. 2021. Task complexity moderates group synergy. _Proceedings of the National Academy of Sciences_ , 118(36): e2101062118. * Apperly and Butterfill (2009) Apperly, I. A.; and Butterfill, S. A. 2009. Do humans have two systems to track beliefs and belief-like states? _Psychological Review_ , 116(4): 953. * Autor (2014) Autor, D. H. 2014. 
Skills, education, and the rise of earnings inequality among the “other 99 percent”. _Science_ , 344(6186): 843–851. * Baker, Saxe, and Tenenbaum (2011) Baker, C.; Saxe, R.; and Tenenbaum, J. 2011. Bayesian theory of mind: Modeling joint belief-desire attribution. In _Proceedings of the Thirty-Third Annual Meeting of the Cognitive Science Society_ , 2469–2474. * Baker et al. (2017) Baker, C. L.; Jara-Ettinger, J.; Saxe, R.; and Tenenbaum, J. B. 2017. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. _Nature Human Behaviour_ , 1(4): 1–10. * Balietti, Klein, and Riedl (2021) Balietti, S.; Klein, B.; and Riedl, C. 2021. Optimal design of experiments to identify latent behavioral types. _Experimental Economics_ , 24(3): 772–799. * Balietti and Riedl (2021) Balietti, S.; and Riedl, C. 2021. Incentives, competition, and inequality in markets for creative production. _Research Policy_ , 50(4): 104212. * Bansal et al. (2019a) Bansal, G.; Nushi, B.; Kamar, E.; Lasecki, W. S.; Weld, D. S.; and Horvitz, E. 2019a. Beyond accuracy: The role of mental models in human-AI team performance. In _Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing_ , 2–11. * Bansal et al. (2019b) Bansal, G.; Nushi, B.; Kamar, E.; Weld, D. S.; Lasecki, W. S.; and Horvitz, E. 2019b. Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff. In _Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence_ , 2429–2437. * Baron-Cohen et al. (2001) Baron-Cohen, S.; Wheelwright, S.; Hill, J.; Raste, Y.; and Plumb, I. 2001. The “Reading the Mind in the Eyes” Test revised version: a study with normal adults, and adults with Asperger syndrome or high-functioning autism. _The Journal of Child Psychology and Psychiatry and Allied Disciplines_ , 42(2): 241–251. * Brynjolfsson, Rock, and Syverson (2018) Brynjolfsson, E.; Rock, D.; and Syverson, C. 2018. 
Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. In _The Economics of Artificial Intelligence: An Agenda_ , 23–57. Chicago, IL: University of Chicago Press. * Bush et al. (1945) Bush, V.; et al. 1945. As we may think. _The Atlantic Monthly_ , 176(1): 101–108. * Call and Tomasello (2008) Call, J.; and Tomasello, M. 2008. Does the chimpanzee have a theory of mind? 30 years later. _Trends in Cognitive Sciences_ , 12(5): 187–192. * Chen and Barnes (2014) Chen, J. Y.; and Barnes, M. J. 2014. Human–agent teaming for multirobot control: A review of human factors issues. _IEEE Transactions on Human-Machine Systems_ , 44(1): 13–29. * Converse, Cannon-Bowers, and Salas (1993) Converse, S.; Cannon-Bowers, J.; and Salas, E. 1993. Shared mental models in expert team decision making. _Individual and group decision making: Current issues_ , 221: 221–46. * Engel et al. (2014) Engel, D.; Woolley, A. W.; Jing, L. X.; Chabris, C. F.; and Malone, T. W. 2014. Reading the mind in the eyes or reading between the lines? Theory of mind predicts collective intelligence equally well online and face-to-face. _PloS One_ , 9(12): e115212. * Friston (2010) Friston, K. 2010. The free-energy principle: a unified brain theory? _Nature Reviews Neuroscience_ , 11(2): 127–138. * Friston (2013) Friston, K. 2013. Life as we know it. _Journal of the Royal Society Interface_ , 10(86): 20130475. * Friston et al. (2017) Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; and Pezzulo, G. 2017. Active inference: a process theory. _Neural Computation_ , 29(1): 1–49. * Friston, Kilner, and Harrison (2006) Friston, K.; Kilner, J.; and Harrison, L. 2006. A free energy principle for the brain. _Journal of Physiology_ , 100(1-3): 70–87. * Fügener et al. (2022) Fügener, A.; Grahl, J.; Gupta, A.; and Ketter, W. 2022. Cognitive challenges in human–artificial intelligence collaboration: investigating the path toward productive delegation. 
_Information Systems Research_ , 33(2): 678–696. * Gero et al. (2020) Gero, K. I.; Ashktorab, Z.; Dugan, C.; Pan, Q.; Johnson, J.; Geyer, W.; Ruiz, M.; Miller, S.; Millen, D. R.; Campbell, M.; et al. 2020. Mental models of AI agents in a cooperative game setting. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ , 1–12. * Glikson and Woolley (2020) Glikson, E.; and Woolley, A. W. 2020. Human trust in artificial intelligence: Review of empirical research. _Academy of Management Annals_ , 14(2): 627–660. * Griffiths (2015) Griffiths, T. L. 2015. Manifesto for a new (computational) cognitive revolution. _Cognition_ , 135: 21–23. * Gupta and Woolley (2020) Gupta, P.; and Woolley, A. W. 2020. The emergence of collective intelligence behavior. Paper presented at the _8th ACM Collective Intelligence (CI) Conference_ , Virtual Event, Zurich, Switzerland. * Heins et al. (2022) Heins, C.; Klein, B.; Demekas, D.; Aguilera, M.; and Buckley, C. 2022. Spin glass systems as collective active inference. _arXiv preprint arXiv:2207.06970_. * Hohenstein et al. (2022) Hohenstein, J.; Larson, L. E.; Hou, Y. T.-Y.; Harris, A. M.; Schecter, A.; Dechurch, L.; Contractor, N.; and Jung, M. F. 2022. Vero: A Method for Remotely Studying Human-AI Collaboration. In _Proceedings of the 55th Hawaii International Conference on System Sciences_. * Hong and Page (2004) Hong, L.; and Page, S. E. 2004. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. _Proceedings of the National Academy of Sciences_ , 101(46): 16385–16389. * Kaufmann, Gupta, and Taylor (2021) Kaufmann, R.; Gupta, P.; and Taylor, J. 2021. An active inference model of collective intelligence. _Entropy_ , 23(7): 830. * Khalvati et al. (2019) Khalvati, K.; Park, S. A.; Mirbagheri, S.; Philippe, R.; Sestito, M.; Dreher, J.-C.; and Rao, R. P. 2019. Modeling other minds: Bayesian inference explains human choices in group decision-making. 
_Science Advances_ , 5(11): eaax8783. * Lewis (2003) Lewis, K. 2003. Measuring transactive memory systems in the field: scale development and validation. _Journal of Applied Psychology_ , 88(4): 587. * Luria (1973) Luria, A. R. 1973. _The Working Brain: An Introduction to Neuropsychology_. New York, NY: Basic Books. * Malone and Bernstein (2015) Malone, T. W.; and Bernstein, M. S. 2015. _Handbook of Collective Intelligence_. MIT Press. * Mesmer-Magnus and DeChurch (2009) Mesmer-Magnus, J. R.; and DeChurch, L. A. 2009. Information sharing and team performance: a meta-analysis. _Journal of Applied Psychology_ , 94(2): 535. * Nickerson (1999) Nickerson, R. S. 1999. How we know—and sometimes misjudge—what others know: Imputing one’s own knowledge to others. _Psychological Bulletin_ , 125(6): 737. * O’Neill et al. (2020) O’Neill, T.; McNeese, N.; Barron, A.; and Schelble, B. 2020. Human–autonomy teaming: A review and analysis of the empirical literature. _Human Factors_ , 64: 904–938. * Paleja et al. (2021) Paleja, R.; Ghuy, M.; Ranawaka Arachchige, N.; Jensen, R.; and Gombolay, M. 2021. The utility of explainable AI in ad hoc human-machine teaming. _Advances in Neural Information Processing Systems (NeurIPS)_ , 34: 610–623. * Paolacci, Chandler, and Ipeirotis (2010) Paolacci, G.; Chandler, J.; and Ipeirotis, P. G. 2010. Running experiments on Amazon Mechanical Turk. _Judgment and Decision Making_ , 5(5): 411–419. * Pavlin et al. (2010) Pavlin, G.; de Oude, P.; Maris, M.; Nunnink, J.; and Hood, T. 2010. A multi-agent systems approach to distributed Bayesian information fusion. _Information Fusion_ , 11(3): 267–282. * Premack and Woodruff (1978) Premack, D.; and Woodruff, G. 1978. Does the chimpanzee have a theory of mind? _Behavioral and Brain Sciences_ , 1(4): 515–526. * Pynadath et al. (2022) Pynadath, D. V.; Dilkina, B.; Jeong, D. C.; John, R. S.; Marsella, S. C.; Merchant, C.; Miller, L. C.; and Read, S. J. 2022. Disaster world. 
_Computational and Mathematical Organization Theory_ , in press. * Qin et al. (2022) Qin, Y.; Zhang, W.; Lee, R.; Sun, X.; and Sajda, P. 2022. Predictive Power of Pupil Dynamics in a Team-Based Virtual Reality Task. In _2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops_ , 592–593. IEEE. * Radford et al. (2016) Radford, J.; Pilny, A.; Reichelmann, A.; Keegan, B.; Welles, B. F.; Hoye, J.; Ognyanova, K.; Meleis, W.; and Lazer, D. 2016. Volunteer science: An online laboratory for experiments in social psychology. _Social Psychology Quarterly_ , 79(4): 376–396. * Riedl et al. (2021) Riedl, C.; Kim, Y. J.; Gupta, P.; Malone, T. W.; and Woolley, A. W. 2021. Quantifying collective intelligence in human groups. _Proceedings of the National Academy of Sciences_ , 118(21): e2005737118. * Riedl and Woolley (2017) Riedl, C.; and Woolley, A. W. 2017. Teams vs. crowds: A field test of the relative contribution of incentives, member ability, and emergent collaboration to crowd-based problem solving performance. _Academy of Management Discoveries_ , 3(4): 382–403. * Schelble et al. (2022) Schelble, B. G.; Flathmann, C.; McNeese, N. J.; Freeman, G.; and Mallick, R. 2022. Let’s Think Together! Assessing Shared Mental Models, Performance, and Trust in Human-Agent Teams. _Proceedings of the ACM on Human-Computer Interaction_ , 6(GROUP): 1–29. * Seraj et al. (2022) Seraj, E.; Wang, Z.; Paleja, R.; Martin, D.; Sklar, M.; Patel, A.; and Gombolay, M. 2022. Learning efficient diverse communication for cooperative heterogeneous teaming. In _Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems_ , 1173–1182. * Smith, Friston, and Whyte (2022) Smith, R.; Friston, K. J.; and Whyte, C. J. 2022. A step-by-step tutorial on active inference and its application to empirical data. _Journal of Mathematical Psychology_ , 107: 102632. * Stasser and Titus (1985) Stasser, G.; and Titus, W. 1985. 
Pooling of unshared information in group decision making: Biased information sampling during discussion. _Journal of Personality and Social Psychology_ , 48(6): 1467. * Stone et al. (2010) Stone, P.; Kaminka, G. A.; Kraus, S.; and Rosenschein, J. S. 2010. Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination. In _Proceedings of the Twenty-Fourth Conference on Artificial Intelligence_. * Tenenbaum et al. (2011) Tenenbaum, J. B.; Kemp, C.; Griffiths, T. L.; and Goodman, N. D. 2011. How to grow a mind: Statistics, structure, and abstraction. _Science_ , 331(6022): 1279–1285. * Vasil et al. (2020) Vasil, J.; Badcock, P. B.; Constant, A.; Friston, K.; and Ramstead, M. J. 2020. A world unto itself: human communication as active inference. _Frontiers in Psychology_ , 11: 417. * Woolley et al. (2010) Woolley, A. W.; Chabris, C. F.; Pentland, A.; Hashmi, N.; and Malone, T. W. 2010. Evidence for a collective intelligence factor in the performance of human groups. _Science_ , 330(6004): 686–688. * Woolley et al. (2022) Woolley, A. W.; Chow, R. M.; Mayo, A. T.; Riedl, C.; and Chang, J. W. 2022. Collective Attention and Collective Intelligence: The Role of Hierarchy and Team Gender Composition. _Organization Science_ , in press. * Wu et al. (2021) Wu, S. A.; Wang, R. E.; Evans, J. A.; Tenenbaum, J. B.; Parkes, D. C.; and Kleiman-Weiner, M. 2021. Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration. _Topics in Cognitive Science_ , 13(2): 414–432. * Wuchty, Jones, and Uzzi (2007) Wuchty, S.; Jones, B. F.; and Uzzi, B. 2007. The increasing dominance of teams in production of knowledge. _Science_ , 316(5827): 1036–1039.
# Exact description of quantum stochastic models as quantum resistors Tony Jin1, João S. Ferreira1, Michele Filippone1,2 and Thierry Giamarchi1 1Department of Quantum Matter Physics, Ecole de Physique, University of Geneva, Quai Ernest-Ansermet 24, CH-1211 Geneva 4, Switzerland 2Université Grenoble Alpes, CEA, IRIG-MEM-L_Sim, F-38000, Grenoble, France ###### Abstract We study the transport properties of generic out-of-equilibrium quantum systems connected to fermionic reservoirs. We develop a new perturbation scheme in the inverse system size, named the $1/N$ expansion, to study a large class of out-of-equilibrium diffusive/ohmic systems. The bare theory is described by a Gaussian action corresponding to a set of independent two-level systems at equilibrium. This allows a simple and compact derivation of the diffusive current as a first-order perturbative term. In addition, we obtain exact solutions for a large class of quantum stochastic Hamiltonians (QSHs) with time- and space-dependent noise, using a self-consistent Born diagrammatic method in the Keldysh representation. We show that these QSHs exhibit diffusive regimes which are encoded in the Keldysh component of the single-particle Green’s function. The exact solution for these QSH models confirms the validity of our system-size expansion ansatz, and its efficiency in capturing the transport properties. We consider in particular three fermionic models: i) a model with local dephasing; ii) the quantum symmetric simple exclusion process (QSSEP); and iii) a model with long-range stochastic hopping. For i) and ii) we compute the full temperature and dephasing dependence of the conductance of the system, both for two- and four-point measurements. Our solution gives access to the regime of finite reservoir temperature, which could not be reached by previous approaches. For iii), we unveil a novel ballistic-to-diffusive transition governed by the range and the nature (quantum or classical) of the hopping. 
As a by-product, our approach also describes the mean behavior of quantum systems under continuous measurement. ## I Introduction Diffusion is the transport phenomenon most commonly encountered in nature. It implies that globally conserved quantities such as energy, charge, spin or mass spread uniformly all over the system according to Fick/Ohm’s law $J=-D\nabla n\,,$ (1) where the diffusion constant $D$ relates the current density $J$ to a superimposed density gradient $\nabla n$. Despite its ubiquity, understanding the emergence of classical diffusive phenomena from underlying quantum mechanical principles is highly non-trivial. Early works based on field theory and perturbative methods [1, 2] pointed out the possibility that interactions do not necessarily lead to diffusion at finite temperature, a question then addressed more rigorously using the concept of integrability [3]. These questions have since fueled many exciting discoveries in low-dimensional interacting systems [4]. A notable example is the ballistic-to-diffusive transition in quantum integrable XXZ spin chains [5, 6, 7, 8, 9, 10], which also exhibit a superdiffusive point in the Kardar-Parisi-Zhang universality class [11, 12, 13, 14, 15]. These discoveries have motivated the generalized hydrodynamical descriptions of integrable systems [16, 17], providing an elegant path to the question of diffusion at finite temperature [18], and paving the way to the description of diffusive phenomena based on perturbative approaches [19, 20, 21, 22, 23, 24]. The out-of-equilibrium driving protocol illustrated in Fig. 1, where a system is coupled to external dissipative baths, has been crucial to unveil and characterize such exotic transport phenomena [6, 25, 7, 26]. 
Figure 1: A stationary current $J$ flows in a one-dimensional lattice when connected to left (L) and right (R) fermionic reservoirs, described by Fermi distributions $f(\varepsilon)$ with different temperatures $T$ or chemical potentials $\mu$. The wiggly lines denote dissipative degrees of freedom acting on the system with rate $\gamma$. For a fixed difference of chemical potential $\delta\mu=\mu_{L}-\mu_{R}$, dissipative terms are normally responsible for the Ohmic suppression of the current, $J\propto 1/N$. This driving protocol makes it possible to study disordered systems [27, 28, 29], uncover novel integrable structures [6, 30], and demonstrate diffusive transport [31, 32, 33, 34, 35, 36]. These open quantum systems [37, 38, 39] are described within the Lindblad formalism [40, 41], which is actively employed to investigate the exotic dynamics induced by non-trivial interactions with external degrees of freedom, such as lattice vibrations, quantum measurements [42, 43, 44, 45, 46, 47], dephasing [48, 49, 50, 51, 52, 53], losses [54, 55, 56, 57, 58, 59], coupling to a light field [60, 61, 62] and environmental engineering [63]. This research activity is also motivating ongoing experiments, where recent progress in space- and time-resolved techniques is applied to directly observe emergent diffusive and exotic dynamics in various quantum systems, including cold atoms [64, 60, 65, 66, 67], spin chains [68, 69, 70, 71, 72, 73] and solid-state systems [74, 75, 76]. In this context, theoretical predictions are usually made case-by-case, with strong constraints on geometries and driving protocols [77]. Thus, devising versatile tools to solve generic quantum models that show diffusion becomes crucial to understand emerging classical Ohmic transport. In this paper, we develop a novel approach to characterize the bulk transport properties of quantum resistors, which we show to be exact and systematic for a wide class of quantum stochastic Hamiltonians (QSHs). 
Our starting point is the Meir-Wingreen (MW) formula [78, 79], which expresses the current $J$ of a system driven at its boundaries, see Fig. 1, in terms of single-particle Green’s functions. We show that, for Ohmic systems, the MW formula supports an expansion of the current in terms of the inverse of the system size $N$. We illustrate how to practically perform this $1/N$ expansion, which proves efficient for deriving the diffusive current and the diffusion constant: we assume that, in the $N\rightarrow\infty$ limit, diffusive lattices admit a simple description in terms of independently equilibrated sites and demonstrate that a well-chosen perturbation theory over this trivial state leads to the desired $1/N$ expansion. We provide a comprehensive demonstration of the validity of our approach in the context of QSHs. Relying on diagrammatic methods and out-of-equilibrium field theory [80], we show that single-particle Green’s functions of QSHs can be exactly and systematically derived using the self-consistent Born approximation (SCBA) – a generalization of previous results derived for a dephasing impurity in a thermal bath [50]. Equipped with this exact solution, and relying on the MW formula, we explicitly derive the dissipative current flowing in the system and show that the Keldysh component of the single-particle Green’s function encodes the Ohmic suppression of the current. Then, we explicitly derive the asymptotically equilibrated state by “coarse-graining” the single-particle Green’s functions and validate our procedure to perform the $1/N$ expansion. We illustrate the effectiveness and versatility of our approach for three different QSHs of current interest: i) the dephasing model [31, 81, 32, 30, 82]; ii) the quantum symmetric simple exclusion process (QSSEP) [83, 35, 84, 85, 86, 87] and iii) models with stochastic long-range hopping [88, 46]. 
The case studies (i) and (ii) illustrate the effectiveness of our approach, providing simple derivations of the current $J$ and of the diffusion constant $D$, as an alternative to approaches relying on matrix-product states [31, 81, 32], integrability [30] or other case-by-case solutions [35, 33]. Additionally, we address previously unexplored regimes by exactly solving the out-of-equilibrium problem with fermion reservoirs at arbitrary temperatures and chemical potentials. Our approach also gives access to two-time correlators in the stationary state which were not described by previous studies. For case (iii), we show instead the ability of our approach to predict novel and non-trivial transport phenomena, namely a displacement of the ballistic-to-diffusive transition induced by coherent nearest-neighbor tunneling in one-dimensional chains. A by-product of our analysis is that all the results presented here apply also to systems under continuous measurement, which are currently attracting a lot of interest in the context of measurement-induced phase transitions [42, 88, 44, 46]. Our paper is structured as follows. Section II describes how the MW formula is a good starting point to build a systematic expansion of the current in terms of the inverse system size $N$. Section III presents QSHs and shows the exactness of the SCBA for the computation of single-particle self-energies. Section IV shows how our formalism allows us to fully compute the transport properties of the dephasing model, the QSSEP and the long-range model. Section V is dedicated to our conclusions and the discussion of the future research perspectives opened by our work. ## II Resistive scaling in finite-size boundary driven systems and perturbative approach In this section, we introduce generic tools aimed at studying diffusive transport in boundary-driven setups like those of Fig. 1. For these setups, the current is given by the MW formula [78]. 
In the simplified (yet rather general) situation, where the reservoirs have a constant density of states and the tunnel exchange of particles does not depend on energy, the MW formula reads (we assume $e=\hbar=k_{B}=1$): $J=i\int\frac{d\omega}{2\pi}\text{Tr}\left\{\frac{1}{2}\left(\Gamma_{L}-\Gamma_{R}\right)G^{\cal K}+\left[\left(f_{L}-\frac{1}{2}\right)\Gamma_{L}-\left(f_{R}-\frac{1}{2}\right)\Gamma_{R}\right]\left(G^{\cal R}-G^{\cal A}\right)\right\}\,,$ (2) where $f_{L(R)}(\omega)=[e^{(\omega-\mu_{L(R)})/T_{L(R)}}+1]^{-1}$ are the Fermi distributions associated with the left and right reservoirs, with chemical potentials $\mu_{L(R)}$ and temperatures $T_{L(R)}$. $G^{\cal R/A/K}$ are the retarded ($\cal R$), advanced ($\cal A$) and Keldysh ($\cal K$) components of the single-particle Green’s functions of the system. They are defined in the time representation as $G^{\cal R}_{j,k}(t-t^{\prime})=-i\theta(t-t^{\prime})\langle\{c_{j}(t),c^{\dagger}_{k}(t^{\prime})\}\rangle$, $G^{\cal A}_{j,k}(t-t^{\prime})=[G^{\cal R}_{j,k}(t^{\prime}-t)]^{*}$ and $G^{\cal K}_{j,k}(t-t^{\prime})=-i\langle[c_{j}(t),c^{\dagger}_{k}(t^{\prime})]\rangle$, where the curly (square) brackets indicate anticommutation (commutation). (The dependence of the Green’s functions on the time difference $t-t^{\prime}$, instead of on separate times $t,t^{\prime}$, is a consequence of the fact that we consider stationary situations.) $c_{j}$ is the annihilation operator of a spinless fermion at site $j$. The $\Gamma_{L(R)}$ matrices describe the system-reservoir couplings. Our aim is to establish a systematic procedure to compute the diffusive current for large systems. The starting point will be the state of the system in the thermodynamic limit ($N\rightarrow\infty$). By identifying in the MW formula (2) the terms leading to Fick’s law (1), we motivate the simple structure of the problem for an infinite system size. 
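As a quick numerical sanity check of Eq. (2), consider a toy single resonant level with a Lorentzian-broadened Green’s function (all parameter values below are invented for illustration, not taken from the models studied here): at equilibrium, i.e. equal $\mu$ and $T$ in both reservoirs and $G^{\cal K}$ obeying the equilibrium fluctuation-dissipation relation, the current vanishes even for asymmetric couplings.

```python
import numpy as np

# Toy check of the MW formula for a single level: at equilibrium
# (mu_L = mu_R, T_L = T_R) the current must vanish, even if Gamma_L != Gamma_R.
w = np.linspace(-60.0, 60.0, 200001)   # frequency grid
dw = w[1] - w[0]
eps0, broadening = 0.3, 1.0            # toy level position and width
GammaL, GammaR = 0.7, 0.4              # asymmetric couplings (toy values)
mu, T = 0.2, 0.5                       # common chemical potential and temperature

GR = 1.0 / (w - eps0 + 1j * broadening)           # toy retarded Green's function
GA = GR.conj()                                    # advanced
GK = np.tanh((w - mu) / (2.0 * T)) * (GR - GA)    # equilibrium FD relation

f = 1.0 / (np.exp((w - mu) / T) + 1.0)            # Fermi function, same for L and R
integrand = 0.5 * (GammaL - GammaR) * GK \
    + ((f - 0.5) * GammaL - (f - 0.5) * GammaR) * (GR - GA)
J = np.real(1j * np.sum(integrand) * dw / (2.0 * np.pi))
print(abs(J) < 1e-10)   # True: no current at equilibrium
```

The cancellation is pointwise: $f-\tfrac{1}{2}=-\tfrac{1}{2}\tanh[(\omega-\mu)/2T]$, so the two terms of the integrand compensate exactly.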
In resistive systems, a fixed difference of density $\Delta n:=n_{1}-n_{N}$ at the edges of the system enforces the $1/N$ suppression of the current ($J\propto\nabla n\propto\Delta n/N$). It is thus natural to perform a perturbative $1/N$ expansion of the current on the $N\rightarrow\infty$ state. We conjecture a possible perturbation scheme and show its validity in the context of QSHs. Without loss of generality, we focus on discrete 1D lattice systems of size $N$ (the extension to different geometries and additional degrees of freedom is straightforward). In this case, the $\Gamma_{L(R)}$ matrices in Eq. (2) acquire a simple form in position space: $[\Gamma_{L(R)}]_{j,k}=\Gamma\delta_{j,1(N)}\delta_{j,k}$. We also express the local densities in terms of Green’s functions, namely $2n_{j}=2\langle c^{\dagger}_{j}c_{j}\rangle=1-i\int d\omega\,G_{j,j}^{\cal K}(\omega)/(2\pi)$, which also implies $2i\Delta n=G^{\cal K}_{1,1}(t=0)-G^{\cal K}_{N,N}(t=0)=\Delta G^{\cal K}$. The MW formula then acquires the more compact form: $J=\Gamma\int d\omega\Big[f_{L}(\omega)\mathcal{A}_{L}(\omega)-f_{R}(\omega)\mathcal{A}_{R}(\omega)\Big]-\Gamma\Delta n\,,$ (3) where we have introduced the local spectral densities $\mathcal{A}_{L(R)}(\omega)=-\frac{1}{\pi}\text{Im}[G^{\cal R}_{1,1(N,N)}(\omega)]$ and made use of the fact that $\int d\omega\,\mathcal{A}_{L(R)}(\omega)=1$. The local spectral densities $\mathcal{A}_{L(R)}(\omega)$ converge exponentially in the thermodynamic limit $N\rightarrow\infty$. This feature is generally expected and is illustrated in Fig. 8 for different classes of QSHs. This observation allows us to establish that the $1/N$ scaling, proper to diffusive currents, must entirely arise from $\Delta n$ in (3). The possibility to ignore the size-dependence of the first term of (3) imposes strong constraints on the $1/N$ expansion of the difference of density $\Delta n$ in diffusive systems. 
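A small identity, used repeatedly in what follows to pass between Fermi factors and $\tanh$ functions (for instance between the $f_{L(R)}-1/2$ terms of Eq. (2) and the boundary conditions stated below), is $1-2f_{L(R)}(\omega)=1-\frac{2}{e^{(\omega-\mu_{L(R)})/T_{L(R)}}+1}=\tanh\left(\frac{\omega-\mu_{L(R)}}{2T_{L(R)}}\right)\,,$ so that $f_{L(R)}(\omega)-\tfrac{1}{2}=-\tfrac{1}{2}\tanh\left(\frac{\omega-\mu_{L(R)}}{2T_{L(R)}}\right)$.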
If we write this expansion as $2i\Delta n=\Delta G^{\cal K}=\Delta G^{(\infty)}+\frac{1}{N}\Delta G^{\prime}+\ldots$ (4) one notices immediately that the leading term $\Delta G^{(\infty)}$ has to compensate the first one in (3), implying $\frac{\Delta G^{(\infty)}}{2i}=\int d\omega\Big[f_{L}(\omega)\mathcal{A}_{L}(\omega)-f_{R}(\omega)\mathcal{A}_{R}(\omega)\Big]\,.$ (5) A sufficient but not necessary condition fulfilling this relation is obtained by imposing at each boundary: $\int\frac{d\omega}{2\pi}G_{L(R)}^{\mathcal{K}(\infty)}(\omega)=-i\int d\omega\tanh\left(\frac{\omega-\mu_{L(R)}}{2T_{L(R)}}\right)\mathcal{A}_{L(R)}(\omega)\,,$ (6) which will turn out to be satisfied for QSHs. These relations have a simple and interesting interpretation. In the infinite-size limit, the flowing current is zero and thus the stationary value of the densities at the boundary can be computed by supposing that they fulfill a _fluctuation-dissipation_ relation or, equivalently, that these sites are at equilibrium with the neighboring reservoirs. Inserting (4) into the MW formula gives the current $J=i\frac{\Gamma}{2N}\Delta G^{\prime}$ (7) and, as expected, we get the $1/N$ diffusive scaling. This relation tells us that the information about the diffusion constant is hidden in the $1/N$ correction to the density profile, which is in general a non-trivial quantity to compute. However, we will see in the following that there is a shorter path to access it, via a perturbation theory around the infinite-system-size limit. The main idea of the $1/N$ expansion is to find a simple effective theory that captures the relevant properties of the system in the $N\to\infty$ limit. From there, transport quantities are computed _perturbatively_ on top of this limit theory. To determine this effective theory, we conjecture that there is a typical length $a$ beyond which two points of the system can be considered to be statistically independent. 
Thus, by coarse-graining the theory over cells of size $a$, each cell becomes uncoupled and in local equilibrium, see Fig. 2. Figure 2: Cartoon picture of the coarse-graining procedure. On the left, spatial correlations in the infinite-size limit are depicted. These decay exponentially as a function of the distance and are non-zero only within a finite length $a$. By coarse-graining the theory over this typical length, we obtain an effective theory (right) consisting of an ensemble of uncoupled sites with a finite self-energy at equilibrium. The reasons motivating such a factorization are twofold. First, the current is suppressed as $1/N$ in the large system size limit, so the infinite-size theory should predict a vanishing stationary current. Second, factorization of stationary correlations has actually been demonstrated for a certain number of diffusive toy models, most notably in the context of large deviations and macroscopic fluctuation theory [91, 92, 83, 35]. For instance, it is known that the $n^{th}$ connected correlation functions of physical observables, such as the density, generically behave as $N^{-(n-1)}$. Thus, it is natural to assume that for $N\to\infty$, correlations must decay exponentially over a length $a$. We will show explicitly that in all of the examples studied this factorization in the coarse-grained theory holds, and we provide an analytic estimate of $a$ in App. F. We now put these assumptions on formal grounds. Let $\tilde{j}$ and $\tilde{k}$ be the spatial indices of the coarse-grained theory $G_{\tilde{j},\tilde{k}}^{\mathcal{R/A/K}}:=\frac{1}{a}\sum_{m,n=0}^{a-1}G_{\tilde{j}a+m,\tilde{k}a+n}^{\mathcal{R/A/K}}\,.$ (8) The relations between the different components $\mathcal{R},\,\mathcal{A}$ and $\mathcal{K}$ of the single-particle Green’s functions are assumed to describe uncoupled sites at equilibrium with a local self-energy $\Sigma_{\tilde{j}}$ [80]. 
These conditions then require local fluctuation-dissipation relations of the form $G_{\tilde{j},\tilde{k}}^{\mathcal{K}(\infty)}(\omega)=\delta_{\tilde{j},\tilde{k}}\tanh\left(\frac{\omega-\mu_{\tilde{j}}}{2T_{\tilde{j}}}\right)\Big[G_{\tilde{j},\tilde{j}}^{\cal R}(\omega)-G_{\tilde{j},\tilde{j}}^{\cal A}(\omega)\Big],$ (9) with retarded and advanced Green’s functions which are diagonal in the coarse-grained space representation $G^{\cal R(A)}_{\tilde{j},\tilde{k}}(\omega)=\frac{\delta_{\tilde{j},\tilde{k}}}{\omega-\omega^{0}_{\tilde{j}}\pm\Sigma_{\tilde{j}}(\omega)}\,.$ (10) These relations entirely fix the stationary properties of the system in the infinite-size limit. The specification of the free parameters $\mu_{\tilde{j}},T_{\tilde{j}},\omega^{0}_{\tilde{j}}$ and $\Sigma_{\tilde{j}}$ has to be done according to the model under consideration. We will see that they take a simple form for QSHs, namely the self-energy $\Sigma_{\tilde{j}}$ is frequency independent and the $\mu_{\tilde{j}},T_{\tilde{j}}\gg\omega$ limit can be taken in Eq. (9), as expected in the Markovian limit of the dissipative process [79]. To get the current, one needs to go one step further and understand which terms have to be expanded. The equilibrated thermodynamic theory does not exhibit transport, and should thus be left invariant by the part of the Hamiltonian that commutes with the conserved quantity, in our case the local particle density. It is then natural to conjecture that the perturbative term for the current is given by the dynamical part of the theory, that is, the part of the Hamiltonian $\hat{H}_{{\rm dyn}}$ which does not commute with the local density. Thus, we conjecture that, at order $1/N$, the current is given by: $J=\langle\hat{J}\hat{H}_{\rm{dyn}}\rangle_{\infty}\,,$ (11) where $\langle\cdot\rangle_{\infty}$ means that the expectation value is taken with respect to the infinite-system-size theory. 
This formula has the remarkable advantage that its computational complexity is very low, since the coarse-grained theory is Gaussian. We remark that the $1/N$ expansion presented here is _not_ a standard expansion in the hopping amplitude $\tau$, since the latter has an exponentially large degenerate manifold of states at $\tau=0$. In Sec. IV, we show explicitly how these ideas unfold for QSHs, by comparing computations done within the $1/N$ theory with those obtained from the exact solution that we present in the following Sec. III. Understanding to which extent and under which conditions Eqs. (9,10) and (11) can be applied is one of the most challenging directions of study, in particular in the context of interacting quantum systems without bulk dissipative terms. ## III Validity of the self-consistent Born approximation for Quantum stochastic Hamiltonians In this section, we present a class of quantum stochastic models and associated Liouvillians (12), which describe either stochastic local dephasing or stochastic jumps of fermionic particles on a graph. The random processes are defined by a quantum Markov equation, also known as a Lindblad equation. We will show explicitly two ways, exemplified by Eqs. (15) and (62), to associate an underlying quantum stochastic model to such a Lindblad equation, a procedure known as _unraveling_ or _dilation_ [93, 94, 95]. Of particular interest for us is the description in terms of quantum stochastic Hamiltonians (QSHs) (15). It provides a way to resum exactly the perturbative series associated with the stochastic noise, which coincides with the self-consistent Born approximation (SCBA) for single particle Green’s functions. This method was originally devised for the particular case of a single-site dephaser in Ref. [50], and we extend it here to more general situations.
We will show in Section IV that, relying on SCBA, we can derive the diffusive transport properties of these models and show the validity of the assumptions underpinning the perturbative $1/N$ expansion presented in Sec. II. Consider a graph made of discrete points, each corresponding to a site. To such a graph we associate a Markovian process where spinless fermions on a given site can jump to any other site only if the target site is empty, see Fig. 3. We define $\gamma_{ij}\geq 0$ as the probability rate associated with the process of a fermion jumping from $i$ to $j$ and $\gamma_{ji}=\gamma_{ij}$ with the reverse process. The generator of such a process is given by the Liouvillian, which acts on the density matrix $\rho$ of the system: $\begin{split}{\cal L}(\rho)=\sum_{i,j}&\gamma_{i,j}\left(2c_{j}^{\dagger}c_{i}\rho c_{i}^{\dagger}c_{j}-\big{\\{}c_{i}^{\dagger}c_{j}c_{j}^{\dagger}c_{i},\rho\big{\\}}\right)\,.\end{split}$ (12) The total evolution of the density matrix $\rho$ is in general given by $\frac{d}{dt}\rho={\cal L}_{0}(\rho)+{\cal L}(\rho)\,,$ (13) where ${\cal L}_{0}$ generates what we call the _free evolution_, in the sense that ${\cal L}_{0}$ is quadratic in the fermion operators $c_{i}$ and the related spectrum and propagators can be efficiently computed with Wick’s theorem [96, 97]. Such Liouvillians can generally describe single-particle Hamiltonians or dissipative processes (coherent hopping, losses,…). We will consider ${\cal L}(\rho)$ as a perturbation on top of this theory. There exists a general procedure to see ${\cal L}(\rho)$ as the emergent _averaged_ dynamics of an underlying microscopic stochastic, yet Hamiltonian, process. Lifting ${\cal L}(\rho)$ to this stochastic process is known as _unraveling_ and there is no unique way of doing so, see Fig. 3. The stochastic Hamiltonian can be treated as a perturbation in field theory, which requires the summation of an infinite series.
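As a concrete sanity check of the structure of Eq. (12), one can build the dissipator explicitly for a toy two-site system (a sketch with illustrative rates, using a Jordan-Wigner representation of the fermion operators; not part of the derivation) and verify that it preserves the trace and Hermiticity of $\rho$ and conserves the total particle number:

```python
import numpy as np

# Two spinless-fermion modes via Jordan-Wigner: c_1 = s^- x I, c_2 = s^z x s^-.
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
c = [np.kron(sm, I2), np.kron(sz, sm)]
n_tot = sum(ci.conj().T @ ci for ci in c)        # total particle number

def dissipator(rho, gamma):
    """Jump part of Eq. (12): sum_{i,j} gamma_ij (2 L rho L^dag - {L^dag L, rho})
    with L = c_j^dag c_i; gamma is assumed symmetric and non-negative."""
    out = np.zeros_like(rho)
    for i in range(2):
        for j in range(2):
            L = c[j].conj().T @ c[i]
            out += gamma[i, j] * (2 * L @ rho @ L.conj().T
                                  - L.conj().T @ L @ rho - rho @ L.conj().T @ L)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho).real                        # random density matrix
gamma = np.array([[0.3, 1.0], [1.0, 0.3]])       # illustrative rates
Lrho = dissipator(rho, gamma)
```

The trace of $\mathcal{L}(\rho)$ vanishes identically, $\mathcal{L}(\rho)$ stays Hermitian for Hermitian $\rho$, and ${\rm tr}(\hat{N}\mathcal{L}(\rho))=0$ since every jump operator $c_{j}^{\dagger}c_{i}$ commutes with the total particle number.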
Our strategy is to pick the relevant stochastic theory for which there exists a simple way to reorganize the summation, and then take the average in order to get the mean evolution. We now proceed to present the unraveled theory. Let $dH_{t}$ be the stochastic Hamiltonian increment, generating the evolution, which is defined by $\left|\psi_{t+dt}\right\rangle=e^{-idH_{t}}\left|\psi_{t}\right\rangle\,.$ (14) We work in the Itō prescription and consider stochastic Hamiltonians of the form $dH_{t}=\sum_{i,j}\sqrt{2\gamma_{i,j}}c_{j}^{\dagger}c_{i}dW_{t}^{i,j}.$ (15) $W_{t}^{i,j}$ describes a complex noise and we adopt the convention that $W_{t}^{i,j*}=W_{t}^{j,i}$. The corresponding Itō rules are summed up by $dW_{t}^{i,j}dW_{t}^{k,l}=\delta_{i,l}\delta_{k,j}dt.$ (16) Using the Itō rules to average over the noise degrees of freedom, one recovers the Liouvillian (12). Figure 3: Schematic representation of our random process. The orange box represents the Lindblad equation (12), which describes random quantum jumps between sites connected by an arrow. An arrow leaving and arriving at the same site represents a local dephasing. To a given Lindblad equation, we can associate multiple stochastic processes (blue and green boxes), a procedure called _unraveling_ (orange dashed lines). The Lindblad equation is recovered by averaging over the noisy degrees of freedom (full blue lines). We show that the unraveling in terms of quantum stochastic Hamiltonians (QSH) is particularly useful for the diagrammatic expansion of the theory. Finally, another point we would like to emphasize concerns the connection to systems evolving under continuous measurements. Indeed, another way to unravel (12) is to see it as the average evolution with respect to the measurement outcomes of a system for which the variables $c_{j}^{\dagger}c_{i}+c_{i}^{\dagger}c_{j}$ and $i(c_{i}^{\dagger}c_{j}-c_{j}^{\dagger}c_{i})$ are continuously monitored and independently measured with rate $\gamma_{i,j}$ [93].
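As an aside, the noise convention (16) can be checked by direct sampling. The following sketch (with $dt$ normalized to 1 for convenience, an assumption of the illustration) draws complex increments obeying $dW_{t}^{i,j*}=dW_{t}^{j,i}$ and verifies the second moments empirically; diagonal (dephasing) entries, which would be real Brownian increments, are omitted here:

```python
import numpy as np

# Monte-Carlo sketch of the Ito rule (16): E[dW^{i,j} dW^{k,l}] = d_il d_kj dt.
# Independent complex Gaussians are used for each unordered pair i < j, with
# the conjugation constraint dW^{ij*} = dW^{ji}; dt is set to 1.
rng = np.random.default_rng(42)
n_sites, n_samples, dt = 3, 200_000, 1.0

dW = np.zeros((n_samples, n_sites, n_sites), dtype=complex)
for i in range(n_sites):
    for j in range(i + 1, n_sites):
        xi = (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)) \
             * np.sqrt(dt / 2)
        dW[:, i, j] = xi
        dW[:, j, i] = xi.conj()

# Empirical second moments.
m_pair = (dW[:, 0, 1] * dW[:, 1, 0]).mean()   # expected: dt
m_same = (dW[:, 0, 1] * dW[:, 0, 1]).mean()   # expected: 0
```

Only the "paired" contraction $dW^{i,j}dW^{j,i}$ survives on average, which is precisely the structure exploited in the diagrammatic resummation below.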
Although the physics is radically different at the level of a single realization of the noise, on average it gives the same result as the prescription (15). Hence, all the results that will be presented for the mean behavior of our class of stochastic Hamiltonians also describe the mean behavior of systems subject to continuous measurements. The unraveling procedure corresponding to continuous measurements is described in detail in Appendix A. ### III.1 Self-energy We now show that the perturbation theory in the stochastic Hamiltonian (15) can be fully resummed, leading to exact results for single particle Green’s functions. To perform this task, we rely on the Keldysh path-integral formalism [80], which describes the dynamics of the system through its action $S$. The presence of dissipative effects can be naturally included in $S$ using the Lindblad formalism [98]. The action gives the Keldysh partition function ${\cal Z}={\rm tr}(\rho_{t})$, $\mathcal{Z}=\int{\cal D}[\psi^{\pm},\bar{\psi}^{\pm}]e^{iS[\psi^{\pm},\bar{\psi}^{\pm}]}\,,$ (17) where $\psi=(\psi^{+},\psi^{-})$ are Grassmann variables defined respectively on the positive and negative Keldysh time contours $\mathcal{C}_{\pm}$.
We follow the Larkin-Ovchinnikov convention (in our conventions, the Larkin-Ovchinnikov rotation reads $\psi^{1/2}=(\psi^{+}\pm\psi^{-})/\sqrt{2}\,,\bar{\psi}^{1/2}=(\bar{\psi}^{+}\mp\bar{\psi}^{-})/\sqrt{2}$ [123]), in which the Keldysh action $S_{0}$ corresponding to the free evolution $\mathcal{L}_{0}$ is expressed in terms of the inverse Green’s function $\boldsymbol{G}^{-1}$, namely $\mathcal{S}_{0}=\sum_{i,j}\int\frac{d\omega}{2\pi}\left(\begin{array}[]{cc}\bar{\psi}^{1},&\bar{\psi}^{2}\end{array}\right)_{i}\Big{[}\boldsymbol{G}^{-1}\Big{]}_{i,j}\left(\begin{array}[]{c}\psi^{1}\\\ \psi^{2}\end{array}\right)_{j}\,.$ (18) All variables in the integral (18) are implicitly assumed to depend on a single frequency $\omega$, which coincides with the assumption of stationary behavior, valid for our class of problems. The inverse Green’s function $\boldsymbol{G}^{-1}$ is itself expressed in terms of the retarded, advanced and Keldysh Green’s functions $G^{\cal R/A/K}$, defined in Section II: $\Big{[}\boldsymbol{G}^{-1}\Big{]}=\left(\begin{array}[]{cc}G^{\cal R}&G^{\cal K}\\\ 0&G^{\cal A}\end{array}\right)^{-1}$ (19) and whose diagrammatic representations in the time domain are given in Fig. 4. Figure 4: Diagrammatic representation of the retarded ($\cal R$), advanced ($\cal A$) and Keldysh ($\cal K$) Green’s functions. Time flows from right to left. The causality structure of the Keldysh Green’s functions is enforced by the suppression of the correlators $\langle\psi^{2}\bar{\psi}^{1}\rangle=0$. This means that a retarded propagator can never become advanced, which pictorially translates into the fact that a solid line cannot switch to a dashed one. The action corresponding to the Liouvillian term (12) reads [98] $S_{\cal L}:=-\int dt\sum_{i,j}\gamma_{i,j}\left(\bar{\psi}^{1}_{j,t}\psi^{1}_{i,t}\bar{\psi}^{2}_{i,t}\psi^{2}_{j,t}+\bar{\psi}^{1}_{i,t}\psi^{1}_{j,t}\bar{\psi}^{2}_{j,t}\psi^{2}_{i,t}\right)\,,$ (20) which is a quartic action in the Grassmann fields.
At the level of single particle Green’s functions, the action $S_{\cal L}$ is incorporated through the self-energy $\boldsymbol{\Sigma}$, defined as the sum of all one-particle irreducible diagrams. As in equilibrium field theory, the Dyson equation relates the full propagator to the bare propagator and the self-energy $\boldsymbol{\Sigma}$: $\boldsymbol{G}=\Big{[}\boldsymbol{G}_{0}^{-1}-\boldsymbol{\Sigma}\Big{]}^{-1}\,.$ (21) To compute the diffusive current from the MW formula, $\boldsymbol{\Sigma}$ must be known to all orders, an a priori difficult task given the quartic nature of the action (20). Instead, rewriting the action at the stochastic level allows us to derive the self-energy $\boldsymbol{\Sigma}$ exactly and solve this problem. In the field-theory language, the unraveling procedure exemplified by Eq. (15) leads to the equivalent action $S_{{\rm sto}}=-\sum_{i,j}\int\sqrt{2\gamma_{i,j}}\left(\bar{\psi}_{j,t}^{1}\psi_{i,t}^{1}+\bar{\psi}_{j,t}^{2}\psi_{i,t}^{2}\right)dW_{t}^{i,j}\,,$ (22) where $S_{\rm sto}$ is related to $S_{\cal L}$ by the average $\mathbb{E}[\,]$ over the noise degrees of freedom: $\mathbb{E}[e^{iS_{\rm sto}}]=e^{iS_{\cal L}}\,.$ (23) In formal terms, this transformation is reminiscent of a Hubbard-Stratonovich transformation, where the action becomes quadratic in terms of the Grassmann variables. Note that the complexity encoded in Eq. (20) is preserved by the consequent introduction of the space- and time-dependent noise $dW_{t}^{i,j}$. However, the noise correlations imposed by Itō’s rules (16) allow a dramatic simplification of the diagrammatic expansion in $\gamma_{i,j}$ of the Green’s functions within the stochastic formulation. Such a simplified structure does not manifestly appear when working with the Lindbladian (averaged) formulation of the problem (20) (see Fig. 14 in Appendix B). The resummation works as follows. In Fig. 5, we show the diagrammatic expansion of (21) up to second order in the stochastic noise $\gamma_{i,j}$.
Figure 5: Perturbative series in the Keldysh formalism for our class of stochastic models. Average quantities are obtained by contracting pairs of wiggly lines together. Here a wiggly line represents either $dW_{t}^{i,j}$ or its complex conjugate for simplicity. The formulation of the theory in terms of QSHs allows for a simple writing of the perturbative expansion. The wiggly lines represent $dW_{t}^{i,j}$. Since we are interested in the mean behavior, we have to take the average over the noise degrees of freedom. This amounts to contracting wiggly lines pair by pair. From the Itō rules (16), we see that upon contraction, a wiggly line forces the two vertices it connects to have the same time and position, as illustrated in Fig. 5. The important consequence is that all the diagrams which present a _crossing_ of the wiggly lines vanish because of the causal structure of the Keldysh Green’s functions, namely that $G^{\cal R}(t,t^{\prime})$ is nonzero only for $t>t^{\prime}$ and conversely for $G^{\cal A}$. For a detailed proof of this statement, see Appendix B. In particular, the constraint of non-crossing wiggly lines establishes the validity of the self-consistent Born approximation (SCBA) for the self-energy of single particle Green’s functions and generalizes the approach presented in Ref. [50]. The SCBA allows a simple and compact derivation of all components, as exemplified by the diagrammatic representation in Fig. 6. Figure 6: a) Non-crossing rule for the contraction of wiggly lines. b) Self-energies for the different Keldysh components. Namely, we have that in position space $\boldsymbol{\Sigma}_{i,j}(t,t^{\prime})=\delta_{i,j}\delta(t,t^{\prime})\sum_{k}\gamma_{i,k}\boldsymbol{G}_{k,k}(t,t)\,.$ (24) For the retarded and advanced components, this relation takes a particularly simple form since $G_{j,k}^{\cal{R}(\cal{A})}(t,t)=\mp\frac{i}{2}\delta_{j,k}$ in position space.
Note that this simple expression is only valid when the two time indices are taken to be equal and comes entirely from the causal structure of the Green’s functions in the Keldysh formalism. One way to see this is to evaluate the step function $\theta(t-t^{\prime})$ for the retarded and advanced Green’s functions from the discrete version of the path integral presented in Sec. 9.2 of Ref. [80]. To get the Keldysh component $G^{\cal K}$, one has to solve the self-consistent Dyson equation: $\boldsymbol{G}^{\cal K}=-\boldsymbol{G}^{\cal R}\left(\left[\boldsymbol{G}_{0}^{-1}\right]^{\cal K}-\boldsymbol{\Sigma}^{\cal K}\right)\boldsymbol{G}^{\cal A}\,,$ (25) which is a problem whose complexity only scales polynomially with the number of degrees of freedom in the system (such as the system size $N$ of the setup in Fig. 1). This solves the problem entirely at the level of single-particle correlation functions. Remark that this applies to any model as long as the bare theory satisfies Wick’s theorem and its propagators are known. It allows a systematic study of quantum systems in the presence of external noisy degrees of freedom. This ability to calculate the Keldysh Green’s function is crucial for an exact description of out-of-equilibrium transport in dissipative systems, as we are going to show in the next section. ## IV Applications We now proceed to employ the self-consistent approach to test our $1/N$ expansion, presented in Sec. II, on a large class of QSHs that display diffusive transport. The action describing the out-of-equilibrium setting represented in Fig. 1 has the form $S=S_{{\rm Bd}}+S_{0}+S_{{\rm sto}}.$ (26) The first term in the action, $S_{{\rm Bd}}$, describes the exchange coupling with gapless non-interacting fermionic reservoirs of chemical potential $\mu_{L,R}$ and temperature $T_{L,R}$. The corresponding action, under the assumptions discussed in Section II, was derived for instance in Ref.
[79]: $\displaystyle S_{{\rm Bd}}$ $\displaystyle=i\Gamma\sum_{a=L,R}\int\frac{d\omega}{2\pi}\bar{\psi}_{a}\begin{bmatrix}1&2\tanh\left(\frac{\omega-\mu_{a}}{2T_{a}}\right)\\\ 0&-1\end{bmatrix}\psi_{a}\,,$ (27) where $\psi_{a}$ is a shorthand notation for $(\psi_{a}^{1},\psi_{a}^{2})$, $L$ designates site 1 and $R$ designates site $N$. The action $S_{0}$ is the quadratic action related to the intrinsic dynamics of the system, which can describe various situations from coherent dynamics to single-particle dissipative gains and losses [79]. In this paper, we will focus on one-dimensional nearest-neighbour coherent bulk hopping, which is described by the standard Hamiltonian, $H_{\tau}:=\tau\sum_{j=1}^{N-1}\left(c_{j}^{\dagger}c_{j+1}+c_{j+1}^{\dagger}c_{j}\right),$ (28) with $\tau$ the hopping amplitude. The corresponding action reads $S_{0}=-i\tau\sum_{j}\int\frac{d\omega}{2\pi}\Big{(}\bar{\psi}_{j}^{1}\psi_{j+1}^{1}+\bar{\psi}_{j}^{2}\psi_{j+1}^{2}+\mbox{c.c}\Big{)}\,.$ (29) The free propagators are directly derived from the previous expressions of the action and read $\displaystyle\begin{split}\left[G_{0}^{-1}\right]_{j,k}^{\cal{R}(\cal{A})}(\omega)=&\delta_{j,k}\Big{[}\omega\pm i\Gamma(\delta_{j,1}+\delta_{j,N})\Big{]}\\\ &\qquad\qquad+\tau(\delta_{j,k+1}+\delta_{j,k-1})\,,\end{split}$ (30) $\displaystyle\left[G_{0}^{-1}\right]_{j,k}^{\cal K}(\omega)=$ $\displaystyle 2i\Gamma\delta_{j,k}\sum_{a=L,R}\delta_{j,a}\tanh\left(\frac{\omega-\mu_{a}}{2T_{a}}\right)\,.$ (31) Notice that the reservoirs act, through the hybridization constant $\Gamma$, as natural regulators of the imaginary components of the non-interacting problem [80]. Finally, $S_{{\rm sto}}$ is the action corresponding to the QSH (22). As explained in the previous section, the demonstrated validity of the SCBA for the Dyson equation (25) allows us to derive exact expressions for the self-energies (24), and thus for the propagators of the full theory.
This solution fully determines the transport properties of the system through the MW formula (3). As shown in Section III, Equation (24) implies a particularly simple form for the advanced and retarded components of the self-energy: $\Sigma^{\cal R(A)}_{i,j}=\mp i\delta_{i,j}\delta(t,t^{\prime})\sum_{l}\frac{\gamma_{i,l}}{2}\,.$ (32) Importantly, in the geometry of Fig. 1, we can derive a compact and explicit expression of (25) for the diagonal terms $G^{\cal K}(t,t)$ $\vec{G}^{\cal K}=(\mathbb{I}-M)^{-1}\cdot\vec{V}$ (33) where we introduced the $N$-dimensional vectors $\displaystyle\vec{G}_{j}^{\cal K}$ $\displaystyle=G_{j,j}^{\cal K}(t,t)\,,$ (34) $\displaystyle\vec{V}_{j}$ $\displaystyle=\frac{2\Gamma}{i}\sum_{a\in\\{L,R\\}}\int\frac{d\omega}{2\pi}G_{j,a}^{\cal R}G_{a,j}^{\cal A}\tanh\left(\frac{\omega-\mu_{a}}{2T_{a}}\right)\,,$ (35) and $M$ is an $N\times N$ matrix with elements $M_{j,k}=\sum_{l}\gamma_{k,l}\int\frac{d\omega}{2\pi}G_{j,l}^{R}G_{l,j}^{A}\,.$ (36) Notice that only $G^{\mathcal{K}}$ carries information about the biased reservoirs, as can be seen from (35). The first term in (3) depends exclusively on spectral functions, which are readily derived from Eqs. (30) and (32), while Eq. (33) sets, through Eq. (4), the expression of the density difference at the edges $\Delta n$. Note that our analysis shows that the matrix $M$ (36) is the key object encoding information about diffusion, and it appears exclusively in the Keldysh component of the single-particle Green’s function (33). A convenient way to understand this is to consider systems with single-particle gains and losses that do not display the Ohmic $1/N$ suppression of the current. It was shown in Ref. [79] that, while (32) remains valid in those systems, the matrix $M$ in (33) vanishes and the current saturates to a size-independent value. Thus, having a finite lifetime in the retarded and advanced Green’s functions is not sufficient to obtain diffusive transport.
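The role of $M$ can be illustrated numerically. The sketch below (illustrative parameter values, specialized to dephasing rates $\gamma_{k,l}=\gamma\,\delta_{k,l}$) checks that with the reservoirs switched off ($\Gamma=0$) one has $\sum_{l}\int\frac{d\omega}{2\pi}|G^{\cal R}_{j,l}(\omega)|^{2}=1/\gamma$ exactly, so the rows of $M$ sum to one and $\mathbb{I}-M$ is singular, consistent with a conserved density that diffuses:

```python
import numpy as np

# Numerical sketch of the matrix M of Eq. (36) for the dephasing model.
# With Gamma = 0, every row of M should sum to one (I - M singular).
N, tau, gamma = 8, 1.0, 1.0
hop = tau * (np.eye(N, k=1) + np.eye(N, k=-1))   # hopping term of Eq. (30)

dw = 0.05
omegas = np.arange(-400.0, 400.0, dw)
M = np.zeros((N, N))
for w in omegas:
    # Inverse retarded propagator: (w + i gamma/2) on the diagonal + hopping,
    # combining Eq. (30) at Gamma = 0 with the self-energy of Eq. (32).
    GR = np.linalg.inv((w + 0.5j * gamma) * np.eye(N) + hop)
    M += np.abs(GR) ** 2        # uses G^A_{lj}(w) = conj(G^R_{jl}(w))
M *= gamma * dw / (2.0 * np.pi)

row_sums = M.sum(axis=1)        # -> close to 1 for every row
```

With the boundary term $\Gamma>0$ restored, the row sums drop below one and $(\mathbb{I}-M)^{-1}$ in Eq. (33) becomes well defined.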
The imaginary contribution to the retarded/advanced self-energy, such as the one in (32), has the interpretation of a lifetime for the free single-particle excitations of the system, yet it is the Keldysh component of the self-energy that describes the consequences of dissipative scattering on the transport properties of the system. When $M\neq 0$, equation (36) yields a linear density profile, which eventually leads to a $1/N$ diffusive contribution to the current, as discussed in Sec. II. These considerations are those underpinning our general discussion about diffusive transport in Sec. II. We now turn to the case-by-case study of the specific QSHs depicted in Fig. 7. As mentioned in the Introduction, we will focus on three one-dimensional models: the dephasing model, the quantum symmetric simple exclusion process (QSSEP) and models with stochastic long-range hopping. For the dephasing model, every single point on the lattice is coupled with itself by the noise. For the QSSEP, the noise couples each point with its neighbours. For the long-range model, a given point is paired to all the rest of the lattice with a power-law decay as a function of the distance. These processes are illustrated for all three models in Fig. 7 and we will give more details about their physical motivations in the related sections. Figure 7: Particular 1D discrete cases that will be of interest. Only the noise contribution is presented in this figure. In the dephasing model, all the sites are paired with themselves. For the QSSEP, the pairs are between nearest neighbours. In the long-range model, a given point is linked to all the rest of the lattice with a coupling decaying as a power-law. Without loss of generality, in the following analysis of the current $J$, we focus on the linear response regime in the chemical potential bias. We set an identical temperature for both reservoirs, $T_{L}=T_{R}=T$, and $\mu_{L}\to\mu+\delta\mu,\hskip 10.00002pt\mu_{R}\to\mu-\delta\mu$. We expand Eq.
(3) in $\delta\mu$. One thus obtains, to linear order in $\delta\mu$: $J=\Gamma\frac{\delta\mu}{2T}\int d\omega\frac{1}{\cosh^{2}\left(\frac{\omega-\mu}{2T}\right)}\left[\mathcal{A}(\omega)-\frac{\Gamma}{2\pi}\Delta^{\mathcal{K}}(\omega)\right]\,,$ (37) where $\mathcal{A}(\omega)$ is the edge spectral function, which coincides with $\mathcal{A}_{L/R}(\omega)$, because of the mirror symmetry of the class of QSHs that we will consider. The second term can be expressed in the form $\Delta^{\mathcal{K}}(\omega)=\left[\frac{1}{\mathbb{I}-M}\cdot\vec{W}(\omega)\right]_{1}-\left[\frac{1}{\mathbb{I}-M}\cdot\vec{W}(\omega)\right]_{N}\,,$ (38) in which $\vec{W}$ is an $N$ dimensional vector whose components are given by $\vec{W}_{j}(\omega)=G_{j,1}^{\cal R}(\omega)G_{1,j}^{\cal A}(\omega)-G_{j,N}^{\cal R}(\omega)G_{N,j}^{\cal A}(\omega)$. ### IV.1 Dephasing model The dephasing model describes fermions hopping on a 1D lattice while subject to a random onsite dephasing coming from dissipative interactions with external degrees of freedom. In the language of Sec. III, this model corresponds to the case where all the points are paired with themselves, which results in substituting the rates $\gamma_{i,j}\rightarrow\,\gamma_{\rm Dph}\delta_{i,j}\,,$ (39) in Eqs. (12) and (15) (see also Fig. 7). There are various limits in which this model can be derived. For instance, it can be thought of as describing the effective dynamics of fermions interacting weakly with external bosonic degrees of freedom within the Born-Markov approximation [38]. In Refs. [31, 81, 32] it was shown, relying on matrix product operator techniques, that the dephasing model exhibits diffusive transport. Two-time correlators in the XXZ chain under dephasing were also studied in [52] and shown to exhibit a complex relaxation scheme. For bosonic interacting systems, it was shown that the addition of an external dephasing could lead to anomalous transport [100, 101].
Additionally, as discussed in Section III, the mean dynamics of this model coincides with the one where the occupation numbers of fermions on each site are independently and continuously monitored [102, 45]. For this reason, the dephasing model has recently attracted a lot of interest as a prototypical model exhibiting a measurement-induced transition in the entanglement dynamics [43, 44]. Finally, we note that in Ref. [30] a mapping between the dephasing model and the Fermi-Hubbard model was established. Although we will not discuss this mapping here, we stress that it implies that our method also provides exact quantities for equivalent systems governed by Hubbard Hamiltonians. The stochastic Hamiltonian for the dephasing model is readily obtained from the substitution (39), namely $dH_{t}=\sqrt{2\gamma_{{\rm Dph}}}\sum_{j}\hat{n}_{j}dB_{t}^{j}\,,$ (40) where $B_{t}$ denotes a real Brownian motion with Itō rule $dB_{t}^{j}dB_{t}^{k}=\delta_{j,k}dt$. The retarded and advanced self-energies are obtained from Eq. (32) and read $\Sigma_{j,k}^{\cal{R}(\cal{A})}(t,t^{\prime})=\mp\frac{i}{2}\gamma_{{\rm Dph}}\delta_{j,k}\delta(t-t^{\prime})\,,$ (41) while $G^{\mathcal{R},\mathcal{A}}$ are obtained by inversion of Eq. (30) with inclusion of the self-energy (41). These functions are symmetric and given by, for $i\leq j$ [103, 79]: $G_{i,j}^{\cal{R}/\cal{A}}(\omega)=\frac{(-1)^{i+j}\tau^{j-i}B_{i-1}^{\cal{R}/\cal{A}}B_{N-j}^{\cal{R}/\cal{A}}}{\left[\omega\pm i\left(\Gamma+\frac{\gamma_{{\rm Dph}}}{2}\right)\right]B_{N-1}^{\cal{R}/\cal{A}}-\tau^{2}B_{N-2}^{\cal{R}/\cal{A}}}\,,$ (42) where $B_{i}^{\cal{R}/\cal{A}}=[(r_{+}\pm i\Gamma)r^{i}_{+}-(r_{-}\pm i\Gamma)r_{-}^{i}]/(r_{+}-r_{-})$ and $r_{\pm}=\left(\omega\pm i\frac{\gamma_{{\rm Dph}}}{2}+\sqrt{(\omega\pm i\frac{\gamma_{{\rm Dph}}}{2})^{2}-4\tau^{2}}\right)/2$.
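In practice, $G^{\cal R}$ can also be obtained by direct matrix inversion of the inverse propagator (30) supplemented with the self-energy (41), rather than through the closed form (42). The sketch below (illustrative parameter values) does exactly that, and verifies the sum rule $\int\frac{d\omega}{2\pi}\mathcal{A}_{11}(\omega)=1$ for the edge spectral function $\mathcal{A}_{11}(\omega)=-2\,{\rm Im}\,G^{\cal R}_{1,1}(\omega)$:

```python
import numpy as np

def GR_dephasing(w, N, tau=1.0, Gamma=0.5, gamma_dph=0.2):
    """Retarded Green's function of the dephasing chain by inversion of
    Eq. (30) with the self-energy of Eq. (41) (illustrative parameters)."""
    Ginv = tau * (np.eye(N, k=1) + np.eye(N, k=-1)).astype(complex)
    diag = np.full(N, w + 0.5j * gamma_dph)   # bulk: w + i gamma_Dph / 2
    diag[0] += 1j * Gamma                     # left reservoir
    diag[-1] += 1j * Gamma                    # right reservoir
    return np.linalg.inv(Ginv + np.diag(diag))

N, dw = 21, 0.02
omegas = np.arange(-200.0, 200.0, dw)
A_edge = np.array([-2.0 * GR_dephasing(w, N)[0, 0].imag for w in omegas])
sum_rule = A_edge.sum() * dw / (2.0 * np.pi)  # -> close to 1
```

The positivity of $\mathcal{A}_{11}(\omega)$ and the unit sum rule follow from the positive-definite damping matrix and the canonical anticommutation relations, respectively, and hold for any of the parameter choices above.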
The related spectral function at the system edges, $\mathcal{A}(\omega)=\mathcal{A}_{11}(\omega)=\mathcal{A}_{NN}(\omega)$, is represented in Fig. 8 for different system sizes $N$. Figure 8: Edge spectral function $\mathcal{A}(\omega)$ for the dephasing model (40) in the configuration of Fig. 1 for different system sizes $N$. Darker blue solid lines correspond to larger system sizes $N=11,21,51,101,201,501,1001$. We consider only odd values of $N$, as they ensure the presence of a resonance at $\omega=0$. The inset shows the exponential convergence of the peak spectral function $\mathcal{A}_{N}=\mathcal{A}(\omega=0)$ at fixed (odd) system size towards its asymptotic value $\mathcal{A}_{\infty}(\omega)$, obtained from Eq. (43) and corresponding to the dashed black line in the main plot (for $N\gtrsim 100$ and the parameters reported in the plot, numerical curves overlap with $\mathcal{A}_{\infty}(\omega)$). The spectral function displays $N$ peaks corresponding to the eigenspectrum of the system without dissipation. The width of the peaks is controlled non-trivially by the hybridization constant $\Gamma$ and the bulk dissipation rate $\gamma_{\rm Dph}$. Plots for closely related quantities in the $\gamma_{\rm Dph}\rightarrow 0$ limit can be found in Ref. [79]. In this nondissipative limit, the height of the peaks does not decay with the system size $N$. On the contrary, for $\gamma_{\rm Dph}>0$, the peaks vanish in the $N\rightarrow\infty$ limit, and the spectral function converges exponentially towards a smooth function $\mathcal{A}_{\infty}(\omega)$, as shown in the inset of Fig. 8.
One can analytically derive $\mathcal{A}_{\infty}(\omega)$, as the retarded Green’s function (42) at the edges $G^{\cal R}_{1,1}=G^{\cal R}_{N,N}$ converges to $\lim_{N\rightarrow\infty}G^{\cal R}_{1,1}(\omega)=\frac{1}{\omega+i\left(\Gamma+\frac{\gamma_{\rm Dph}}{2}\right)-\frac{\tau^{2}}{r_{\rm sgn(\omega)}}}\,.$ (43) The exponential convergence of the edge spectral function is reproduced by all the other QSHs discussed below and verifies one of the preliminary assumptions stated in Section II, identifying the density difference $\Delta n$ as the term entirely responsible for the $1/N$ suppression of the dissipative current in (3). Our approach provides an efficient way to compute the second term in (37), through an explicit derivation of the matrix $M$: $M_{j,k}=\gamma_{\rm Dph}\int\frac{d\omega}{2\pi}G_{j,k}^{\cal R}G_{k,j}^{\cal A}\,.$ (44) As we detail in Appendix D, the expressions (38), (42) and (44) allow the efficient derivation of the current (37) up to system sizes $N\simeq 10^{3-4}$. As a consequence, we can systematically study the crossover from a ballistic to a diffusive regime expected at length scales $N^{*}\simeq\gamma_{{\rm Dph}}^{-1}$ [31]. See also Appendix E for additional details. Two main technical advances of our approach compared to previous studies [31, 32, 104, 26, 97, 81, 105] are its ability to naturally address reservoirs at finite temperature $T<\infty$, accessing transport regimes left unexplored so far, and its access to two-time correlators in the stationary state. An important consequence of our analysis is that the rescaled conductance of the system, which we define as $\mathcal{G}=NJ/\delta\mu$, has a non-trivial dependence on the temperature $T$ and the dephasing rate $\gamma_{\rm Dph}$, namely $\mathcal{G}=\lim_{N\rightarrow\infty}\frac{JN}{\delta\mu}=\frac{\eta\tau^{\alpha+\delta}}{T^{\alpha}\gamma_{{\rm Dph}}^{\delta}}\,.$ (45) In Fig.
9, we plot the coefficients $(\alpha,\delta,\eta)$ across the parameter space $(T,\gamma_{{\rm Dph}})$. Figure 9: Fitted parameters $(\alpha,\delta,\eta)$ of the rescaled conductance of the dephasing model as defined in Eq. (45). These values define different regions in the temperature-dephasing plane with different behaviors for the conductance, see Eq. (46). The dashed lines are a guide for the eyes to delimit the regions. The bottom right plot summarizes the characteristic values of each region. From the plot, we identify three main diffusive transport regimes, $R_{\tau,T,\gamma}$, in which these coefficients are different. Note that the regions are not connected by sharp phase transitions but instead by crossovers, which appear sharp in logarithmic scale. Deep in the three regions, the rescaled conductance takes the approximate values $\mathcal{G}=\begin{cases}\frac{\tau^{2}}{T\gamma_{{\rm Dph}}}&T\gg\gamma_{{\rm Dph}},\tau\\\ \frac{2.6\tau^{2}}{\gamma_{{\rm Dph}}^{2}}&\gamma_{{\rm Dph}}\gg T,\tau\\\ \frac{1.3\tau}{\gamma_{{\rm Dph}}}&\tau\gg\gamma_{{\rm Dph}},T\end{cases}\,.$ (46) In previous studies, carried out in the $T\rightarrow\infty$ limit for the reservoirs, where they can be described as Lindblad injectors [79], the conductance $\mathcal{G}$ is assumed to be proportional to the bulk diffusion constant $D$ [4, 21]. The density profiles in the system (see App. E) clearly show that such an interpretation cannot be extended to lower temperatures. The emergence of coherent effects between the system and its baths leads to finite-size boundary effects, which do not allow the determination of the bulk diffusion constant through Eq. (46). To obtain the bulk diffusion constant, we can use our approach to derive the density profiles _inside the system_ and far away from its boundaries.
We numerically verify Fick’s law (1) in the bulk and find the diffusion constant to be $D=\frac{2\tau^{2}}{\gamma_{\rm Dph}}\,,$ (47) which is double the conductance in the $T\gg\gamma_{{\rm Dph}}$ limit, as expected. At variance with the rescaled conductances (46), this quantity is not affected by any boundary effect and it is in agreement with previous analytical ansätze, valid in the infinite temperature limit [31]. The independence of the diffusion constant (47) from the temperature at the boundaries is a consequence of the stochastic dephasing (40), which locally brings the system back to an infinite temperature equilibrium state regardless of boundary conditions. We thus see in this example that our approach allows us to compute both two-point and four-point measurements of the resistance. Even for diffusive systems, the distinction between the two processes can be important. To conclude our analysis of the transport in the dephasing model, we note that the different transport regimes in (46) explicitly depend on the _stationary bias_ $n_{1}-n_{N}$, which suffers from boundary effects in some regions of the $(T,\gamma_{{\rm Dph}})$ parameter space. We confirm with our exact numerical solution that this is indeed the case. This interesting bias dependence is beyond the scope of the present paper and left for future studies. #### 1/N expansion Let us now show how the diffusion constant (47), which we obtained from our exact solution, can also be easily derived from the novel $1/N$ perturbative theory we introduced in Sec. II. The first step is to fix the action of the infinite size theory $S_{\infty}$ with the aid of the coarse-graining procedure. We start by arranging the elements of $G^{\mathcal{R}/\mathcal{A}/\mathcal{K}}_{i,j}$ as a matrix and subdividing it into square cells of width $a$.
We take the average over all the terms in the cell to obtain the effective Green’s function $G^{\mathcal{R}/\mathcal{A}/\mathcal{K}}_{\tilde{i},\tilde{j}}$, describing the correlations between the $\tilde{i}$th and $\tilde{j}$th cells. This procedure is illustrated in Fig. 10 (right) for the retarded Green’s function and increasing cell size ($a=1$ corresponds to no coarse-graining). Figure 10: Coarse-graining procedure in the dephasing model, $\gamma_{\text{Dph}}=1$, for increasing size of the cell, $a$. Left: Real and imaginary parts of the diagonal terms of $G^{\cal R}(\omega)$ for increasing cell size, $a=1,2,3,4,5,7,12,20,40,50$, respectively from light to dark. Inset: $G^{\cal K}$ component measured at one-third of the chain and $T=0.1$. Black lines depict the $1/N$ predictions obtained by inverting the matrix in Eq. (48). The symmetry around $\omega=0$ is broken as $a$ increases. Right: Color plot of the absolute value of $G^{R}(\omega=0)$ for the first $20\times 20$ coarse-grained cells of a system with $N=2000$ sites; darker colors represent higher values. As the cell size increases, $G^{\mathcal{R}/\mathcal{A}/\mathcal{K}}_{\tilde{i},\tilde{j}}$ becomes a diagonal matrix, with the off-diagonal terms vanishing as $1/a$ and exponentially suppressed with the distance $|\tilde{i}-\tilde{j}|$. This explicit calculation confirms the diagonal structure of $G^{\mathcal{R}/\mathcal{A}/\mathcal{K}}$ and the reduction of the action to a sum of local commuting terms $S_{\infty}=\sum_{\tilde{j}}S_{\tilde{j}}$, where $S_{\tilde{j}}$ is the action associated with the $\tilde{j}$th cell. To simplify the notation, we drop the tilde indices from now on and implicitly assume that the calculations are done in the effective coarse-grained theory. The diagonal terms of $G^{\mathcal{R}}(\omega)$ are depicted in Fig. 10 (left) as a function of frequency, with $G^{\mathcal{K}}$ shown in the inset.
As $a$ increases, the symmetry center of the functions shifts to $\omega=-2\tau$, converging to the black curves depicting Eqs. (9),(10). As mentioned before, the only free parameters that need to be fixed in the local theory are $\mu_{j},T_{j}$ and $\Sigma_{j}(\omega)$. For the dephasing model, we find that the self energy is simply given by $i\gamma_{\rm Dph}/2$. For a single site, such an imaginary term was shown [79] to coincide with the effective action of a reservoir in the limit $\mu_{j},T_{j}\to\infty$ while keeping the ratio $\mu_{j}/T_{j}$ fixed. Let $n_{j}$ be the local density at site $j$, $n_{j}=\frac{1}{2}(1-iG^{K}(t,t))$. Using $[G^{-1}]^{\mathcal{K}}=-G^{\mathcal{R}-1}G^{\mathcal{K}}G^{\mathcal{A}-1}$ together with the equilibrium relation $G^{\mathcal{K}}(\omega)=-\tanh{\frac{\mu_{j}}{T_{j}}}(G^{\mathcal{R}}(\omega)-G^{\mathcal{A}}(\omega))$, the ratio $\mu_{j}/T_{j}$ is then fixed by the local density $n_{j}$. Interestingly, at leading order in $1/N$, this relation turns out to be verified even at the microscopic level, i.e., for $a=1$. This tells us that the local equilibration condition of the infinite size theory is always true in our case. We furthermore suppose that, in the coarse-grained theory, the retarded and advanced components will be those of a single-site two-level system, i.e., we suppose the following expression for $S_{\tilde{j}}$: $\displaystyle S_{\tilde{j}}$ $\displaystyle=\int\frac{d\omega}{2\pi}(\bar{\psi}_{j}^{1},\bar{\psi}_{j}^{2})\begin{pmatrix}\omega+i\frac{\gamma_{\rm Dph}}{2}&-i(2n_{j}-1)\gamma_{\rm Dph}\\\ 0&\omega-i\frac{\gamma_{\rm Dph}}{2}\end{pmatrix}\begin{pmatrix}\psi_{j}^{1}\\\ \psi_{j}^{2}\end{pmatrix}$ (48) where we absorbed the $-2\tau$ shift of frequencies in the integral. Expression (48) is valid in the bulk, independently of the values of $\mu,T$ at the boundaries. We check explicitly that the coarse-grained theory indeed converges towards $S_{\tilde{j}}$ as $a$ is increased, as shown in Fig. 10. 
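As an illustration, the block-averaging step of the coarse-graining procedure can be sketched numerically. The matrix size, cell width, and the exponentially decaying toy matrix below (standing in for an actual Green's function) are illustrative assumptions, not the paper's data:

```python
import numpy as np

def coarse_grain(G, a):
    """Block-average an N x N matrix into cells of width a: each entry of the
    result is the mean over the corresponding a x a cell of G, mimicking the
    coarse graining of G^{R/A/K}_{i,j} described in the text."""
    n = (G.shape[0] // a) * a                       # drop an incomplete trailing cell
    blocks = G[:n, :n].reshape(n // a, a, n // a, a)
    return blocks.mean(axis=(1, 3))

# Toy "Green's function" with exponentially decaying correlations
N = 40
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
G = np.exp(-np.abs(i - j) / 3.0)

Gc = coarse_grain(G, a=8)                           # 5 x 5 coarse-grained matrix
print(Gc.shape)                                     # (5, 5)
print(Gc[0, 0] > 10 * Gc[0, 4])                     # diagonal cells dominate
```

Increasing `a` further suppresses the off-diagonal cells, in line with the $1/a$ decay quoted above.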
In the path integral formalism, the $1/N$ correction to the current (11) is given by $J=i\langle\hat{J}_{j}[\bar{\psi}^{+},\psi^{+}]S_{{\rm dyn}}\rangle_{\infty}$ (49) where $\hat{J}$ is the current operator, $\hat{J}[\bar{\psi}^{+},\psi^{+}]$ is the evaluation of this operator in the fermionic coherent basis on the $+$ Keldysh contour, $\langle\bullet\rangle_{\infty}:=\int{\cal D}[\psi^{\pm},\bar{\psi}^{\pm}]e^{iS_{\infty}}\bullet$ and $S_{{\rm dyn}}$ is the Keldysh action (29) associated to the contour integral of $\hat{H}_{{\rm dyn}}$ defined in (11). Here we have explicitly that $\hat{H}_{{\rm dyn}}=\tau\sum_{j}c_{j}^{\dagger}c_{j+1}+{\rm h.c.}$ The current operator is in this case: $\hat{J}_{j}=i\tau(c_{j+1}^{\dagger}c_{j}-c_{j}^{\dagger}c_{j+1}).$ (50) A straightforward calculation reported in Appendix C then leads to an explicit derivation of Fick’s law: $\displaystyle J$ $\displaystyle=-\frac{2\tau^{2}}{\gamma_{{\rm Dph}}}\nabla n_{j},$ (51) where $\nabla$ is the discrete gradient $\nabla n_{j}=n_{j+1}-n_{j}$. Equation (51), derived from the $1/N$ expansion, coincides with the exact result (47) in the whole parameter space. Such agreement validates the $1/N$ expansion as a systematic and efficient procedure to compute diffusion constants. From the computational point of view, note that the $1/N$ expansion did not resort to any numerical scheme and provided an exact expression of the diffusion constant, which could not be extracted explicitly from the Dyson equation (25). ### IV.2 QSSEP In this section, we illustrate how our method can also be applied to the study of the _quantum symmetric simple exclusion process_ (QSSEP) [35]. The QSSEP is a model of fermionic particles that hop on the lattice with random amplitudes, which can be thought of as the quantum generalization of classical exclusion processes [92]. 
Classical exclusion processes have attracted widespread interest over the last decades, as they constitute statistical models with simple rules but a rich behavior that is thought to be representative of generic properties of non-equilibrium transport. They have been particularly impactful in the formulation of the macroscopic fluctuation theory (MFT) [91], which aims at understanding, in a generic thermodynamic sense, macroscopic systems driven far from equilibrium. It is hoped that the QSSEP will play a similar role in a quantum version of MFT, which is for now largely unknown. We are interested in a model of QSSEP plus the coherent jump Hamiltonian (28) that was first studied in Ref. [33]. The case of pure QSSEP can be retrieved in the limit $\tau\to 0$. As for the dephasing model discussed in Sec. IV.1, we will see that the $1/N$ expansion formalism again offers a simple route to derive the diffusive current. As pictured in Fig. 7, the QSSEP couples nearest neighbour sites. It is derived from Eqs. (12) and (15) by taking the prescription $\gamma_{i,j}=\gamma_{\rm QS}\,\frac{\delta_{i,j+1}+\delta_{i,j-1}}{2}\,.$ (52) The associated QSH is $dH_{t}=\sqrt{\gamma_{\text{QS}}}\sum_{j}\left[c_{j}^{\dagger}c_{j+1}dW_{t}^{j+1,j}+c_{j+1}^{\dagger}c_{j}dW_{t}^{j,j+1}\right]\,.$ (53) From Eq. (24), we get the advanced and retarded components of the self energies: $\Sigma_{j,k}^{\cal{R}(\cal{A})}(t,t^{\prime})=\mp\frac{i}{2}\gamma_{\rm QS}\delta_{j,k}\delta(t,t^{\prime})\left[1-\frac{\delta_{1,j}+\delta_{j,N}}{2}\right]\,.$ (54) The retarded and advanced Green functions are given by inserting the bare propagators (30) and the self energy (54) into the Dyson equation (25). These propagators can be directly derived from the ones of the dephasing model by making the substitutions $\gamma_{\rm Dph}\rightarrow\gamma_{\rm QS}$ and $\Gamma\rightarrow\Gamma-\gamma_{\rm QS}/2$. As a consequence, all the considerations made for the spectral function and Fig. 
8, in the dephasing model, equally apply to the QSSEP. This is not the case for the Keldysh component, where the $M$ matrix takes the different expression (in the expression below, if an index falls outside the chain, the corresponding term is simply set to $0$; we do not write this explicitly to avoid cumbersome notation) $\displaystyle M_{j,k}=\frac{\gamma_{{\rm QS}}}{2}\int\frac{d\omega}{2\pi}\big{(}$ $\displaystyle G_{j,k-1}^{\cal R}G_{k-1,j}^{\cal A}+G_{j,k+1}^{\cal R}G_{k+1,j}^{\cal A}\big{)}.$ (55) Combining the above equation with (33) yields $G^{K}$ and allows us to compute the current from (3), or from its linearized version (37). For all values of the parameter space $(T,\gamma_{{\rm QS}})$, the current follows the relation (see Fig. 11) $J_{j}=-\left(\frac{\gamma_{{\rm QS}}}{2}+\frac{2\tau^{2}}{\gamma_{{\rm QS}}}\right)\nabla n_{j},$ (56) which tells us that the diffusion constant is $\frac{\gamma_{{\rm QS}}}{2}+\frac{2\tau^{2}}{\gamma_{{\rm QS}}}$, in agreement with the result presented in [33]. Figure 11: Diffusion constant of the QSSEP model as a function of the noise strength, $\gamma$, for different hopping amplitudes $\tau$ and temperatures $T$. The results are independent of the latter. The dots are obtained from the MW formula (3) while dashed lines depict the results of the $1/N$ expansion (56). For $\tau=0$, this generalizes the result from [35], which was restricted to boundaries with infinite temperature and chemical potential. #### 1/N expansion The expression (56) for the current can also be obtained easily within the $1/N$ perturbative approach illustrated in Sec. II. The action in the infinite size limit is again of the form (48). From (54), we see that the self-energy is obtained from the one of the dephasing model by simply replacing $\gamma_{\rm Dph}$ by $\gamma_{\rm QS}$, up to differences that tend to $0$ in the infinite size limit. 
The current operator from site $j$ to $j+1$ in the bulk is given here by $\hat{J}_{j}=\frac{\gamma_{{\rm QS}}}{2}(\hat{n}_{j}-\hat{n}_{j+1})+i\tau(c_{j+1}^{\dagger}c_{j}-c_{j}^{\dagger}c_{j+1}).$ (57) The first part is easily evaluated to be $-\gamma_{{\rm QS}}\nabla n_{j}/2$ to first order in $1/N$ in the diffusive limit. For the second part, we simply need to redo the previous derivation by replacing $\gamma_{{\rm Dph}}$ by $\gamma_{{\rm QS}}$. The term $i\tau(c_{j+1}^{\dagger}c_{j}-c_{j}^{\dagger}c_{j+1})$ then becomes $-\frac{2\tau^{2}}{\gamma_{\rm QS}}(n_{j+1}-n_{j})$, which yields (56). ### IV.3 Long-range Hopping Finally, we turn to the model with long-range hopping generated by the noise (see Fig. 7). In this model, each particle can jump to any unoccupied site with a probability rate that decays with the distance as a power law of exponent $\alpha$. Power laws appear naturally, for instance, in quantum simulation with Rydberg atoms [107, 108, 109], where they emerge because of the dipole-dipole interactions. Depending on the order of the interactions between atoms, different power laws can be reached. In the limit $\alpha\to 0$, we get an “all-to-all” model, i.e., there are random quantum jumps between all pairs of sites. These types of models have recently attracted interest as toy models to understand the interplay between quantum chaos and quantum information, notably in the context of random unitary circuits [110, 88]. For the long-range QSH we have $\gamma_{j,k}=(1-\delta_{j,k})\frac{\gamma_{\text{LR}}}{\mathcal{N}_{\alpha}|j-k|^{\alpha}}$ (58) and the corresponding Hamiltonian is $dH_{t}=\sum_{j\neq k}\sqrt{\frac{2\gamma_{\text{LR}}}{\mathcal{N_{\alpha}}|j-k|^{\alpha}}}c_{j}^{\dagger}c_{k}dW_{t}^{k,j}.$ (59) where $\mathcal{N_{\alpha}}=2\sum_{k=1}^{N/2}k^{-\alpha}$ is a suitable normalization constant such that $\mathcal{N}_{\infty}=2$ and $\mathcal{N}_{0}=N$. 
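The two limits of the normalization constant $\mathcal{N}_{\alpha}$ can be checked directly; below, a large finite exponent stands in for $\alpha\to\infty$:

```python
import numpy as np

def norm_alpha(alpha, N):
    """Normalization N_alpha = 2 * sum_{k=1}^{N/2} k^{-alpha} of Eq. (59)."""
    k = np.arange(1, N // 2 + 1)
    return 2.0 * np.sum(k ** (-float(alpha)))

N = 1000
print(norm_alpha(50.0, N))   # alpha -> infinity: only k = 1 survives, so N_inf = 2
print(norm_alpha(0.0, N))    # alpha = 0: every k contributes 1, so N_0 = N
```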
The limiting cases of this model are the QSSEP and the “all-to-all” model, reached respectively for $\alpha\rightarrow\infty$ and $\alpha=0$. For the long-range hopping, the expression of the retarded (advanced) self-energy is $\Sigma_{j,k}^{\cal R(A)}(t,t^{\prime})=\mp\delta_{j,k}\delta(t-t^{\prime})\frac{i}{2}\sum_{l\neq j}\frac{\gamma_{\rm LR}}{\mathcal{N}_{\alpha}|j-l|^{\alpha}}.$ (60) As before, injecting the bare propagators (30), (31) and (60) into (25) yields $G^{\cal R(A)}$. As illustrated in Fig. 15 in Appendix C.3, this form of the self-energy is equivalent to the one derived for the dephasing model (41), with the only difference that the effective dephasing rate $\gamma$ becomes site-dependent because of the presence of boundaries connected to reservoirs. We verified that the exponential convergence of the spectral function illustrated in Fig. 8 applies, as expected, to this model as well. The $M$ matrix is $M_{j,k}=\sum_{l\neq k}\int\frac{d\omega}{2\pi}G^{\mathcal{R}}_{j,l}(\omega)G^{\mathcal{A}}_{l,j}(\omega)\frac{\gamma_{\rm LR}}{\mathcal{N}_{\alpha}|k-l|^{\alpha}}$ (61) which, combined with (33), yields $G^{K}$. In the absence of coherent hopping, there is a simple argument suggesting a phase transition in the transport properties of the system at $\alpha=3$. If one considers the stochastic process (59) alone, its average has a simple interpretation as a classical Markov process, where the probability for a fermion at site $0$ to jump to site $j$ during a timestep $\Delta t$, given that the target site $j$ is empty, is $p_{j}:=\frac{\gamma_{\rm LR}}{\mathcal{N}_{\alpha}|j|^{\alpha}}\Delta t$. For a single particle, this defines a random walk whose variance is given by $v:=\sum_{j}p_{j}j^{2}$, which is related to the diffusion constant via $D=v/\Delta t$. This sum diverges at least logarithmically for $\alpha\leq 3$. 
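The divergence of the variance at $\alpha\leq 3$ can be illustrated by the growth of the truncated sums; the cutoff values below are arbitrary:

```python
import numpy as np

def partial_variance(alpha, M):
    """Truncated random-walk variance v ~ sum_{j=1}^{M} j^{2 - alpha}
    (the prefactors gamma_LR / N_alpha and Delta t are dropped)."""
    j = np.arange(1, M + 1)
    return np.sum(j ** (2.0 - alpha))

# Ratio of the sum at two cutoffs: ~1 if convergent, clearly > 1 if divergent
ratios = {}
for alpha in (2.5, 3.0, 3.5):
    ratios[alpha] = partial_variance(alpha, 100_000) / partial_variance(alpha, 10_000)
    print(alpha, ratios[alpha])
```

For $\alpha=2.5$ the truncated sum grows as $\sqrt{M}$ (ratio $\approx\sqrt{10}$), for $\alpha=3$ it grows logarithmically, and for $\alpha=3.5$ it has essentially converged.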
However, note that there is no simple reasoning to understand what happens if one studies the model with the coherent hopping term since, a priori, a purely classical analysis no longer holds. For the numerical computations, we fix $\gamma_{\text{LR}}=1$ and $T=1000$, but the results are independent of the latter. In Fig. 12, we show the dependence of the linear response current on the system size for different values of $\alpha$. Figure 12: Scaling of the linear response current as a function of the system size $N$, for varying power-law coefficients $\alpha$ in the long-range hopping Hamiltonian (59). The saturation of $J_{\text{LRH}}$ to finite values, for $\alpha\ll 1$, signals a ballistic regime of transport, which contrasts with the diffusive regime observed for $\alpha\gg 1$, where $J_{\rm LRH}$ vanishes as $N^{-1}$, as highlighted by the black dashed line. When $\alpha$ is small, the current saturates in the $N\rightarrow\infty$ limit, while for large values of $\alpha$ it decays as $N^{-1}$, as depicted by the dashed line. This is a signature of a ballistic-to-diffusive transition that occurs at a finite value of $\alpha$. To characterize this transition further, we look at the order parameter $D^{-1}=-\lim_{N\rightarrow\infty}\nabla n/J$. For diffusive systems, $D^{-1}$ is the inverse of the diffusion constant, and it should be zero for ballistic systems. In App. E, we discuss the numerical fitting required to obtain $D^{-1}$ from a finite-size scaling analysis. $D^{-1}$ undergoes a second-order phase transition at a critical power $\alpha_{c}\approx 2.87$ (see the dark-blue dots in Fig. 13). When approaching the transition from the diffusive region, the inverse diffusion constant vanishes as $D^{-1}\sim(\alpha-\alpha_{c})^{1.21}$, i.e., the diffusion constant diverges (see the gray dashed line in Fig. 13). It is quite remarkable and counterintuitive that setting $\tau\neq 0$ pushes the diffusive regime to values of $\alpha<3$ instead of the opposite. 
A naive reasoning would suggest that the addition of a coherent hopping term would push the ballistic phase to values of $\alpha$ larger than the classical estimate ($\alpha=3$), as a finite $\tau$ would favor the coherent propagation of single particles across the system. We observe that, surprisingly, the opposite is true, and we leave the exploration of this effect to future investigations. #### 1/N expansion For $\alpha>\alpha_{c}$, the $1/N$ expansion is valid and we can compute $D^{-1}$ in the limit of infinite temperature. The action in the infinite system size limit is again of the form (48) and the lifetime is fixed by (60). Unlike for the previous models, there is no simple analytic expression for the diffusion constant, since its derivation depends on the system size. We provide a detailed derivation of the diffusive current in App. C. In Fig. 13, we depict the results of the $1/N$ expansion for various system sizes (full lines) and overlap them with the numerical solution of (37) (dots). Figure 13: Second-order phase transition in the long-range hopping model: $D^{-1}=-\lim_{N\rightarrow\infty}\nabla n/J_{\text{LRH}}$ as a function of $\alpha$, for $\gamma_{\text{LR}}=1$. Dots represent the numerical solution of (37) while full lines depict the $1/N$ expansion’s predictions; both results overlap. The $N\rightarrow\infty$ limit is obtained via the fitting procedure detailed in Appendix E. The gray dashed line highlights the vanishing of the inverse diffusion constant as $D^{-1}\sim(\alpha-\alpha_{c})^{1.21}$. Both methods agree up to machine precision, which may be an indication that the $1/N$ perturbative approach is surprisingly exact even in the ballistic regime, $\alpha<\alpha_{c}$. As already highlighted above, the interplay between transport and coherence gives rise to rich physics in the long-range hopping model, but understanding it in depth is beyond the goals of this paper and will be addressed in a subsequent work. 
## V Conclusion In this work, we provided a comprehensive analysis of the large system size properties of diffusive quantum systems driven out of equilibrium by boundary reservoirs. In particular, we showed that diffusive quantum systems can be described by an effective and simple equilibrated Gaussian theory, which provides a systematic way to compute their diffusive transport properties via an expansion in the inverse system size. We illustrated the correctness of our $1/N$ expansion by comparing with exact results that we obtained, using a self-consistent Born method, for a large class of quantum stochastic Hamiltonians that show diffusive behavior. In particular, the self-consistent approach allowed us to explicitly derive the structure of the effective Gaussian theory, which consists of decoupled sites with a finite lifetime and where the effective equilibration and diffusivity are entirely encoded in the Keldysh component of local correlations. As an illustration of the effectiveness of our approach, we computed the current in three models that have been of interest in the recent literature: the dephasing model, the QSSEP and a model with stochastic long-range hopping. For the dephasing model and the QSSEP, we illustrated the ability of our approach to extend the study of transport to situations with boundaries at finite temperatures and arbitrary chemical potentials. This allowed us to show how dissipative processes restore effective infinite temperature behavior in the bulk and to explicitly derive the effective Gaussian theory via a coarse-graining procedure. For the long-range hopping model, our analysis unveiled that coherent hopping processes trigger diffusive behavior in regimes where transport would be ballistic in the exclusive presence of stochastic long-range hopping. 
This counter-intuitive phenomenon is a remarkable example of the non-trivial interplay between coherent and dissipative dynamics in open quantum systems, which could be efficiently addressed with the self-consistent approach. The validity of the self-consistent Born approximation for our class of stochastic Hamiltonians provides in principle the solution to the noisy version of any model whose bare action is Gaussian. Our proof is not limited to stationary behavior or to the one-dimensional geometry of the problems addressed in this paper, but can be extended to time-dependent and higher dimensional problems as well. This possibility opens interesting perspectives for the investigation of novel phenomena in a large class of problems. Extensions of our approach could be devised to study quantum asymmetric exclusion processes [111, 112, 113], spin and heat transport, the dynamics after a quench, fluctuations on top of, and relaxation towards, stationary states, as well as extensions to ladder geometries or to systems with non-trivial topological structure. These settings have so far been largely intractable, or were solved by case-by-case methods, for which we have provided here a unified framework. An important issue raised by our work is whether our description equally holds, and provides a technical advantage, for studying the emergence of resistive behavior triggered by intrinsic many-body interactions with unitary dynamics, where the breaking of integrability leads to diffusive transport [1, 2, 3, 4, 19, 20, 21, 22, 23, 24]. A priori, the arguments presented in Section II apply to any quantum system which follows a local Fick’s law and, as such, they have the potential for very broad applications. Additionally, it is commonly accepted that the phenomenology of diffusion is associated with integrability breaking and the subsequent approach to thermal equilibrium [114, 115, 116, 117, 118]. 
Understanding if and how our approach can help make this link clearer is an exciting open question. In this respect, we also note that, because of the existing mapping between the Fermi-Hubbard and the dephasing model [30], the self-consistent Born approximation allows one to compute exact quantities in the Fermi-Hubbard model. As far as we know, exact solutions for this model were only obtained in the framework of the Bethe Ansatz, and it is thus interesting that a seemingly unrelated approach allows one to obtain exact quantities as well. Whether a connection exists between the two approaches and whether the exact summation allows one to compute quantities out of reach of the Bethe ansatz are interesting open questions. ###### Acknowledgements. We thank L. Mazza for useful suggestions during the writing of the manuscript. This work has been supported by the Swiss National Science Foundation under Division II. J.F. and M.F. acknowledge support from the FNS/SNF Ambizione Grant PZ00P2_174038. We also thank X. Turkeshi and M. Schiró for making us aware, in the final phase of the writing of this manuscript, of their work [105] before publication, where a study of the dephasing model from the point of view of Green’s functions has also been performed. ## Appendix A Unraveling to continuous measurement In this appendix, we discuss the unraveling of Eq. (12) to a quantum stochastic differential equation describing a system under continuous monitoring. In the Itō prescription, the stochastic equation of motion of a quantum system subject to continuous measurement of an observable $O+O^{\dagger}$ at rate $\gamma$ is given by [95] $d\rho=\left[{\cal L}_{0}(\rho)+\frac{\gamma}{2}L_{O}(\rho)\right]dt+\sqrt{\frac{\gamma}{2}}D_{O}(\rho)\,dB_{t}$ (62) where $\mathcal{L}_{0}$ describes the dynamics in the absence of measurement, $L_{O}(\rho)=(O\rho O^{\dagger}-\frac{1}{2}(O^{\dagger}O\rho+\rho O^{\dagger}O))$ and $D_{O}(\rho)=O\rho+\rho O^{\dagger}-\rho{\rm tr}(O\rho+\rho O^{\dagger})$. 
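A minimal numerical sketch of Eq. (62) is given below, with an illustrative single-qubit choice $O=\sigma_{-}$, no Hamiltonian part, and arbitrary parameters (these are assumptions for the sketch, not the bond operators of the main text). An Euler–Maruyama integration preserves the trace trajectory by trajectory, and the noise average reproduces the Lindblad decay:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: single qubit, O = sigma_- = |0><1|, no Hamiltonian part
O = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
gamma, dt, steps, ntraj = 1.0, 0.002, 500, 400        # evolve to t = 1

def L_O(rho):
    # Lindblad dissipator: O rho O^+ - (1/2){O^+ O, rho}
    OdO = O.conj().T @ O
    return O @ rho @ O.conj().T - 0.5 * (OdO @ rho + rho @ OdO)

def D_O(rho):
    # measurement backaction: O rho + rho O^+ - rho tr(O rho + rho O^+)
    X = O @ rho + rho @ O.conj().T
    return X - rho * np.trace(X).real

n_final, trace_err = 0.0, 0.0
for _ in range(ntraj):
    rho = np.diag([0.0, 1.0]).astype(complex)         # start in the excited state |1>
    for _ in range(steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        rho = rho + (gamma / 2) * L_O(rho) * dt + np.sqrt(gamma / 2) * D_O(rho) * dB
    n_final += rho[1, 1].real / ntraj
    trace_err = max(trace_err, abs(np.trace(rho).real - 1.0))

# The dB term has zero mean in the Ito convention, so <n(t)> follows the
# Lindblad decay e^{-gamma t / 2}
print(n_final, np.exp(-0.5), trace_err)
```

Since the noise term has zero mean in the Itō convention, averaging many trajectories recovers the Lindblad evolution, which is precisely the content of the unraveling.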
We assume that at each link there are two independent measurement processes, 1 and 2, with the same rate $2\gamma_{i,j}$ and operators $O_{1,i,j}:=c_{j}^{\dagger}c_{i}$ and $O_{2,i,j}:=ic_{j}^{\dagger}c_{i}$. The corresponding measured observables are $O_{1,i,j}+O_{1,i,j}^{\dagger}=c_{j}^{\dagger}c_{i}+c_{i}^{\dagger}c_{j}$ and $O_{2,i,j}+O_{2,i,j}^{\dagger}=i(c_{j}^{\dagger}c_{i}-c_{i}^{\dagger}c_{j})$, namely the so-called bond density and the current. It is straightforward to see that, averaging (62) over the noise, we recover (12). ## Appendix B Proof of the non-crossing rule We want to prove that, for all stochastic Hamiltonians of the form given by (15), the only non-vanishing diagrams in the averaged perturbative expansion of the retarded, advanced and Keldysh Green functions are those for which there is no crossing. This statement only relies on the causality structure of the retarded and advanced Green functions, i.e., $\displaystyle G^{\cal R}(t,t^{\prime})$ $\displaystyle=0\text{ if }t<t^{\prime},$ (63) $\displaystyle G^{\cal A}(t,t^{\prime})$ $\displaystyle=0\text{ if }t>t^{\prime}.$ (64) Let $\langle\bullet\rangle_{0}$ denote the average with respect to a quadratic theory. First, we remark that the causality structure of a given propagator depends only on its incoming edge and outgoing edge, and thus $\displaystyle G(t,t^{\prime})$ $\displaystyle:=\langle\psi^{1}(t)f[\psi^{1},\bar{\psi}^{1},\psi^{2},\bar{\psi}^{2}]\bar{\psi}^{1}(t^{\prime})\rangle_{0}=0\text{ for }t<t^{\prime},$ (65) $\displaystyle G^{\prime}(t,t^{\prime})$ $\displaystyle:=\langle\psi^{2}(t)f[\psi^{1},\bar{\psi}^{1},\psi^{2},\bar{\psi}^{2}]\bar{\psi}^{2}(t^{\prime})\rangle_{0}=0\text{ for }t>t^{\prime},$ (66) where $f[\psi^{1},\bar{\psi}^{1},\psi^{2},\bar{\psi}^{2}]$ is an arbitrary polynomial in the Grassmann variables coming from the expansion of the stochastic action. 
This follows directly from the action (22): starting from an incoming full (dashed) line, one cannot switch at any point to a dashed (full) line. Hence, the causality structure is preserved for each line and thus for the whole propagator. Direct inspection of these diagrams shows that there cannot be any crossing when contracting the noise terms, as it would lead to a contradiction in the time-orderings. There is only a single one-particle-irreducible diagram, made of a single loop. This establishes the non-crossing result for the retarded and advanced components. For the Keldysh component, a case-by-case examination of all possible crossings, depicted in Fig. 14, is needed; there, the labels $A,B,C,D$ denote generic products of free propagators. Figure 14: All possible crossings for the Keldysh component of the Green’s function with the contracted versions on the right. The red lines highlight the part of the diagram violating the causality structure and are responsible for making the diagram vanish. For each one of these diagrams, there is always a subpart that shows an incompatibility (shown in red in Fig. 14) in the time orderings, causing the whole diagram to vanish. This establishes the non-crossing result for the Keldysh propagator. ## Appendix C Computation of the current in the 1/N expansion In this appendix, we compute the current in the dephasing, QSSEP and long-range models using the perturbative theory in inverse system size presented in Sec. II. 
### C.1 Dephasing model For the dephasing model, the definition of the current in the bulk from site $j$ to $j+1$ is given by $\hat{J}_{j}=i\tau(c_{j+1}^{\dagger}c_{j}-c_{j}^{\dagger}c_{j+1}).$ (67) The expectation value of $\hat{J}_{j}$ in the stationary state is given by $\begin{split}J_{j}(t):=&{\rm tr}(\hat{J}_{j}\rho_{t})\\\ =&i\tau\langle\bar{\psi}_{j+1}^{+}(t)\psi_{j}^{+}(t)-\bar{\psi}_{j}^{+}(t)\psi_{j+1}^{+}(t)\rangle\\\ =&i\frac{\tau}{2}\langle\big{(}\psi_{j+1}^{1}\bar{\psi}_{j}^{1}+\psi_{j+1}^{1}\bar{\psi}_{j}^{2}+\psi_{j+1}^{2}\bar{\psi}_{j}^{2}\\\ &-(\psi_{j}^{1}\bar{\psi}_{j+1}^{1}+\psi_{j}^{1}\bar{\psi}_{j+1}^{2}+\psi_{j}^{2}\bar{\psi}_{j+1}^{2})\big{)}_{t}\rangle\end{split}$ (68) where we used the Larkin rotation and removed the terms $\psi^{2}\bar{\psi}^{1}$, as they always vanish for causality reasons. Using the action associated with the coherent jump $S_{\tau}$, $S_{\tau}=-\tau\int dt^{\prime}\sum_{j}\big{(}\bar{\psi}_{j}^{1}\psi_{j+1}^{1}+\bar{\psi}_{j}^{2}\psi_{j+1}^{2}+\bar{\psi}_{j+1}^{1}\psi_{j}^{1}+\bar{\psi}_{j+1}^{2}\psi_{j}^{2}\big{)}_{t^{\prime}}$ (69) we get, from (49), to leading order in $\frac{1}{N}$: $J_{j}(t)=\frac{\tau^{2}}{2}\int dt^{\prime}\langle\big{(}\psi_{j+1}^{1}\bar{\psi}_{j}^{1}+\psi_{j+1}^{1}\bar{\psi}_{j}^{2}+\psi_{j+1}^{2}\bar{\psi}_{j}^{2}\\\ -(\psi_{j}^{1}\bar{\psi}_{j+1}^{1}+\psi_{j}^{1}\bar{\psi}_{j+1}^{2}+\psi_{j}^{2}\bar{\psi}_{j+1}^{2})\big{)}_{t}\\\ \big{(}\bar{\psi}_{j}^{1}\psi_{j+1}^{1}+\bar{\psi}_{j}^{2}\psi_{j+1}^{2}+\bar{\psi}_{j+1}^{1}\psi_{j}^{1}+\bar{\psi}_{j+1}^{2}\psi_{j}^{2}\big{)}_{t^{\prime}}\rangle_{\infty}$ (70) where $\langle\bullet\rangle_{\infty}$ denotes the average with respect to the bare action in the infinite size limit, in which all the sites are uncorrelated. 
Using Wick’s theorem and the fact that $\langle\psi_{j}^{a}\bar{\psi}_{j+1}^{b}\rangle_{\infty}=0$, the previous equation greatly simplifies: $J_{j}=-\frac{\tau^{2}}{2}\int\frac{d\omega}{2\pi}\big{(}G_{j+1,j+1}^{\cal R}(\omega)G_{j,j}^{\cal K}(\omega)+G_{j,j}^{\cal A}(\omega)G_{j+1,j+1}^{\cal K}(\omega)\\\ -G_{j,j}^{\cal R}(\omega)G_{j+1,j+1}^{\cal K}(\omega)-G_{j+1,j+1}^{\cal A}(\omega)G_{j,j}^{\cal K}(\omega)\big{)}$ (71) We can now use the bare action of individual sites (in the presence of the dephasing noise): $S_{j}=\int\frac{d\omega}{2\pi}(\bar{\psi}_{j}^{1},\bar{\psi}_{j}^{2})\begin{pmatrix}\omega+i\frac{\gamma_{{\rm Dph}}}{2}&-i(2n_{j}-1)\gamma_{{\rm Dph}}\\\ 0&\omega-i\frac{\gamma_{{\rm Dph}}}{2}\end{pmatrix}\begin{pmatrix}\psi_{j}^{1}\\\ \psi_{j}^{2}\end{pmatrix}$ (72) to obtain the explicit expression of the current $\begin{split}J_{j}&=\tau^{2}\int\frac{d\omega}{2\pi}\bigg{(}\frac{\gamma_{{\rm Dph}}}{(\omega^{2}+(\frac{\gamma_{{\rm Dph}}}{2})^{2})^{2}}\bigg{)}\frac{i\gamma_{{\rm Dph}}}{2}(2i(n_{j+1}-n_{j}))\\\ &=-\frac{2\tau^{2}}{\gamma_{{\rm Dph}}}\nabla n_{j}\end{split}$ (73) from which we immediately read the diffusion constant $D=\frac{2\tau^{2}}{\gamma_{\rm Dph}}$. ### C.2 QSSEP For the QSSEP, the self-energy for an individual site is $\Sigma_{j}(\omega)=\gamma_{{\rm QS}}-\frac{\gamma_{{\rm QS}}}{2}(\delta_{j,1}+\delta_{j,N})$. The current in the bulk is given by $\hat{J}_{j}=\frac{\gamma_{{\rm QS}}}{2}(\hat{n}_{j}-\hat{n}_{j+1})+i\tau(c_{j+1}^{\dagger}c_{j}-c_{j}^{\dagger}c_{j+1}).$ (74) The first part of the current already scales like $1/N$ at order $0$ in the $S_{\tau}$ expansion. The second term is evaluated in the same fashion as for the dephasing model. This leads to $J_{j}=-\left(\frac{\gamma_{{\rm QS}}}{2}+\frac{2\tau^{2}}{\gamma_{{\rm QS}}}\right)\nabla n_{j}+O\left(\frac{1}{N^{2}}\right)$ (75) and $D=\frac{\gamma_{{\rm QS}}}{2}+\frac{2\tau^{2}}{\gamma_{{\rm QS}}}$. 
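The frequency integral entering (73) can be checked numerically; the parameter values, cutoff and grid below are arbitrary:

```python
import numpy as np

gamma, tau = 1.3, 0.8

# The Lorentzian-squared integral appearing in (73):
#   I = integral dw/(2 pi) of gamma / (w^2 + (gamma/2)^2)^2  =  2 / gamma^2
w = np.linspace(-200.0, 200.0, 2_000_001)
f = gamma / (w**2 + (gamma / 2) ** 2) ** 2 / (2 * np.pi)
h = w[1] - w[0]
I = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule

# Reassembling the prefactors of (73), tau^2 * I * (i gamma/2)(2i) = -tau^2 I gamma,
# gives back the Fick coefficient D = 2 tau^2 / gamma
D = tau**2 * I * gamma
print(I, 2 / gamma**2, D, 2 * tau**2 / gamma)
```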
### C.3 Long-range hopping For the long-range hopping model, the local current is defined from the local conservation equation of the particle number: $\frac{d}{dt}\hat{n}_{j}:=\hat{J}_{j}^{{\rm inc}}-\hat{J}_{j}^{{\rm out}}$ (76) with $\displaystyle\hat{J}_{j}^{{\rm inc}}$ $\displaystyle=\sum_{k<j}\frac{\gamma_{{\rm LR}}}{{\cal N}_{\alpha}|k-j|^{\alpha}}(\hat{n}_{k}-\hat{n}_{j})+i\tau(c_{j-1}^{\dagger}c_{j}-c_{j}^{\dagger}c_{j-1}),$ (77) $\displaystyle\hat{J}_{j}^{{\rm out}}$ $\displaystyle=\sum_{k>j}\frac{\gamma_{{\rm LR}}}{{\cal N}_{\alpha}|k-j|^{\alpha}}(\hat{n}_{j}-\hat{n}_{k})+i\tau(c_{j}^{\dagger}c_{j+1}-c_{j+1}^{\dagger}c_{j}).$ (78) Recall the expression of the self-energy at site $j$ (60): $\Sigma_{j}=\frac{\gamma_{{\rm LR}}}{2{\cal N}_{\alpha}}\sum_{k\neq j}\frac{1}{|k-j|^{\alpha}},$ (79) which is depicted in Fig. 15. Figure 15: Dependence on the site index $j$ of the self-energy in the long-range model for a chain of $N=100$ sites and different values of the exponent $\alpha$ of the noise, see (79). The $\alpha\rightarrow\infty$ limit corresponds to the QSSEP. To get the current within the $1/N$ expansion, we take, as for the previous models, the $0^{{\rm th}}$-order term for the first part of the current and the first-order term for the second part. We obtain: $\displaystyle J_{j}^{{\rm inc}}=$ $\displaystyle\sum_{k<j}\frac{\gamma_{{\rm LR}}}{{\cal N}_{\alpha}|k-j|^{\alpha}}(n_{k}-n_{j})$ $\displaystyle+\frac{2\tau^{2}}{\Sigma_{j-1}+\Sigma_{j}}(n_{j-1}-n_{j})\text{ for }j\in[2,N],$ (80) $\displaystyle J_{j}^{{\rm out}}=$ $\displaystyle\sum_{k>j}\frac{\gamma_{{\rm LR}}}{{\cal N}_{\alpha}|k-j|^{\alpha}}(n_{j}-n_{k})$ $\displaystyle+\frac{2\tau^{2}}{\Sigma_{j}+\Sigma_{j+1}}(n_{j}-n_{j+1})\text{ for }j\in[1,N-1].$ (81) For simplicity, we give in this paper only the expressions for the infinite temperature and chemical potential boundary conditions, which amount to taking Lindblad injection and extraction terms (see [79]). 
The current at the boundaries is then given by: $\displaystyle J_{1}^{{\rm inc}}$ $\displaystyle=\alpha_{L}(1-n_{1})-\beta_{L}n_{1},$ (82) $\displaystyle J_{N}^{{\rm out}}$ $\displaystyle=-\alpha_{R}(1-n_{N})+\beta_{R}n_{N}.$ (83) In the stationary state, we have that $\forall j\in[1,N]$, $J_{j}^{{\rm inc}}=J_{j}^{{\rm out}}$, which leads to the following linear system to solve in order to get the density profile: ${\cal M}.\vec{n}=\vec{v}$ (84) where $\vec{n}$ and $\vec{v}$ are $N$-dimensional vectors with elements $n_{j}$ and $v_{j}$, and ${\cal M}$ is an $N\times N$ matrix such that $\begin{split}{\cal M}_{j,k}=&\frac{\gamma_{{\rm LR}}}{{\cal N}_{\alpha}|k-j|^{\alpha}}(1-\delta_{j,k})\\\ &+\frac{2\tau^{2}}{\Sigma_{j}+\Sigma_{j+1}}(\delta_{k,j+1}-\delta_{j,k}(1-\delta_{j,N}))\\\ &+\frac{2\tau^{2}}{\Sigma_{j}+\Sigma_{j-1}}(\delta_{k,j-1}-\delta_{j,k}(1-\delta_{j,1}))\\\ &-\delta_{j,k}(\sum_{k\neq j}\frac{\gamma_{{\rm LR}}}{{\cal N}_{\alpha}|k-j|^{\alpha}})\\\ &-\delta_{j,k}\delta_{j,1}(\alpha_{L}+\beta_{L})-\delta_{j,k}\delta_{j,N}(\alpha_{R}+\beta_{R})\end{split}$ (85) and $v_{j}=-\delta_{j,1}\alpha_{L}-\delta_{j,N}\alpha_{R}.$ (86) Figure 16: Scaling of the current as a function of the system size in the dephasing model. From left to right: $\gamma=10^{-3},10^{-1},10^{1},10^{3}$. As the dephasing increases, diffusion sets in at smaller system sizes. The vanishing dependence of $J$ on the temperature indicates the crossover into the $R_{\gamma}$ region (46). ## Appendix D Numerical implementation In this appendix, we present some important elements of the numerical implementation. The first step to compute any presented result is to stabilize and efficiently evaluate $G^{\cal{R}(\cal{A})}(\omega)$ at any $\omega$. For the case of a uniform stochastic noise (e.g. free system, dephasing), a naive use of (42) would require evaluating the ratio of two polynomials of order $\mathcal{O}(N)$, a notoriously difficult task for large $N$ using floating-point arithmetic. 
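One ingredient of the stabilization described next is the closed form for the inverse of a tridiagonal matrix. As a standalone check (with illustrative random entries and a small size), the continuant recursions reproduce the diagonal of the dense inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
a = rng.normal(size=N)                                    # diagonal shifts a_i
b = rng.normal(size=N - 1) + 1j * rng.normal(size=N - 1)  # off-diagonal b_i
omega = 0.3

# Build the tridiagonal matrix T^{-1}(omega) with (omega + a_i) on the diagonal
Tinv = np.diag(omega + a).astype(complex)
Tinv += np.diag(b, 1) + np.diag(b.conj(), -1)

# theta / phi continuant recursions (1-based indices mapped to 0-based arrays)
theta = np.zeros(N + 1, dtype=complex)
theta[0] = 1.0
theta[1] = omega + a[0]
for i in range(2, N + 1):
    theta[i] = (omega + a[i - 1]) * theta[i - 1] - abs(b[i - 2]) ** 2 * theta[i - 2]

phi = np.zeros(N + 2, dtype=complex)
phi[N + 1] = 1.0
phi[N] = omega + a[N - 1]
for i in range(N - 1, 0, -1):
    phi[i] = (omega + a[i - 1]) * phi[i + 1] - abs(b[i - 1]) ** 2 * phi[i + 2]

# Diagonal of the inverse: T_{ii} = theta_{i-1} phi_{i+1} / theta_N
diag_T = np.array([theta[i - 1] * phi[i + 1] / theta[N] for i in range(1, N + 1)])
err = np.max(np.abs(diag_T - np.diag(np.linalg.inv(Tinv))))
print(err)
```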
A possible solution would be to resort to arbitrary-precision arithmetic, but this would entail a heavy speed cost. For the results of the present paper, we instead used the fact that $G^{\cal{R}(\cal{A})}(\omega)$ can be written as a ratio of polynomials and therefore decomposed into a product of monomials, $G^{\cal R}(\omega)\sim\prod_{j}(\omega-z_{j})/\prod_{i}(\omega-p_{i})$. To efficiently find the zeros and poles of $G^{\cal R}$ (the poles and zeros of $G^{\cal A}$ are the complex conjugates of those of $G^{\cal R}$), we note that the inverse of $G^{\cal R}$ is a simple tridiagonal matrix with the generic form $T^{-1}(\omega)=\left(\begin{array}[]{cccc}\omega+a_{1}&b_{1}&0&0\\\ b_{1}^{*}&\omega+a_{2}&\ddots&0\\\ 0&\ddots&\ddots&b_{N-1}\\\ 0&0&b_{N-1}^{*}&\omega+a_{N}\end{array}\right)$ (87) whose inverse is given by [120] $T_{i,j}(\omega)=\begin{cases}(-1)^{i+j}b_{i}...b_{j-1}\theta_{i-1}\phi_{j+1}/\theta_{N}&i<j\\\ \theta_{i-1}\phi_{j+1}/\theta_{N}&i=j\\\ (-1)^{i+j}b_{j}^{*}...b_{i-1}^{*}\theta_{j-1}\phi_{i+1}/\theta_{N}&i>j\end{cases}$ (88) where $\theta_{i}=(\omega+a_{i})\theta_{i-1}-\left|b_{i-1}\right|^{2}\theta_{i-2}$ and $\phi_{i}=(\omega+a_{i})\phi_{i+1}-\left|b_{i}\right|^{2}\phi_{i+2}$. Therefore, computing the poles and zeros of $G^{\cal R}$ requires computing all the zeros of the sequences $\left\\{\phi_{i},\theta_{i}\right\\}_{i=0}^{N+1}$, a task that can be done efficiently. If the matrix is invariant under a reflection along the anti-diagonal, it is enough to compute a single sequence instead, since $\phi_{i}=\theta_{N+1-i}$. This is always the case in the models studied in the present paper. Since $a_{i}$ does not depend on $\omega$, $\phi_{i}$ is a polynomial of degree $i$ with the initial conditions $\phi_{0}=1$ and $\phi_{1}=\omega+a_{1}$. One can efficiently find all the roots $\\{z_{k}\\}_{k=1}^{i}$ of $\phi_{i}$ using a Weierstrass-like recursive method [121, 122], see Eqs.
(89) and (90) for a second- and fourth-order scheme $\displaystyle z_{k}^{(2)}$ $\displaystyle=z_{k}-\frac{W_{k}}{\prod_{j\neq k}(z_{k}-z_{j})}=z_{k}-C_{k}^{(2)}$ (89) $\displaystyle z_{k}^{(4)}$ $\displaystyle=z_{k}-\frac{W_{k}}{1-\sum_{j\neq k}\frac{W_{j}}{z_{k}-W_{k}-z_{j}}}=z_{k}-C_{k}^{(4)}$ (90) $\displaystyle W_{k}$ $\displaystyle=\phi_{i}(z_{k})$ where $W_{k}$ is the Weierstrass weight. We chose these derivative-free schemes to avoid computing explicit derivatives that would slow down the computation. Choosing the correct initial condition is critical to the success of the scheme. To find the roots of $\phi_{i}$, we initialize the scheme with the roots of $\phi_{i-1}$ plus an extra root. We empirically found that the extra root should have a random position close to the middle root (after sorting by the real part) to guarantee the best convergence. This initial choice can still fail when some roots are located very far away from the others, which occurs for example for the QSSEP. Such failures happen when, at some step in the iteration, two roots coalesce and $C_{k}^{(i)}$ diverges strongly. To stabilize this divergence, we introduce a damping factor $\kappa$ that suppresses large corrections, $z_{k}^{(i)}=z_{k}-C_{k}^{(i)}e^{-\max_{k}|C_{k}^{(i)}|/\kappa}$. $\kappa$ is a purely empirical value, which we typically take as $\kappa=\max(|b|)$. The role of $\kappa$ is to slow down the algorithm and allow the coalescing roots to separate. Our root-searching algorithm thus has two parts: a quick search using a second-order damped scheme, followed by a fourth-order damped scheme to precisely locate the roots. Once all the roots are recovered, we generate the new matrix $\tilde{T}$ obtained from the estimates of the roots. We consider that $\tilde{T}$ is a good estimate only when $\max\left|T^{-1}(0)\cdot\tilde{T}^{-1}(0)\right|<10^{-10}$.
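A minimal sketch of the damped second-order scheme of Eq. (89) is given below. This is our own toy implementation on a cubic with known roots, not the production code; `kappa` plays the role of the empirical damping scale.

```python
import numpy as np

def damped_weierstrass(coeffs, z0, kappa, tol=1e-12, max_iter=500):
    """Damped second-order Weierstrass (Durand-Kerner) iteration, Eq. (89),
    with the correction suppressed by exp(-max|C| / kappa)."""
    z = np.array(z0, dtype=complex)
    for _ in range(max_iter):
        W = np.polyval(coeffs, z)                        # Weierstrass numerators W_k
        C = np.array([W[k] / np.prod(z[k] - np.delete(z, k))
                      for k in range(len(z))])           # corrections C_k^{(2)}
        z = z - C * np.exp(-np.max(np.abs(C)) / kappa)   # damped update
        if np.max(np.abs(C)) < tol:
            break
    return z

# Roots of (w - 1)(w - 2)(w - 3) = w^3 - 6 w^2 + 11 w - 6
roots = damped_weierstrass([1.0, -6.0, 11.0, -6.0],
                           z0=[0.5 + 0.1j, 1.6 - 0.2j, 3.4 + 0.3j], kappa=10.0)
```

For well-separated roots the damping factor is close to one and the scheme reduces to the plain Durand-Kerner iteration; it only slows the update when some correction $C_{k}$ becomes large.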
With the exception of the QSSEP, we find a typical value $\max\left|T^{-1}(0)\cdot\tilde{T}^{-1}(0)\right|\sim 10^{-13}$ for any system size. Once the poles and zeros of $G^{\cal{R}(\cal{A})}(\omega)$ are computed, we proceed to compute $G^{\cal K}$ using (33). To evaluate the $M$ matrix, we resort to the residue theorem. If the poles of $G^{\cal{R}(\cal{A})}$ are simple poles, the sum over residues can be computed in parallel, requiring only the evaluation of the monomials $\left\\{(\omega-z_{k})\right\\}$. We note that while each monomial $(\omega-z_{k})$ is of order unity, a sequential multiplication can lead to overflow errors in the limit of large $N$. To avoid this problem, we multiply the monomials in random order. If the algorithm fails to separate two roots within machine precision, the residue is computed from the contour integral instead. The last step to compute $G^{\cal K}$ and the current $J$ is to perform the frequency integral convolved with $\cosh^{-2}(\frac{\omega-\mu}{2T})$. This is done by evaluating the integral with a discrete integration scheme instead of the residue theorem. Since the thermal dependence is only encoded in $\cosh^{-2}(\frac{\omega-\mu}{2T})$, discretizing the integral allows us to deal with different $(T,\mu)$ values at no significant cost. We carefully verify that the mesh is fine enough to guarantee convergence of the integral at any $(T,\mu)$.

## Appendix E Finite-size scaling

In this section, we detail the finite-size scaling analysis necessary to plot Figs. 9, 11, and 13. The presence of a dephasing term is not enough to ensure that the system behaves diffusively at any system size. Signatures of diffusive transport, such as $J\sim 1/N$, only emerge beyond a characteristic dephasing length, $N^{*}\sim 1/\gamma$. At short system sizes, or short time scales, the system behaves as if it were ballistic. In Fig. 16 we highlight this ballistic-to-diffusive transition for different values of the dephasing and of the temperature in the baths.
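Returning to the last step of Appendix D: once the $T$-independent integrand has been tabulated on a frequency mesh, each $(T,\mu)$ pair costs only one weighted sum. A minimal sketch of this idea (uniform mesh, simple rectangle rule; the constant integrand is a placeholder of ours, chosen so the exact answer $4T$ is known):

```python
import numpy as np

omega = np.linspace(-20.0, 20.0, 200_001)      # fixed frequency mesh
d_omega = omega[1] - omega[0]
f = np.ones_like(omega)                        # placeholder T-independent integrand

def thermal_integral(T, mu):
    """Evaluate  ∫ dω f(ω) cosh^{-2}((ω - μ)/(2T))  on the precomputed mesh."""
    window = np.cosh((omega - mu) / (2.0 * T)) ** -2
    return np.sum(f * window) * d_omega

# For f = 1 the exact integral is 4T (the antiderivative of sech^2 is tanh)
I = thermal_integral(T=1.0, mu=0.0)
```

Because only the `window` factor depends on $(T,\mu)$, scanning a grid of temperatures and chemical potentials reuses the same samples of `f` and never re-evaluates the Green's functions.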
At small dephasing values, one cannot reliably extract the diffusion constant by fitting a straight line to Fig. 16. Figure 17: Diffusion constant of the dephasing model at different $(T,\gamma)$ values. In general, the diffusion constant decays with the inverse system size, which we exploit to extract the $N\rightarrow\infty$ limit from a non-linear fit (dashed lines). Instead, to extract the relevant information in the $N\to\infty$ limit, we use the fact that the diffusion constant itself has a $1/N$ scaling [21] when measured in the middle of the chain. In the QSSEP and the dephasing model, we use this result to perform non-linear fits to $D$, as shown in Fig. 17. In this figure we plot the diffusion constant of the dephasing model, measured in the middle of the chain, for increasing system sizes and different $(T,\mu)$ values. The dashed lines depict the non-linear fit of the function $a+b/(N+c)$, with $a,b,c$ the fitting parameters. We find that most observables in these models exhibit $1/N$ corrections, as discussed in [21]. The speed of convergence, however, depends on the point in the phase space $(T,\mu)$, with region $R_{\tau}$ (see Fig. 9) showing the slowest convergence. This is a consequence of the effects of the bath discussed in the main text. Deep in the $\tau$-dominated regime, we observe the breaking of Fick’s law near the edges, as shown in Fig. 18. Since this effect only occurs in a finite portion of the system close to the edges, the convergence is only slowed down. We thus evaluate $D$ in the middle of the chain to mitigate its effects and obtain better accuracy. For the long-range model, one needs a different approach to obtain the $N\rightarrow\infty$ limit correctly, especially close to the ballistic-diffusive transition described in the main text. A tentative form for the finite-size extrapolation is provided by the solution of the diffusion equation for a single particle undergoing a random walk with long-range hopping discussed in Sec.
IV.3, which gives a diffusion constant $D=H^{(\alpha-2)}_{N}$, where $H^{(r)}_{x}$ is the generalized harmonic number. We find that a fit $D^{-1}=\left(aH^{(\alpha-b+1)}_{N+|c|}\right)^{-1}$ correctly captures the finite-size dependence of $D^{-1}$ for all $\alpha$ values. The fitting parameters $a,b,c$ describe the amplitude, the critical exponent, and possible finite-size corrections, respectively. In Fig. 19, we depict $D^{-1}$ against the result of the fit (dots and dashed lines, respectively). The best fitting parameters are plotted in the inset. The quality of the fit allows us to conjecture that, at the transition point, the diffusion constant diverges logarithmically, $D_{LR}(\alpha=\alpha_{c})\sim H^{(1)}_{N}\to\log(N)$. Figure 18: Density profiles of a chain of size $N=2000$ in the diffusive regime for the different regions $R_{\tau,\gamma,T}$ shown in Fig. 9. The breaking of Fick’s law is limited to a non-extensive number of sites near the edge. Figure 19: Inverse of the diffusion constant in the long-range model for different powers of $\alpha$. Dashed lines are fits to $D^{-1}=\left(aH^{(\alpha-b+1)}_{N+|c|}\right)^{-1}$. The results of the fitting are depicted in the inset.

## Appendix F Coarse-grain length $a$

In this section, we analytically estimate the coarse-grain length $a$ from the correlation length of the dephasing model. Due to Eq. (9), it is enough to estimate $a$ from a single Green function, in this case the retarded component. The starting point is the analytic expression of the elements $G^{R}_{i,j}$ in the bulk of the chain. For large systems, the boundaries become irrelevant and the natural basis for the problem is the momentum basis. In $k$-space, the self-energy takes a diagonal form $\displaystyle\Sigma_{k,k^{\prime}}^{R}$ $\displaystyle=\left(-\frac{i\gamma}{2}\right)\delta_{k,k^{\prime}}.$ (91) For the QSSEP, there are cross-diagonal terms in momentum that vanish as $1/L$ and can be safely ignored.
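As an illustration of the harmonic-number fit used in Fig. 19 above, the sketch below (ours, on synthetic data generated from the fit form itself) recovers the parameters with `scipy.optimize.curve_fit`. The harmonic number is extended to real arguments through the Hurwitz zeta function, $H^{(r)}_{x}=\zeta(r)-\zeta(r,x+1)$, valid for $r>1$; the exponent $\alpha$ is held fixed, as in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import zeta

ALPHA = 2.0                                    # hopping exponent (fixed, not fitted)

def H(x, r):
    """Generalized harmonic number H_x^{(r)} for real x, valid for r > 1."""
    return zeta(r, 1.0) - zeta(r, x + 1.0)

def inv_D(N, a, b, c):
    """Fit form D^{-1} = (a * H^{(alpha - b + 1)}_{N + |c|})^{-1}."""
    return 1.0 / (a * H(N + np.abs(c), ALPHA - b + 1.0))

N = np.array([20.0, 40.0, 80.0, 160.0, 320.0, 640.0])
data = inv_D(N, 1.3, 0.5, 4.0)                 # synthetic "measurements"
popt, _ = curve_fit(inv_D, N, data, p0=[1.0, 0.3, 1.0],
                    bounds=([0.1, 0.0, 0.0], [10.0, 1.5, 50.0]))
```

The bounds keep $\alpha-b+1>1$ during the optimization so the zeta representation stays valid; on real data one would replace `data` by the measured $D^{-1}(N)$.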
Since both self-energy and Hamiltonian are diagonal in the momentum basis, one has $\displaystyle G^{R}_{k,k^{\prime}}$ $\displaystyle=\delta_{k,k^{\prime}}\frac{1}{\omega-\epsilon_{k}-\Sigma_{k,k^{\prime}}},$ (92) where $\epsilon_{k}=2\tau\cos(k)$ is the eigenenergy of the bulk Hamiltonian. To find the retarded function in position space, we take the Fourier transform in the continuum limit for $k$: $\displaystyle G^{R}_{r,r^{\prime}}$ $\displaystyle=\int\frac{dk}{2\pi}\frac{e^{-ik(r-r^{\prime})}}{\omega-2\tau\cos k+i\gamma/2}.$ (93) The integral can be solved using the residue theorem and, after some lengthy yet simple manipulations, we find a compact formula $\displaystyle G^{R}_{r,r^{\prime}}$ $\displaystyle=\frac{i^{|r-r^{\prime}|-1}}{2\tau\cos y}e^{iy|r-r^{\prime}|},$ (94) where $y=\arcsin\frac{\omega+i\gamma/2}{2\tau}$ is a complex variable with $\text{Im}(y(\omega))>0$. Therefore, in the dephasing model an estimate for the correlation length is given by $\displaystyle\xi=\frac{1}{\min\big{(}\text{Im}\big{(}\arcsin\frac{\omega+i\gamma/2}{2\tau}\big{)}\big{)}}=\frac{1}{\text{arcsinh}\frac{\gamma}{4\tau}}.$ (95) In the limit of small dephasing $\gamma$, we have $\xi=4\tau/\gamma$, which serves as an estimate for the coarse-grain length $a\sim\tau/\gamma$. As expected, $a$ should be of the order of the dephasing length $N^{*}\sim 1/\gamma$.

## References

* Giamarchi [1991] T. Giamarchi, Umklapp process and resistivity in one-dimensional fermion systems, Phys. Rev. B 44, 2905 (1991). * Rosch and Andrei [2000] A. Rosch and N. Andrei, Conductivity of a clean one-dimensional wire, Phys. Rev. Lett. 85, 1092 (2000). * Zotos and Prelovšek [1996] X. Zotos and P. Prelovšek, Evidence for ideal insulating or conducting state in a one-dimensional integrable system, Phys. Rev. B 53, 983 (1996). * Bertini _et al._ [2021] B. Bertini, F. Heidrich-Meisner, C. Karrasch, T. Prosen, R. Steinigeweg, and M.
Žnidarič, Finite-temperature transport in one-dimensional quantum lattice models, Rev. Mod. Phys. 93, 025003 (2021). * Zotos [1999] X. Zotos, Finite Temperature Drude Weight of the One-Dimensional Spin-$1/2$ Heisenberg Model, Phys. Rev. Lett. 82, 1764 (1999). * Prosen [2011] T. Prosen, Open XXZ spin chain: Nonequilibrium steady state and a strict bound on ballistic transport, Physical Review Letters 106, 2 (2011), arXiv:1103.1350. * Ljubotina _et al._ [2017] M. Ljubotina, M. Žnidarič, and T. Prosen, Spin diffusion from an inhomogeneous quench in an integrable system, Nature Communications 8, 1 (2017), arXiv:1702.04210. * Ilievski _et al._ [2018] E. Ilievski, J. De Nardis, M. Medenjak, and T. Prosen, Superdiffusion in one-dimensional quantum lattice models, Physical review letters 121, 230602 (2018). * De Nardis _et al._ [2020a] J. De Nardis, S. Gopalakrishnan, E. Ilievski, and R. Vasseur, Superdiffusion from emergent classical solitons in quantum spin chains, Phys. Rev. Lett. 125, 070601 (2020a). * De Nardis _et al._ [2018] J. De Nardis, D. Bernard, and B. Doyon, Hydrodynamic Diffusion in Integrable Systems, Physical Review Letters 121, 160603 (2018). * Kardar _et al._ [1986] M. Kardar, G. Parisi, and Y.-C. Zhang, Dynamic Scaling of Growing Interfaces, Phys. Rev. Lett. 56, 889 (1986). * Kriecherbauer and Krug [2010] T. Kriecherbauer and J. Krug, A Pedestrian view on interacting particle systems, KPZ universality and random matrices, J. Phys. A: Math. Theor. 43, 403001 (2010). * Ljubotina _et al._ [2019] M. Ljubotina, M. Žnidarič, and T. Prosen, Kardar-parisi-zhang physics in the quantum heisenberg magnet, Physical review letters 122, 210602 (2019). * Gopalakrishnan and Vasseur [2019] S. Gopalakrishnan and R. Vasseur, Kinetic theory of spin diffusion and superdiffusion in xxz spin chains, Physical Review Letters 122, 127202 (2019). * De Nardis _et al._ [2020b] J. De Nardis, S. Gopalakrishnan, E. Ilievski, and R.
Vasseur, Superdiffusion from Emergent Classical Solitons in Quantum Spin Chains, Phys. Rev. Lett. 125, 070601 (2020b). * Castro-Alvaredo _et al._ [2016] O. A. Castro-Alvaredo, B. Doyon, and T. Yoshimura, Emergent Hydrodynamics in Integrable Quantum Systems Out of Equilibrium, Phys. Rev. X 6, 041065 (2016). * Bertini _et al._ [2016] B. Bertini, M. Collura, J. De Nardis, and M. Fagotti, Transport in Out-of-Equilibrium xxz Chains: Exact Profiles of Charges and Currents, Phys. Rev. Lett. 117, 207201 (2016). * Ilievski and De Nardis [2017] E. Ilievski and J. De Nardis, Microscopic Origin of Ideal Conductivity in Integrable Quantum Models, Physical Review Letters 119, 020602 (2017). * De Nardis _et al._ [2019] J. De Nardis, D. Bernard, and B. Doyon, Diffusion in generalized hydrodynamics and quasiparticle scattering, SciPost Physics 6, 049 (2019). * Friedman _et al._ [2020] A. J. Friedman, S. Gopalakrishnan, and R. Vasseur, Diffusive hydrodynamics from integrability breaking, Phys. Rev. B 101, 180302 (2020). * Žnidarič [2019] M. Žnidarič, Nonequilibrium steady-state Kubo formula: Equality of transport coefficients, Phys. Rev. B 99, 035143 (2019). * Žnidarič [2020] M. Žnidarič, Weak Integrability Breaking: Chaos with Integrability Signature in Coherent Diffusion, Phys. Rev. Lett. 125, 180605 (2020). * Znidaric [2020] M. Znidaric, Absence of superdiffusion in the quasiperiodic spin chain at weak integrability breaking, arXiv (2020), arXiv:2012.07488 [cond-mat, physics:quant-ph] . * Ferreira and Filippone [2020] J. S. Ferreira and M. Filippone, Ballistic-to-diffusive transition in spin chains with broken integrability, Phys. Rev. B 102, 184304 (2020). * Žnidarič [2011] M. Žnidarič, Spin Transport in a One-Dimensional Anisotropic Heisenberg Model, Phys. Rev. Lett. 106, 220601 (2011). * Yamanaka and Sasamoto [2021] K. Yamanaka and T. 
Sasamoto, Exact solution for the lindbladian dynamics for the open xx spin chain with boundary dissipation (2021), arXiv:2104.11479 [cond-mat.stat-mech] . * Žnidarič _et al._ [2016] M. Žnidarič, A. Scardicchio, and V. K. Varma, Diffusive and Subdiffusive Spin Transport in the Ergodic Phase of a Many-Body Localizable System, Phys. Rev. Lett. 117, 040601 (2016). * Mendoza-Arenas _et al._ [2019] J. J. Mendoza-Arenas, M. Žnidarič, V. K. Varma, J. Goold, S. R. Clark, and A. Scardicchio, Asymmetry in energy versus spin transport in certain interacting disordered systems, Phys. Rev. B 99, 094435 (2019). * Žnidarič and Ljubotina [2018] M. Žnidarič and M. Ljubotina, Interaction instability of localization in quasiperiodic systems, Proceedings of the National Academy of Sciences 115, 4595 (2018). * Medvedyeva _et al._ [2016] M. V. Medvedyeva, F. H. L. Essler, and T. Prosen, Exact bethe ansatz spectrum of a tight-binding chain with dephasing noise, Phys. Rev. Lett. 117, 137202 (2016). * Žnidarič [2010a] M. Žnidarič, Exact solution for a diffusive nonequilibrium steady state of an open quantum chain, Journal of Statistical Mechanics: Theory and Experiment 2010, L05002 (2010a). * Žnidarič [2010b] M. Žnidarič, Dephasing-induced diffusive transport in the anisotropic heisenberg model, New Journal of Physics 12, 043001 (2010b). * Eisler [2011] V. Eisler, Crossover between ballistic and diffusive transport: the quantum exclusion process, Journal of Statistical Mechanics: Theory and Experiment 2011, 06007 (2011), arXiv:1104.4050 [cond-mat.stat-mech] . * Bauer _et al._ [2017] M. Bauer, D. Bernard, and T. Jin, Stochastic dissipative quantum spin chains (I) : Quantum fluctuating discrete hydrodynamics, SciPost Phys. 3, 033 (2017). * Bernard and Jin [2019] D. Bernard and T. Jin, Open quantum symmetric simple exclusion process, Phys. Rev. Lett. 123, 080601 (2019). * Bastianello _et al._ [2020] A. Bastianello, J. De Nardis, and A. 
De Luca, Generalized hydrodynamics with dephasing noise, Phys. Rev. B 102, 161110 (2020). * Gardiner and Zoller [2000] C. Gardiner and P. Zoller, _Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics_, Springer series in synergetics (Springer, 2000). * Breuer and Petruccione [2002] H. P. Breuer and F. Petruccione, _The theory of open quantum systems_ (Oxford University Press, Great Clarendon Street, 2002). * Wichterich _et al._ [2007] H. Wichterich, M. J. Henrich, H.-P. Breuer, J. Gemmer, and M. Michel, Modeling heat transport through completely positive maps, Phys. Rev. E 76, 031115 (2007). * Gorini _et al._ [1976] V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, Completely positive dynamical semigroups of N-level systems, Journal of Mathematical Physics 17, 821 (1976). * Lindblad [1976] G. Lindblad, On the generators of quantum dynamical semigroups, Communications in Mathematical Physics 48, 119 (1976). * Skinner _et al._ [2019] B. Skinner, J. Ruhman, and A. Nahum, Measurement-Induced Phase Transitions in the Dynamics of Entanglement, Phys. Rev. X 9, 031009 (2019). * Alberton _et al._ [2021] O. Alberton, M. Buchhold, and S. Diehl, Entanglement transition in a monitored free-fermion chain: From extended criticality to area law, Phys. Rev. Lett. 126, 170602 (2021). * Buchhold _et al._ [2021] M. Buchhold, Y. Minoguchi, A. Altland, and S. Diehl, Effective Theory for the Measurement-Induced Phase Transition of Dirac Fermions, arXiv e-prints (2021), arXiv:2102.08381 [cond-mat, physics:hep-th, physics:quant-ph] . * Cao _et al._ [2019] X. Cao, A. Tilloy, and A. De Luca, Entanglement in a fermion chain under continuous monitoring, SciPost Physics 7, 024 (2019). * Müller _et al._ [2021] T. Müller, S. Diehl, and M. Buchhold, Measurement-induced dark state phase transitions in long-ranged fermion systems, arXiv e-prints (2021), arXiv:2105.08076 [cond-mat.stat-mech] . * Zhang _et al._ [2021] P. Zhang, C. 
Liu, S.-K. Jian, and X. Chen, Universal entanglement transitions of free fermions with long-range non-unitary dynamics, arXiv e-prints (2021), arXiv:2105.08895 [cond-mat.str-el] . * Tonielli _et al._ [2019] F. Tonielli, R. Fazio, S. Diehl, and J. Marino, Orthogonality Catastrophe in Dissipative Quantum Many-Body Systems, Phys. Rev. Lett. 122, 040604 (2019). * Mitchison _et al._ [2020] M. T. Mitchison, T. Fogarty, G. Guarnieri, S. Campbell, T. Busch, and J. Goold, In Situ Thermometry of a Cold Fermi Gas via Dephasing Impurities, Phys. Rev. Lett. 125, 080402 (2020). * Dolgirev _et al._ [2020] P. E. Dolgirev, J. Marino, D. Sels, and E. Demler, Non-Gaussian correlations imprinted by local dephasing in fermionic wires, Phys. Rev. B 102, 100301 (2020). * Alba [2021] V. Alba, Unbounded entanglement production via a dissipative impurity, arXiv e-prints (2021), arXiv:2104.10921 [cond-mat, physics:hep-th, physics:quant-ph] . * Wolff _et al._ [2019] S. Wolff, J.-S. Bernier, D. Poletti, A. Sheikhan, and C. Kollath, Evolution of two-time correlations in dissipative quantum spin systems: Aging and hierarchical dynamics, Phys. Rev. B 100, 165144 (2019). * Lacerda _et al._ [2021] A. M. Lacerda, J. Goold, and G. T. Landi, Dephasing enhanced transport in boundary-driven quasiperiodic chains, arXiv e-prints (2021), arXiv:2106.11406 [quant-ph] . * Roberts and Clerk [2020] D. Roberts and A. A. Clerk, Driven-Dissipative Quantum Kerr Resonators: New Exact Solutions, Photon Blockade and Quantum Bistability, Phys. Rev. X 10, 021022 (2020). * Fröml _et al._ [2019] H. Fröml, A. Chiocchetta, C. Kollath, and S. Diehl, Fluctuation-Induced Quantum Zeno Effect, Phys. Rev. Lett. 122, 040402 (2019). * Rossini _et al._ [2021] D. Rossini, A. Ghermaoui, M. B. Aguilera, R. Vatré, R. Bouganne, J. Beugnon, F. Gerbier, and L. Mazza, Strong correlations in lossy one-dimensional quantum gases: From the quantum zeno effect to the generalized gibbs ensemble, Phys. Rev. A 103, L060201 (2021). 
* Rosso _et al._ [2021] L. Rosso, D. Rossini, A. Biella, and L. Mazza, One-dimensional spin-1/2 fermionic gases with two-body losses: weak dissipation and spin conservation, arXiv e-prints (2021), arXiv:2011.04318. * Yamamoto _et al._ [2019] K. Yamamoto, M. Nakagawa, K. Adachi, K. Takasan, M. Ueda, and N. Kawakami, Theory of non-hermitian fermionic superfluidity with a complex-valued interaction, Phys. Rev. Lett. 123, 123601 (2019). * Müller _et al._ [2021] T. Müller, M. Gievers, H. Fröml, S. Diehl, and A. Chiocchetta, Shape effects of localized losses in quantum wires: dissipative resonances and nonequilibrium universality, arXiv e-prints (2021), arXiv:2105.01059 [cond-mat.quant-gas]. * Dogra _et al._ [2019] N. Dogra, M. Landini, K. Kroeger, L. Hruby, T. Donner, and T. Esslinger, Dissipation-induced structural instability and chiral dynamics in a quantum gas, Science 366, 1496 (2019). * Pichler _et al._ [2010] H. Pichler, A. J. Daley, and P. Zoller, Nonequilibrium dynamics of bosonic atoms in optical lattices: Decoherence of many-body states due to spontaneous emission, Phys. Rev. A 82, 063605 (2010). * Halati _et al._ [2020] C.-M. Halati, A. Sheikhan, H. Ritsch, and C. Kollath, Numerically Exact Treatment of Many-Body Self-Organization in a Cavity, Phys. Rev. Lett. 125, 093604 (2020). * Verstraete _et al._ [2009] F. Verstraete, M. M. Wolf, and J. I. Cirac, Quantum computation and quantum-state engineering driven by dissipation, Nature physics 5, 633 (2009). * Sommer _et al._ [2011] A. Sommer, M. Ku, G. Roati, and M. W. Zwierlein, Universal spin transport in a strongly interacting Fermi gas, Nature 472, 201 (2011). * Jepsen _et al._ [2020] P. N. Jepsen, J. Amato-Grill, I. Dimitrova, W. W. Ho, E. Demler, and W. Ketterle, Spin transport in a tunable Heisenberg model realized with ultracold atoms, Nature 588, 403 (2020). * Jepsen _et al._ [2021] P. N. Jepsen, W. W. Ho, J. Amato-Grill, I. Dimitrova, E. Demler, and W.
Ketterle, Transverse spin dynamics in the anisotropic Heisenberg model realized with ultracold atoms, arXiv e-prints (2021), arXiv:2103.07866 [cond-mat, physics:physics, physics:quant-ph] . * Bouganne _et al._ [2020] R. Bouganne, M. B. Aguilera, A. Ghermaoui, J. Beugnon, and F. Gerbier, Anomalous decay of coherence in a dissipative many-body system, Nature Physics 16, 21 (2020). * Takigawa _et al._ [1996] M. Takigawa, N. Motoyama, H. Eisaki, and S. Uchida, Dynamics in the s=1/2 one-dimensional antiferromagnet sr2 cuo3 via 63cu nmr, Phys. Rev. Lett. 76, 4612 (1996). * Thurber _et al._ [2001] K. R. Thurber, A. W. Hunt, T. Imai, and F. C. Chou, O NMR study of q = 0 spin excitations in a nearly ideal S = 1/2 1D heisenberg antiferromagnet, Sr2CuO3, up to 800 K, Physical Review Letters 87, 247202 (2001). * Pratt _et al._ [2006] F. L. Pratt, S. J. Blundell, T. Lancaster, C. Baines, and S. Takagi, Low-temperature spin diffusion in a highly ideal $s=1/2$ heisenberg antiferromagnetic chain studied by muon spin relaxation, Phys. Rev. Lett. 96, 247203 (2006). * Maeter _et al._ [2013] H. Maeter, A. A. Zvyagin, H. Luetkens, G. Pascua, Z. Shermadini, R. Saint-Martin, A. Revcolevschi, C. Hess, B. Büchner, and H.-H. Klauss, Low temperature ballistic spin transport in the s= 1/2 antiferromagnetic heisenberg chain compound SrCuO2, Journal of Physics: Condensed Matter 25, 365601 (2013). * Scheie _et al._ [2021] A. Scheie, N. E. Sherman, M. Dupont, S. E. Nagler, M. B. Stone, G. E. Granroth, J. E. Moore, and D. A. Tennant, Detection of Kardar–Parisi–Zhang hydrodynamics in a quantum Heisenberg spin-1/2 chain, Nature Physics , 1 (2021). * Zu _et al._ [2021] C. Zu, F. Machado, B. Ye, S. Choi, B. Kobrin, T. Mittiga, S. Hsieh, P. Bhattacharyya, M. Markham, D. Twitchen, A. Jarmola, D. Budker, C. R. Laumann, J. E. Moore, and N. Y. Yao, Emergent hydrodynamics in a strongly interacting dipolar spin ensemble, arXiv e-prints (2021), arXiv:2104.07678 [cond-mat, physics:quant-ph] . 
* Moll _et al._ [2016] P. J. W. Moll, P. Kushwaha, N. Nandi, B. Schmidt, and A. P. Mackenzie, Evidence for hydrodynamic electron flow in PdCoO2, Science 351, 1061 (2016). * Ella _et al._ [2019] L. Ella, A. Rozen, J. Birkbeck, M. Ben-Shalom, D. Perello, J. Zultak, T. Taniguchi, K. Watanabe, A. K. Geim, S. Ilani, and J. A. Sulpizio, Simultaneous voltage and current density imaging of flowing electrons in two dimensions, Nat. Nanotechnol. 14, 480 (2019). * Sulpizio _et al._ [2019] J. A. Sulpizio, L. Ella, A. Rozen, J. Birkbeck, D. J. Perello, D. Dutta, M. Ben-Shalom, T. Taniguchi, K. Watanabe, T. Holder, R. Queiroz, A. Principi, A. Stern, T. Scaffidi, A. K. Geim, and S. Ilani, Visualizing Poiseuille flow of hydrodynamic electrons, Nature 576, 75 (2019). * Ljubotina _et al._ [2021] M. Ljubotina, D. Roy, and T. Prosen, Absence of thermalization of free systems coupled to gapped interacting reservoirs, arXiv e-prints (2021), arXiv:cond-mat/2106.08373 [cond-mat.stat-mech] . * Meir and Wingreen [1992] Y. Meir and N. S. Wingreen, Landauer formula for the current through an interacting electron region, Phys. Rev. Lett. 68, 2512 (1992). * Jin _et al._ [2020a] T. Jin, M. Filippone, and T. Giamarchi, Generic transport formula for a system driven by markovian reservoirs, Phys. Rev. B 102, 205131 (2020a). * Kamenev [2011] A. Kamenev, _Field Theory of Non-Equilibrium Systems_ (Cambridge University Press, Cambridge, 2011) pp. 1–341. * Žnidarič and Horvat [2013] M. Žnidarič and M. Horvat, Transport in a disordered tight-binding chain with dephasing, The European Physical Journal B 86, 67 (2013). * Monthus [2017] C. Monthus, Dissipative random quantum spin chain with boundary-driving and bulk-dephasing: magnetization and current statistics in the non-equilibrium-steady-state, Journal of Statistical Mechanics: Theory and Experiment 2017, 043302 (2017). * Bauer _et al._ [2019] M. Bauer, D. Bernard, and T. 
Jin, Equilibrium Fluctuations in Maximally Noisy Extended Quantum Systems, SciPost Phys. 6, 45 (2019). * Bernard and Jin [2021] D. Bernard and T. Jin, Solution to the quantum symmetric simple exclusion process: The continuous case, Communications in Mathematical Physics 384, 1141 (2021). * Bernard and Doussal [2020] D. Bernard and P. L. Doussal, Entanglement entropy growth in stochastic conformal field theory and the KPZ class, EPL (Europhysics Letters) 131, 10007 (2020). * Essler and Piroli [2020] F. H. L. Essler and L. Piroli, Integrability of one-dimensional lindbladians from operator-space fragmentation, Phys. Rev. E 102, 062210 (2020). * Bernard and Piroli [2021] D. Bernard and L. Piroli, Entanglement distribution in the Quantum Symmetric Simple Exclusion Process, arXiv e-prints , arXiv:2102.04745 (2021), arXiv:2102.04745 [cond-mat.stat-mech] . * Nahum _et al._ [2021] A. Nahum, S. Roy, B. Skinner, and J. Ruhman, Measurement and entanglement phase transitions in all-to-all quantum circuits, on quantum trees, and in landau-ginsburg theory, PRX Quantum 2, 010352 (2021). * Note [1] The dependence of the Green’s functions on time differences $t-t^{\prime}$, instead of separate times $t,t^{\prime}$ is a consequence of the fact that we consider stationary situations. * Note [2] The extension to different geometries and additional degrees of freedom is straightforward. * Bertini _et al._ [2015] L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio, and C. Landim, Macroscopic fluctuation theory, Rev. Mod. Phys. 87, 593 (2015). * Derrida [2007] B. Derrida, Non-equilibrium steady states: fluctuations and large deviations of the density and of the current, Journal of Statistical Mechanics: Theory and Experiment 2007, 07023 (2007), arXiv:cond-mat/0703762 [cond-mat.stat-mech] . * Dalibard _et al._ [1992] J. Dalibard, Y. Castin, and K. Mølmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68, 580 (1992). * Carmichael [1993] H. 
Carmichael, _An Open Systems Approach to Quantum Optics_ (Springer Berlin Heidelberg, 1993). * Belavkin [1989] V. P. Belavkin, Nondemolition measurements, nonlinear filtering and dynamic programming of quantum stochastic processes, in _Modeling and Control of Systems_, edited by A. Blaquiére (Springer Berlin Heidelberg, Berlin, Heidelberg, 1989) pp. 245–265. * Prosen [2008] T. Prosen, Third quantization: a general method to solve master equations for quadratic open fermi systems, New Journal of Physics 10, 043026 (2008). * Guo and Poletti [2017] C. Guo and D. Poletti, Solutions for bosonic and fermionic dissipative quadratic open systems, Physical Review A 95, 052107 (2017), arXiv:1609.07838. * Sieberer _et al._ [2016] L. M. Sieberer, M. Buchhold, and S. Diehl, Keldysh field theory for driven open quantum systems, Reports on Progress in Physics 79, 096001 (2016). * Note [3] In our conventions, Larkin-Ovchinnikov’s rotation reads $\psi^{1/2}=(\psi^{+}\pm\psi^{-})/\sqrt{2}\,,\;\bar{\psi}^{1/2}=(\bar{\psi}^{+}\mp\bar{\psi}^{-})/\sqrt{2}$ [123]. * Poletti _et al._ [2012] D. Poletti, J.-S. Bernier, A. Georges, and C. Kollath, Interaction-induced impeding of decoherence and anomalous diffusion, Phys. Rev. Lett. 109, 045302 (2012). * Poletti _et al._ [2013] D. Poletti, P. Barmettler, A. Georges, and C. Kollath, Emergence of glasslike dynamics for dissipative and strongly interacting bosons, Phys. Rev. Lett. 111, 195301 (2013). * Bernard _et al._ [2018] D. Bernard, T. Jin, and O. Shpielberg, Transport in quantum chains under strong monitoring, EPL (Europhysics Letters) 121, 60006 (2018). * Tan [2019] L. S. L. Tan, Explicit inverse of tridiagonal matrix with applications in autoregressive modelling, IMA Journal of Applied Mathematics 84, 679 (2019). * Karevski and Platini [2009] D. Karevski and T. Platini, Quantum nonequilibrium steady states induced by repeated interactions, Phys. Rev. Lett. 102, 207207 (2009). * Turkeshi and Schiro [2021] X.
Turkeshi and M. Schiro, Diffusion and Thermalization in a Boundary-Driven Dephasing Model, arXiv e-prints , arXiv:2106.13180 (2021), arXiv:2106.13180 [cond-mat.str-el] . * Note [4] In the previous expression, if an index is out of boundary, it must simply be set to $0$, we don’t write that explicitly to avoid cumbersome notation. * Bernien _et al._ [2017] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, Probing many-body dynamics on a 51-atom quantum simulator, Nature 551, 579 (2017). * Bluvstein _et al._ [2021] D. Bluvstein, A. Omran, H. Levine, A. Keesling, G. Semeghini, S. Ebadi, T. T. Wang, A. A. Michailidis, N. Maskara, W. W. Ho, S. Choi, M. Serbyn, M. Greiner, V. Vuletić, and M. D. Lukin, Controlling quantum many-body dynamics in driven rydberg atom arrays, Science 371, 1355 (2021). * Henriet _et al._ [2020] L. Henriet, L. Beguin, A. Signoles, T. Lahaye, A. Browaeys, G.-O. Reymond, and C. Jurczak, Quantum computing with neutral atoms, Quantum 4, 327 (2020). * Nahum _et al._ [2018] A. Nahum, S. Vijay, and J. Haah, Operator spreading in random unitary circuits, Phys. Rev. X 8, 021014 (2018). * Hudson and Parthasarathy [1984] R. L. Hudson and K. R. Parthasarathy, Quantum ito’s formula and stochastic evolutions, Communications in Mathematical Physics 93, 301 (1984). * Derrida _et al._ [1992] B. Derrida, E. Domany, and D. Mukamel, An exact solution of a one-dimensional asymmetric exclusion model with open boundaries, Journal of Statistical Physics 69, 667 (1992). * Jin _et al._ [2020b] T. Jin, A. Krajenbrink, and D. Bernard, From stochastic spin chains to quantum kardar-parisi-zhang dynamics, Phys. Rev. Lett. 125, 040603 (2020b). * Deutsch [1991] J. M. Deutsch, Quantum statistical mechanics in a closed system, Phys. Rev. A 43, 2046 (1991). * Srednicki [1999] M. Srednicki, The approach to thermal equilibrium in quantized chaotic systems, J. Phys. A: Math. Gen. 
32, 1163 (1999). * Rigol _et al._ [2008] M. Rigol, V. Dunjko, and M. Olshanii, Thermalization and its mechanism for generic isolated quantum systems, Nature 452, 854 (2008). * D’Alessio _et al._ [2016] L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics, Advances in Physics 65, 239 (2016). * Kinoshita _et al._ [2006] T. Kinoshita, T. Wenger, and D. S. Weiss, A quantum Newton’s cradle, Nature 440, 900 (2006). * Note [5] The poles and zeros of $G^{\cal A}$ are the conjugate of $G^{\cal R}$. * Usmani [1994] R. A. Usmani, Inversion of a tridiagonal jacobi matrix, Linear Algebra and its Applications 212, 413 (1994). * Gargantini and Henrici [1971] I. Gargantini and P. Henrici, Circular arithmetic and the determination of polynomial zeros, Numerische Mathematik 18, 305 (1971). * Petković and Milošević [2005] M. Petković and D. Milošević, Derivative free inclusion methods for polynomial zeros, Computing 75, 71 (2005). * Larkin and Ovchinnikov [1977] A. I. Larkin and I. N. Ovchinnikov, Nonlinear effects during vortex motion in superconductors, Zhurnal Eksperimentalnoi i Teoreticheskoi Fiziki 73, 299 (1977).
# Multiple scattering suppression for in vivo optical coherence tomography measurement using B-scan-wise multi-focus averaging method

Yiqiang Zhu1, Lida Zhu1, Yiheng Lim1, Shuichi Makita1, Yu Guo1, and Yoshiaki Yasuno1

1Computational Optics Group, University of Tsukuba, Tsukuba, Ibaraki, Japan <EMAIL_ADDRESS>https://optics.bk.tsukuba.ac.jp/COG/ ††journal: opticajournal††articletype: Research Article

We demonstrate a method that reduces the noise caused by multi-scattering (MS) photons in an in vivo optical coherence tomography image. This method combines a specially designed image acquisition (i.e., optical coherence tomography scan) scheme and subsequent complex signal processing. For the acquisition, multiple cross-sectional images (frames) are sequentially acquired while the depth position of the focus is altered for each frame by an electrically tunable lens. In the signal processing, the frames are numerically defocus-corrected and complex averaged. Because of the inconsistency in the MS-photon trajectories among the different electrically tunable lens-induced defocus amounts, this averaging reduces the MS signal. This method was validated using a scattering phantom and in vivo unanesthetized small fish samples, and was found to reduce MS noise even for unanesthetized in vivo measurement.

## 1 Introduction

Optical coherence tomography (OCT) is a noninvasive imaging technique with a resolution of a few to tens of micrometers, and has been used for clinical imaging in fields such as ophthalmology [1, 2, 3, 4] and cardiology [5, 6]. Because of its long imaging depth, OCT has recently been adopted for noninvasive and nondestructive microscopy [7]. OCT can visualize deep tissue regions at around several hundred micrometers to a few millimeters from the sample surface; this imaging depth is significantly larger than that of conventional microscopy, which is only around a few tens to hundreds of micrometers.
Conventionally, the imaging depth of OCT was believed to be dominated by two factors: the depth of focus of the probe optics and the scattering of the sample. To overcome the former limit, several methods have been successful, for instance, the fusion of multiple images acquired at several focus depths [8], computational refocusing methods [9, 10, 11, 12], and combinations of complex signal processing and focus fusion, such as Gabor-domain OCT [13, 14]. To overcome the tissue-scattering limit of the imaging depth, longer-wavelength probes have been applied. In general, a 1.3-$\mu$m wavelength probe has better image penetration than an 830-nm or visible-light OCT. For retinal imaging, a 1.0-$\mu$m probe was shown to have higher penetration [15, 16, 17, 18], and it has become a common probe wavelength band for clinical retinal imaging. More recently, even longer wavelengths, such as 1.7 $\mu$m, have been used to investigate samples with high scattering, such as cardiovascular tissues [19] and brain tissues [20], and have demonstrated higher penetration than OCT using shorter wavelengths.

Because these two limiting factors of imaging depth have been overcome, multiple scattering (MS) has gradually been recognized as an additional limiting factor. In general, OCT imaging theory is based on the assumption that most MS photons are rejected by a confocal pinhole (i.e., a single-mode fiber tip) [21] and that single-scattering (SS) photons govern the imaging. In practice, however, some MS photons are captured through the confocal pinhole and appear in the image. Because an MS photon has a longer optical path than that of an SS photon, a photon that undergoes multiple scattering at a specific depth appears in the image at a deeper location. In contrast, the contribution of SS photons becomes smaller at greater depths in the image. Hence, deeper regions in an OCT image are more dominated by MS photons.
This dominance of MS photons degrades the resolution and contrast of the OCT image in deep regions [22]. In addition, it degrades the quantitative measurement capability of functional OCT, such as polarization-sensitive OCT [23]. Several methods have been used to mitigate the MS effect. For example, Badon et al. proposed the smart OCT method, which modulates the pupil using a spatial light modulator (SLM) and a pre-defined reflection matrix [24]. Borycki and associates used SLM-based pupil modulation and spatial correlation theory to reduce the MS effect (or equivalently, the coherent cross-talk) of full-field swept-source OCT [25, 26, 27]. Liu et al. proposed the aberration-diverse method for MS suppression [28]. In this method, intentional astigmatism was introduced using a deformable mirror, and multiple, typically twelve, OCT volumes were acquired with different astigmatism axes. After correcting the astigmatism using computational adaptive optics [29], the volumes were coherently averaged. The paths of the MS photons are not consistent across the different astigmatism axes, whereas those of the SS photons are consistent. Hence, the coherent average can reduce the MS-photon contributions.

Although the aforementioned modalities have successfully mitigated the MS effect, they require expensive wavefront manipulation devices, such as an SLM or a deformable mirror. We previously proposed the multi-focus-averaging (MFA) method, which is based on an approach similar to that of the aberration-diverse method, but uses a cost-effective electrically tunable lens (ETL) [30]. Multiple, typically seven, OCT volumes are acquired with different defocus positions, and the volumes are coherently averaged after correcting the defocus using computational defocus correction. This method was also applied to Jones-matrix based PS-OCT (JM-OCT), and mitigation of polarization artifacts was demonstrated [23].
Although the aberration-diverse method and MFA perform well on static samples, such as static phantoms and postmortem samples, their application to in vivo measurement still poses a great challenge. Because these methods rely on coherent (i.e., complex) averaging of multiple volumes, the phases of these volumes should be consistent. In other words, the sample should be highly stable during the multiple volumetric acquisitions, which usually take a few tens of seconds.

In this work, we propose a new version of the MFA method, referred to as the B-scan-wise MFA (B-MFA) method, for MS suppression in in vivo measurement. This new method sequentially acquires multiple cross-sectional OCT frames, instead of volumes, with different defocus. The defocus of each frame is corrected by applying a one-dimensional (1-D) version of computational refocusing, and all defocus-corrected frames are coherently averaged to reduce the MS signals. Note that throughout the manuscript, a single cross-sectional scan is denoted as a frame, while a B-scan refers to a set of frames acquired at the same location. This method requires phase stability only during the acquisition time of a few frames (i.e., a B-scan), not of volumes, so the required phase-stable time is typically less than 100 ms. Hence, this method is applicable to in vivo measurement. The performance of the B-MFA method was validated by measuring a scattering phantom and in vivo small fish.

## 2 Principle and implementation of B-MFA method

### 2.1 Data acquisition and signal-processing flow

Figure 1: Example trajectories of single-scattering (SS) and multi-scattering (MS) photons with two focus configurations of an electrically tunable lens (ETL). The SS photon was scattered only once, and hence its trajectory is consistent regardless of the ETL configuration. In contrast, the trajectories of MS photons become inconsistent when the ETL configuration is altered.
Before describing the details of the B-MFA method, we first present an overview of this method. The B-MFA method is based on the assumption that the trajectories of MS photons are not consistent if the depth positions of the focus are different [30], as depicted in Fig. 1. Hence, the first step of B-MFA is to acquire multiple frames with different focus positions using an ETL. The details of the data acquisition protocol are described in Section 2.2. In the second step, we correct the defocus using a 1-D phase-only spatial frequency filter (Section 2.3). Finally, the frames are complex averaged after the axial shifts and phase offsets of the frames have been corrected. Because we used a JM-OCT in our particular implementation, we additionally correct the bulk-phase offset in the four polarization channels of the JM-OCT. The details of the shift and phase corrections as well as the complex-averaging process are described in Section 2.4. Because the MS-photon trajectories are different in frames with different focus depths, the randomized MS signal is reduced by the complex averaging, whereas the defocus-corrected SS signal is not. In this manuscript, we refer to this complex-averaged image as a “B-MFA image.” For volumetric measurement, we sequentially acquire cross-sectional B-MFA images at several slow-scan positions.

### 2.2 Data acquisition protocol

Figure 2: Scanning protocol for the B-MFA method (a) and the previously proposed MFA method [30] (b). N refers to the number of repeated acquisitions (i.e., frames) that are complex averaged after defocus correction to reduce the MS signals. For both methods, the defocus is altered by an ETL. For B-MFA, the defocus is altered for each frame, whereas for MFA, the defocus is altered for each volume.

The first step of B-MFA imaging is to acquire multiple frames with different depth positions of the focus. Here, the depth positions of the focus are actively controlled by an ETL equipped in the sample arm of the OCT [30].
The implementation details of the OCT system are described later in Section 3.1. The data acquisition protocol of B-MFA is summarized in a schematic time chart in Fig. 2(a). As shown in this diagram, multiple frames are sequentially acquired at each slow-scan (i.e., B-scan) position. For every frame acquisition, the ETL updates the depth position of the focus so that all frames are acquired with different focus positions. This measurement process is repeated for each slow-scan location to obtain a volumetric dataset. In our typical implementation, the single-frame acquisition time, including the focus transition time of the ETL, is 13.4 ms, and the number of frames at a single slow-scan location (i.e., the number of frames per B-scan) is five (see Section 3.2 for details). Hence, the typical acquisition time for a single slow-scan location is 67 ms.

### 2.3 Computational refocusing

#### 2.3.1 Computational refocusing using a 1-D phase filter

After data acquisition, each frame is processed for 1-D computational refocusing. Here, the 1-D lateral complex signal at each depth of the frame is processed by a 1-D phase-only spatial frequency filter designed based on the Fresnel-diffraction model [10]. For a defocus distance (i.e., the distance from the focus to the imaging depth) of $z_{d}$, the phase-only filter is

$H^{-1}\left(f_{x};z_{d}\right)=\exp\left(\frac{-i\pi\lambda_{c}z_{d}f_{x}^{2}}{2}\right),$ (1)

where $f_{x}$ denotes the spatial frequencies corresponding to the fast-scan lateral position $x$, and $\lambda_{c}$ is the center wavelength of the probe beam.
The refocused frame $S^{\prime}(x;z)$ is obtained using this filter and two sequential 1-D Fourier transform operations as

$S^{\prime}(x;z)=\mathscr{F}_{x}^{-1}\left[{\mathscr{F}_{x}\left[{S(x;z)}\right]H^{-1}\left(f_{x};z_{d}(z)\right)}\right],$ (2)

where $S(x;z)$ is the original complex frame, and $\mathscr{F}_{x}\left[{\quad}\right]$ and $\mathscr{F}_{x}^{-1}\left[{\quad}\right]$ are the 1-D Fourier transform and its inverse, respectively, along the fast-scan ($x$) direction. Here, $z_{d}$ is considered to be a function of the depth in the image, $z$. In our implementation, $z_{d}$ is estimated from the measured data, as described in detail in the next section (Section 2.3.2). Note that this method corrects the defocus only along the fast-scan direction. The impact of this limitation is discussed in Section 5.3.

#### 2.3.2 Estimation of the defocus distance

In our method, the defocus distance $z_{d}$ is estimated from the measured OCT images, where the information entropy of the OCT images is used as a sharpness metric. Note that here, we use the information entropy of a 2-D en-face OCT image at each depth, although the refocusing is performed for each lateral ($x$-) line individually. This is because a 1-D lateral signal is not informative enough to compute a reliable sharpness metric. The defocus-distance estimation is performed at each depth, and hence these initial estimates are obtained as a function of depth. We then extract the estimates from the depth region in which the estimated defocus distances are linear in depth, and use them to estimate the defocus distances over the range of all depths in the image. Specifically, the estimated defocus distance is linearly fitted against depth using an intensity-weighted linear regression. This linearly fitted line gives the final estimates of the defocus distance throughout the whole depth range.
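To make the procedure concrete, the 1-D refocusing filter of Eqs. (1)–(2) and an entropy sharpness metric of the kind used in the defocus search can be sketched in NumPy. This is an illustrative sketch, not the authors' code: the function signatures are ours, and we assume an intensity-normalized Shannon entropy as the specific entropy estimator.

```python
import numpy as np

def refocus_1d(frame, z_d, lambda_c, dx):
    """1-D phase-only computational refocusing, Eqs. (1)-(2).

    frame:    complex OCT frame S(x; z), shape (n_z, n_x), fast scan along axis 1.
    z_d:      defocus distance per depth, shape (n_z,) (same length unit as dx).
    lambda_c: center wavelength of the probe beam.
    dx:       lateral sampling pitch along the fast-scan direction.
    """
    n_x = frame.shape[1]
    fx = np.fft.fftfreq(n_x, d=dx)  # lateral spatial frequencies f_x
    # H^{-1}(f_x; z_d) = exp(-i*pi*lambda_c*z_d*f_x^2 / 2), one filter per depth
    h_inv = np.exp(-1j * np.pi * lambda_c * z_d[:, None] * fx[None, :] ** 2 / 2)
    # S'(x; z) = IFFT_x[ FFT_x[S(x; z)] * H^{-1} ]
    return np.fft.ifft(np.fft.fft(frame, axis=1) * h_inv, axis=1)

def entropy_sharpness(en_face_intensity):
    """Intensity-normalized Shannon entropy of a 2-D en-face image.

    Lower entropy indicates a sharper (more concentrated) image, so a
    defocus search keeps the z_d that minimizes this metric at each depth.
    """
    p = en_face_intensity / en_face_intensity.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A convenient sanity check is that applying `refocus_1d` with the negated defocus exactly undoes the filter; a defocus search would evaluate `entropy_sharpness` on the en-face intensity for candidate $z_{d}$ values and keep the minimizer, before the depth-wise linear fit described above.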
### 2.4 Shift and phase-offset corrections and complex averaging in JM-OCT

In our implementation, we used JM-OCT, which acquires four OCT cross-sectional images at each slow-scan position [31], corresponding to the four polarization entries of the Jones matrix. Here, we describe the methods used to correct the mutual phase offsets in the four images, in addition to the shift and phase-offset corrections in the multiple frame acquisitions, because they are crucial for complex averaging the refocused OCT signals.

Figure 3: Flow of phase-offset corrections. The phase offsets in the images obtained with different defocus are corrected for each polarization channel. We first compute the product of the target (tar) image and the complex conjugate of the reference (ref) image (step 1, green boxes). Each product image is complex averaged along the depth for each polarization channel (step 2, light-blue arrow), and the resultant 1-D arrays are further complex averaged over the polarization channels (purple arrows). The phase of the resulting 1-D complex array represents the phase offset of each A-line. The phase offset of each A-line in the target image is then corrected (step 3, yellow arrow and box). These processes are repeated for all JM cross-sections (step 4).

A Jones matrix cross-sectional image (JM cross-section) consists of four complex OCT images corresponding to the four polarization channels. For computational refocusing, we estimate the defocus distance using only one polarization channel with the method described in Section 2.3.2, and we apply it to all polarization channels. Namely, the defocuses of all the polarization channels are corrected with the same estimated defocus distance. Because of the deformation of the ETL, there are non-negligible depth shifts among the images taken with different defocus.
We compute the shift using the cross-correlation function of linear-intensity OCT images, where the linear-intensity images are obtained from a single polarization channel and the intensity image of the first defocus value is used as the reference. Before computing the cross-correlation function, the images are up-sampled four times along the depth using Fourier-domain zero-padding to achieve sub-pixel accuracy. The cross-correlation function is computed along the depth by a direct method (i.e., not the Fourier-domain method). The amount of shift is determined from the peak of the cross-correlation function. After correcting the shift in the up-sampled images, the images are down-sampled to the original pixel resolution using Fourier-domain de-padding. The sub-pixel shifts of the other three polarization channels are corrected using this method, but with the shift estimate obtained from the first polarization channel.

After correcting the axial shifts, the mutual phase offsets in the images with different defocus are estimated and corrected. Here, we consider the phase offsets in the JM cross-sections, where a JM cross-section is a set of four complex OCT images, one for each of the four polarization channels (PCs). We first estimate the phase offsets of the images at each PC independently. For this estimation, we use only the depth region of 30 pixels from the sample surface (see Appendix A for details of the sample surface detection). The first JM cross-section is used as a reference, and the phase offsets between the complex images of the reference JM cross-section and the images of the target JM cross-section are estimated. The complex image of each PC in the target JM cross-section is multiplied by the complex conjugate of the corresponding PC’s complex image in the reference JM cross-section (Step 1 in Fig. 3), and then complex averaged along the depth (Step 2).
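Both corrections reduce to a few array operations. The sketch below illustrates the sub-pixel shift estimation and the conjugate-product phase-offset correction (Steps 1–3 of Fig. 3); the function signatures, the lateral averaging in the shift estimate, and the fixed depth slice are our simplifications, not the authors' implementation:

```python
import numpy as np

def estimate_axial_shift(target, ref, factor=4):
    """Sub-pixel depth shift of `target` relative to `ref`, in original pixels.

    Intensity images are up-sampled `factor`-fold along depth (axis 0) by
    Fourier-domain zero-padding; the shift is the peak lag of a direct
    cross-correlation. The lateral direction is averaged out for brevity.
    """
    def upsample(img):
        spec = np.fft.fftshift(np.fft.fft(img, axis=0), axes=0)
        pad = (factor - 1) * img.shape[0]
        spec = np.pad(spec, ((pad // 2, pad - pad // 2), (0, 0)))
        return factor * np.abs(np.fft.ifft(np.fft.ifftshift(spec, axes=0), axis=0))

    a = upsample(np.abs(target)).mean(axis=1)  # depth intensity profiles
    b = upsample(np.abs(ref)).mean(axis=1)
    a, b = a - a.mean(), b - b.mean()
    xc = np.correlate(a, b, mode="full")       # direct cross-correlation
    return (np.argmax(xc) - (len(b) - 1)) / factor

def correct_phase_offset(target_jm, ref_jm, depth=slice(0, 30)):
    """Per-A-line phase-offset correction of a target JM cross-section.

    target_jm, ref_jm: complex arrays (4, n_z, n_x), the four polarization
    channels (PCs). `depth` is the region used for the estimate (30 px
    from the surface in the paper; a fixed slice here for simplicity).
    """
    product = target_jm * np.conj(ref_jm)             # Step 1, per PC
    per_pc = product[:, depth, :].mean(axis=1)        # Step 2: average along depth
    offset = np.angle(per_pc.mean(axis=0))            # ... then over the four PCs
    return target_jm * np.exp(-1j * offset)[None, None, :]  # Step 3
```

Multiplying by the conjugate phase in Step 3 removes the estimated per-A-line offset at all depths, which is what makes the subsequent complex averaging coherent.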
Because this operation is performed for each of the four PCs, four 1-D complex arrays along the fast-scan direction are obtained (the phase offset of each PC in the figure). By averaging these four 1-D complex arrays over the PCs and taking the phase, we obtain a 1-D array of phase offsets in which each entry represents the phase offset of one A-line. Finally, the mutual phase shift between the reference JM cross-section and the target JM cross-section is corrected for all depths by subtracting the estimated phase offsets from the complex images in the target JM cross-section, i.e., by multiplying by the complex conjugate phase (Step 3). This operation is repeated for all frames (i.e., JM cross-sections) so that the phase offsets of all the frames are corrected (Step 4).

After the phase-offset corrections, the OCT images of each PC are complex averaged over the frames to generate four MS-reduced complex OCT images, which form an MS-reduced JM cross-section. The final B-MFA image is obtained by averaging the squared intensities of the four complex images of the MS-reduced JM cross-section. Note that the aforementioned operations are for one slow-scan position, i.e., B-scan position. We repeat these operations for all other slow-scan positions to obtain a B-MFA volume.

## 3 Validation design

### 3.1 JM-OCT setup

A custom-built passive-polarization-delay (PPD) based JM-OCT was used to evaluate the B-MFA method. A PPD module splits the probe beam into two orthogonal polarizations and applies different delays to them. In addition to this probe-beam polarization multiplexing, polarization-diversity detection is used. Hence, four complex OCT images corresponding to the four polarization channels (i.e., two multiplexed polarizations of the probe beam times two detection polarizations) are obtained. The center wavelength and scanning bandwidth of the light source (AXP50124-8, AXUSN, MA, USA) are 1,310 nm and 106 nm, respectively.
The effective focal length of the objective lens (LMS03, Thorlabs, NJ, USA) used in the system is 36 mm. The lateral and axial resolutions are 17 $\mu$m and 14 $\mu$m in tissue, respectively. The depth-of-focus (DOF) without the ETL is 0.36 mm. The A-line rate of the system is 50,000 A-lines/s. More details of the JM-OCT principle [31] and the implementation of the particular JM-OCT system used in this study [32, 33] can be found elsewhere. An ETL (EL-10-30, CI-NIR-LD-MV, Optotune, Switzerland) is used in the sample arm to axially shift the focus position. The details of the sample arm equipped with the ETL are described in [30]. Note that, although we used a JM-OCT in this study, the B-MFA method can be applied to standard (i.e., non-polarization-sensitive) OCT.

### 3.2 Measurement protocol

At each slow-scan (i.e., B-scan) position, five continuous cross-sectional frames were acquired as the defocus was incremented by 0.18 mm at each acquisition. The defocus increment was equivalent to half the DOF, where the DOF is that without the ETL. The total focus shift over the five frames was 0.72 mm. These parameters were determined by an optimization experiment that is described in detail in Section 5.1. Each cross-sectional frame consists of 256 A-lines, and the acquisition time of a single frame was 5.12 ms. By accounting for the defocus transition time of the ETL (7.5 ms) and the pullback time of the galvanometer scanner (0.8 ms), the five continuous frames were acquired in 67.1 ms. The phase should be stable during this acquisition time. The acquisition was repeated for 256 slow-scan positions, and the total time to acquire a volume was 17.18 s. In summary, the volumetric acquisition time and required phase-stable duration were 17.18 s and 67.1 ms, respectively. For reference, we acquired or generated three additional volumetric images.
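As a sanity check, the timing budget quoted in this section can be reproduced with simple arithmetic using only the stated numbers:

```python
# Timing arithmetic for the B-MFA protocol (numbers from Section 3.2).
a_line_rate = 50_000                 # A-lines/s
n_alines = 256                       # A-lines per frame
frame_time = n_alines / a_line_rate  # 5.12 ms per frame
etl_transition = 7.5e-3              # s, ETL focus transition per frame
galvo_pullback = 0.8e-3              # s, galvanometer pullback per frame
n_frames = 5                         # frames per B-scan position
# 13.42 ms per frame cycle, matching the ~13.4 ms quoted in Section 2.2
frame_cycle = frame_time + etl_transition + galvo_pullback
bscan_time = n_frames * frame_cycle  # 67.1 ms per slow-scan position
n_slow = 256                         # slow-scan positions
volume_time = n_slow * bscan_time    # ~17.18 s per volume
```

The 67.1-ms B-scan time is the required phase-stable duration; 256 such B-scans give the 17.18-s volumetric acquisition time.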
The first reference is a single acquisition image, which was made by extracting only the third frame of the five sequential frames acquired for B-MFA. Although no averaging was performed, the defocus was corrected. The second reference is the single-focus averaging (SFA) image. Here, we acquired a volume following the B-MFA protocol but without shifting the focus. This volume was processed in a manner identical to that of the B-MFA, i.e., the defocus correction, shift and phase corrections, and complex averaging were all the same. The third reference is a standard MFA image [30]. Here, five OCT volumes were sequentially acquired with different defocus, as shown in Fig. 2(b). The increments in defocus between two consecutive volume acquisitions were the same as those of the B-MFA. Each volume was acquired with a standard raster scan with 256 $\times$ 256 A-lines, and the total acquisition time of the five sequential volumes was 9.92 s. For MFA, the phase should be stable during the five volume acquisitions; hence, the required phase-stable time was 9.92 s, which is 148 times longer than that of B-MFA. For all scan protocols, the lateral scanning range was 1.5 mm $\times$ 1.5 mm. The lateral field was covered with 256 $\times$ 256 lateral sampling points, which yielded a lateral pixel separation of 5.86 $\mu$m, around one third of the lateral resolution.

### 3.3 Samples

Figure 4: Schematic (a) and photograph (b) of the scattering phantom. The phantom consists of a scattering layer, which is a mixture of polystyrene micro-particles and ultrasound gel, and a glass plate embedded in the scattering layer. The glass plate is used to provide a scattering-free area. (c) An in vivo medaka sample held in a 3-D printed container. The container was filled with water and the fish was not anesthetized. A thin cover glass was placed on the container to prevent the fish from accidentally jumping out. The red box roughly indicates the area measured by OCT.
A scattering phantom and ten in vivo medaka fish were measured to evaluate the proposed method. The scattering phantom consists of four parts: two cover glasses, a scattering layer, and a glass plate under the scattering layer that creates a space without a back-scattering signal. The scattering layer is a mixture of 0.04 mL of polystyrene microparticles (diameter of 10 $\mu$m, 72968-10ML-F, Sigma-Aldrich, MS, USA) and 0.5 mL of ultrasound gel (proJelly, Jex, Japan). A schematic and photograph of the phantom are shown in Fig. 4(a) and (b).

The medaka, also known as the Japanese rice fish, is a small fish similar in size to a zebrafish and is widely used as a model animal in biological research. We measured ten in vivo medakas without anesthesia. A fish was placed in a 3-D printed container with a water-filled groove that was 5 mm $\times$ 8 mm $\times$ 31 mm (height $\times$ width $\times$ length) in size. We placed a thin glass slip above the container to prevent the fish from accidentally jumping out during the measurement. Figure 4(c) shows a photograph of the container and a sample. In this figure, the red box roughly indicates the scan area. The protocol of the fish experiment follows the animal experiment guidelines of the University of Tsukuba and was approved by the Institutional Animal Care and Use Committee of the University of Tsukuba.

## 4 Results

### 4.1 Scattering phantom

Figure 5: Cross-sectional images obtained by the single acquisition (a), SFA (b), MFA (c), and B-MFA (d) methods. The scale bar indicates 300 $\mu$m. The regions indicated by the green dashed boxes are enlarged in the second row (e–h). In the superficial regions in the glass plates (blue arrows), the MFA and B-MFA images have lower signal intensities than the other images because of the reduction of the MS signal.
(i) Averaged intensity depth profiles of each image, which are averaged using A-lines in the orange brace in (a), where zero dB refers to the noise floor (the signal in the air region) in the single acquisition image. The black arrow indicates the surface of the scattering region. In the deep region of the scattering sample (purple shading) and the superficial area of the glass plate (light-blue shading), the MFA and B-MFA show lower signal intensity than the other methods because of the reduction in MS. Figure 5 shows the intensity cross-sectional images of the scattering phantom for the single acquisition, SFA, MFA, and B-MFA methods from left to right. The images in the second row are magnified views of the deep regions, indicated in the images in the first row. In the shallow regions of the sample, the standard MFA image has fewer particles than the other images [Fig. 5(c)]. This difference can be attributed to the fact that only the standard MFA uses 2-D computational refocusing, whereas the others use 1-D refocusing. This is discussed in detail in Section 5.3. In the deep regions (indicated by the green boxes), the B-MFA and MFA images [Fig. 5(h) and (g), respectively] show particles with higher contrast than the single acquisition and SFA images [Fig. 5(e) and (f)], as indicated by the yellow arrows. Inside the glass plate region, the B-MFA and MFA images [Fig. 5(h) and (g), respectively] have the lowest noise intensity (indicated by the blue arrows), whereas the single acquisition and SFA images [Fig. 5(e) and (f)] have the highest and intermediate noise intensities, respectively. This may indicate that standard OCT measurement noise has been mitigated in the SFA image because of the complex averaging, while both the measurement noise and MS signal are mitigated in the MFA and B-MFA images. This noise and MS suppression are more clearly and quantitatively shown in the averaged intensity depth profiles in Fig. 5(i). 
Here, the central 216 A-lines [indicated by the orange brace in Fig. 5(a)] were averaged. The black arrow indicates the surface of the scattering layer. In the superficial region of the scattering layer, the single acquisition (blue), SFA (grey), and B-MFA (orange) curves are similar to each other. In the deeper regions (the purple-background region in the plot), the B-MFA and MFA curves show intensities that are lower than those of the single acquisition and SFA curves by around 2 to 3 dB. The glass plate region is indicated by the light-blue background in the plot. Near the superficial depth in the glass (indicated by the blue brace), the single acquisition image (blue line) yields the highest intensity. The SFA (grey line) shows lower intensity than the single acquisition (2.02-dB reduction on average). This could be attributed to the reduction in measurement noise caused by the complex averaging. The B-MFA (orange line) shows a further reduction (2.07 dB with respect to SFA and 4.09 dB with respect to the single acquisition, on average). This may indicate additional suppression of the MS signal. The MFA (green line) shows the lowest signal values, i.e., the best reduction in measurement noise and MS signal (0.58 dB with respect to B-MFA and 4.66 dB with respect to the single acquisition, on average).

### 4.2 In vivo small fish sample

Figure 6: The intensity cross-sections and en-face images of a small fish sample, including single acquisition (a, m), SFA (b, n), MFA (c, o), and B-MFA (d, p) images. The yellow dashed line indicates the depth of the en-face images. (e–h) and (i–l) are the enlarged images of the orange and blue boxed regions, respectively. The dark-blue and red boxes indicate the regions used to compute the signal-to-signal ratio (SSR). The braces, labeled (1), (2), and (3), represent the depth regions used to compute the sharpness metrics. The scale bar indicates 300 $\mu$m.
Cross-sectional images of the other fish are shown in the Supplementary Material (Figs. S1–S3). Figure 6 shows the intensity cross-sections of one of the ten fish samples. The images are, from top to bottom, the single acquisition, SFA, MFA, and B-MFA images, respectively. Enlarged images of the blue and orange regions are shown to the right of the cross-sectional images. Several anatomic features, such as a layered hyperscattering structure in the muscle region (enlarged in the blue dashed boxes) and a dark region surrounded by hyperscattering, which may indicate a notochord (enlarged in the orange dashed boxes), are visible in all images, but these features are most clearly visualized in the B-MFA image. Figure 6(m–p) shows the en-face images at a deep depth indicated by the yellow dashed line in Fig. 6(a). The B-MFA image [Fig. 6(p)] exhibits a lower signal intensity but higher contrast than the single acquisition and SFA images [Fig. 6(m) and (n), respectively]. These findings may suggest that the B-MFA method reduces the MS signal and improves the image contrast. We also notice that the en-face MFA image [Fig. 6(o)] shows reduced signal intensity but does not show improved contrast. We suspect that this reduced intensity in the MFA image is not fully due to the MS reduction, but is also due to signal washout caused by the motion of the sample. Details are discussed in Section 5.4.

Table 1: Information entropy of the en-face slab projections of the small fish. The depths of (1)–(3) correspond to the depths indicated in Fig. 6(a) (braces). For all depths, B-MFA shows the smallest entropy, which indicates the sharpest image.

| Depth (mm) | Single acquisition | SFA | Standard MFA | B-MFA |
|---|---|---|---|---|
| (1) 1.05–1.12 | 4.62 | 4.60 | 4.57 | 4.56 |
| (2) 1.41–1.48 | 4.18 | 4.16 | 4.08 | 4.06 |
| (3) 1.77–1.85 | 4.52 | 4.42 | 4.39 | 4.29 |

To quantitatively compare the sharpness of the images, we computed the information entropy of the en-face slab projections.
The projections were computed at three depth regions indicated by the pink braces in Fig. 6(a). The computed information entropy values are summarized in Table 1. At all three depth regions, the B-MFA shows the smallest information entropy. Namely, B-MFA provides the sharpest image of the four methods.

To quantitatively evaluate the image contrast, we computed the signal-to-signal ratio (SSR). SSR is defined as the mean intensity ratio between two manually selected ROIs, where one ROI was chosen to include a high-scattering-intensity structure [dark-blue box in Fig. 6(a)] and the other was selected in a low-scattering region [red box in Fig. 6(a)] in a cross-sectional image. The size of each ROI was 35 pixels $\times$ 16 pixels (205.1 $\mu$m $\times$ 115.8 $\mu$m). Examples of the SSRs are shown in Fig. 6(a–d). The SSRs of the SFA, B-MFA, and MFA images were compared with that of the single acquisition image by computing the SSR enhancement (SSRE), which is defined as the difference in SSR from that of the single acquisition image. For the images presented in Fig. 6, the SSREs were 0.42 dB (SFA), 1.37 dB (B-MFA), and 0.36 dB (MFA). Namely, the B-MFA image provides the largest SSRE.

Figure 7: Box plot of the SSR enhancement (SSRE) with respect to the single acquisition images of ten fish samples. The horizontal labels represent the methods. The top and bottom of each box indicate the upper and lower quartiles, respectively. The center lines indicate the medians. Paired t-tests revealed that B-MFA shows a significantly higher SSRE than SFA and MFA (p = 0.0008 and 0.0009, respectively). ** represents statistical significance at p < 0.01, and ns stands for “non-significant.”

The SSREs of all ten fish were computed using ROIs that include the same anatomical structures for all fish and are plotted in Fig. 7.
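The SSR and SSRE defined above amount to simple ROI arithmetic; a minimal sketch, assuming images are already on a dB scale and ROIs are given as index slices (both assumptions, and the function names, are ours):

```python
import numpy as np

def ssr_db(image_db, signal_roi, background_roi):
    """Signal-to-signal ratio: mean dB difference between a high-scattering
    ROI and a low-scattering ROI. ROIs are (depth_slice, lateral_slice)."""
    return float(image_db[signal_roi].mean() - image_db[background_roi].mean())

def ssre_db(method_image_db, single_acq_image_db, signal_roi, background_roi):
    """SSR enhancement: SSR of a method minus SSR of the single acquisition."""
    return (ssr_db(method_image_db, signal_roi, background_roi)
            - ssr_db(single_acq_image_db, signal_roi, background_roi))
```

Because both metrics are differences of ROI means in dB, a positive SSRE directly reads as contrast gained over the single acquisition image.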
The B-MFA achieved the best mean SSRE of 1.82 dB, which is significantly larger than that of SFA (SSRE = 0.72 dB, p = 0.0008) and MFA (SSRE = -0.52 dB, p = 0.0009); the statistical comparison was done using paired t-tests. We note that the MFA method showed the largest interquartile range. This is because MFA is more susceptible to sample motion, and the motion of the in vivo fish samples varied highly from case to case. This susceptibility of MFA to sample motion indicates that B-MFA might be the best method for in vivo small fish samples. The intensity cross-sectional images of the other nine fish are presented in the Supplementary Material (Figs. S1–S3).

## 5 Discussion

### 5.1 Scan protocol optimization

| Measurement protocol | $\Delta z$ | N | $\mathrm{D}=(\mathrm{N}-1)\times\Delta z$ (mm) |
|---|---|---|---|
| 1 | 1 DOF (0.36 mm) | 1, 2, 3, 4 | 0, 0.36, 0.72, 1.08 |
| 2 | 1/2 DOF (0.18 mm) | 1, 2, 3, $\cdots$, 7 | 0, 0.18, 0.36, $\cdots$, 1.08 |
| 3 | 1/3 DOF (0.12 mm) | 1, 2, 3, $\cdots$, 10 | 0, 0.12, 0.24, $\cdots$, 1.08 |
| 4 | 1/4 DOF (0.09 mm) | 1, 2, 3, $\cdots$, 13 | 0, 0.09, 0.18, $\cdots$, 1.08 |
| 5 | 1/6 DOF (0.06 mm) | 1, 2, 3, $\cdots$, 15 | 0, 0.06, 0.12, $\cdots$, 0.84 |

Table 2: Summary of B-MFA measurement protocols used for the scan-protocol optimization. We used five focus shifting steps ($\Delta z$) and several values for the number of frames to be averaged (N). The total focus shifting distance (D) was defined from $\Delta z$ and N.

To determine the optimal measurement protocol for B-MFA, five focus shifting steps ($\Delta z$) and several values for the total number of frames per B-scan (N) were examined. The focus shifting steps, total frame numbers, and total focus shifting distances $\mathrm{D}=(\mathrm{N}-1)\times\Delta z$ are summarized in Table 2. A scattering phantom (Section 3.3) was measured using all protocols.

Figure 8: Schematic illustration for the scatterer selection.
Five scatterers (red boxes) and a superficial region in the glass plate (dashed light-blue boxes) were selected to compute the signal-to-background ratio (SBR). The former (red boxes) is used to compute the signal level, whereas the latter (light-blue boxes) is used to compute the background level.

To evaluate the image contrast, the signal-to-background ratio (SBR) was computed. Here the “signal” was defined as the mean signal intensity of five manually selected particles in a scattering region of the phantom. The particles were selected at a depth 10 pixels below the top surface of the glass plate, as schematically indicated by the pink dashed line in Fig. 8(a), and each of the five particles was selected from a different cross-sectional image in a volume. Each scatterer was cropped by a 3 $\times$ 3 pixel (17.6 $\times$ 21.7 $\mu$m) window in a cross-sectional image, as indicated by the red boxes in Fig. 8(b), and the mean intensity of this window was used as the intensity of that scatterer. The background was defined as the mean signal intensity of an ROI located at the same depth as the particles but in the glass plate, i.e., a region without scattering. The ROI extends over 60 pixels $\times$ 3 pixels (351.5 $\mu$m $\times$ 21.7 $\mu$m, lateral $\times$ depth), and is indicated by the dashed light-blue box in Fig. 8(b). Because there should be no scattering in the glass, the background intensity is used as a measure of the MS signal.

Figure 9: SBRs obtained from several B-MFA measurement protocols. Each symbol in the plot represents a focus shifting step ($\Delta z$), and the horizontal axis represents the total focus shifting distance (D). The highest SBR was obtained with D = 0.72 mm ($2\times\mathrm{DOF}$).

The SBR of each protocol is shown in Fig. 9, where each symbol corresponds to a focus shifting step $\Delta z$. The plot demonstrates that the SBR is highest at $\mathrm{D}=0.72$ mm, which is twice the DOF.
At this $\mathrm{D}$, the SBRs are 28.74, 28.76, 28.63, 28.52, and 27.54 dB for $\Delta z$ = 1/6, 1/4, 1/3, 1/2, and 1 $\times$ DOF, respectively. Here, the numbers of complex-averaged frames (N) of the corresponding protocols are 13, 9, 7, 5, and 3, respectively. Among these five protocols, the first four give similarly high SBRs and are candidates for the optimal protocol. Because B-MFA is intended for in vivo measurements, a shorter acquisition time is preferable. Therefore, we used the protocol ($\Delta z$ = 1/2 DOF, N = 5, $\mathrm{D}=2$ DOF) for the measurements shown in Section 4. Note that this protocol was selected to best suit small fish samples. For other types of samples, it could be worth re-optimizing the protocol.

### 5.2 Phase stability requirements of MFA and B-MFA

The previously proposed MFA method used 2-D computational refocusing [34], in which the complex en-face OCT signal at a depth is two-dimensionally Fourier-transformed and a 2-D quadratic phase is applied in the spatial-frequency domain. Hence, the phase of the OCT signal should be stable over the volume, or at least over several frames that cover an area larger than the lateral resolution. In contrast, B-MFA uses 1-D computational refocusing, as described in Section 2.3.1. Hence, phase stability is required only within a frame. Because sample motion can cause significant phase error, the lower phase-stability requirement of B-MFA is an important advantage for in vivo measurement.

### 5.3 Refocus artifact of B-MFA and conventional MFA in the phantom result

Figure 10: Refocus artifacts in en-face images of the phantom (first two rows) and a small fish sample (the bottom row). The left images were obtained by the B-MFA method, whereas the right images were obtained by the MFA method.
Because the 1-D computational refocusing used by B-MFA refocuses the image only along the fast-scan direction (the horizontal direction of these images), the structures in the B-MFA images are elongated along the slow-scan direction (the vertical direction in the image). In contrast, the MFA image does not show this effect because it uses 2-D refocusing. In the B-MFA images, the elongation is less pronounced at the deep depths of the phantom (c), because the physical focus was close to this depth, and hence the structure was sharp even without refocusing.

Comparing the B-MFA and MFA cross-sectional images [Fig. 5(d) and (c), respectively] at the superficial scattering layer, we notice that the B-MFA image exhibits more scattering particles than the MFA image. This difference in the number of particles can be attributed to the difference between 1-D and 2-D computational refocusing. Because 1-D refocusing sharpens the image only along the fast-scan direction, the refocused particle signal remains elongated along the slow-scan (vertical) direction, as shown in the en-face slice of the same data [Fig. 10(a)]. Hence, the images of scatterers that are not really in a particular B-scan smear into that B-scan, and this causes an artifactual increase in the number of scatterers in the B-scan. On the other hand, the 2-D refocusing used in MFA isotropically refocuses the image, as shown in the corresponding en-face slice [Fig. 10(b)]; hence, the artifactual increase of scatterers does not occur. A similar artifact can also be seen in the small fish image shown in Fig. 10(e) and (f), although it is less evident than in the phantom case because the tissue microstructure is aligned roughly along the slow-scan direction. At the other depth of the phantom, around 1.5 mm from the surface, the elongation artifact is negligible [Fig. 10(c) and (d)] because the physical focus is located near this depth.
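The difference between 1-D (B-scan-wise) and 2-D (en-face) computational refocusing discussed above can be sketched with a simple quadratic-phase defocus filter; the filter form, variable names, and sampling below are our assumptions, not the authors' implementation:

```python
import numpy as np

def refocus_1d(bscan, defocus, dx):
    """B-MFA-style refocusing of a complex B-scan along the fast-scan axis only.
    A quadratic phase is applied in the 1-D lateral spatial-frequency domain,
    so phase stability is needed only within the frame."""
    n = bscan.shape[-1]
    fx = np.fft.fftfreq(n, d=dx)                      # lateral spatial frequency
    phase = np.exp(-1j * np.pi * defocus * fx ** 2)   # unit-modulus defocus filter
    return np.fft.ifft(np.fft.fft(bscan, axis=-1) * phase, axis=-1)

def refocus_2d(enface, defocus, dx, dy):
    """Conventional-MFA-style isotropic refocusing of a complex en-face slice.
    Requires phase stability across frames (Section 5.2)."""
    ny, nx = enface.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dy)
    FX, FY = np.meshgrid(fx, fy)
    phase = np.exp(-1j * np.pi * defocus * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(enface) * phase)
```

Because `refocus_1d` sharpens only along the fast axis, a defocused point scatterer stays spread along the slow axis, which is exactly the elongation artifact described above.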
To solve this problem in the future, we may be able to adopt a computational refocusing method [35, 36] for the MFA or B-MFA method that is less susceptible to phase instability.

### 5.4 Signal reduction of the MFA in the in vivo result

In the in vivo fish measurements, the MFA image [third row of Fig. 6] showed lower signal intensity than the B-MFA image [fourth row of Fig. 6]. In addition, the SSRE of MFA is smaller than that of B-MFA for the in vivo measurement (fifth paragraph of Section 4.2). This can be attributed to signal washout caused by sample motion, which reduces not only the MS signal but also the SS signal. These findings further emphasize the advantage of the B-MFA method over the MFA method for in vivo imaging.

## 6 Conclusion

We proposed the B-MFA method, which suppresses MS signals in in vivo imaging. The method was validated using phantom and in vivo small fish measurements. Subjective observation of the images and objective evaluation of the SSRE showed that the B-MFA method improves the image contrast by reducing the MS signals. In addition, B-MFA showed superior performance to MFA for in vivo measurements. We conclude that the B-MFA method can reduce noise caused by the MS signal in OCT images and is a better option for in vivo measurement than our previously proposed MFA method.

## Appendix

## Appendix A Sample-surface detection

The details of the sample-surface detection used in Section 2.4 are as follows. We first manually select the rough depth region in which the sample surface is searched, which is typically around 90 pixels (around 652 $\mu$m in air). Then, we compute the first-order derivative of the linear OCT intensity along the depth, where the derivative is defined as the difference between two neighboring pixels. The surface is defined as the top-most depth at which the derivative is larger than a predefined threshold.
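A minimal sketch of this surface-detection procedure follows; the array layout and function name are our assumptions:

```python
import numpy as np

def detect_surface(ascan_linear, z_start, z_stop, threshold):
    """Find the sample surface in one A-line of linear OCT intensity.

    Searches the manually selected depth window [z_start, z_stop), computes
    the first-order depth derivative (difference of neighboring pixels), and
    returns the top-most depth index whose derivative exceeds `threshold`.
    Returns None if the threshold is never exceeded."""
    window = ascan_linear[z_start:z_stop]
    deriv = np.diff(window)               # deriv[i] = window[i+1] - window[i]
    idx = np.flatnonzero(deriv > threshold)
    return z_start + int(idx[0]) if idx.size else None
```

The returned index marks the transition pixel immediately above the intensity jump; applying the function to every A-line of a B-scan yields a surface profile.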
The threshold is 50 in our particular case, but this value is in arbitrary units and may differ among OCT systems.

## Disclosures

Y. Zhu, L. Zhu, Lim, Makita, Guo, Yasuno: Sky Technology (F), Nikon (F), Kao Corp. (F), Topcon (F), Panasonic (F), Santec (F). L. Zhu is currently employed by Santec.

## Funding

Core Research for Evolutional Science and Technology (JPMJCR2105); Japan Society for the Promotion of Science (21H01836, 22K04962, 22KF0058); China Scholarship Council (201908130130).

## Data, Materials, and Code Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## Supplemental document

See Supplement 1 for supporting content.

## References

* [1] W. Geitzenauer, C. K. Hitzenberger, and U. M. Schmidt-Erfurth, “Retinal optical coherence tomography: past, present and future perspectives,” Br. J. Ophthalmol. 95, 171–177 (2011).
* [2] A. Geevarghese, G. Wollstein, H. Ishikawa, and J. S. Schuman, “Optical coherence tomography and glaucoma,” Annu. Rev. Vis. Sci. 7, 693–726 (2021).
* [3] M. Ang, M. Baskaran, R. M. Werkmeister, J. Chua, D. Schmidl, V. Aranha dos Santos, G. Garhöfer, J. S. Mehta, and L. Schmetterer, “Anterior segment optical coherence tomography,” Prog. Retin. Eye Res. 66, 132–156 (2018).
* [4] N. Venkateswaran, A. Galor, J. Wang, and C. L. Karp, “Optical coherence tomography for ocular surface and corneal diseases: a review,” Eye Vis. 5 (2018).
* [5] T. Yonetsu, B. E. Bouma, K. Kato, J. G. Fujimoto, and I.-K. Jang, “Optical coherence tomography–15 years in cardiology–,” Circulation 77, 1933–1940 (2013).
* [6] L. Vignali, E. Solinas, and E. Emanuele, “Research and clinical applications of optical coherence tomography in invasive cardiology: a review,” Curr. Cardiol. Rev. 10, 369–376 (2014).
* [7] Y. Chen, C.-P. Liang, Y. Liu, A. H. Fischer, A. V. Parwani, and L. Pantanowitz, “Review of advanced imaging techniques,” J. Pathol. Inform.
3, 22 (2012). * [8] W. Drexler, U. Morgner, F. X. Kärtner, C. Pitris, S. A. Boppart, X. D. Li, E. P. Ippen, and J. G. Fujimoto, “In vivo ultrahigh-resolution optical coherence tomography,” Opt. Lett. 24, 1221–1223 (1999). * [9] S. Coquoz, A. Bouwens, P. J. Marchand, J. Extermann, and T. Lasser, “Interferometric synthetic aperture microscopy for extended focus optical coherence microscopy,” Opt. Express 25, 30807–30819 (2017). * [10] Y. Yasuno, J.-i. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, “Non-iterative numerical method for laterally superresolving fourier domain optical coherence tomography,” Opt. Express 14, 1006–1020 (2006). * [11] T. S. Ralston, D. L. Marks, P. Scott Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3, 129–134 (2007). * [12] A. Kumar, W. Drexler, and R. A. Leitgeb, “Numerical focusing methods for full field OCT: a comparison based on a common signal model,” Opt. Express 22, 16061–16078 (2014). * [13] J. P. Rolland, P. Meemon, S. Murali, K. P. Thompson, and K. sung Lee, “Gabor-based fusion technique for optical coherence microscopy,” Opt. Express 18, 3632–3642 (2010). * [14] C. Yoon, Y. Qi, H. Mestre, C. Canavesi, O. J. Marola, A. Cogliati, M. Nedergaard, R. T. Libby, and J. P. Rolland, “Gabor domain optical coherence microscopy combined with laser scanning confocal fluorescence microscopy,” Biomed. Opt. Express 10, 6242–6257 (2019). * [15] A. Unterhuber, B. Považay, B. Hermann, H. Sattmann, A. Chavez-Pirson, and W. Drexler, “In vivo retinal optical coherence tomography at 1040 nm-enhanced penetration into the choroid,” Opt. Express 13, 3252–3258 (2005). * [16] B. Považay, K. Bizheva, B. Hermann, A. Unterhuber, H. Sattmann, A. Fercher, W. Drexler, C. Schubert, P. Ahnelt, M. Mei, R. Holzwarth, W. J. Wadsworth, J. Knight, and P. S. J. Russel, “Enhanced visualization of choroidal vessels using ultrahigh resolution ophthalmic OCT at 1050 nm,” Opt. Express 11, 1980–1986 (2003). 
* [17] E. C. W. Lee, J. F. de Boer, M. Mujat, H. Lim, and S. H. Yun, “In vivo optical frequency domain imaging of human retina and choroid,” Opt. Express 14, 4403–4411 (2006). * [18] Y. Yasuno, Y. Hong, S. Makita, M. Yamanari, M. Akiba, M. Miura, and T. Yatagai, “In vivo high-contrast imaging of deep posterior eye by 1-$\mu$m swept source optical coherence tomography and scattering optical coherence angiography,” Opt. Express 15, 6121–6139 (2007). * [19] Y. Li, J. Jing, E. Heidari, J. Zhu, Y. Qu, and Z. Chen, “Intravascular optical coherence tomography for characterization of atherosclerosis with a 1.7 micron swept-source laser,” Sci. Rep. 7, 14525 (2017). * [20] J. Zhu, H. R. Freitas, I. Maezawa, L.-w. Jin, and V. J. Srinivasan, “1700 nm optical coherence microscopy enables minimally invasive, label-free, in vivo optical biopsy deep in the mouse brain,” Light Sci. Appl 10, 145 (2021). * [21] G. Yao and L. V. Wang, “Monte carlo simulation of an optical coherence tomography signal in homogeneous turbid media,” Phys Med Biol 44, 2307 (1999). * [22] R. K. Wang, “Signal degradation by multiple scattering in optical coherence tomography of dense tissue: a monte carlo study towards optical clearing of biotissues,” Phys. Med. Biol. 47, 2281 (2002). * [23] L. Zhu, S. Makita, J. Tamaoki, Y. Zhu, P. Mukherjee, Y. Lim, M. Kobayashi, and Y. Yasuno, “Polarization-artifact reduction and accuracy improvement of jones-matrix polarization-sensitive optical coherence tomography by multi-focus-averaging based multiple scattering reduction,” Biomed. Opt. Express 15, 256–276 (2024). * [24] A. Badon, D. Li, G. Lerosey, A. C. Boccara, M. Fink, and A. Aubry, “Smart optical coherence tomography for ultra-deep imaging through highly scattering media,” Sci. Adv. 2, e1600370 (2016). * [25] D. Borycki, M. Hamkało, M. Nowakowski, M. Szkulmowski, and M. 
Wojtkowski, “Spatiotemporal optical coherence (STOC) manipulation suppresses coherent cross-talk in full-field swept-source optical coherence tomography,” Biomed. Opt. Express 10, 2032–2054 (2019). * [26] M. Wojtkowski, P. Stremplewski, E. Auksorius, and D. Borycki, “Spatio-temporal optical coherence imaging – a new tool for in vivo microscopy,” Photonics Lett. Pol. 11, 44–49 (2019). * [27] E. Auksorius, D. Borycki, P. Wegrzyn, B. L. Sikorski, K. Lizewski, I. Zickiene, M. Rapolu, K. Adomavicius, S. Tomczewski, and M. Wojtkowski, “Spatio-temporal optical coherence tomography provides full thickness imaging of the chorioretinal complex,” iScience 25, 105513 (2022). * [28] S. Liu, M. R. E. Lamont, J. A. Mulligan, and S. G. Adie, “Aberration-diverse optical coherence tomography for suppression of multiple scattering and speckle,” Biomed. Opt. Express 9, 4919–4935 (2018). * [29] S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. 109, 7175–7180 (2012). * [30] L. Zhu, S. Makita, J. Tamaoki, A. Lichtenegger, Y. Lim, Y. Zhu, M. Kobayashi, and Y. Yasuno, “Multi-focus averaging for multiple scattering suppression in optical coherence tomography,” Biomed. Opt. Express 14, 4828–4844 (2023). * [31] Y. Yasuno, “Multi-contrast jones-matrix optical coherence tomography—the concept, principle, implementation, and applications,” IEEE J. Sel. Top. Quantum Electron. 29, 1–18 (2023). * [32] A. Miyazawa, S. Makita, E. Li, K. Yamazaki, M. Kobayashi, S. Sakai, and Y. Yasuno, “Polarization-sensitive optical coherence elastography,” Biomed. Opt. Express 10, 5162–5181 (2019). * [33] E. Li, S. Makita, Y.-J. Hong, D. Kasaragod, and Y. Yasuno, “Three-dimensional multi-contrast imaging of in vivo human skin by jones matrix optical coherence tomography,” Biomed. Opt. Express 8, 1290–1305 (2017). * [34] L. Zhu, S. Makita, D. Oida, A. Miyazawa, K. Oikawa, P. 
Mukherjee, A. Lichtenegger, M. Distel, and Y. Yasuno, “Computational refocusing of jones matrix polarization-sensitive optical coherence tomography and investigation of defocus-induced polarization artifacts,” Biomed. Opt. Express 13, 2975–2994 (2022).
* [35] S. Ruiz-Lopera, R. Restrepo, C. Cuartas-Vélez, B. E. Bouma, and N. Uribe-Patarroyo, “Computational adaptive optics in phase-unstable optical coherence tomography,” Opt. Lett. 45, 5982–5985 (2020).
* [36] S. Ruiz-Lopera, R. Restrepo, T. M. Cannon, M. Villiger, B. E. Bouma, and N. Uribe-Patarroyo, “Computational refocusing in phase-unstable polarization-sensitive optical coherence tomography,” Opt. Lett. 48, 4765–4768 (2023).

## Supplementary

This file supplements Section 4.2 by showing the intensity cross-sectional images of the other medaka fish samples (samples 2 to 10) measured for validation of MS reduction by the B-scan-wise multi-focus averaging (B-MFA) method. All of the results show that several anatomic features, such as a hyper-scattering layer structure and the boundary of a hollow structure, are more clearly visible in the B-MFA image than in the single acquisition, SFA, and conventional MFA images.

Figure 11: The intensity cross-sectional images of the medaka (samples 2–4). (a1–a3) single acquisition, (b1–b3) SFA, (c1–c3) conventional MFA, and (d1–d3) B-MFA images. The blue and red boxes indicate the regions used to compute the signal-to-signal ratio (SSR).

Figure 12: The intensity cross-sectional images of the medaka (samples 5–7). The image types and their order are identical to those of Fig. 11.

Figure 13: The intensity cross-sectional images of the medaka (samples 8–10). The image types and their order are identical to those of Fig. 11.
# LoFT: Enhancing Faithfulness and Diversity for Table-to-Text Generation via Logic Form Control

Yilun Zhao∗1 Zhenting Qi∗2 Linyong Nan1 Lorenzo Jaime Yu Flores1 Dragomir Radev1
1Yale University 2Zhejiang University
<EMAIL_ADDRESS><EMAIL_ADDRESS>

∗ Equal Contributions.

###### Abstract

Logical Table-to-Text (LT2T) generation is tasked with generating logically faithful sentences from tables. There currently exist two challenges in the field: 1) _Faithfulness_: how to generate sentences that are factually correct given the table content; 2) _Diversity_: how to generate multiple sentences that offer different perspectives on the table. This work proposes LoFT, which utilizes logic forms as fact verifiers and content planners to control LT2T generation. Experimental results on the LogicNLG dataset demonstrate that LoFT is the first model that addresses the unfaithfulness and lack-of-diversity issues simultaneously. Our code is publicly available at https://github.com/Yale-LILY/LoFT.

## 1 Introduction

Figure 1: An example of logical table-to-text generation. (a) Statements generated by previous models Nan et al. (2022): the generation suffers from 1) _Lack of diversity_, as three of the generated statements focus on the same table regions (i.e., “Hale Irwin” and “Gil Morgan”), and three of them use similar reasoning operations (i.e., comparative); 2) _Unfaithfulness_, as one of the generated statements is factually incorrect given the table content. (b) Statements generated by LoFT: by utilizing logic forms to _control_ the generation, our method can generate multiple factually correct sentences that each use a different reasoning operation to offer various perspectives on the tabular data.

Table-to-Text (T2T) generation aims to produce natural language descriptions from structured tables.
A statement generated from tabular data can be inferred based on different levels of information (e.g., value of a specific cell, logical operation result across multiple cells). Although current T2T models Lebret et al. (2016); Wiseman et al. (2017); Puduppully et al. (2019); Parikh et al. (2020) have shown remarkable progress in fluency and coherence, they mainly focus on surface-level realizations without much logical inference. Recently, Chen et al. (2020a) proposed LogicNLG, which is tasked with generating textual descriptions that require logical reasoning over tabular data (i.e., LT2T generation). LT2T generation is challenging as it requires a model to learn the logical inference knowledge from table-text pairs and generate multiple _factually correct_ sentences. Another challenge for LT2T generation is the _diversity_ of generated text. Natural Language Generation (NLG) encourages the diverse output of statements over a single input, as it provides various perspectives on the data and offers users more choices. In LT2T generation, requirements for diversity naturally emerge from the need to apply different logical operations to extract different levels of table information. However, current methods Chen et al. (2021); Nan et al. (2022); Liu et al. (2022a); Zhao et al. (2022b) that address issues of unfaithfulness have overlooked the importance of diversity. As shown in Figure 1, multiple statements generated using current methods Nan et al. (2022) might only cover information from the same table region or logical operation. Such issues related to lack of diversity could limit the deployment of LT2T models in the real world. In this work, we attribute _unfaithfulness_ and lack of _diversity_ to the absence of _controllability_ over generation. Specifically, due to the large number of combinations of different logical operations and table regions, the space of factually correct statements is exponentially large. 
However, LogicNLG uses the whole table as the input, without providing annotations for any other explicit control attribute. As a result, it is difficult for neural models to make a favorable choice of logical selections based solely on the table input, and the generation is uncontrollable. We believe such _uncontrollability_ leads to the unfaithfulness and lack-of-diversity issues.

This work proposes LoFT, a framework that utilizes logic forms as mediators to enable _controllable_ LT2T generation. Logic forms Chen et al. (2020d, b) are widely used to retrieve evidence and explain the reasoning behind table fact verification Yang et al. (2020); Yang and Zhu (2021); Ou and Liu (2022). In this work, logic forms are used as: 1) fact verifiers to ensure the factual correctness of each generated sentence; and 2) content planners to control which logical operation and table region to use during generation. Experimental results show that LoFT surpasses previous methods in faithfulness and diversity simultaneously.

## 2 Related Work

#### Logical Table-to-Text (LT2T) Generation

LogicNLG Chen et al. (2020a) is tasked with generating logically faithful sentences from tables. To improve the faithfulness of generated statements, Nan et al. (2022) trained a system both as a generator and a faithfulness discriminator with additional replacement detection and unlikelihood learning tasks. Liu et al. (2022a) pre-trained a model on a synthetic corpus of table-to-logic-form generation. Zhao et al. (2022b) demonstrated that the faithfulness of LT2T can be improved by pre-training a generative language model over synthetic Table QA examples. However, these methods overlook the importance of diversity in T2T generation, and might generate multiple statements that cover the same table regions or reasoning operations. Previous methods in NLG proposed to improve diversity by modifying the decoding techniques Li et al. (2016).
However, these approaches degrade faithfulness as measured against baselines Perlitz et al. (2022). To enable controllable generation and improve diversity, Perlitz et al. (2022) used the logical types of statements as a control. However, such methods still suffer from problems related to unfaithfulness, and may generate statements covering limited table regions. This work proposes to leverage the logic form as a fact checker and content planner to control LT2T generation, which tackles the faithfulness and diversity challenges at the same time.

#### Table Fact Verification via Logic Form

Logic forms are widely used in table fact verification Chen et al. (2020b). Specifically, given an input statement, the model Yang et al. (2020); Yang and Zhu (2021); Ou and Liu (2022) first translates it into a logic form. The logic form is then executed over the table and returns true/false as the entailment label for the given statement. While several works Chen et al. (2020d); Shu et al. (2021); Liu et al. (2021) focused on generating fluent statements from logic forms, the utilization of logic forms to benefit LT2T generation is still unexplored.

(a) LoFT training stage. (b) LoFT inference stage.

Figure 2: The illustration of LoFT. (a) During the training stage, the SASP model is first applied to translate each statement in the LogicNLG training set into a logic form. Then LoFT is trained to generate the reference statement given the translated logic form and serialized table data. (b) During the inference stage, given each table, the logic form synthesis pipeline is first applied to synthesize candidate logic forms that cover different table regions and logical operations. LoFT is applied to generate statements for each candidate logic form. Then a statement verifier is used to filter out potentially unfaithful statements. As a result, LoFT can generate a diverse set of faithful statements covering different table regions and reasoning operations.
For each table in the LogicNLG test set, we randomly sampled five candidate statements for evaluation.

## 3 LoFT

This section first introduces the logic forms utilized, and then delves into the training and inference processes of LoFT. We also explain how the use of logic forms can enhance both the faithfulness and diversity of LT2T generation.

### 3.1 Logic Form Implementation

Logic forms are widely used to retrieve evidence and explain the reasoning behind table fact verification. We use the same implementation as Chen et al. (2020d), which covers 8 types of the most common logical operations (e.g., count, aggregation) used to describe a structured table. Each logical operation corresponds to several Python-based functions. For example, the definition of the function all_greater(view, header, value) under the “majority” category is: check whether all the values under the header column are greater than value, with the scope (i.e., view) being all or a subset of table rows. The complete list of logical operation types and corresponding function definitions is shown in Table 4 in the Appendix.

### 3.2 LoFT Training

#### Training Task Formulation

Given the serialized tabular data with selected columns as $T$, the training objective of LoFT is to generate a sentence $\bm{y}=(y_{1},y_{2},\dots,y_{n})$ that is both fluent and faithful, with the translated logic form $l$ as control:

$\bm{y}=\mathrm{argmax}\prod_{i=1}^{n}P(y_{i}|y_{<i},T,\,l;\,\theta)$ (1)

where $\theta$ denotes the parameters of a seq2seq LM.

#### Training Dataset Collection

Since the LogicNLG dataset does not contain logic form annotations, we had to augment each statement in the training set with its corresponding logic form. To construct {statement, logic form} parallel data for the LogicNLG training set, we adapted SASP Ou and Liu (2022), the state-of-the-art model for the TabFact dataset, which leverages structure-aware semantic parsing over tables to translate a given statement into a logic form.
In this work, given an example in the LogicNLG training set, SASP was applied to generate its logic form, resulting in a total of 15,637 examples for LoFT training.

### 3.3 LoFT Inference

During the inference stage, for each given table, we first applied the logic form synthesis pipeline to synthesize multiple candidate logic forms Liu et al. (2022a). For each of these logic forms, the system generates its corresponding statement. The faithfulness of these statements was further checked by a verifier.

#### Logic Form Synthesis Pipeline

To synthesize a candidate set of logic forms paired with each supporting table, we applied a logic form synthesis pipeline similar to that of Liu et al. (2022a). We extracted templates of logic forms from the collected LoFT training dataset. Specifically, we categorized functions with similar definitions (e.g., max/min, greater/less) into smaller groups to obtain more abstract templates. Each function category corresponds to one unique table reasoning skill. For each template, we masked specific entities in the logic forms as typed placeholders (i.e., col to denote a column header, obj to denote an object). Finally, we obtained 45 different templates, covering 8 table logical operations. Table 4 shows the complete list of reasoning operations and corresponding function definitions. Given the table and each set of selected columns, the pipeline synthesizes a total of 20 candidate logic forms whose execution result over the table is True. To generate a candidate logic form, the pipeline first samples a template using a weighted-sampling technique, with weights equal to the template distribution in the LoFT training dataset (Section 3.2). The weighted sampling ensures that the generated candidate logic forms follow a distribution similar to that of LogicNLG. To instantiate the sampled template, a bottom-up sampling strategy is adopted to fill in each placeholder of the template and finally generate the logic form.
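To make the logic-form machinery concrete, the sketch below shows how a few executor functions and a bottom-up execution might look; the list-of-dicts table representation and all table values here are illustrative assumptions, and only all_greater is named in Section 3.1:

```python
def filter_eq(view, header, value):
    """Restrict the scope (view) to rows whose `header` cell equals `value`."""
    return [row for row in view if row[header] == value]

def count(view):
    """Number of rows in the current scope (a "count"-category function)."""
    return len(view)

def all_greater(view, header, value):
    """True iff every value under `header` in the scoped rows is greater than
    `value` (a "majority"-category function)."""
    return all(float(row[header]) > value for row in view)

# A synthesized logic form such as
#   all_greater(filter_eq(all_rows, "nation", "usa"), "wins", 2)
# is executed bottom-up over the table and returns True/False; only logic
# forms that evaluate to True are kept as controls for generation.
all_rows = [
    {"nation": "usa", "wins": "3"},
    {"nation": "usa", "wins": "4"},
    {"nation": "fiji", "wins": "1"},
]
result = all_greater(filter_eq(all_rows, "nation", "usa"), "wins", 2)
```

Because each function has unambiguous semantics, the execution result of a candidate logic form directly verifies the fact it expresses, which is what makes logic forms usable as both fact verifiers and content planners.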
#### Statement Generation & Verification

Through the logic form synthesis pipeline, we obtained a large number of candidate logic forms. For each logic form, we used LoFT to generate the corresponding statement. The candidate statements might still contain some factual errors, so we applied an NLI-based verifier to filter out potentially unfaithful generations. Specifically, we used the TabFact Chen et al. (2020b) dataset to train a classifier, which adopts RoBERTa-base as the backbone. We fed each generated statement and its corresponding table into the classifier, and only kept those statements that were predicted as entailed. We then randomly sampled five statements as the output for each table in LogicNLG.

### 3.4 Enhancing LT2T via Logic Form Control

This subsection provides two perspectives to explain why logic forms can help improve both the faithfulness and diversity of LT2T generation.

#### Logic Form as Content Planner

Logic forms pass column or cell values as arguments, guiding the model to focus on relevant table regions. The function category of the logic form, such as count, helps the model better organize logical-level content planning.

#### Logic Form as Fact Verifier

Logic forms are defined with unambiguous semantics, and hence are reliable mediators for achieving faithful and controllable logical generation. During the inference stage, we synthesize candidate logic forms with 100% execution correctness. The sampled logic form serves as a fact verifier and conveys accurate logical-level facts for controllable LT2T generation.

| Model | BLEU-1/2/3 $\uparrow$ | Distinct-2 $\uparrow$ | s-BLEU-4 $\downarrow$ | SP-Acc $\uparrow$ | NLI-Acc $\uparrow$ | TAPEX-Acc $\uparrow$ |
|---|---|---|---|---|---|---|
| GPT2-TabGen Chen et al. (2020a) | 48.8/27.1/12.6 | 59.0 | 55.3 | 42.1 | 68.7 | 45.0 |
| GPT2-C2F Chen et al. (2020a) | 46.6/26.8/13.3 | 60.3 | 52.8 | 42.7 | 72.2 | 44.1 |
| DCVED$*$ Chen et al. (2021) | 49.5/28.6/15.3 | – | – | 43.9 | 76.9 | – |
| DEVTC$\ddagger$ Perlitz et al. (2022) | 51.3/30.6/16.3 | 73.7 | 21.3 | 44.3 | 77.9 | 55.6 |
| R2D2 Nan et al. (2022) | 51.8/32.4/18.6 | 60.1 | 51.5 | 50.8 | 85.6 | 60.2 |
| LoFT | 48.1/27.7/14.9 | 79.5 | 17.7 | 57.7 | 86.9 | 61.8 |

Table 1: Performance on the LogicNLG test set. BLEU-1/2/3 are surface-level metrics; Distinct-2 and self-BLEU-4 (s-BLEU-4) are diversity-level metrics; SP-Acc, NLI-Acc, and TAPEX-Acc are faithfulness-level metrics. ${\ddagger}$: results from our own implementation; $*$: code not released, so we use the results reported in the original paper. LoFT achieves great improvement on faithfulness and diversity.

| Diversity criteria | DEVTC Best $\uparrow$ | DEVTC Worst $\downarrow$ | R2D2 Best $\uparrow$ | R2D2 Worst $\downarrow$ | LoFT Best $\uparrow$ | LoFT Worst $\downarrow$ |
|---|---|---|---|---|---|---|
| Table Coverage | 8 | 16 | 5 | 20 | 29 | 5 |
| Reasoning Op | 19 | 1 | 2 | 37 | 24 | 2 |

Table 2: Number of times each system was selected as best or worst by majority vote (including ties). LoFT outperforms the other baselines in terms of diversity for both table coverage and reasoning operations.

| Model | Faithfulness $\uparrow$ (Agreement / $\kappa$) | Fluency $\uparrow$ (Agreement / $\kappa$) |
|---|---|---|
| DEVTC | 63.5 / 0.69 | 86.5 / 0.80 |
| R2D2 | 71.5 / 0.73 | 90.0 / 0.84 |
| LoFT | 75.0 / 0.76 | 88.0 / 0.81 |

Table 3: Human evaluation results on the criteria of faithfulness and fluency, with the overall agreement measured by Fleiss’ Kappa ($\kappa$) Fleiss (1971). LoFT has the best performance in terms of faithfulness, while achieving comparable performance in fluency.

## 4 Experimental Setup

We next discuss the evaluation metrics, baselines, and implementation details for the experiments.

### 4.1 Evaluation Metrics

We applied various automated evaluation metrics at different levels to evaluate the model performance from multiple perspectives.

#### Surface-level

Following Chen et al. (2020a), we used BLEU-1/2/3 to measure the consistency of generated statements with the reference.

#### Diversity-level

We used Distinct-$n$ Li et al. (2016) and self-BLEU-$n$ Zhu et al.
(2018) to measure the diversity of the five generated statements for each table. Distinct-$n$ is defined as the total number of distinct $n$-grams divided by the total number of tokens in the five generated statements; Self-BLEU-$n$ measures the average $n$-gram BLEU score between generated statements. We measured Distinct-$2$ and Self-BLEU-$4$ in our experiments.

#### Faithfulness-level

Similar to previous works Chen et al. (2020a); Nan et al. (2022); Liu et al. (2022a), we used a parsing-based evaluation metric (i.e., SP-Acc) and two NLI-based evaluation metrics (i.e., NLI-Acc and TAPEX-Acc) to measure the faithfulness of generation. SP-Acc directly extracts a meaning representation from the generated sentence and executes it against the table to verify its correctness. NLI-Acc and TAPEX-Acc use TableBERT Chen et al. (2020b) and TAPEX Liu et al. (2022b), respectively, as their backbones, and were fine-tuned on the TabFact dataset Chen et al. (2020b). Liu et al. (2022a) found that NLI-Acc is overly positive about the predictions, while TAPEX-Acc is more reliable for evaluating the faithfulness of generated sentences.

### 4.2 Baseline Systems

We implemented the following baseline systems for performance comparison: GPT2-TabGen Chen et al. (2020a) directly fine-tunes GPT-2 on the LogicNLG dataset; GPT2-C2F Chen et al. (2020a) first produces a template that determines the global logical structure, and then generates the statement conditioned on the template; DCVED Chen et al. (2021) applies a de-confounded variational encoder-decoder to reduce spurious correlations during LT2T generation training; DEVTC Perlitz et al. (2022) utilizes reasoning operation types as an explicit control to increase the diversity of LT2T generation; and R2D2 Nan et al. (2022) trains a generative language model both as a generator and as a faithfulness discriminator, with additional replacement detection and unlikelihood learning tasks, to enhance the faithfulness of LT2T generation.
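As a concrete reference, the Distinct-$n$ metric defined above can be computed with a short sketch (whitespace tokenization is an assumption here; the actual evaluation scripts may tokenize differently):

```python
def distinct_n(statements, n=2):
    """Distinct-n as defined above: number of distinct n-grams across the
    generated statements, divided by the total number of tokens.
    Higher values indicate more diverse output."""
    token_lists = [s.split() for s in statements]
    ngrams = set()
    for toks in token_lists:
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total_tokens = sum(len(toks) for toks in token_lists)
    return len(ngrams) / total_tokens if total_tokens else 0.0

# Five identical statements score low; varied statements score higher.
same = ["the team won 3 games"] * 5
varied = ["the team won 3 games", "japan hosted the most races",
          "2 drivers retired early", "the longest track is monza",
          "ferrari scored the most points"]
assert distinct_n(same, 2) < distinct_n(varied, 2)
```

Self-BLEU-$n$ is the complementary quantity: the average $n$-gram BLEU of each statement against the other four, so lower is more diverse.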
### 4.3 Implementation Details

Following Shu et al. (2021), we converted each logic form into a more human-readable form for both LoFT training and inference data. LoFT was implemented using the fairseq library Ott et al. (2019), with BART-Large Lewis et al. (2020) as the backbone. All experiments were conducted on a cluster of 8 NVIDIA RTX-A5000 24GB GPUs. Both LoFT and the statement verifier were trained for 5,000 steps with a batch size of 128. The best checkpoints were selected by validation loss.

## 5 Experimental Results

This section discusses the automated and human evaluation results of the different systems.

### 5.1 Main Results

Table 1 presents the results on LogicNLG. LoFT outperforms all the baselines on the criteria of diversity and faithfulness, and is the first model to achieve state-of-the-art results at both the faithfulness and diversity levels. It is worth noting that in the LogicNLG setting, a generated statement is allowed to cover a different table region or reasoning operation than the references, as long as it is fluent and factually correct. In such cases, however, the reference-based metrics will be low, which explains why the BLEU-1/2/3 scores of LoFT are lower than those of other models.

### 5.2 Human Evaluation

We conducted the human evaluation with four expert annotators using the following three criteria: (1) _Faithfulness_ (scoring 0 or 1): whether all facts contained in the generated statement are entailed by the table content; (2) _Diversity_ (voting for the best & worst): whether the five generated statements cover information from different table regions and use different reasoning operations; (3) _Fluency_ (scoring 0 or 1): whether the five generated statements are fluent and free of grammar mistakes. We chose R2D2 Nan et al. (2022) and DEVTC Perlitz et al. (2022) for comparison, as they achieved the best results in faithfulness and diversity, respectively. We sampled 50 tables from the LogicNLG test set.
For each table, we selected all five generated statements from each model's output. To ensure fairness, the model names were hidden from the annotators, and the display order of the three models was randomly shuffled. The human evaluation results show that LoFT delivers improvements in both faithfulness (Table 3) and diversity (Table 2), while achieving comparable performance in fluency (Table 3).

## 6 Conclusions

This work proposes LoFT, which utilizes logic forms as fact verifiers and content planners to enable controllable LT2T generation. Experimental results on LogicNLG demonstrate that LoFT delivers substantial improvements in both the diversity and the faithfulness of LT2T generation.

## Limitations

The first limitation of our approach is that LoFT does not explore long text generation Moosavi et al. (2021); it only supports the generation of multiple single sentences. To enable long text generation (i.e., generating a long paragraph that delivers various perspectives on the table data), a global content planner Su et al. (2021) would need to be designed to highlight which candidate sentences should be mentioned and in which order. Additionally, we believe that LoFT can also be applied to text generation over hybrid context with both textual and tabular data Chen et al. (2020c); Zhao et al. (2022a); Nakamura et al. (2022). The second limitation of our work is that the statement verifier discussed in Section 3.3 was trained on the same data as NLI-Acc and TAPEX-Acc. This might introduce some bias into the NLI-based metrics for faithfulness-level evaluation. In the future, we will adopt a more robust automated evaluation system Fabbri et al. (2021); Liu et al. (2022c) to comprehensively evaluate LT2T model performance from different perspectives. Moreover, we applied the SASP model Ou and Liu (2022) to convert statements into logic forms (Section 3.2). Some converted logic forms may be inconsistent with the original statement.
We believe that future work could incorporate the Logic2Text Chen et al. (2020d) dataset into the training data to further improve LoFT's performance.

## Ethical Consideration

We used the LogicNLG Chen et al. (2020a) dataset for training and inference. LogicNLG is publicly available under the MIT license (https://opensource.org/licenses/MIT) and widely used in NLP research and industry.

## References

* Chen et al. (2020a) Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 7929–7942, Online. Association for Computational Linguistics.
* Chen et al. (2020b) Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020b. TabFact: A large-scale dataset for table-based fact verification. In _International Conference on Learning Representations_.
* Chen et al. (2020c) Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Wang. 2020c. HybridQA: A dataset of multi-hop question answering over tabular and textual data. _Findings of EMNLP 2020_.
* Chen et al. (2021) Wenqing Chen, Jidong Tian, Yitian Li, Hao He, and Yaohui Jin. 2021. De-confounded variational encoder-decoder for logical table-to-text generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 5532–5542, Online. Association for Computational Linguistics.
* Chen et al. (2020d) Zhiyu Chen, Wenhu Chen, Hanwen Zha, Xiyou Zhou, Yunkai Zhang, Sairam Sundaresan, and William Yang Wang. 2020d. Logic2Text: High-fidelity natural language generation from logical forms.
* Fabbri et al. (2021) Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021.
SummEval: Re-evaluating summarization evaluation. _Transactions of the Association for Computational Linguistics_, 9:391–409.
* Fleiss (1971) Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. _Psychological Bulletin_, 76(5):378.
* Lebret et al. (2016) Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_, pages 1203–1213, Austin, Texas. Association for Computational Linguistics.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 7871–7880, Online. Association for Computational Linguistics.
* Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In _NAACL 2016_.
* Liu et al. (2022a) Ao Liu, Haoyu Dong, Naoaki Okazaki, Shi Han, and Dongmei Zhang. 2022a. PLOG: Table-to-logic pretraining for logical table-to-text generation. In _EMNLP 2022_.
* Liu et al. (2021) Ao Liu, Congjian Luo, and Naoaki Okazaki. 2021. Improving logical-level natural language generation with topic-conditioned data augmentation and logical form generation. _arXiv preprint arXiv:2112.06240_.
* Liu et al. (2022b) Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022b. TAPEX: Table pre-training via learning a neural SQL executor. In _International Conference on Learning Representations_.
* Liu et al. (2022c) Yixin Liu, Alexander R.
Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2022c. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation.
* Moosavi et al. (2021) Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, and Iryna Gurevych. 2021. SciGen: A dataset for reasoning-aware text generation from scientific tables. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)_.
* Nakamura et al. (2022) Kai Nakamura, Sharon Levy, Yi-Lin Tuan, Wenhu Chen, and William Yang Wang. 2022. HybriDialogue: An information-seeking dialogue dataset grounded on tabular and textual data. In _Findings of the Association for Computational Linguistics: ACL 2022_, pages 481–492, Dublin, Ireland. Association for Computational Linguistics.
* Nan et al. (2022) Linyong Nan, Lorenzo Jaime Flores, Yilun Zhao, Yixin Liu, Luke Benson, Weijin Zou, and Dragomir Radev. 2022. R2D2: Robust data-to-text with replacement detection. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 6903–6917, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
* Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)_, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ou and Liu (2022) Suixin Ou and Yongmei Liu. 2022. Learning to generate programs for table fact verification via structure-aware semantic parsing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 7624–7638, Dublin, Ireland.
Association for Computational Linguistics.
* Parikh et al. (2020) Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 1173–1186, Online. Association for Computational Linguistics.
* Perlitz et al. (2022) Yotam Perlitz, Liat Ein-Dor, Dafna Sheinwald, Noam Slonim, and Michal Shmueli-Scheuer. 2022. Diversity enhanced table-to-text generation via type control. _arXiv preprint arXiv:2205.10938_.
* Puduppully et al. (2019) Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with entity modeling. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2023–2035, Florence, Italy. Association for Computational Linguistics.
* Shu et al. (2021) Chang Shu, Yusen Zhang, Xiangyu Dong, Peng Shi, Tao Yu, and Rui Zhang. 2021. Logic-consistency text generation from semantic parses. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_, pages 4414–4426, Online. Association for Computational Linguistics.
* Su et al. (2021) Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, and Nigel Collier. 2021. Plan-then-generate: Controlled data-to-text generation via planning. In _Findings of the Association for Computational Linguistics: EMNLP 2021_, pages 895–909, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Wiseman et al. (2017) Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 2253–2263, Copenhagen, Denmark. Association for Computational Linguistics.
* Yang et al. (2020) Xiaoyu Yang, Feng Nie, Yufei Feng, Quan Liu, Zhigang Chen, and Xiaodan Zhu. 2020.
Program enhanced fact verification with verbalization and graph attention network. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 7810–7825, Online. Association for Computational Linguistics.
* Yang and Zhu (2021) Xiaoyu Yang and Xiaodan Zhu. 2021. Exploring decomposition for table-based fact verification. In _Findings of the Association for Computational Linguistics: EMNLP 2021_, pages 1045–1052, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Zhao et al. (2022a) Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. 2022a. MultiHiertt: Numerical reasoning over multi hierarchical tabular and textual data. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 6588–6600, Dublin, Ireland. Association for Computational Linguistics.
* Zhao et al. (2022b) Yilun Zhao, Linyong Nan, Zhenting Qi, Rui Zhang, and Dragomir Radev. 2022b. ReasTAP: Injecting table reasoning skills during pre-training via synthetic reasoning examples. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 9006–9018, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
* Zhu et al. (2018) Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, SIGIR '18, pages 1097–1100, New York, NY, USA. Association for Computing Machinery.
## Appendix A

Reasoning Op | Function Category | Name | Arguments | Output | Description
---|---|---|---|---|---
Unique | UNIQUE | only | view | bool | returns whether there is exactly one row in the view
Aggregation | AGGREGATION | avg/sum | view, header string | number | returns the average/sum of the values under the header column
Count | COUNT | count | view | number | returns the number of rows in the view
Ordinal | ORD_ARG | nth_argmax/nth_argmin | view, header string | view | returns the row with the n-th max/min value in the header column
Ordinal | ORDINAL | nth_max/nth_min | view, header string | number | returns the n-th max/n-th min of the values under the header column
Ordinal | SUPER_ARG | argmax/argmin | view, header string | view | returns the row with the max/min value in the header column
Comparative | COMPARE | eq/not_eq | object, object | bool | returns whether the two arguments are equal
Comparative | COMPARE | round_eq | object, object | bool | returns whether the two arguments are roughly equal under a certain tolerance
Comparative | COMPARE | greater/less | object, object | bool | returns whether the 1st argument is greater/less than the 2nd argument
Comparative | COMPARE | diff | object, object | object | returns the difference between the two arguments
Majority | MAJORITY | all_eq/not_eq | view, header string, object | bool | returns whether all the values under the header column are equal/not equal to the 3rd argument
Majority | MAJORITY | all_greater/less | view, header string, object | bool | returns whether all the values under the header column are greater/less than the 3rd argument
Majority | MAJORITY | all_greater_eq/less_eq | view, header string, object | bool | returns whether all the values under the header column are greater/less than or equal to the 3rd argument
Majority | MAJORITY | most_eq/not_eq | view, header string, object | bool | returns whether most of the values under the header column are equal/not equal to the 3rd argument
Majority | MAJORITY | most_greater/less | view, header string, object | bool | returns whether most of the values under the header column are greater/less than the 3rd argument
Majority | MAJORITY | most_greater_eq/less_eq | view, header string, object | bool | returns whether most of the values under the header column are greater/less than or equal to the 3rd argument
Conjunction | FILTER | filter_eq/not_eq | view, header string, object | view | returns the subview whose values under the header column are equal/not equal to the 3rd argument
Conjunction | FILTER | filter_greater/less | view, header string, object | view | returns the subview whose values under the header column are greater/less than the 3rd argument
Conjunction | FILTER | filter_greater_eq/less_eq | view, header string, object | view | returns the subview whose values under the header column are greater/less than or equal to the 3rd argument
Conjunction | OTHER | filter_all | view, header string | view | returns the view itself, for the case of describing the whole table
Other | OTHER | hop | view, header string | object | returns the value under the header column of the row
Other | OTHER | and | bool, bool | bool | returns the boolean conjunction of the two arguments

Table 4: A complete list of function definitions for the logic forms (similar to Chen et al. (2020d)).
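To make the semantics in Table 4 concrete, here is a minimal Python sketch of a few of the functions, modeling a view as a list of row dictionaries (the row representation and the example table are assumptions of this sketch, not part of the released implementation):

```python
def count(view):
    """COUNT: number of rows in the view."""
    return len(view)

def only(view):
    """UNIQUE: whether there is exactly one row in the view."""
    return len(view) == 1

def filter_eq(view, header, obj):
    """FILTER: subview whose values under `header` equal `obj`."""
    return [row for row in view if row[header] == obj]

def argmax(view, header):
    """SUPER_ARG: the row with the max value in the `header` column."""
    return max(view, key=lambda row: row[header])

def hop(row, header):
    """OTHER: the value under the `header` column of a single row."""
    return row[header]

table = [
    {"team": "a", "wins": 3},
    {"team": "b", "wins": 7},
    {"team": "c", "wins": 7},
]
# e.g. the logic form eq(count(filter_eq(T, wins, 7)), 2) executes to True
assert count(filter_eq(table, "wins", 7)) == 2
assert hop(argmax(table, "wins"), "wins") == 7
assert only(filter_eq(table, "team", "a"))
```

During inference, only logic forms whose execution result over the table is True are kept as candidates.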
$\begin{split}F\left[x,\,y\right]\sim&\left\\{\begin{array}[]{lll}\displaystyle\frac{q-1}{x_{\text{IR}}^{1-q}}\,\frac{1}{x^{q}}&\text{for}&x_{0}\sqrt{\xi}=x_{\text{IR}}\leq x\leq y\\\\[15.0pt] \hskip 10.11775pt0&&\text{otherwise}\leavevmode\nobreak\ .\end{array}\right.\end{split}$ (63)

In terms of the arguments in Eq. (62), the finite support of the function $F$ is defined by the interval $(x_{0}^{2}\,c_{1}\log y^{\prime})^{1/2}\leq(y^{\prime}/y)^{1/2}x\leq y^{\prime}$ and, subsequently, it defines the integration range $y_{1}\leq y^{\prime}\leq y_{2}$ in which $F$ in Eq. (62) is nonvanishing. There are two choices of $(y_{1},\,y_{2})$ depending on the value of $y^{-1/2}x=\frac{k}{\sqrt{m_{r}H}}$. When $y^{-1/2}x$ is large (the large momentum region), the integration is done over $x^{2}/y=y_{1}\leq y^{\prime}\leq y_{2}=y$, where $x^{2}/y$ is the intersection between the two curves $y^{\prime}$ and $(y^{\prime}/y)^{1/2}x$. The integration in Eq. (62) can be done straightforwardly:

$\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)&=8\pi f_{a}^{2}c_{1}m_{r}(x_{0}^{2}c_{1})^{\frac{q-1}{2}}y^{\frac{q-3}{2}}\frac{1}{x^{q}}\left(\frac{q-1}{2}\right)^{-\frac{q+3}{2}}\\\\[5.0pt] &\quad\times\left[\Gamma\left(\frac{q+5}{2},\,\frac{q-1}{2}\log\frac{x^{2}}{y}\right)-\Gamma\left(\frac{q+5}{2},\,\frac{q-1}{2}\log y\right)\right]\leavevmode\nobreak\ ,\end{split}$ (64)

where $\Gamma(s,x)=\int_{x}^{\infty}dt\,t^{s-1}e^{-t}$ is the incomplete Gamma function. Note that $x^{2}/y=\frac{k^{2}}{m_{r}H}$ for fixed momentum $k$. When $\frac{m_{r}}{H}$ is taken to be large, the second argument of the $\Gamma$ function in Eq. (64) becomes large as well. Since $\Gamma(s,\,x)\rightarrow x^{s-1}e^{-x}$ as $x\rightarrow\infty$, the result in Eq.
(64) can be approximated as, in the large $m_{r}/H$ limit,

$\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)\approx\frac{8H\mu_{\text{eff}}\sqrt{\xi}}{x_{0}}\left(\frac{Hx_{0}\sqrt{\xi}}{k}\right)^{q}f\left(\frac{m_{r}}{H},\,\frac{k}{m_{r}}\right)\leavevmode\nobreak\ ,\end{split}$ (65)

where $\pi f_{a}^{2}\log y\sim\mu_{\text{eff}}$ and $c_{1}\log y\sim\xi$, and the function $f(y,\,u)$ is defined as

$f(y,\,u)=\left(\frac{\log(yu^{2})}{u^{2}\log y}\right)^{\frac{q+3}{2}}u^{4}-1\leavevmode\nobreak\ .$ (66)

Since $f(y,\,1)=0$ and $\partial f/\partial u<0$ for $\exp\frac{q+3}{2(q-1)}<y^{1/2}u=\frac{k}{\sqrt{m_{r}H}}$, the function $f(y,\,u)$ is a positive, decreasing function of $u$ for $u<1$. Therefore, we see that $\partial\rho_{a}/\partial k$ decays rapidly, faster than $\sim k^{-q}$, for momenta $k\gtrsim e^{\frac{q+3}{2(q-1)}}\sqrt{m_{r}H}$, and its contribution to the axion abundance will be accordingly suppressed. When $y^{-1/2}x$ is small (the low momentum region), the integration is done over $-\frac{x_{0}^{2}c_{1}y}{x^{2}}W_{-1}(-\frac{x^{2}}{x_{0}^{2}c_{1}y})=y_{1}\leq y^{\prime}\leq y_{2}=y$, where $y_{1}$, in terms of the Lambert $W$ function, is the intersection between the two curves $(y^{\prime}/y)^{1/2}x$ and $(x_{0}^{2}c_{1}\log y^{\prime})^{1/2}$. The integration in Eq.
(62) can also be done straightforwardly and it is given by $\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)&=8\pi f_{a}^{2}c_{1}m_{r}(x_{0}^{2}c_{1})^{\frac{q-1}{2}}y^{\frac{q-3}{2}}\frac{1}{x^{q}}\left(\frac{q-1}{2}\right)^{-\frac{q+3}{2}}\\\\[5.0pt] &\times\left[\Gamma\left(\frac{q+5}{2},\,\frac{q-1}{2}\log\left(-\frac{x_{0}^{2}c_{1}y}{x^{2}}W_{-1}\left(-\frac{x^{2}}{x_{0}^{2}c_{1}y}\right)\right)\right)-\Gamma\left(\frac{q+5}{2},\,\frac{q-1}{2}\log y\right)\right]\leavevmode\nobreak\ ,\\\\[5.0pt] &=8\pi f_{a}^{2}c_{1}m_{r}(x_{0}^{2}c_{1})^{\frac{q-1}{2}}y^{\frac{q-3}{2}}\frac{1}{x^{q}}\left(\frac{q-1}{2}\right)^{-\frac{q+3}{2}}\\\\[5.0pt] &\times\left[\Gamma\left(\frac{q+5}{2},\,-\frac{q-1}{2}W_{-1}\left(-\frac{x^{2}}{x_{0}^{2}c_{1}y}\right)\right)-\Gamma\left(\frac{q+5}{2},\,\frac{q-1}{2}\log y\right)\right]\leavevmode\nobreak\ ,\end{split}$ (67) where $\log(-wW_{k}(-w^{-1}))=-W_{k}(-w^{-1})$ was used in the second relation. It can be easily shown that $-W_{-1}(-w^{-1})$ becomes large at large $y$ for the low momentum. Using $\Gamma(s,\,x)\rightarrow x^{s-1}e^{-x}$ as $x\rightarrow\infty$ as before, the expression in Eq. (67) can similarly be approximated as, in the large $\log\frac{m_{r}}{H}$ limit, $\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)\approx\frac{8H^{2}\mu_{\text{eff}}\xi}{k}\left[\left(\frac{-W_{-1}\left(-\frac{H}{m_{r}}\log\frac{m_{r}}{H}\left(\frac{k}{x_{0}H\sqrt{\xi}}\right)^{2}\right)}{\log\frac{m_{r}}{H}}\right)^{2}-\left(\frac{k}{x_{0}H\sqrt{\xi}}\right)^{1-q}\right]\leavevmode\nobreak\ ,\end{split}$ (68) where the identity $\exp[-W_{k}(-w^{-1})]=-wW_{k}(-w^{-1})$ was used and $\pi f_{a}^{2}\log y\sim\mu_{\text{eff}}$ and $c_{1}\log y\sim\xi$ were used for the neat expression. 
The above expression can be further simplified by approximating the Lambert function,

$\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)\approx\frac{8H^{2}\mu_{\text{eff}}\xi}{k}\left[\left(1-2\frac{\log\frac{k}{x_{0}H\sqrt{\xi}}}{\log\frac{m_{r}}{H}}\right)^{2}-\left(\frac{k}{x_{0}H\sqrt{\xi}}\right)^{1-q}\right]\leavevmode\nobreak\ .\end{split}$ (69)

From Eq. (69), we see that the axion spectrum $\partial\rho_{a}/\partial k$ scales as $\sim k^{-1}$ in the low momentum region $x_{0}H\sqrt{\xi}\lesssim k\lesssim z_{k}\sqrt{m_{r}H}$ (here $z_{k}=(-\frac{x_{0}^{2}c_{1}}{2}W_{-1}(-\frac{2}{x_{0}^{2}c_{1}}))^{1/4}$ can be read off from the equality $-\frac{x_{0}^{2}c_{1}y}{x^{2}}W_{-1}(-\frac{x^{2}}{x_{0}^{2}c_{1}y})=\frac{x^{2}}{y}$, provided that $\frac{x^{2}}{x_{0}^{2}c_{1}y}<e^{-1}$, for our relevant choice of parameters, and it is found to be an order-one constant; the large and low momentum regions are separated below and above $\sim\sqrt{x_{0}m_{r}H}$ when the IR cutoff of $H$ is used). Comparing with [14], the net effect of the modification of the IR cutoff from $x_{0}H$ to $x_{0}H\sqrt{\xi}$ is equivalent to shifting $k^{0}=x_{0}H$ (in their notation) to $k^{0}=x_{0}H\sqrt{\xi}$ without changing the overall factor. Another quantity that the modified IR cutoff can affect is the average axion field value, $\langle a^{2}\rangle^{1/2}/f_{a}$, where $\langle a^{2}(t)\rangle=\int dkk^{-2}(\partial\rho_{a}(k,\,t)/\partial k)$. Using our result in Eq. (69), we can evaluate $\langle a^{2}(t)\rangle$ (or use simple power counting to get the leading term) with the modified cutoff $k_{\text{IR}}=x_{0}H\sqrt{\xi}$, which gives rise to $\langle a^{2}(t)\rangle\approx\frac{8\mu_{\text{eff}}}{x_{0}^{2}}(1/2-(1+q)^{-1}-\log^{-1}\frac{m_{r}}{H}+\log^{-2}\frac{m_{r}}{H})$. That is, $\langle a^{2}(t)\rangle\sim 4\mu_{\text{eff}}$ at late times for $q>1$, as opposed to $\langle a^{2}(t)\rangle\sim 4\mu_{\text{eff}}\xi$ when the IR cutoff is $\sim H$.
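The quoted value of $\langle a^{2}(t)\rangle$ can be checked at leading order by integrating Eq. (69) with the substitution $u=\log\frac{k}{x_{0}H\sqrt{\xi}}$ (a sketch, with the UV end of the integral extended to infinity since the integrand is exponentially suppressed there):

```latex
\begin{aligned}
\langle a^{2}(t)\rangle
&=\int_{x_{0}H\sqrt{\xi}}\frac{dk}{k^{2}}\,\frac{\partial\rho_{a}}{\partial k}
\approx\frac{8\mu_{\text{eff}}}{x_{0}^{2}}\int_{0}^{\infty}du\,e^{-2u}
\left[\left(1-\frac{2u}{\log\frac{m_{r}}{H}}\right)^{2}-e^{-(q-1)u}\right]\\
&=\frac{8\mu_{\text{eff}}}{x_{0}^{2}}
\left(\frac{1}{2}-\frac{1}{q+1}-\frac{1}{\log\frac{m_{r}}{H}}+\frac{1}{\log^{2}\frac{m_{r}}{H}}\right),
\end{aligned}
```

using $\int_{0}^{\infty}e^{-2u}\,du=\tfrac{1}{2}$, $\int_{0}^{\infty}u\,e^{-2u}\,du=\tfrac{1}{4}$, and $\int_{0}^{\infty}u^{2}e^{-2u}\,du=\tfrac{1}{4}$, in agreement with the expression above.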
The ratio $\langle a^{2}\rangle^{1/2}/f_{a}$ is reduced by a factor of $\sqrt{\xi}$. While $\langle a^{2}(t_{\star})\rangle\sim 4\mu_{\text{eff}\star}\gg 1$ is still expected around the time $t_{\star}$ of the QCD crossover based on our simulation, it may be informative to consider the opposite case, with $\langle a^{2}(t)\rangle^{1/2}/f_{a}\ll 1$, and see how the IR cutoff modified by $\sqrt{\xi}$ changes the parametrics of the axion abundance. In this hypothetical situation, the nonlinearities due to the axion potential would be suppressed, and the axion number density at later times $t>t_{\star}$ would simply be given by $n_{a}^{\text{str}}(t)=(H(t)/H_{\star})^{3/2}\,n_{a}^{\text{str}}(t_{\star})$, where $n_{a}^{\text{str}}(t_{\star})=\int dkk^{-1}(\partial\rho_{a}/\partial k)\approx 8H_{\star}\mu_{\text{eff}\star}\sqrt{\xi_{\star}}$, as opposed to $8H_{\star}\mu_{\text{eff}\star}\xi_{\star}$ when $k_{\text{IR}}\sim H$ was assumed. That is, the shifted cutoff suppresses the abundance by a factor of $\sqrt{\xi}$. We now return to our situation, in which a large $\langle a^{2}(t_{\star})\rangle^{1/2}/f_{a}$ is expected and the nonlinearities arising from the axion potential cannot be neglected.
The axions after time $t_{\star}$ still evolve as free relativistic fields, and the axion energy density at time $t>t_{\star}$ is accordingly redshifted; in the range $x_{0}\sqrt{\xi_{\star}H_{\star}H}<k<z_{k}\sqrt{m_{r}H}$, it is given by

$\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)\approx\frac{8H^{2}\mu_{\text{eff}\star}\xi_{\star}}{k}\left[\left(1-2\frac{\log\frac{k}{x_{0}\sqrt{\xi_{\star}H_{\star}H}}}{\log\frac{m_{r}}{H_{\star}}}\right)^{2}-\left(\frac{k}{x_{0}\sqrt{\xi_{\star}H_{\star}H}}\right)^{1-q}\right]\leavevmode\nobreak\ .\end{split}$ (70)

The evolution continues until the transition time, denoted by $t_{\ell}$, at which the axion energy density stored in the gradient terms becomes comparable to the axion potential, and at which point the axion energy density is assumed to be promptly converted into non-relativistic axions, namely

$\rho_{\text{IR}}(t_{\ell})=\int_{x_{0}\sqrt{\xi_{\star}H_{\star}H}}^{c_{m}m_{a}(t_{\ell})}dk\frac{\partial\rho_{a}}{\partial k}(k,\,t_{\ell})=c_{V}m_{a}^{2}(t_{\ell})f_{a}^{2}\leavevmode\nobreak\ ,$ (71)

where $c_{m}$ and $c_{V}$ are order-one parameters that have to be determined by numerical simulation. Axions with $k>c_{m}m_{a}$ will decay faster than those contributing to the dominant abundance. The relation in Eq. (71) leads to the condition (which differs from [14] by the definition of $\kappa$),

$8H^{2}_{\ell}\mu_{\text{eff}\star}\xi_{\star}\left[\log\kappa\left(1-2\frac{\log\kappa}{\log\frac{m_{r}}{H_{\star}}}+\frac{4}{3}\frac{\log^{2}\kappa}{\log^{2}\frac{m_{r}}{H_{\star}}}-\frac{1-\kappa^{1-q}}{q-1}\right)\right]=c_{V}m_{a}^{2}(t_{\ell})f_{a}^{2}\leavevmode\nobreak\ ,$ (72)

where $\kappa=\frac{c_{m}m_{a}(t_{\ell})}{x_{0}\sqrt{\xi_{\star}H_{\star}H}}$ and $H_{\ell}=H(t_{\ell})$.
Parametrizing the axion mass as $m_{a}(t)=H_{\star}(H_{\star}/H)^{\alpha/4}$ with $m_{a}(t_{\star})=H_{\star}$ and introducing [14]

$z\equiv\left(\frac{m_{a}(t_{\ell})}{H_{\star}}\right)^{1+\frac{6}{\alpha}}\leavevmode\nobreak\ ,$ (73)

we obtain, keeping only the first term, which is dominant in Eq. (72) at late times ($\log\gg 1$),

$\frac{8\mu_{\text{eff}\star}\xi_{\star}}{c_{V}f_{a}^{2}}\log\left(\frac{c_{m}}{x_{0}\sqrt{\xi_{\star}}}z^{\frac{\alpha+2}{\alpha+6}}\right)=z^{\frac{2(\alpha+4)}{\alpha+6}}\leavevmode\nobreak\ .$ (74)

Comparing with the result in [14], the modified momentum IR cutoff amounts to shifting $x_{0}$ to $x_{0}\sqrt{\xi_{\star}}$. The axion abundance at $t=t_{\ell}$ is estimated as $n_{a}^{\text{str}}(t_{\ell})=c_{n}\frac{\rho_{\text{IR}}(t_{\ell})}{m_{a}(t_{\ell})}=c_{n}c_{V}m_{a}(t_{\ell})f_{a}^{2}$, whereas the contribution from the misalignment with an order-one angle is $n_{a}^{\text{mis}}(t_{\ell})=c^{\prime}_{n}m_{a}(t_{\star})f_{a}^{2}(H_{\ell}/H_{\star})^{3/2}$. Expressing $m_{a}(t_{\ell})$ in terms of the solution $z$ using Eq. (73) and taking the ratio of the two different types of contributions, we obtain

$\frac{n_{a}^{\text{str}}(t_{\ell})}{n_{a}^{\text{mis}}(t_{\ell})}=\frac{c_{n}c_{V}}{c^{\prime}_{n}}z\leavevmode\nobreak\ ,$ (75)

where the solution $z$ of Eq. (74) is given by

$z=\left[-\frac{\alpha+2}{2(\alpha+4)}\frac{8\mu_{\text{eff}\star}\xi_{\star}}{c_{V}f_{a}^{2}}W_{-1}\left(-\frac{2(\alpha+4)}{\alpha+2}\frac{c_{V}f_{a}^{2}}{8\mu_{\text{eff}\star}\xi_{\star}}\left(\frac{c_{m}}{x_{0}\sqrt{\xi_{\star}}}\right)^{-\frac{2(\alpha+4)}{\alpha+2}}\right)\right]^{\frac{\alpha+6}{2(\alpha+4)}}\leavevmode\nobreak\ ,$ (76)

which is the same as that in [14] except for $x_{0}\rightarrow x_{0}\sqrt{\xi}$.
Using the relation $-W_{-1}(-w^{-1})=\log(w\log(w\log(\cdots)))$, the above ratio of the two contributions is finally given by (in a form similar to [14])

$\frac{n_{a}^{\text{str},q>1}(t_{\ell})}{n_{a}^{\text{mis}}(t_{\ell})}=\frac{c_{n}c_{V}}{c^{\prime}_{n}}\left[\frac{4\mu_{\text{eff}\star}\xi_{\star}}{c_{V}f_{a}^{2}}\frac{\alpha+2}{\alpha+4}\log\left(\frac{4\mu_{\text{eff}\star}\xi_{\star}}{c_{V}f_{a}^{2}}\frac{\alpha+2}{\alpha+4}\left(\frac{c_{m}}{x_{0}\sqrt{\xi_{\star}}}\right)^{\frac{2(\alpha+4)}{\alpha+2}}\log(\cdots)\right)\right]^{\frac{\alpha+6}{2(\alpha+4)}}\leavevmode\nobreak\ .$ (77)

Compared to [14], the final result differs by the replacement $x_{0}\rightarrow x_{0}\sqrt{\xi_{\star}}$ without changing the overall factors. Upon the replacements $\mu_{\text{eff}\star}\sim\pi f_{a}^{2}\log_{\star}$ and $\xi_{\star}\sim c_{1}\log_{\star}$ inside the $\log$ at late times, or $\log_{\star}\gg 1$, the shifted momentum cutoff modifies the overall coefficient of the axion abundance (and the sub-leading terms) as

$\frac{n_{a}^{\text{str},q>1}(t_{\ell})}{n_{a}^{\text{mis}}(t_{\ell})}\approx\frac{c_{n}c_{V}}{c^{\prime}_{n}}\left[\frac{4\mu_{\text{eff}\star}\xi_{\star}}{c_{V}f_{a}^{2}}\frac{\alpha}{\alpha+4}\left(\log\log\frac{m_{r}}{H_{\star}}+\mathcal{O}(\log\log\log\frac{m_{r}}{H_{\star}})\right)\right]^{\frac{1}{2}\left(1+\frac{2}{\alpha+4}\right)}\leavevmode\nobreak\ ,$ (78)

where the factor $\frac{\alpha}{\alpha+4}$ changes to $\frac{\alpha+2}{\alpha+4}$ when using $k_{\text{IR}}=x_{0}H$ as in [14]. For instance, this can cause a roughly 20% reduction of the overall rate for the typical choice $\alpha=8$.

### E.2 Estimation for $q=1$

We extend the previous discussion to the case with $q=1$ for the sake of completeness (and for a clearer comparison with the literature). We primarily present the estimate with IR cutoff $x_{\text{IR}}=x_{0}\sqrt{\xi}$, commenting on the case with $x_{\text{IR}}=x_{0}$ where relevant.
$x_{\text{UV}}=1$ is chosen as before. The instantaneous emission for $q=1$ is given by $\begin{split}F\left[x,\,y\right]\sim&\left\{\begin{array}[]{lll}\displaystyle\frac{1}{\log{\frac{y}{x_{\text{IR}}}}}\,\frac{1}{x}&\text{for}&x_{0}\sqrt{\xi}=x_{\text{IR}}\leq x\leq y\\[15pt] \hskip 10.11775pt0&&\text{otherwise}\leavevmode\nobreak\ .\end{array}\right.\end{split}$ (79) Since the finite support of $F$ is the same as in Section E.1, the integration in Eq. (62) can be done similarly for large and small momenta $k$, or $y^{-1/2}x$, over integration ranges similar to those in Section E.1. When $y^{-1/2}x$ is large, the integration gives $\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)&=\frac{2\pi f_{a}^{2}c_{1}m_{r}}{yx}\left(\log^{2}{y}-\log^{2}{\frac{x^{2}}{y}}\right)\approx\frac{8H^{2}\mu_{\text{eff}}\xi}{k}\frac{\log\frac{k}{H}}{\log\frac{m_{r}}{H}}\left(1-\frac{\log\frac{k}{H}}{\log\frac{m_{r}}{H}}\right)\leavevmode\nobreak\ ,\end{split}$ (80) where $\pi f_{a}^{2}\log{y}\sim\mu_{\text{eff}}$ and $c_{1}\log{y}\sim\xi$ were used. While $\partial\rho_{a}/\partial k$ scales as $\sim k^{-1}$ as expected, it is multiplied by the logarithmic suppression $\frac{\log{(k/H)}}{\log{(m_{r}/H)}}$. Similarly, when $y^{-1/2}x$ is small, the integration gives $\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)&=\frac{2\pi f_{a}^{2}c_{1}m_{r}}{yx}\left[\log^{2}{y}-\log^{2}\left(-\frac{{x_{0}}^{2}c_{1}y}{x^{2}}W_{-1}\left(-\frac{x^{2}}{{x_{0}}^{2}c_{1}y}\right)\right)\right]\\[5pt] &=\frac{2\pi f_{a}^{2}c_{1}m_{r}}{yx}\left[\log^{2}{y}-\left(-W_{-1}\left(-\frac{x^{2}}{{x_{0}}^{2}c_{1}y}\right)\right)^{2}\right]\leavevmode\nobreak\ ,\end{split}$ (81) where the relation $\log(-wW_{k}(-w^{-1}))=-W_{k}(-w^{-1})$ is used in the second line. Using the approximate relations $\pi f_{a}^{2}\log{y}\sim\mu_{\text{eff}}$ and $c_{1}\log{y}\sim\xi$ and approximating the Lambert function, the expression in Eq.
(81) is further simplified as $\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)\approx\frac{8H^{2}\mu_{\text{eff}}\xi}{k}\frac{\log{\frac{k}{x_{0}H\sqrt{\xi}}}}{\log{\frac{m_{r}}{H}}}\left(1-\frac{\log{\frac{k}{x_{0}H\sqrt{\xi}}}}{\log{\frac{m_{r}}{H}}}\right)\leavevmode\nobreak\ .\end{split}$ (82) The axion spectrum $\partial\rho_{a}/\partial k$ at low momenta also scales as $\sim k^{-1}$, as expected, with a logarithmic suppression. Unlike the situation with $q>1$, the case with $q=1$ shows the power-law behavior $\sim k^{-1}$ over the entire momentum range, as well as the logarithmic suppression. These properties remain the same even if $x_{\text{IR}}=x_{0}$ is taken: $\partial\rho_{a}/\partial k$ is unchanged except for the replacement of $x_{0}H\sqrt{\xi}$ with $x_{0}H$ and, subsequently, the splitting of the momentum range into $x_{0}H<k\leq\sqrt{x_{0}m_{r}H}$ (low) and $\sqrt{x_{0}m_{r}H}<k$ (high). Switching from the $q>1$ to the $q=1$ case also affects the average axion field value. Using the result in Eq. (82) with cutoff $k_{\text{IR}}=x_{0}H\sqrt{\xi}$, the average axion field value is given by $\langle a^{2}(t)\rangle\approx\frac{2\mu_{\text{eff}}}{x_{0}^{2}\log{\frac{m_{r}}{H}}}\quad\text{for}\quad x_{\text{IR}}=x_{0}\sqrt{\xi}\leavevmode\nobreak\ ,$ (83) that is, $\langle a^{2}(t)\rangle\sim 2\pi f_{a}^{2}$ at late times with no $\log{\frac{m_{r}}{H}}$ enhancement, unlike the $q>1$ case. However, this behavior is sensitive to the IR cutoff, and taking $x_{\text{IR}}=x_{0}$ instead gives $\langle a^{2}(t)\rangle\approx\frac{2\mu_{\text{eff}}\xi}{x_{0}^{2}\log{\frac{m_{r}}{H}}}\quad\text{for}\quad x_{\text{IR}}=x_{0}\leavevmode\nobreak\ ,$ (84) which has a $\log$ enhancement. This implies that the size of $\langle a^{2}(t_{\star})\rangle^{1/2}/f_{a}$ is not obvious a priori when $q=1$; an explicit numerical check may be necessary, as in [15].
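As a quick cross-check (not part of the paper), one can integrate the $q=1$ spectrum in Eq. (82) numerically, assuming the relation $\langle a^{2}\rangle\approx\int dk\,k^{-2}\,\partial\rho_{a}/\partial k$ for relativistic modes, which reproduces the coefficient quoted in Eq. (83) up to $1/\log$ corrections. All parameter values below are arbitrary illustrative choices.

```python
# Cross-check of Eq. (83) by numerically integrating the spectrum Eq. (82),
# assuming <a^2> = int dk k^-2 (d rho_a / dk).  Units H = f_a = 1;
# x0, xi, and log(m_r/H) are arbitrary illustrative choices.
import math

H, fa, x0 = 1.0, 1.0, 1.0
L = 60.0                          # log(m_r / H)
xi = 25.0
mu_eff = math.pi * fa**2 * L      # mu_eff ~ pi f_a^2 log(m_r/H)
k_ir = x0 * H * math.sqrt(xi)     # IR cutoff x_IR = x0 sqrt(xi)

def drho_dk(k):
    ell = math.log(k / k_ir)
    return (8 * H**2 * mu_eff * xi / k) * (ell / L) * (1 - ell / L)

# trapezoidal integration in ell = log(k / k_ir): dk/k^2 -> dell/k
n, ell_max = 20000, 40.0
h = ell_max / n
a2 = 0.0
for i in range(n + 1):
    k = k_ir * math.exp(i * h)
    weight = h if 0 < i < n else h / 2
    a2 += weight * drho_dk(k) / k

expected = 2 * mu_eff / (x0**2 * L)   # Eq. (83)
assert abs(a2 / expected - 1) < 0.05  # agrees up to O(1/log) corrections
```

Doing the integral analytically gives $\langle a^{2}\rangle=\frac{2\mu_{\text{eff}}}{x_{0}^{2}\log(m_{r}/H)}(1-1/\log(m_{r}/H))$, so the residual deviation of order $1/\log$ is expected.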
Here we discuss the parametric behavior of the axion number density for two cases: $\langle a^{2}(t_{\star})\rangle^{1/2}/f_{a}\ll 1$ and $\langle a^{2}(t_{\star})\rangle^{1/2}/f_{a}\gg 1$. In the former case, the scaling in $\log{\frac{m_{r}}{H_{\star}}}$ is sensitive to the IR cutoff. As was discussed in Section E.1, the axion number density at a later time $t>t_{\star}$ is given by $n_{a}^{\text{str}}(t)=(H(t)/H_{\star})^{3/2}\,n_{a}^{\text{str}}(t_{\star})$, where $n_{a}^{\text{str}}(t_{\star})=\int dk\,k^{-1}(\partial\rho_{a}/\partial k)\approx\frac{8\pi f_{a}^{2}H_{\star}}{x_{0}}\sqrt{\xi_{\star}}$ for $x_{\text{IR}}=x_{0}\sqrt{\xi}$. The abundance is enhanced by a factor of $\sqrt{\xi_{\star}}$ when $x_{\text{IR}}=x_{0}$, i.e. $n_{a}^{\text{str}}(t_{\star})\approx\frac{8\pi f_{a}^{2}H_{\star}\xi_{\star}}{x_{0}}$. However, this may not cause a large deviation in the abundance $n_{a}^{\text{str}}(t_{\star})$ since $\xi_{\star}$ is limited to $\xi_{\star}\lesssim\frac{x_{0}^{2}}{2\pi}$ to satisfy $\langle a^{2}(t_{\star})\rangle^{1/2}/f_{a}\lesssim 1$. Now we discuss the latter case, where $\langle a^{2}(t_{\star})\rangle^{1/2}/f_{a}$ is large. Similarly to the case for $q>1$, the axion radiation around the transition time $t_{\ell}$ is estimated by solving the relation in Eq. (71) with the axion energy density at time $t>t_{\star}$, $\begin{split}\frac{\partial\rho_{a}}{\partial k}(k,\,t)\approx\frac{8H^{2}\mu_{\text{eff}\star}\xi_{\star}}{k}\frac{\log{\frac{k}{x_{0}\sqrt{\xi_{\star}H_{\star}H}}}}{\log{\frac{m_{r}}{H_{\star}}}}\left(1-\frac{\log{\frac{k}{x_{0}\sqrt{\xi_{\star}H_{\star}H}}}}{\log{\frac{m_{r}}{H_{\star}}}}\right)\leavevmode\nobreak\ ,\end{split}$ (85) in the range $x_{0}\sqrt{\xi_{\star}H_{\star}H}<k<z_{k}\sqrt{m_{r}H}$. Solving the relation in Eq.
(71) and expressing it in terms of $\kappa=\frac{c_{m}m_{a}(t_{\ell})}{x_{0}\sqrt{\xi_{\star}H_{\star}H}}$ and $H_{\ell}=H(t_{\ell})$ gives $8\pi f_{a}^{2}H_{\ell}^{2}\xi_{\star}(\log\kappa)^{2}\left(\frac{1}{2}-\frac{1}{3}\frac{\log{\kappa}}{\log{\frac{m_{r}}{H_{\star}}}}\right)=c_{V}m_{a}^{2}(t_{\ell})f_{a}^{2}\leavevmode\nobreak\ .$ (86) With the same parametrization of the axion mass $m_{a}(t)$ and introducing the $z$ parameter as in Section E.1, we obtain, keeping only the first term in the parenthesis on the left-hand side of Eq. (86), $\left(\frac{4\pi\xi_{\star}}{c_{V}}\right)^{\frac{1}{2}}\log\left(\frac{c_{m}}{x_{0}\sqrt{\xi_{\star}}}z^{\frac{\alpha+2}{\alpha+6}}\right)=z^{\frac{\alpha+4}{\alpha+6}}\leavevmode\nobreak\ ,$ (87) where choosing $x_{\text{IR}}=x_{0}$ amounts to the replacement of $x_{0}\sqrt{\xi_{\star}}$ with $x_{0}$. As in Section E.1, the relative axion abundance with respect to the misalignment contribution can be estimated using the solution $z$ to Eq. (87), which is given by $z=\left[-\frac{\alpha+2}{\alpha+4}\left(\frac{4\pi\xi_{\star}}{c_{V}}\right)^{\frac{1}{2}}W_{-1}\left(-\frac{\alpha+4}{\alpha+2}\left(\frac{c_{V}}{4\pi\xi_{\star}}\right)^{\frac{1}{2}}\left(\frac{c_{m}}{x_{0}\sqrt{\xi_{\star}}}\right)^{-\frac{\alpha+4}{\alpha+2}}\right)\right]^{\frac{\alpha+6}{\alpha+4}}\leavevmode\nobreak\ .$ (88) Unlike the case for $q>1$, the above solution $z$ in Eq. (88) does not hold for an arbitrarily large $\xi_{\star}$ for a given fixed value of $c_{m}$. The value of $-W_{-1}(-w^{-1})$ exists only for $w\geq e$, and the solution in Eq. (88) makes sense only for $\xi_{\star}$ smaller than $\left(\frac{1}{e}\frac{\alpha+2}{\alpha+4}\sqrt{\frac{4\pi}{c_{V}}}\right)^{\alpha+2}\left(\frac{c_{m}}{x_{0}}\right)^{\alpha+4}$, denoted by $\xi_{\star\text{max}}$. The solution to Eq.
(88) does not exist for larger values $\xi_{\star}>\xi_{\star\text{max}}$ since the low-momentum range $x_{0}\sqrt{\xi_{\star}H_{\star}H}<k\leq c_{m}m_{a}(t)$ (which dominantly contributes to $\rho_{\text{IR}}$) becomes too narrow, so that $\rho_{\text{IR}}(t)$ cannot be large enough to satisfy Eq. (88). A large $\xi_{\star}$ may become compatible with the solution to Eq. (88) if $c_{m}$ is allowed to depend on $\xi_{\star}$. While the actual values of $c_{m}$ should be determined from numerical simulations, applying the property $-W_{-1}(-w^{-1})\geq 1$ to the solution $z$ in Eq. (88), we obtain $z\geq\left(\frac{\alpha+2}{\alpha+4}\left(\frac{4\pi\xi_{\star}}{c_{V}}\right)^{\frac{1}{2}}\right)^{\frac{\alpha+6}{\alpha+4}}\leavevmode\nobreak\ ,$ (89) where switching the inequality to an equality amounts to restoring the multiplicative $-W_{-1}(-w^{-1})$ function, which depends at most logarithmically on $w$. Finally, the axion abundance from strings in the scaling regime is estimated as $\frac{n_{a}^{\text{str},q=1}(t_{\ell})}{n_{a}^{\text{mis}}(t_{\ell})}\approx\frac{c_{n}c_{V}}{c^{\prime}_{n}}\left[\frac{\alpha+2}{\alpha+4}\left(\frac{4\pi\xi_{\star}}{c_{V}}\right)^{\frac{1}{2}}\right]^{1+\frac{2}{\alpha+4}}\leavevmode\nobreak\ ,$ (90) with a possible multiplicative factor scaling at most as $\log\log\frac{m_{r}}{H_{\star}}$.

## References

* [1] R. D. Peccei and H. R. Quinn, CP Conservation in the Presence of Instantons, Phys. Rev. Lett. 38 (1977) 1440–1443.
* [2] S. Weinberg, A New Light Boson?, Phys. Rev. Lett. 40 (1978) 223–226.
* [3] F. Wilczek, Problem of Strong $P$ and $T$ Invariance in the Presence of Instantons, Phys. Rev. Lett. 40 (1978) 279–282.
* [4] M. Dine and W. Fischler, The Not So Harmless Axion, Phys. Lett. B 120 (1983) 137–141.
* [5] J. Preskill, M. B. Wise, and F. Wilczek, Cosmology of the Invisible Axion, Phys. Lett. B 120 (1983) 127–132.
* [6] L. F. Abbott and P. Sikivie, A Cosmological Bound on the Invisible Axion, Phys. Lett. B 120 (1983) 133–136.
* [7] D. J.
E. Marsh, Axion Cosmology, Phys. Rept. 643 (2016) 1–79, [arXiv:1510.07633].
* [8] T. W. B. Kibble, Evolution of a system of cosmic strings, Nucl. Phys. B 252 (1985) 227. [Erratum: Nucl. Phys. B 261, 750 (1985)].
* [9] M. Gorghetto, E. Hardy, and G. Villadoro, Axions from Strings: the Attractive Solution, JHEP 07 (2018) 151, [arXiv:1806.04677].
* [10] M. Kawasaki, T. Sekiguchi, M. Yamaguchi, and J. Yokoyama, Long-term dynamics of cosmological axion strings, PTEP 2018 (2018), no. 9 091E01, [arXiv:1806.05566].
* [11] A. Vaquero, J. Redondo, and J. Stadler, Early seeds of axion miniclusters, JCAP 04 (2019) 012, [arXiv:1809.09241].
* [12] V. B. Klaer and G. D. Moore, Global cosmic string networks as a function of tension, JCAP 06 (2020) 021, [arXiv:1912.08058].
* [13] M. Buschmann, J. W. Foster, and B. R. Safdi, Early-Universe Simulations of the Cosmological Axion, Phys. Rev. Lett. 124 (2020), no. 16 161103, [arXiv:1906.00967].
* [14] M. Gorghetto, E. Hardy, and G. Villadoro, More axions from strings, SciPost Phys. 10 (2021), no. 2 050, [arXiv:2007.04990].
* [15] M. Buschmann, J. W. Foster, A. Hook, A. Peterson, D. E. Willcox, W. Zhang, and B. R. Safdi, Dark matter from axion strings with adaptive mesh refinement, Nature Commun. 13 (2022), no. 1 1049, [arXiv:2108.05368].
* [16] C. A. J. O’Hare, G. Pierobon, J. Redondo, and Y. Y. Y. Wong, Simulations of axionlike particles in the postinflationary scenario, Phys. Rev. D 105 (2022), no. 5 055025, [arXiv:2112.05117].
* [17] M. Hindmarsh, J. Lizarraga, A. Lopez-Eiguren, and J. Urrestilla, Scaling Density of Axion Strings, Phys. Rev. Lett. 124 (2020), no. 2 021301, [arXiv:1908.03522].
* [18] M. Yamaguchi, M. Kawasaki, and J. Yokoyama, Evolution of axionic strings and spectrum of axions radiated from them, Phys. Rev. Lett. 82 (1999) 4578–4581, [hep-ph/9811311].
* [19] M. Yamaguchi, Scaling property of the global string in the radiation dominated universe, Phys. Rev. D 60 (1999) 103511, [hep-ph/9907506].
* [20] M. Yamaguchi, J.
Yokoyama, and M. Kawasaki, Evolution of a global string network in a matter dominated universe, Phys. Rev. D 61 (2000) 061301, [hep-ph/9910352].
* [21] M. Yamaguchi and J. Yokoyama, Quantitative evolution of global strings from the Lagrangian view point, Phys. Rev. D 67 (2003) 103514, [hep-ph/0210343].
* [22] T. Hiramatsu, M. Kawasaki, T. Sekiguchi, M. Yamaguchi, and J. Yokoyama, Improved estimation of radiated axions from cosmological axionic strings, Phys. Rev. D 83 (2011) 123531, [arXiv:1012.5502].
* [23] M. Kawasaki, K. Saikawa, and T. Sekiguchi, Axion dark matter from topological defects, Phys. Rev. D 91 (2015), no. 6 065014, [arXiv:1412.0789].
* [24] D. P. Bennett, The evolution of cosmic strings, Phys. Rev. D 33 (1986) 872. [Erratum: Phys. Rev. D 34, 3932 (1986)].
* [25] D. P. Bennett, Evolution of cosmic strings. 2., Phys. Rev. D 34 (1986) 3592.
* [26] R. A. Battye and E. P. S. Shellard, Global string radiation, Nucl. Phys. B 423 (1994) 260–304, [astro-ph/9311017].
* [27] R. A. Battye and E. P. S. Shellard, Axion string constraints, Phys. Rev. Lett. 73 (1994) 2954–2957, [astro-ph/9403018]. [Erratum: Phys. Rev. Lett. 76, 2203–2204 (1996)].
* [28] C. J. A. P. Martins and E. P. S. Shellard, String evolution with friction, Phys. Rev. D 53 (1996) 575–579, [hep-ph/9507335].
* [29] W. Zhang, A. Myers, K. Gott, A. Almgren, and J. Bell, AMReX: Block-Structured Adaptive Mesh Refinement for Multiphysics Applications, arXiv:2009.12009.
* [30] W. Zhang, A. Almgren, V. Beckner, J. Bell, J. Blaschke, C. Chan, M. Day, B. Friesen, K. Gott, D. Graves, M. Katz, A. Myers, T. Nguyen, A. Nonaka, M. Rosso, S. Williams, and M. Zingale, AMReX: a framework for block-structured adaptive mesh refinement, Journal of Open Source Software 4 (May, 2019) 1370.
* [31] K. Clough, P. Figueras, H. Finkel, M. Kunesch, E. A. Lim, and S. Tunyasuvunakool, GRChombo: Numerical Relativity with Adaptive Mesh Refinement, Class. Quant. Grav. 32 (2015), no. 24 245011, [arXiv:1503.03436].
* [32] A.
Drew and E. P. S. Shellard, Radiation from global topological strings using adaptive mesh refinement: Methodology and massless modes, Phys. Rev. D 105 (2022), no. 6 063517, [arXiv:1910.01718].
* [33] M. J. Berger and J. Oliger, Adaptive mesh refinement for hyperbolic partial differential equations, Journal of Computational Physics 53 (1984), no. 3 484–512.
* [34] L. Fleury and G. D. Moore, Axion dark matter: strings and their cores, JCAP 01 (2016) 004, [arXiv:1509.00026].
* [35] M. Yamaguchi and J. Yokoyama, Lagrangian evolution of global strings, Phys. Rev. D 66 (2002) 121303, [hep-ph/0205308].
* [36] K. Saikawa, J. Redondo, A. Vaquero, and M. Kaltschmidt, Spectrum of global string networks and the axion dark matter mass, arXiv:2401.17253.
* [37] T. W. B. Kibble, Topology of Cosmic Domains and Strings, J. Phys. A 9 (1976) 1387–1398.
* [38] T. W. B. Kibble, Some Implications of a Cosmological Phase Transition, Phys. Rept. 67 (1980) 183.
* [39] A. Vilenkin, Cosmic Strings, Phys. Rev. D 24 (1981) 2082–2089.
* [40] A. Albrecht and N. Turok, Evolution of Cosmic Strings, Phys. Rev. Lett. 54 (1985) 1868–1871.
* [41] D. P. Bennett and F. R. Bouchet, Evidence for a Scaling Solution in Cosmic String Evolution, Phys. Rev. Lett. 60 (1988) 257.
* [42] B. Allen and E. P. S. Shellard, Cosmic string evolution: a numerical simulation, Phys. Rev. Lett. 64 (1990) 119–122.
* [43] W. H. Press, B. S. Ryden, and D. N. Spergel, Dynamical Evolution of Domain Walls in an Expanding Universe, Astrophys. J. 347 (1989) 590–604.
* [44] J. N. Moore, E. P. S. Shellard, and C. J. A. P. Martins, On the evolution of Abelian-Higgs string networks, Phys. Rev. D 65 (2002) 023503, [hep-ph/0107171].
* [45] G. Vincent, N. D. Antunes, and M. Hindmarsh, Numerical simulations of string networks in the Abelian Higgs model, Phys. Rev. Lett. 80 (1998) 2277–2280, [hep-ph/9708427].
* [46] A. Dabholkar and J. M. Quashnock, Pinning Down the Axion, Nucl. Phys. B 333 (1990) 815–832.
* [47] R. L.
Davis, Goldstone Bosons in String Models of Galaxy Formation, Phys. Rev. D 32 (1985) 3172.
* [48] R. L. Davis, Cosmic Axions from Cosmic Strings, Phys. Lett. B 180 (1986) 225–230.
* [49] D. Harari and P. Sikivie, On the Evolution of Global Strings in the Early Universe, Phys. Lett. B 195 (1987) 361–365.
* [50] C. Hagmann, S. Chang, and P. Sikivie, Axions from string decay, Nucl. Phys. B Proc. Suppl. 72 (1999) 81–86, [hep-ph/9807428].
* [51] D. H. Lyth, Axions and inflation: Vacuum fluctuations, Phys. Rev. D 45 (May, 1992) 3394–3404.
* [52] P. Sikivie, Axion Cosmology, Lect. Notes Phys. 741 (2008) 19–50, [astro-ph/0610440].
* [53] T. Hiramatsu, M. Kawasaki, K. Saikawa, and T. Sekiguchi, Axion cosmology with long-lived domain walls, JCAP 01 (2013) 001, [arXiv:1207.3166].
* [54] T. Hiramatsu, M. Kawasaki, K. Saikawa, and T. Sekiguchi, Production of dark matter axions from collapse of string-wall systems, Phys. Rev. D 85 (2012) 105020, [arXiv:1202.5851]. [Erratum: Phys. Rev. D 86, 089902 (2012)].
* [55] P. Fox, A. Pierce, and S. D. Thomas, Probing a QCD string axion with precision cosmological measurements, hep-th/0409059.
* [56] G. Grilli di Cortona, E. Hardy, J. Pardo Vega, and G. Villadoro, The QCD axion, precisely, JHEP 01 (2016) 034, [arXiv:1511.02867].
# UHH-LT at SemEval-2020 Task 12: Fine-Tuning of Pre-Trained Transformer Networks for Offensive Language Detection Gregor Wiedemann Seid Muhie Yimam Language Technology Group Department of Informatics University of Hamburg, Germany {gwiedemann, yimam<EMAIL_ADDRESS>Chris Biemann ###### Abstract Fine-tuning of pre-trained transformer networks such as BERT yields state-of-the-art results for text classification tasks. Typically, fine-tuning is performed on task-specific training datasets in a supervised manner. One can also fine-tune in an unsupervised manner beforehand by further pre-training on the masked language modeling (MLM) task. Hereby, in-domain data for unsupervised MLM that resembles the actual classification target dataset allows for domain adaptation of the model. In this paper, we compare current pre-trained transformer networks with and without MLM fine-tuning on their performance for offensive language detection. Our MLM fine-tuned RoBERTa-based classifier officially ranks 1st in the SemEval 2020 Shared Task 12 for the English language. Further experiments with the ALBERT model even surpass this result. ## 1 Offensive Language Detection The automatic detection of hate speech, cyber-bullying, or aggressive and offensive language has become an actively studied task in natural language processing (NLP) in recent years. The offensive language detection (OLD) shared Task 6 of 2019’s International Workshop on Semantic Evaluation (SemEval) [Zampieri et al., 2019b] attracted submissions from more than 100 teams. The Offensive Language Identification Dataset (OLID) used in this shared task comprises three hierarchical classification sub-tasks: A) offensive language detection, B) categorization of offensive language, and C) offensive language target identification [Zampieri et al., 2019a]. For Task A, 14,100 Twitter tweets were manually labeled as either offensive (OFF) or not offensive (NOT).
Task B distinguishes the 4,640 offensive tweets from Task A into targeted (TIN) or general, untargeted (UNT) offensive language. Task C, finally, separates targeted insults into the three categories: groups (GRP), individuals (IND), and others (OTH). OffensEval 2020, the SemEval 2020 Offensive Language Detection Shared Task, does not provide its own manually labeled training set for the English language [Zampieri et al., 2020]. Instead, a large ‘weakly labeled’ dataset was published by the organizers, containing roughly nine million tweets. Each tweet has been automatically classified by an ensemble of five different supervised classifiers trained on the OLID dataset. The weakly labeled dataset contains the raw tweet (with user mentions replaced by a special ‘USER’ token) along with the average label probability and the variance of the five classifier predictions. Since there is no way that such weak labels themselves carry more useful information to a machine learning system than the original dataset on which the five classifiers were trained, we decided not to use any of the weakly labeled information. Instead, for our classification systems, we rely on the 2019 OLID dataset only. However, the OffensEval 2020 dataset is an ample source to build models using unsupervised learning, particularly for domain adaptation of a pre-trained language model such as BERT [Devlin et al., 2019] or its successors, which are based on the transformer neural network architecture. Unfortunately, training a transformer-based language model in an unsupervised manner is incredibly resource-consuming, making it impractical to learn from large datasets without access to larger GPU clusters or TPU hardware. Regarding this, the contribution of our paper is two-fold: 1. We evaluate to what extent different pre-trained transformer-based neural network models can be fine-tuned to detect offensive language and its sub-categories.
An ensemble based on the ALBERT [Lan et al., 2019] model achieves the best overall performance. 2. We study how an additional fine-tuning step with masked language modeling (MLM) of the best individual model, RoBERTa [Liu et al., 2019b], conducted on in-domain data affects the model performance. An ensemble of models trained with this strategy was submitted as our official contribution to the OffensEval 2020 shared task for the English language and achieved first place in the competition. ## 2 Related Work #### Offensive language detection: Nowadays, a number of public datasets are available to train machine classifiers for detecting English offensive language. Unfortunately, underlying data sources, category definitions, data sampling strategies, and annotation guidelines differ to a large extent between these datasets. Hence, results on different datasets are hardly comparable, and training sets usually cannot be combined to obtain a more robust classification system. ?), and ?) conducted insightful surveys on this rapidly growing field. ?), ?), and ?) recently organized shared tasks on the topic. Although winning systems can achieve striking prediction accuracy, OLD is far from being a solved problem. Prediction performance usually drops severely if the target data has different characteristics than the training data. ?), for instance, show that many machine learning architectures can be fooled easily just by adding the word “love” to an offensive tweet to make it appear as non-offensive. ?) highlight that linguistic information alone is not enough in many cases to decide whether a tweet is hateful or not. Context information as well, e.g. about tweeting users themselves [Ribeiro et al., 2018] or mentioned users in tweets [Wiedemann et al., 2018], can be a useful feature for automatic OLD.
#### Pre-trained language models for text classification: Transfer learning with deep neural networks, in general, has proven to be superior to supervised learning for text classification, especially in small training data situations. This is illustrated exemplarily by our last year’s approach [Wiedemann et al., 2019] to the OLD SemEval shared task, which employed unsupervised pre-training of a recurrent neural network architecture with a topic cluster prediction task. Practically all winners of the aforementioned shared task competitions employ some form of a fine-tuned bidirectional transformer-based language model, a neural network architecture for which ?) published the seminal work with BERT. This architecture has proven highly successful for transfer learning. A base model is pre-trained with an MLM task and a next-sentence prediction (NSP) task in an unsupervised manner on very large datasets. The knowledge about language regularities and semantic coherence encoded in the network during this step can then be employed successfully in later training steps of fine-tuning the network weights to the actual classification task. For instance, ?) fine-tuned the pre-trained BERT model, winning the 2019 SemEval OLD shared task. ?), and ?) also used it successfully for offensive language and hate speech detection. ?) test a wide range of BERT fine-tuning methods for text classification and develop best-practice recommendations. Since BERT, a number of successor models improving the network architecture, the pre-training strategy, or the pre-training dataset have been published. A selection of these models will be evaluated in Section 3. ## 3 Fine-tuning Transformer Networks We investigate two questions regarding the fine-tuning of pre-trained transformer networks for OLD. First, which pre-trained model performs best on the 2020 OLD shared task?
Second, we investigate how much language model fine-tuning on in-domain data prior to classification fine-tuning improves the performance of the best model. ### 3.1 Model Selection of Transformer Networks As indicated in Section 2, transformer networks have been successfully employed for several text classification tasks. We test the following transformer-based pre-trained models for the OffensEval 2020 OLD shared task. #### BERT – Bidirectional Encoder Representations from Transformers: this seminal transformer-based language model employs an attention mechanism that enables the model to learn contextual relations between (sub-)words in a text sequence [Devlin et al., 2019]. BERT uses two training strategies: 1) MLM, where 15 % of the tokens in a sequence are replaced (masked) and the model learns to predict the original tokens, and 2) NSP, where the model receives pairs of sentences as input and learns to predict whether or not the second sentence is a successor of the first one in their original document context. #### RoBERTa – A Robustly Optimized BERT Pretraining Approach: this is a replication of BERT developed by Facebook [Liu et al., 2019b] with the following modifications: 1) training the model longer with bigger batches as well as more and cleaner data, 2) discarding the NSP objective, 3) training on longer sequences, and 4) dynamically changing the masking patterns, e.g. taking care to mask complete multi-word units. RoBERTa outperformed BERT on most tasks of the GLUE NLP benchmark (ibid.). #### XLM-RoBERTa – XLM-R: this is a cross-lingual version of RoBERTa which is trained on several languages at once [Conneau et al., 2019]. The model itself is equivalent to RoBERTa, but the training data consists of texts from more than 100 languages filtered from the CommonCrawl dataset (https://commoncrawl.org).
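As an illustration of the MLM objective described above, the standard input preparation can be sketched as follows (a generic sketch, not the shared-task code; real BERT tokenizers handle subwords and special tokens in more detail). Of the 15 % of tokens selected for prediction, BERT replaces 80 % with the `[MASK]` token, 10 % with a random token, and leaves 10 % unchanged:

```python
# Generic sketch of BERT-style MLM input preparation.
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=42):
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)               # target for the MLM loss
            r = rng.random()
            if r < 0.8:
                masked.append("[MASK]")      # 80 %: replace with [MASK]
            elif r < 0.9:
                masked.append(rng.choice(vocab))  # 10 %: random token
            else:
                masked.append(tok)           # 10 %: kept, still predicted
        else:
            masked.append(tok)
            labels.append(None)              # excluded from the loss
    return masked, labels

masked, labels = mask_tokens(["token"] * 10_000, vocab=["a", "b", "c"])
frac = sum(l is not None for l in labels) / len(labels)
assert abs(frac - 0.15) < 0.02               # roughly 15 % selected
```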
#### ALBERT – A Lite BERT for Self-supervised Learning of Language Representations: this is a modification of BERT designed to mitigate memory limitations and training time issues [Lan et al., 2019]. The main contributions that ALBERT makes over the design choices of BERT are 1) decomposing the embedding parameters into smaller matrices that are projected to the hidden space separately, 2) sharing parameters across layers to improve or stabilize the learned parameters, and 3) an inter-sentence coherence loss based on sentence order prediction (SOP), in contrast to BERT’s simpler NSP objective. ### 3.2 Masked Language Model Fine-tuning ?) showed that further pre-training of BERT with the masked language model task can improve later results of supervised task-specific fine-tuning. The authors tested within-task, in-domain, and cross-domain further pre-training. Evaluations show that the first strategy is susceptible to overfitting the training set and, thus, may harm classification performance. The last strategy does not help since BERT is already trained on general-domain data. In-domain further pre-training, however, helps to improve later classification performance if there is a substantial overlap in language characteristics between the further pre-training data and the supervised training data. The ‘weakly labeled’ dataset of the 2020 OLD shared task most certainly is a valuable dataset for further in-domain pre-training. However, with ca. 9 million tweets, it is also rather large. Pre-training on the complete dataset is not possible given our hardware limitations: MLM training of the RoBERTa-large model with the full dataset on a single GPU with 12 GB RAM would take an estimated 40 days; moreover, due to the increasing memory consumption of the Adam optimizer during training, the process would stop unfinished much earlier with a memory exception. Therefore, we conduct MLM pre-training only on a small sample of the original data.
We strip URLs and user mentions from tweets, remove duplicates, and finally randomly sample 5 % of the original dataset size, i.e. 436,123 tweets, for further pre-training. We further pre-train the presumed best model, RoBERTa-large [Liu et al., 2019b] (cp. Section 4), for one epoch (batch size 4 and learning rate 2e-5). ### 3.3 Ensembling For our official OffensEval 2020 test set submission as team UHH-LT, we aggregated predictions from classifiers with different ensemble approaches. #### Ensemble of model variants: We fine-tuned different transformer models with the OffensEval 2019 training data using the corresponding test data for validation. The following models were tested: BERT-base and BERT-large (uncased), RoBERTa-base and RoBERTa-large, XLM-RoBERTa, and four different ALBERT models (large-v1, large-v2, xxlarge-v1, and xxlarge-v2). Each model was fine-tuned for 6 epochs with a learning rate of 5e-6, maximum sequence length of 128, and batch size 4. After each epoch, the model was evaluated on the validation set. The best performing epoch was saved for the ensembling. We tested two ensemble approaches: 1) majority vote from all models, and 2) majority vote from one model type but with different parameter sizes, such as BERT-base and BERT-large. #### MLM RoBERTa ensemble: To be able to learn from the entire 2019 OLID dataset (training and test set), as well as to smooth instabilities of predictions originating from random effects during model training, we also aggregated predictions using 10-fold cross-validation. For this, the further MLM pre-trained RoBERTa-large model is fine-tuned 10 times, each time with 90 % of the OLID data for training and the remaining 10 % as validation set. The best model after 6 epochs of training with learning rate 5e-6 and batch size 8 is used to predict the OLD 2020 test data. The final predictions for submission were obtained via majority vote on the 10 predictions per test data instance.
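The majority-vote aggregation used in both ensembling variants can be sketched as follows (a simplified illustration, not the actual submission code; tie-breaking among an even number of models falls back to insertion order here):

```python
# Majority-vote aggregation over per-model predictions.
from collections import Counter

def majority_vote(per_model_preds):
    """per_model_preds: list of k prediction lists, one per model."""
    n = len(per_model_preds[0])
    final = []
    for i in range(n):
        votes = Counter(preds[i] for preds in per_model_preds)
        final.append(votes.most_common(1)[0][0])
    return final

# e.g. 3 models voting on 4 tweets (OFF = offensive, NOT = not offensive)
preds = [
    ["OFF", "NOT", "OFF", "NOT"],
    ["OFF", "NOT", "NOT", "NOT"],
    ["NOT", "NOT", "OFF", "OFF"],
]
assert majority_vote(preds) == ["OFF", "NOT", "OFF", "NOT"]
```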
## 4 Results

| Model | P (NOT) | R (NOT) | F1 (NOT) | P (OFF) | R (OFF) | F1 (OFF) | Macro F1 | Acc. |
|---|---|---|---|---|---|---|---|---|
| _Baselines_ | | | | | | | | |
| All NOT | 72.21 | 100.00 | 41.93 | - | 0.00 | 0.00 | 41.93 | 72.21 |
| All OFF | - | 0.00 | 0.00 | 27.78 | 100.00 | 43.49 | 21.74 | 27.79 |
| _Single pre-trained transformer models_ | | | | | | | | |
| BERT-base | 99.06 | 90.20 | 94.42 | 79.34 | 97.78 | 87.60 | 91.01 | 92.31 |
| BERT-large | 99.65 | 90.35 | 94.77 | 79.81 | 99.17 | 88.44 | 91.60 | 92.80 |
| RoBERTa-base | 99.45 | 90.70 | 94.88 | 80.33 | 98.70 | 88.57 | 91.73 | 92.93 |
| RoBERTa-large | 99.53 | 90.92 | 95.03 | 80.73 | 98.89 | 88.89 | 91.96 | 93.13 |
| XLM-RoBERTa | 99.03 | 91.31 | 95.01 | 81.22 | 97.69 | 88.69 | 91.85 | 93.08 |
| ALBERT-large-v1 | 98.87 | 90.24 | 94.36 | 79.32 | 97.31 | 87.40 | 90.88 | 92.20 |
| ALBERT-large-v2 | 98.87 | 90.20 | 94.34 | 79.26 | 97.31 | 87.36 | 90.85 | 92.18 |
| ALBERT-xxlarge-v1 | 98.35 | 91.09 | 94.58 | 80.57 | 96.02 | 87.62 | 91.10 | 92.46 |
| ALBERT-xxlarge-v2 | 98.47 | 91.73 | 94.98 | 81.76 | 96.30 | 88.44 | 91.71 | 93.00 |
| _Ensembles of pre-trained transformer models_ | | | | | | | | |
| All | 99.65 | 90.95 | 95.10 | 80.83 | 99.17 | 89.06 | 92.08 | 93.23 |
| BERT | 99.42 | 91.16 | 95.11 | 81.11 | 98.61 | 89.01 | 92.06 | 93.23 |
| RoBERTa | 99.57 | 90.84 | 95.01 | 80.62 | 98.98 | 88.86 | 91.93 | 93.11 |
| ALBERT-all | 98.23 | 92.66 | 95.36 | 83.37 | 95.65 | 89.00 | 92.23 | 93.49 |
| ALBERT-xxlarge | 98.70 | 92.16 | 95.32 | 82.62 | 96.85 | 89.17 | 92.25 | 93.47 |

Table 1: Performance (in %) of baselines, single models, and ensemble models on the OLID test set.

Table 1 shows results of binary offensive language detection for a naive baseline (assuming all tweets as either offensive or not), as well as for the individual fine-tuned transformer models and their corresponding ensembles. All transformer models largely outperform the naive baseline, some of them (e.g.
XLM-RoBERTa) even outperform most of the other system submissions in the competition.333https://sites.google.com/site/offensevalsharedtask/results-and- paper-submission Our best individual model is RoBERTa-large with an F1-score of 91.96 %. Hence, we select this model as the basis for further MLM pre-training. From Table 2, we can see that the MLM fine-tuned RoBERTa model achieved consistently better results in terms of Macro F1 than the single pre-trained transformer models (to lower random effects of neural network training, the table shows average values of 10 runs). Regarding the ensembles of model variants, we see in Table 1 that all approaches consistently perform better than the individual models. Here, the ensemble averaging the predictions from the two ALBERT-xxlarge models performed best with an F1-score of 92.25 %. For the OffensEval 2020 Shared Task, we decided to submit the results form the MLM pre-trained RoBERTa ensemble.444Of course, during the submission phase of the shared task, the test set labels were not available. We, thus, based our decision for this specific model on its performance on last year’s OLID test set. Table 3 presents the official results of our system in the sub-tasks A, B, and C together with their ranks achieved in the competition. While our ensemble reaches the top rank for task A, there are a handful of competing approaches achieving better results for offensive language categorization (B) and target identification (C). The post-submission experiments on the official test set as presented in this paper (cp. Table 1) show that the ALBERT-based ensembles would have even beat this first ranked submission. However, the MLM fine-tuned RoBERTa model was considerably more successful on task C than the best ALBERT-xxlarge ensemble, especially to detect the “OTH” class. | NOT | OFF | | ---|---|---|---|--- Model | P | R | F1 | P | R | F1 | Macro F1 | Acc. 
| RoBERTa-large | 98.96 | 91.49 | 95.07 | 81.54 | 97.49 | 88.76 | 91.93 | 93.15 |
| RoBERTa-large MLM-ft | 99.15 | 91.53 | 95.18 | 81.66 | 97.96 | 89.06 | 92.12 | 93.31 |

Table 2: Performance (in %) of MLM fine-tuned models on the OLID test set (average of 10 runs).

| Task | Macro F1 (%) | Rank |
| --- | --- | --- |
| Task A | 92.04 | 1 out of 84 |
| Task B | 65.98 | 6 out of 42 |
| Task C | 66.83 | 3 out of 38 |

Table 3: Official results of our (team UHH-LT) OffensEval 2020 test set submissions for tasks A, B, and C (RoBERTa-large MLM test ensemble, cp. Table 2).

Figure 2 shows the corresponding confusion matrices for the submitted predictions. The matrices reveal that the predictions for tasks B and C are considerably biased towards the majority class. For task A, however, we see more false positive cases for the offensive class, which is underrepresented in the training data. A qualitative look into a sample of false predictions (cp. Fig. 1) reveals tweets wrongly predicted as offensive (false positives), some of which seem inconsistently annotated in the gold standard. Parts of the expressions in Examples 1 and 2 qualified as offensive in many other training examples. Examples 3 and 4 contain some negative words that may have triggered a higher offensiveness prediction score. For the false negative samples, it is not obvious why the models missed the correct class since, except for Example 6, they actually contain strong offensive vocabulary.

1. FP: _@USER the gov and nyc mayor are the biggest joke except maybe for the idiots who elected them_
2. FP: _@USER What the fuck_
3. FP: _@USER I know it was bad but I used to love it_
4. FP: _men who voted yes are childish, sorry are you 17??Men, would you have a problem if a girl said if she’s not receiving head she’s not giving head?_
5. FN: @USER It’s as if every single one of her supporters are just as stupid as her. Wehdone..
6. FN: I’m gonna say he doesn’t really.
You should check the zip code demographics of his various properties Only liars could deny that Al Sharpton hates white people
7. FN: @USER Fuck the chargers
8. FN: Last night I watched the Democrats throwing shit up against the wall, and none of it stuck.

Figure 1: False positive (FP) and false negative (FN) examples from our UHH-LT Task A submission.

Figure 2: Confusion matrices from the submitted UHH-LT ensemble predictions.

## 5 Conclusion

While last year’s SemEval shared task on offensive language detection was already dominated by the then newly published BERT model, for the 2020 competition we succeeded in fine-tuning BERT’s successor models to create the best-performing system. The predictions obtained from fine-tuning the ALBERT models on the OLID dataset achieved 92.25 % macro F1-score as the best overall result (including our post-submission experiments) on the official test set of shared task A for the English language. We also found the ‘weak labels’ distributed along with 9 million tweets by the shared task organizers not useful for training our classifiers. However, the tweets themselves provided useful in-domain data for unsupervised pre-training. With 92.04 % macro F1, our submission, which further pre-trained the RoBERTa language model on ca. 440,000 tweets before fine-tuning on last year’s OLID dataset, achieved the first rank in the official SemEval competition for sub-task A, and also high ranks for the other two sub-tasks (6 and 3, respectively). We conclude that further pre-training of a transformer model with in-domain data is useful for offensive language detection. However, for tasks B and C, our models are clearly biased towards the majority class, resulting in somewhat lower ranks. Hence, taking the high class imbalance of the OLID dataset better into account could further improve our results.
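The two ingredients used throughout the results above, averaging the class probabilities of several fine-tuned models into an ensemble prediction and scoring with macro F1, can be sketched in a few lines. This is a minimal illustration, not the submission code; the model outputs below are made-up stand-ins for real softmax scores:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-class probabilities from several models, then take argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

def macro_f1(y_true, y_pred, n_classes=2):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(f1s))

# Hypothetical softmax outputs of three fine-tuned models on four tweets;
# columns are P(NOT), P(OFF).
model_a = np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2], [0.3, 0.7]])
model_b = np.array([[0.7, 0.3], [0.6, 0.4], [0.9, 0.1], [0.2, 0.8]])
model_c = np.array([[0.8, 0.2], [0.3, 0.7], [0.7, 0.3], [0.4, 0.6]])

y_true = np.array([0, 1, 0, 1])  # 0 = NOT, 1 = OFF
y_pred = ensemble_predict([model_a, model_b, model_c])
print(y_pred, macro_f1(y_true, y_pred))
```

Averaging probabilities (rather than majority voting over hard labels) lets a confident model outvote two weakly confident ones, which is one reason ensembles of similar models can still improve over their members.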
## Acknowledgements

The paper was supported by BWFG Hamburg (Germany) within the “Forum 4.0” project as part of the ahoi.digital funding line, and by the DFG project “FAME” (WI 4949/2-1, grant no. 406289255).
# Emerging Dimming as Coronal Heating Episodes

Anna V. Payne (NASA Graduate Fellow), Institute for Astronomy, University of Hawai‘i at Mānoa, Honolulu, HI 96822, USA

Xudong Sun (孙旭东), Institute for Astronomy, University of Hawai‘i at Mānoa, Pukalani, HI 96768, USA; <EMAIL_ADDRESS>

(Received January 25, 2021; Revised March 11, 2021; Accepted March 13, 2021)

###### Abstract

Emerging dimming occurs in isolated solar active regions (ARs) during the early stages of magnetic flux emergence. Observed by the Atmospheric Imaging Assembly, it features a rapid decrease in extreme-ultraviolet (EUV) emission in the 171 Å channel images and a simultaneous increase in the 211 Å images. Here, we analyze the coronal thermodynamic and magnetic properties to probe its physical origin. We calculate the time-dependent differential emission measures for a sample of 18 events between 2010 and 2012. The emission measure (EM) decrease in the temperature range $5.7\leq\log_{10}T\leq 5.9$ is well correlated with the EM increase in $6.2\leq\log_{10}T\leq 6.4$ over eight orders of magnitude. This suggests that the coronal plasma is being heated from the quiet-Sun, sub-MK temperature to 1–2 MK, more typical of ARs. Potential field extrapolation indicates significant change in the local magnetic connectivity: the dimming region becomes linked to the newly emerged flux via longer loops. We conclude that emerging dimming is likely caused by coronal heating episodes, powered by reconnection between the emerging and the ambient magnetic fields.

Keywords: Solar extreme ultraviolet emission (1493); Solar coronal heating (1989); Solar magnetic flux emergence (2000)

Facilities: SDO

## 1 Introduction

Our nearest star, the Sun, serves as a unique laboratory for stellar processes. In particular, strong magnetic fields in active regions (ARs) interact with the coronal plasma, changing its thermal structure as they emerge.
Studying the formation, evolution, and decay of ARs allows for a deeper understanding of solar variability and of how it impacts us on Earth in the form of space weather. The relation between the magnetic field and the plasma thermodynamic properties may hold the key to solving the coronal heating problem.

One interesting feature recently discovered is the “emerging dimming” (Zhang et al., 2012) in isolated ARs. ARs of this type emerge with no other pre-existing AR in the vicinity, so the local magnetic flux is likely balanced, with little open flux. As initially reported, 24 isolated ARs between 2010 June and 2011 May exhibited a decrease of emission in an extreme-ultraviolet (EUV) channel dominated by the lower-temperature Fe IX 171 Å line (0.6 MK) during the early emergence stages. For the higher-temperature lines, in particular Fe XIV 211 Å (2 MK), the emission increased continuously (Figure 1). No dimming was observed in other channels. The emerging dimming regions are situated next to or around the emerging flux, with a fan or halo shape, respectively. Zhang et al. (2012) speculated that coronal magnetic reconnection between the emerging and background fields heats up the coronal plasma, thus causing the cooler channel to dim and the warmer channels to brighten.

Here we analyze the coronal thermodynamic and magnetic properties of emerging dimming. We specifically test the hypothesis that the observed dimming is caused by plasma heating rather than by other mechanisms such as mass loss. Using multi-wavelength EUV data, we calculate the time-dependent differential emission measure (DEM) and the emission measure (EM) in selective temperature ranges. Further, we study the change of the magnetic field connectivity for a representative case using a potential field extrapolation model.
These calculations yield insights on how the density of the optically-thin plasma at different temperatures changes temporally and in response to the surface magnetic field, thus allowing us to probe the underlying physical processes.

Figure 1: An example of emerging dimming around AR 11570, which is used as a representative example throughout the paper. Left and right columns show SDO observations taken at the start and the maximum of dimming, about 9 hr apart. Top and middle rows show the AIA 171 and 211 Å channels; the bottom row shows HMI LOS magnetograms. The red contours outline the emerging dimming region. The cyan cross denotes the location of the pixel used in Figure 2 as an example.

## 2 Data & Methods

### 2.1 Observation

With the advent of the Solar Dynamics Observatory (SDO; Pesnell et al., 2012), launched in 2010, the solar corona is continuously monitored in 10 UV/EUV wavelengths with a cadence of 12 s and a spatial sampling of 0$\farcs$6 by the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012). In addition, the Helioseismic and Magnetic Imager (HMI; Schou et al., 2012), also on board SDO, measures both line-of-sight (LOS) and vector magnetic fields in the photosphere. The LOS and vector magnetograms are taken at cadences of 45 s and 12 minutes, respectively, with a spatial sampling of 0$\farcs$5. With its high cadence and continuous, voluminous dataset, SDO is a rich resource for studying how emerging dimming forms and how it changes through time. By observing the Sun in different, optically-thin EUV channels with AIA, information about the coronal electron density and temperature is recorded through time. Contemporaneous EUV and magnetic field observations allow for a direct association between the coronal dynamics and their photospheric sources. We used the list of emerging ARs reported in Schunker et al. (2016) to search for cases of emerging dimming.
The two main properties provided by the list are (1) the ARs’ visibility in the continuum, and (2) their degree of spatial isolation, i.e., whether the emergence occurs in relatively quiet areas of the Sun that do not contain, and are not in the vicinity of, pre-existing ARs. The second criterion allows us to select only isolated ARs, where emerging dimming was first identified (Zhang et al., 2012). It also avoids the possible influence of AR-AR interaction, so the interpretation of the results is more straightforward.

### 2.2 Data Acquisition & Processing

We search for emerging dimming events over a two-year period from 2010 to 2012 based on visual inspection. Using the starting times of flux emergence given in Schunker et al. (2016), we check the AIA 171 Å images using JHelioviewer (Müller et al., 2017), which provides an easy way to query and visualize SDO data. If we observe significant dimming next to or around the emerging flux, we manually define (1) a time range (typically a few days) to encapsulate the full history of the AR, including times before and during full emergence, and (2) a fixed field of view (FOV) in the native Helioprojective coordinates. We subsequently extract SDO data cubes corresponding to this time range and FOV for each event at 30 minute cadence. The data cubes include six AIA EUV channels (94, 131, 171, 193, 211, and 335 Å) and HMI LOS magnetograms. Our final sample consists of 18 events (Table 1): nine were included in Zhang et al. (2012); nine are new cases from 2012. We discuss the efficacy of this sample in Section 4.

We perform all data reduction using the SolarSoftWare (SSW) IDL package. An SSW query of the SDO database returns level-1 AIA images, in which the raw images have been de-spiked, cleaned of bad pixels, and flat-fielded. We use the module aia_prep.pro to process the images to level 1.5, adjusting them to a common plate scale and ensuring all images are centered at the same pixel.
The images are also de-rotated and co-aligned so as to enable a direct comparison between different times. Our analysis focuses on an “emerging dimming region,” defined as a contiguous, enclosed area whose 171 Å brightness declined after the emergence of the central AR. For each AR, the dimming region is extracted by applying a contour to the 171 Å images after the time of emergence given in Schunker et al. (2016), during times when the AR’s surrounding area visibly darkened. The contour threshold is set below half of the average 171 Å brightness of the quiet medium surrounding the AR, so that the contour forms a contiguous entity around the AR. This subregion is fixed for each AR through its entire time series, as illustrated by the red contours in Figure 1 for a sample event in AR 11570. The same contour is then applied to the 211 Å images and the magnetograms.

We take the start of the dimming, $t_{0}$, to be around the time of emergence given in Schunker et al. (2016). The time of maximum dimming, $t_{*}$, is then determined as the minimum of the spatially integrated $\mathrm{EM_{low}}$ curve described in the following section.

Table 1: Thermodynamic properties of all 18 emerging dimming events analyzed in this work.
| AR | $t_{*}$ | $t_{*}-t_{0}$ (hr) | $I_{171}(t_{*})/I_{\mathrm{QS}}$ | $I_{211}(t_{*})/I_{\mathrm{QS}}$ | $\Delta\sum\mathrm{EM_{low}}$ (cm$^{-3}$) | $\Delta\sum\mathrm{EM_{high}}$ (cm$^{-3}$) | $R_{\mathrm{low}}$ | $R_{\mathrm{high}}$ | Morphology |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 11122 | 2010-11-06T05:00 | 17.0 | 0.50 | 2.19 | $-7.14\times 10^{55}$ | $9.58\times 10^{56}$ | 0.06 | 1.66 | Fan |
| 11179 | 2011-03-21T12:30 | 16.5 | 0.50 | 2.61 | $-2.56\times 10^{51}$ | $5.15\times 10^{52}$ | 0.02 | 7.55 | Fan |
| 11194 | 2011-04-13T04:30 | 6.5 | 0.75 | 3.23 | $-2.45\times 10^{49}$ | $2.20\times 10^{51}$ | 0.06 | 2.21 | Fan |
| 11198 | 2011-04-22T06:30 | 7.0 | 0.68 | 3.90 | $-1.54\times 10^{51}$ | $3.33\times 10^{52}$ | 0.09 | 1.34 | Halo |
| 11211 | 2011-05-08T03:30 | 5.5 | 0.74 | 0.68 | $-7.73\times 10^{51}$ | $9.11\times 10^{52}$ | 0.24 | 1.20 | Fan |
| 11214 | 2011-05-13T23:12 | 10.5 | 0.55 | 1.15 | $-2.50\times 10^{51}$ | $9.75\times 10^{51}$ | 0.28 | 1.01 | Fan |
| 11215 | 2011-05-12T04:30 | 13.5 | 0.53 | 1.33 | $-1.07\times 10^{55}$ | $5.09\times 10^{55}$ | 0.19 | 1.12 | Fan |
| 11220 | 2011-05-22T05:00 | 11.0 | 1.16 | 2.32 | $-4.82\times 10^{53}$ | $1.57\times 10^{55}$ | 0.11 | 1.87 | Fan |
| 11221 | 2011-05-22T03:30 | 3.5 | 0.80 | 1.46 | $-1.06\times 10^{50}$ | $1.66\times 10^{52}$ | 0.71 | 1.08 | Fan |
| 11400 | 2012-01-13T22:48 | 14.0 | 0.47 | 1.25 | $-9.94\times 10^{54}$ | $1.68\times 10^{56}$ | 0.16 | 1.22 | Fan |
| 11414 | 2012-02-04T11:24 | 10.0 | 0.58 | 1.33 | $-3.29\times 10^{54}$ | $1.30\times 10^{56}$ | 0.12 | 1.47 | Fan |
| 11431 | 2012-03-04T15:48 | 15.0 | 0.73 | 2.62 | $-8.00\times 10^{53}$ | $1.32\times 10^{55}$ | 0.02 | 4.43 | Fan |
| 11437 | 2012-03-16T14:48 | 7.0 | 0.63 | 1.17 | $-5.69\times 10^{55}$ | $1.17\times 10^{57}$ | 0.06 | 1.65 | Halo |
| 11446 | 2012-03-22T22:24 | 10.0 | 0.53 | 1.74 | $-2.80\times 10^{54}$ | $3.04\times 10^{56}$ | 0.25 | 1.25 | Fan |
| 11570 | 2012-09-12T04:00 | 9.0 | 0.73 | 1.94 | $-4.27\times 10^{54}$ | $8.55\times 10^{56}$ | 0.27 | 1.38 | Fan |
| 11603 | 2012-10-31T00:48 | 12.0 | 0.64 | 2.96 | $-2.13\times 10^{54}$ | $1.11\times 10^{56}$ | 0.16 | 1.35 | Fan |
| 11607 | 2012-11-04T22:18 | 5.5 | 0.59 | 1.92 | $-3.03\times 10^{56}$ | $2.33\times 10^{57}$ | 0.11 | 1.23 | Fan |
| 11624 | 2012-11-28T04:00 | 16.0 | 0.74 | 1.42 | $-4.88\times 10^{55}$ | $1.38\times 10^{57}$ | 0.25 | 1.36 | Fan |

Notes. The first nine events were included in the original sample of Zhang et al. (2012); the last nine are new cases. $t_{*}$ and $t_{0}$ denote the time of maximum dimming and the beginning of flux emergence, respectively. $I_{171}(t_{*})/I_{\mathrm{QS}}$ and $I_{211}(t_{*})/I_{\mathrm{QS}}$ are the average 171 and 211 Å brightness within the dimming contour at $t_{*}$ divided by the average brightness of the nearby quiet Sun (QS). $\Delta\sum\mathrm{EM}=\sum\mathrm{EM}(t_{*})-\sum\mathrm{EM}(t_{0})$ is the change of the EM integrated over the emerging dimming region; $R=\sum\mathrm{EM}(t_{*})/\sum\mathrm{EM}(t_{0})$ is the relative change. The “low” and “high” subscripts refer to EM calculated for the temperature ranges $5.7\leq\log_{10}T\leq 5.9$ and $6.2\leq\log_{10}T\leq 6.4$, respectively. Random errors of all variables are negligible. Morphology refers to whether the emerging dimming region sits next to the emerging magnetic flux on one side (fan) or fully surrounds the flux (halo), as classified by Zhang et al. (2012).

Figure 2: Example DEM solutions before (light blue, solid line) and during emerging dimming (dark blue, dashed line) for AR 11570. The DEM solutions are obtained from one pixel within the map shown in Figure 1. Error bars reflect random noise, determined by the method of Monte-Carlo resampling. The two transparent orange regions denote the temperature bins used for the EM calculations, where the low-temperature bin spans $\log_{10}T=5.7-5.9$ and the high-temperature bin spans $\log_{10}T=6.2-6.4$.
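The region-extraction step described in Section 2.2 (threshold the 171 Å image against the surrounding quiet medium, keep a contiguous dark patch) can be sketched as follows. This is an illustrative stand-in, not the SSW implementation; the toy image, threshold fraction, and function name are all hypothetical:

```python
import numpy as np
from scipy import ndimage

def dimming_mask(img171, quiet_sun_mean):
    """Mask of the largest contiguous region darker than half the QS mean."""
    below = img171 < 0.5 * quiet_sun_mean
    labels, n = ndimage.label(below)       # connected-component labeling
    if n == 0:
        return np.zeros_like(below)
    # keep only the largest contiguous dark region
    sizes = ndimage.sum(below, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# toy 171 Å image: quiet Sun ~100 DN/s, a 10x10 dark patch ~30 DN/s
img = np.full((64, 64), 100.0)
img[20:30, 20:30] = 30.0
mask = dimming_mask(img, quiet_sun_mean=100.0)
print(int(mask.sum()))  # number of pixels in the dimming region
```

In the actual analysis the contour found this way is frozen and reused for the 211 Å images and the magnetograms, so that all quantities are integrated over the same pixels through the full time series.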
### 2.3 Differential Emission Measure (DEM) Analysis

To understand the physical origin of emerging dimming, we analyze the coronal DEM:

$\mathrm{DEM}(T)\,dT=\int^{\infty}_{0}n_{e}^{2}(T,z)\,dz.$ (1)

Here, DEM is a function of the temperature $T$, and $n_{e}(T,z)$ is the electron number density as a function of $T$ and the spatial coordinate $z$. By convention, $z$ is $0$ at the coronal base and increases toward the observer along the LOS. The integral of the DEM over a finite temperature range $T_{0}\leq T\leq T_{1}$ is called the emission measure (EM):

$\mathrm{EM}(T)=\int^{T_{1}}_{T_{0}}\mathrm{DEM}(T)\,dT.$ (2)

The DEM is related to the narrow-band, optically-thin EUV observations by an integral in temperature space:

$y_{i}=\int^{\infty}_{0}K_{i}(T)\,\mathrm{DEM}(T)\,dT,$ (3)

where $y_{i}$ is the exposure-normalized pixel value (i.e., count rate) in the $i$-th EUV channel, and $K_{i}(T)$ is the known instrument- and channel-dependent temperature response function. Given a set of AIA observations $y_{i}$, our goal is to solve Equation 3 for the DEM so as to learn about the thermodynamic parameters $n_{e}$ and $T$. This process is called DEM inversion.

We employ the DEM algorithm described in Cheung et al. (2015), which computes DEM solutions using a linear programming (also called linear optimization) approach based on the concept of sparsity. This differs from other commonly used procedures, which are mostly based on $\chi^{2}$-minimization; the sparsity constraint reduces the risk of overfitting for underdetermined systems. After creating data cubes for each emerging dimming event in the six AIA channels, we use the SSW module aia_sparse_em_solve.pro to calculate the DEM solutions over time. In the end, for each event we have a cube of DEM solutions, one for each AIA pixel, aligned over time to cover the whole duration. The calculations are typically performed in logarithmic temperature space with a $\Delta\log_{10}T=0.1$ bin size.
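Once a DEM solution is in hand, Equation (2) reduces to a numerical integral of the DEM over the chosen temperature bin. A minimal numpy sketch, with a made-up Gaussian DEM curve standing in for an inverted solution:

```python
import numpy as np

def emission_measure(logT, dem, logT_min, logT_max):
    """Discretized Eq. (2): integrate DEM(T) dT over one temperature bin.

    logT: bin centers in log10 K; dem: DEM values (cm^-5 K^-1).
    """
    sel = (logT >= logT_min - 1e-9) & (logT <= logT_max + 1e-9)
    T = 10.0 ** logT[sel]
    d = dem[sel]
    # trapezoidal rule in linear temperature
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(T)))

# hypothetical DEM curve peaking near log10 T = 6.2, on the 0.1-dex grid
logT = np.round(np.arange(5.5, 7.0 + 1e-9, 0.1), 1)
dem = 1e21 * np.exp(-(((logT - 6.2) / 0.2) ** 2))

em_low = emission_measure(logT, dem, 5.7, 5.9)   # EM_low bin
em_high = emission_measure(logT, dem, 6.2, 6.4)  # EM_high bin
print(em_high > em_low)
```

Note that equal widths in $\log_{10}T$ correspond to unequal widths in linear $T$, which is why the high-temperature bin integrates over a larger $\Delta T$; this point reappears in the discussion of Table 1.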
Our analysis is mostly focused on the total EM integrated over the emerging dimming region, denoted as $\sum\mathrm{EM}$. We integrate over the following two temperature ranges: $\sum\mathrm{EM_{low}}$ for $5.7\leq\log_{10}T\leq 5.9$, and $\sum\mathrm{EM_{high}}$ for $6.2\leq\log_{10}T\leq 6.4$. These ranges are chosen to encapsulate the characteristic temperatures around the peaks of the response functions of the 171 and 211 Å channels.

Figure 3: Spatially integrated EM for AR 11570 over the emerging dimming region, as a function of time elapsed from 2012-09-11T19:00 UT, the emergence time of the AR. Top: $\mathrm{EM_{low}}$; bottom: $\mathrm{EM_{high}}$. The transparent vertical line denotes the time of maximum dimming. The gap in the curves is due to missing AIA data. Error bars are too small to be visible.

The algorithm of Cheung et al. (2015) does not include a method to estimate the random error associated with the noise in the EUV images. In order to estimate error bars, we devise a Monte-Carlo-like resampling method. We utilize the SSWIDL module aia_bp_estimate_error.pro to estimate the uncertainty $\sigma(y_{i})$ for each nominal AIA pixel value $y_{i}$ in the $i$-th channel. Assuming the noise is Gaussian-like, we create for each pixel a sample following the normal distribution $\mathcal{N}(y_{i},\sigma^{2}(y_{i}))$, and repeat for all pixels independently. For a sample size of $N$, this effectively creates $N$ realizations of the same AIA image; this is done for each channel at each time step. After calculating $N$ unique DEM solutions, we estimate the error as the 68% range centered at the median of the returned DEM and EM values. We find that a sample size of $N=100$ returns sufficiently stable error estimates. Due to the large number of contributing pixels, the random error in $\sum\mathrm{EM}$ is negligible.
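The resampling scheme can be written generically: perturb the observed count rates with Gaussian noise, rerun the inversion, and take the 68% spread of the results. In the sketch below a simple weighted sum stands in for the sparse DEM inversion, and all count rates and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_error(y, sigma, invert, n=100):
    """68% half-range of a derived quantity under Gaussian pixel noise."""
    draws = [invert(rng.normal(y, sigma)) for _ in range(n)]
    lo, hi = np.percentile(draws, [16, 84])
    return 0.5 * (hi - lo)

# stand-in "inversion": a fixed linear combination of six channel count rates
weights = np.array([0.1, 0.2, 0.5, 0.8, 0.4, 0.1])
invert = lambda counts: float(np.dot(weights, counts))

y = np.array([10.0, 40.0, 120.0, 90.0, 60.0, 15.0])  # hypothetical count rates
sigma = np.sqrt(y)                                    # shot-noise-like errors
err = mc_error(y, sigma, invert)
print(err > 0)
```

Because the perturbations are drawn independently per pixel, the random error of a spatial sum over many pixels partially averages out, which is why the random error in $\sum\mathrm{EM}$ ends up negligible compared to the systematics.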
The overall uncertainty is expected to be dominated by the poorly understood systematics, which are on the order of $20\%$ (Judge, 2010).

### 2.4 Potential Field Extrapolation

Zhang et al. (2012) hypothesized that coronal magnetic reconnection occurs between the emerging and background fields during emerging dimming events. If so, the magnetic connectivity should change; that is, field lines originating from inside the emerging dimming regions should have different end points after the AR emergence. To quantitatively assess these changes, we use a local potential field extrapolation algorithm to model the coronal magnetic field. The algorithm is based on a Green’s function method (Sakurai, 1989) and is implemented by Wiegelmann (2004). We use the photospheric magnetic field in the radial direction ($B_{r}$), derived from HMI vector magnetograms (Hoeksema et al., 2014), as the lower boundary condition. We ignore the local curvature and perform the extrapolation in Cartesian coordinates with a 364 km resolution. We subsequently trace field lines from the emerging dimming regions identified in co-aligned AIA images. Field lines with starting points close to the side boundaries are not included, so as to minimize boundary effects.

Figure 4: Spatially integrated EM for all ARs listed in Table 1. For each AR, the top panel shows the normalized $\mathrm{EM_{low}}$ evolution and the bottom panel shows the normalized $\mathrm{EM_{high}}$ evolution. The transparent vertical line denotes the time of maximum dimming. Gaps in the curves are due to missing AIA data. Error bars are too small to be visible.

Figure 5: Scatter plot of the absolute change of the spatially integrated EM, $\left|\sum\mathrm{EM}(t_{*})-\sum\mathrm{EM}(t_{0})\right|$, in the low and high temperature bins for all events. The line shows the result of linear regression in logarithmic space. Error bars are too small to be visible.
The best-fit linear regression line, shown as the red dotted line, has a slope of $0.95\pm 0.12$ in log-log space with $r^{2}=0.95$. The $95\%$ confidence limits and prediction limits are shown by the transparent region and the dashed gray lines, respectively.

## 3 Results

The SDO observations shown in Figure 1 reveal several interesting features. The emerging dimming region resides to the west of the emerging flux with a fan-shaped boundary. In the AIA 171 Å channel, the mean intensity in the emerging dimming region at maximum dimming, $I(t_{*})$, is not only darker than its pre-dimming counterpart, but also much darker than the surrounding quiet Sun (QS), $I_{\mathrm{QS}}$, with $I(t_{*})/I_{\mathrm{QS}}=0.73$. In the AIA 211 Å channel, the same region becomes brighter, with $I(t_{*})/I_{\mathrm{QS}}=1.94$. Diffuse loop-like structures appear to connect the emerging dimming region to the emerging flux.

An example DEM solution is shown in Figure 2. For this pixel, the DEM has a single peak at $\log_{10}T=6.2$ prior to the start of the dimming. At maximum dimming, however, the DEM peak shifts to $\log_{10}T=6.3$, and the peak value significantly increases. The DEM values in the range $5.7\leq\log_{10}T\leq 5.9$, i.e., sub-MK coronal plasma typical of the QS, decrease drastically during this period to near depletion. At the same time, the DEM values in the range $6.2\leq\log_{10}T\leq 6.4$ increase by over 100$\%$: the amount of plasma over 1 MK has significantly increased. Such behavior is expected from the AIA observations.

Figure 3 shows the evolution of the spatially integrated EM in AR 11570. $\sum\mathrm{EM_{low}}$ increases first, then decreases until the maximum dimming is reached. $\sum\mathrm{EM_{high}}$, on the other hand, continues to increase throughout.
The ratios between the total EM at the maximum dimming and pre-dimming times, $\sum\mathrm{EM}(t_{*})/\sum\mathrm{EM}(t_{0})$, are 0.27 and 1.38 for the low- and high-temperature bins, respectively. The maximum dimming occurs 9 hr after the initial flux emergence.

Figure 6: Top view of selective field lines from the potential field model for AR 11570. The two panels correspond to the two columns in Figure 1. Field lines in the two panels have identical starting foot points inside the dimming contours (shaded yellow). Colors indicate their lengths, with blue (red) being shorter (longer). The background image shows $B_{r}$ saturated at $\pm$500 G.

In our sample, the EM in the low-temperature bin exhibits significant variations in time, whereas the high-temperature bin increases more smoothly (Figure 4). We summarize in Table 1 selective variables that quantify this evolution. Several trends are obvious.

1. The duration of emerging dimming, defined as the time difference between the maximum and the start of dimming, $t_{*}-t_{0}$, is on average 10.9 hr.
2. At maximum dimming, the AIA 171 Å emission in the emerging dimming region is lower than in the nearby QS, whereas in 211 Å it is higher.
3. The change of total EM, $\sum\mathrm{EM}(t_{*})-\sum\mathrm{EM}(t_{0})$, is always negative (positive) for the low-$T$ (high-$T$) bin, as expected from our case selection criteria. The relative change, $1-\sum\mathrm{EM}(t_{*})/\sum\mathrm{EM}(t_{0})$, can be over $95\%$ in the low-$T$ bin in the most extreme cases (AR 11179, AR 11431).
4. The total EM and its change are typically 1 to 2 orders of magnitude greater in the high-$T$ bin. This may be partially due to the larger integration range in linear $T$ space: a bin size of 0.2 in $\log_{10}T$ translates to $\Delta T=0.3$ and $0.9$ MK for the low- and high-$T$ bins, respectively.
5. Similar to Zhang et al. (2012), most events here display a fan-shaped morphology.
Only 1 of the 9 events has a halo morphology. We find a strong correlation between the changes of total EM in the two temperature bins, as illustrated in Figure 5. The EM values cover a wide range of 8 orders of magnitude. A best-fit linear regression in logarithmic space yields a slope of $0.95\pm 0.12$. Figure 7: Top: histograms of the vertical magnetic field $B_{r}$ inside the dimming region in AR 11570, at the two times shown in Figure 6. Bottom: histograms of the lengths of the field lines originating from the dimming region. Figure 6 shows selected potential field lines originating within the dimming contour for AR 11570. Figure 7 shows the histograms of $B_{r}$ and the field line lengths before and during emerging dimming. While the magnetic field within the contour does not appear to change much (Figure 1 and Figure 7), the local magnetic connectivity does change significantly over the emerging dimming period. Many field lines that originally closed locally now appear to connect to the newly emerged flux via longer, higher-arching loops. This is consistent with the loop-like features observed in AIA 211 Å. ## 4 Discussion & Conclusion We have analyzed a total of 18 emerging dimming events in a quantitative manner to understand their physical origin. Our DEM analysis shows a marked change of the total EM in the emerging dimming regions. The simultaneous decrease in the range $5.7\leq\log_{10}T\leq 5.9$ and increase in $6.2\leq\log_{10}T\leq 6.4$ explain well the behaviors observed in the AIA 171 and 211 Å channels. We find two lines of evidence in support of the hypothesis that emerging dimming is caused by coronal heating. First, the changes of $\sum\mathrm{EM}$ in the two temperature bins are well correlated. The fact that this correlation holds over 8 orders of magnitude strongly suggests a common physical cause. As the heating acts on plasma of all temperatures, a proportionality between different bins is perhaps not surprising.
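For reference, the linear widths of these logarithmic temperature bins (quoted earlier as $\Delta T=0.3$ and $0.9$ MK for bins of width 0.2 in $\log_{10}T$) can be checked directly. This is a simple illustrative computation, not part of the DEM pipeline:

```python
def bin_width_mk(log_t_lo, log_t_hi):
    """Linear width of a log10(T) temperature bin, in MK."""
    return (10.0 ** log_t_hi - 10.0 ** log_t_lo) / 1.0e6

dt_low = bin_width_mk(5.7, 5.9)    # low-T bin  -> ~0.3 MK
dt_high = bin_width_mk(6.2, 6.4)   # high-T bin -> ~0.9 MK
```

The fixed logarithmic bin width covers a roughly three times larger linear temperature range in the high-$T$ bin, as stated above.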
Another common mechanism for EUV dimming, mass loss via plasma ejection or expansion (e.g., Harra & Sterling, 2001), would result in a decrease of EM in all temperature bins and can thus be ruled out. Second, the change of magnetic connectivity suggests ongoing reconnection, which provides a viable energy source for coronal heating. Such heating episodes due to new flux emergence have been quantitatively studied before (Tarr et al., 2014). Additionally, the magnetic field in the emerging dimming region remains QS-like. We find that the net magnetic flux is well balanced. Most loops are closed, with very few open field lines despite the small computation domain. A mass-loss scenario is thus unlikely. Our sample size is relatively small, and the selection criteria are somewhat arbitrary (for example, no event is included between June 2011 and January 2012). Nevertheless, the fact that all events studied evolve in a similar fashion suggests that our conclusion is not biased by the sample selection. We further note that the two parent samples our study is based on, i.e., the emerging dimming sample (Zhang et al., 2012) and the flux emergence sample (Schunker et al., 2016), are both complete. We discuss two interesting aspects mentioned in Zhang et al. (2012) and reproduced in this study. First, there is a delay of hours between the start of flux emergence and the maximum dimming. Both the flux emergence rate and the AR total flux were shown to correlate negatively with the delay and positively with the emerging dimming duration. This is consistent with a magnetic reconnection origin. Second, most emerging dimming events have a fan-like morphology rather than a halo. This is likely determined by the properties of the ambient field. If it is mostly unipolar (as in a coronal hole), the minority polarity of the emerging bipole will reconnect in all directions.
This naturally leads to a dome-shaped separatrix (e.g., Tarr et al., 2014), whose quasi-circular footprint maps to a halo dimming region. A recent study found reduced 171 Å emission in moat-like regions around seven isolated, well-formed ARs (Singh et al., 2021). These extended dark moats are also visible in the 304, 131, 94, and 335 Å channels, and less so in the 193 and 211 Å channels. There are no signs of brightening. A DEM analysis indicates a reduction of plasma over the entire $5.7\leq\log_{10}T\leq 6.2$ range. These observations point to a physical origin different from that of emerging dimming. Following Antiochos & Noci (1986), the authors propose that the magnetic loops forming the moats are pushed to low altitudes by the strong AR fields and are thus restricted to lower-than-coronal temperatures. Similar dimming is reported for Sun-as-a-star synthetic observations when a large, isolated AR transits the disc (Toriumi et al., 2020). The EUV irradiance is positively correlated with the sunspot magnetic flux in all AIA channels except 171 Å. The AR studied is relatively stable, with no significant flux emergence; hence, the cause may be more in line with that of Singh et al. (2021). If it is related to emerging dimming, it is possible that the depleted low-temperature plasma is never replenished. For more active stars with larger stellar spots, the effect may be more significant. We note that magnetic flux emergence simulations can now realistically reproduce many observed structures on AR scales (e.g., Toriumi & Hotta, 2019). When extended to include a coronal domain (e.g., Cheung et al., 2019) or coupled with a coronal model, they will provide a self-consistent “digital laboratory” for testing the physics relevant to emerging dimming. We conclude that emerging dimming is likely related to coronal heating episodes powered by reconnection between the emerging and the ambient magnetic fields.
An assessment of the mass and energy budget is deferred to a future investigation. This work is supported by the state of Hawai‘i and NSF award #1848250. We thank M. Cheung and W. Liu for discussion and help with the DEM error analysis. The SDO data are courtesy of NASA and the HMI and AIA science teams. ## References * Antiochos & Noci (1986) Antiochos, S. K. & Noci, G. 1986, ApJ, 301, 440 * Cheung et al. (2015) Cheung, M. C. M., Boerner, P., Schrijver, C. J., et al. 2015, ApJ, 807, 143 * Cheung et al. (2019) Cheung, M. C. M., Rempel, M., Chintzoglou, G., et al. 2019, NatAstron, 3, 160 * Harra & Sterling (2001) Harra, L. K. & Sterling, A. C. 2001, ApJ, 561, L215 * Hoeksema et al. (2014) Hoeksema, J. T., Liu, Y., Hayashi, K., et al. 2014, Sol. Phys., 289, 3483 * Judge (2010) Judge, P. G. 2010, ApJ, 708, 1238 * Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17 * Müller et al. (2017) Müller, D., Nicula, B., Felix, S., et al. 2017, A&A, 606, A10 * Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3 * Sakurai (1989) Sakurai, T. 1989, Space Sci. Rev., 51, 11 * Schou et al. (2012) Schou, J., Scherrer, P. H., Bush, R. I., et al. 2012, Sol. Phys., 275, 229 * Schunker et al. (2016) Schunker, H., Braun, D. C., Birch, A. C., Burston, R. B., & Gizon, L. 2016, A&A, 595, A107 * Singh et al. (2021) Singh, T., Sterling, A. C., & Moore, R. L. 2021, ApJ, 909, 57 * Tarr et al. (2014) Tarr, L. A., Longcope, D. W., McKenzie, D. E., & Yoshimura, K. 2014, Sol. Phys., 289, 3331 * Toriumi et al. (2020) Toriumi, S., Airapetian, V. S., Hudson, H. S., et al. 2020, ApJ, 902, 36 * Toriumi & Hotta (2019) Toriumi, S. & Hotta, H. 2019, ApJ, 886, L21 * Wiegelmann (2004) Wiegelmann, T. 2004, Sol. Phys., 219, 87 * Zhang et al. (2012) Zhang, J., Yang, S., Liu, Y., & Sun, X. 2012, ApJ, 760, L29
School of Data Science, The Chinese University of Hong Kong (Shenzhen), Longxiang Avenue, Shenzhen, 518172, China. Email: <EMAIL_ADDRESS> # Learning Word Embedding with Better Distance Weighting and Window Size Scheduling Chaohao Yang 0009-0002-6198-2469 ###### Abstract Distributed word representation (a.k.a. word embedding) is a key focus in natural language processing (NLP). As a highly successful word embedding model, Word2Vec offers an efficient method for learning distributed word representations on large datasets. However, Word2Vec does not take into account the distances between center and context words. We propose two novel methods, Learnable Formulated Weights (LFW) and Epoch-based Dynamic Window Size (EDWS), to incorporate distance information into the two variants of Word2Vec, the Continuous Bag-of-Words (CBOW) model and the Continuous Skip-gram (Skip-gram) model. For CBOW, LFW uses a formula with learnable parameters that best reflects the relationship between inter-word influence and distance to calculate distance-related weights for average pooling, providing insights for future NLP text-modeling research. For Skip-gram, we improve its dynamic window size strategy to introduce distance information in a more balanced way. Experiments demonstrate the effectiveness of LFW and EDWS in enhancing Word2Vec’s performance, surpassing previous state-of-the-art methods. ###### Keywords: Natural language processing Word embedding Word2Vec Learnable weights Window size scheduling ## 1 Introduction NLP researchers have long aimed to obtain high-quality word vector representations. One traditional approach is one-hot encoding, where a vector has a “1” at the index corresponding to the word and “0”s elsewhere. However, this approach suffers from the curse of dimensionality when dealing with large vocabulary sizes, which can reach millions [2].
Additionally, one-hot encoding fails to capture syntactic or semantic properties because all word distances in the vector space are equal. Distributed word representations have been developed to overcome the limitations of one-hot encoding [8]. In this approach, words are represented by lower-dimensional vectors, typically of a few hundred dimensions, where each element can take on various values [18]. These distributed word vectors can capture both syntactic and semantic properties, allowing syntactic and semantic similarities to be measured using Euclidean distance or cosine similarity between vectors. Distributed word representations may also preserve analogical relationships between words [12]. For example, the vector arithmetic $vector(``king")-vector(``man")+vector(``woman")$ is most likely to result in a vector closest to $vector(``queen")$. Among all distributed word representation models, the Word2Vec model [12] is the most successful, with outstanding performance in both modeling effectiveness and training efficiency. Figure 1: Illustrations of two previous methods for improving Word2Vec, with an example window size of 3. The distance-related weights method in (a) combines all context words ($w_{t+i}$) with their corresponding weights ($\lambda_{i}$) when averaging them to predict the center word ($w_{t}$). The dynamic window size strategy in (b) uses dynamically selected window sizes (indicated by the dashed box in the figure) to sample more from nearby context words, allowing them to contribute more to the training process. However, both variants of Word2Vec, namely CBOW and Skip-gram, discard the distance information between center and context words when modeling text [16, 4].
Specifically, CBOW predicts the center word (also known as the current word) based on the unweighted average of the context word embeddings within a context window around it, while Skip-gram predicts one context word in the window at a time based on the center word. All context words, regardless of their distances to the center word, are treated equally in these predictive tasks. There are two major problems with this design: 1) Words that are closer to each other are usually also more closely related, and their predictive power with respect to each other is generally larger than that of words farther away; this should be reflected in Word2Vec’s predictive tasks. 2) The distance between words reflects the order and relative positions of words in a text, which is essential for extracting the meaning of the text. For example, the meanings of “people can save water” and “water can save people” are greatly different despite the fact that they contain the same words. By discarding word distance information, Word2Vec therefore encounters difficulties in both its predictive training process and its semantic modeling ability. Consequently, many researchers have worked on introducing distance information into these models. For CBOW, some researchers have proposed adding distance-related weights to the context average to make the model distance-sensitive [16, 4]. For Skip-gram, exactly one center word and one context word are used at a time for modeling, and no average is needed. Therefore, the most common approach among researchers is to dynamically select a random context window size for each center word [12] and only use the context words whose distances fall within this dynamic window size for prediction.
Since context words closer to the center word are more likely to fall within the window, the probability of each context word being used decreases linearly as its distance to the center word increases, allowing context words closer to the center to contribute more to the training process. Figure 1 illustrates these two methods. Despite their effectiveness, some problems remain with these methods for introducing distance information. For distance-related weights, the main question is how to construct weights that encode a reasonable prior relationship with distance while remaining adaptive to specific situations. For the dynamic window size strategy, the problem is the frequent and irregular changes in window size caused by random selection, which ruin the training balance across words and reduce modeling performance. We propose two novel methods, Learnable Formulated Weights (LFW) and Epoch-based Dynamic Window Size (EDWS), to solve these problems and improve the performance of Word2Vec. LFW introduces a prior formula with a small number of learnable parameters to calculate the distance-related weights. EDWS removes the random selection while preserving the dynamicity by gradually increasing the window size with the number of training epochs. The effectiveness of both methods has been demonstrated by various experiments. Specifically, we achieve an accuracy improvement of 15.3% with LFW on CBOW and 2.5% with EDWS on Skip-gram. ## 2 Related Work The distributed representation of words has a long history, dating back to [8] and [7]. In its subsequent development, it gradually adopted the distributional hypothesis presented in distributional representation works such as Latent Semantic Indexing [5] and Latent Dirichlet Allocation [3], and derived a series of methods that use context words to obtain the distributed representation of center words.
A famous neural network language model (NNLM) architecture for generating word representations of this kind was proposed in [2], which uses a window of context words to predict the word immediately after them. In NNLM, context words are first embedded into distributed vector representations, then sequentially concatenated as input to the following hyperbolic tangent hidden layer, and finally transformed into probability distributions over the vocabulary at the output layer as the predicted result. In this process, the order of the sequentially concatenated word vectors reflects the order of the context words, which takes distance information into account. Thanks to its outstanding ability to model contextual relationships, NNLM has been applied in fields like speech recognition and machine translation [6, 20] and in research on developing new NLP training methods [21]. However, NNLM runs too slowly and is difficult to train on large datasets. [19] and [14] point out this problem, and many attempts to solve it have been made since. Among them, [12] creatively uses part of the NNLM architecture (mainly removing the hidden layer of NNLM) to train word vectors, which effectively improves training efficiency and makes the new model, called Word2Vec, suitable for large datasets. [13] also proposes hierarchical softmax, negative sampling, and subsampling methods to further enhance Word2Vec’s performance. Since then, Word2Vec has become one of the most popular lightweight techniques for learning word embeddings [9] and is still widely used in recent applications such as fake news detection, sentiment analysis, and depression expression classification [11, 1, 17, 15]. However, one major issue with Word2Vec is that both of its architecture variants, namely CBOW and Skip-gram, discard the concatenation process of the NNLM word embedding layer and, with it, its distance-modeling ability.
[12] proposes the dynamic window size strategy to address this problem, giving each context word a distance-related sampling probability by randomly changing the window size, but at the cost of ruining the training balance across words. Other studies, such as [16] and [4], attempt to make CBOW distance-sensitive by introducing distance-related weights. The former assigns an independent learnable weight to each context word position, while the latter uses fixed weights calculated by a prescribed distance-related formula. However, these methods either do not model the prior relationship between distance and weights or lose the adaptability of the weights. Another approach, presented in [10], introduces different projection layers for different word positions to capture their distance information, but suffers from a much more complex architecture and higher training costs. In summary, further research is still needed to explore more effective methods of incorporating distance information into the Word2Vec model. ## 3 Proposed Methods ### 3.1 Learnable Formulated Weights In the CBOW model, on a sliding window of the text segment $(w_{t-r},...,w_{t-2},w_{t-1},w_{t},w_{t+1},w_{t+2},...,w_{t+r})$ with window size $r$, we predict the center word $w_{t}$’s occurrence probability using the context words $C_{t}=(w_{t-r},...,w_{t-1},w_{t+1},...,w_{t+r})$ as $P(w_{t}|C_{t})=\exp(u_{w_{t}}\cdot u_{C_{t}})/\sum_{w^{\prime}\in V}\exp(u_{w^{\prime}}\cdot u_{C_{t}})$ (1) where $u_{w_{t}}=Ex_{w_{t}}$, $u_{C_{t}}=(\sum_{|i|\leq r,i\neq 0}u_{t+i})/|C_{t}|$, and $V$ is the vocabulary set. Here, $E$ is the neural network weight matrix of size $d\times|V|$, and $x_{w_{t}}$ is the one-hot representation of word $w_{t}$, with element 1 at the position corresponding to $w_{t}$ and zeros elsewhere. Thus, $u_{w_{t}}=Ex_{w_{t}}$ picks the column of $E$ corresponding to $w_{t}$, and this column can therefore be viewed as a distributed representation of $w_{t}$, i.e., the $d$-dimensional embedding of $w_{t}$.
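Eq. 1 can be sketched in a few lines of NumPy. The toy dimensions and helper names below are ours, not from the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 4                      # toy vocabulary size and embedding dimension
E = rng.normal(size=(d, V))       # weight matrix E of size d x |V|, as in Eq. (1)

def embed(idx):
    """u_w = E x_w: multiplying by a one-hot vector just picks column idx of E."""
    return E[:, idx]

def cbow_prob(center, context):
    """P(w_t | C_t) from Eq. (1): softmax of dot products between every word
    embedding and the unweighted average of the context embeddings."""
    u_c = np.mean([embed(i) for i in context], axis=0)
    scores = E.T @ u_c            # u_{w'} . u_{C_t} for every w' in V
    scores -= scores.max()        # subtract max for numerical stability
    p = np.exp(scores)
    return (p / p.sum())[center]

# Evaluating every candidate center word over the same context recovers the
# full softmax distribution over V, which sums to 1:
probs = np.array([cbow_prob(w, [1, 2, 4, 5]) for w in range(V)])
```

Note that the context average here is unweighted; making it distance-sensitive is exactly what LFW adds.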
The above model is not distance-sensitive, since the distance between a context word and the target (center) word is absent from Eq. 1. However, our intuition suggests that $w_{t+1}$’s relation to $w_{t}$ is more significant than $w_{t+2}$’s relation to $w_{t}$. Consequently, $w_{t+1}$’s predictive power with respect to $w_{t}$ is generally larger than $w_{t+2}$’s predictive power w.r.t. $w_{t}$. For this reason, it is reasonable to add a distance-related weight $\lambda_{i}$ to each context word embedding $u_{t+i}$ to reflect its influence on predicting $w_{t}$ when averaging for $u_{C_{t}}$. Specifically, $u_{C_{t}}=\frac{1}{Z}\sum_{|i|\leq r,i\neq 0}\lambda_{i}u_{t+i}$ (2) where $Z=\sum_{|i|\leq r,i\neq 0}\lambda_{i}$ is the normalization factor. The main question for this strategy is how the distance-related weights are constructed. We propose the Learnable Formulated Weights (LFW) method to address this issue: it constructs a formula with a small number of learnable parameters that takes distance as input and calculates the weight at each distance. This method combines the prior form of the formula with the posterior learning of its parameters, allowing us to directly model the relationship between weights and distances while dynamically adjusting the specific parameter values. We model two major aspects of the prior relationship between weights and distances in this paper, each admitting two alternative assumptions. The first aspect is whether the two context words on either side of $w_{t}$ at the same distance share the same weight, and the second aspect is which kind of mathematical relationship (exponential or power law) better models the relationship between weights and distances on a single side. By combining the assumption choices for each aspect, we can model the prior relationship in four different ways and find the best combination for CBOW.
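As a minimal sketch of the weighted context average in Eq. 2, using an illustrative power-law-plus-constant weight of the shared-both-sides kind described above (the $\alpha$, $\beta$ values are hand-picked for illustration, not learned):

```python
import numpy as np

def lfw_weights(r, alpha, beta):
    """lambda_i = |i|**(-alpha) + beta for i = -r..-1, 1..r (shared parameters
    on both sides, power-law decay plus a learnable constant)."""
    dists = np.array([abs(i) for i in range(-r, r + 1) if i != 0], dtype=float)
    return dists ** (-alpha) + beta

def weighted_context(embs, lam):
    """u_{C_t} from Eq. (2): weighted average normalized by Z = sum(lambda_i)."""
    lam = np.asarray(lam, dtype=float)
    return (lam[:, None] * embs).sum(axis=0) / lam.sum()

r = 3
lam = lfw_weights(r, alpha=1.0, beta=0.1)  # illustrative values, not learned
embs = np.ones((2 * r, 5))                 # 2r toy context embeddings of dim 5
u_c = weighted_context(embs, lam)          # identical embeddings: weights cancel
```

With $\alpha=\beta=0$ (the initialization before training), every $\lambda_{i}=1$ and Eq. 2 reduces to the original unweighted CBOW average.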
We now give the formula for each combination of the assumptions above. Note that each formula includes a learnable constant added at the end, which models the baseline amount of relation between any two words in the same text. The first combination of assumptions is identical weights on both sides and negative power-law decay of weights with respect to distance, and the corresponding formula is $\lambda_{i}=|i|^{-\alpha}+\beta$ (3) where $0<|i|\leq r$ represents the distance between a context word and $w_{t}$, and $\alpha$ and $\beta$ are two learnable parameters initialized to 0 before training. The second combination of assumptions is different weights on the two sides and negative power-law decay of weights with respect to distance, so the formula is $\lambda_{i}=\begin{cases}\quad|i|^{-\alpha_{0}}+\beta_{0}\quad\text{if }-r\leq i<0,\\\ \quad|i|^{-\alpha_{1}}+\beta_{1}\quad\text{if }0<i\leq r\end{cases}$ (4) where $\alpha_{0}$, $\beta_{0}$, $\alpha_{1}$, and $\beta_{1}$ are four learnable parameters initialized to 0 before training. The formulas corresponding to the third and fourth assumption combinations follow accordingly as Eq. 5 and Eq. 6, except that the weights decay exponentially with respect to distance: $\lambda_{i}=e^{-\alpha|i|}+\beta$ (5) $\lambda_{i}=\begin{cases}\quad e^{-\alpha_{0}|i|}+\beta_{0}\quad\text{if }-r\leq i<0,\\\ \quad e^{-\alpha_{1}|i|}+\beta_{1}\quad\text{if }0<i\leq r\end{cases}$ (6) In the experimental part, we use each of the above four formulas for weight calculation to determine which is the most effective, and at the same time demonstrate the superiority of our method over other approaches to adding weights to CBOW. ### 3.2 Epoch-based Dynamic Window Size In the Skip-gram model, each center word $w_{t}$ is used as an input to predict the context words in a sliding window of size $r$.
That is, for each context word $w_{t+i}$ with $0<|i|\leq r$, we predict its occurrence probability as $P(w_{t+i}|w_{t})=\exp(u_{w_{t+i}}\cdot u_{w_{t}})/\sum_{w^{\prime}\in V}\exp(u_{w^{\prime}}\cdot u_{w_{t}})$ (7) Because no averaging of context words appears in this model, we cannot make it distance-sensitive by adding weights to context words. However, since context words that appear more often in the prediction have more influence on how the word embeddings are trained, we can make the model distance-sensitive by sampling more from context words closer to the center word and less from those farther away. In other words, the probability of being sampled plays a role similar to the weights added to the CBOW model. A naive way to do this is to dynamically select a random context window size $r^{\prime}$ from 1 to $r$ for each center word $w_{t}$, and use $w_{t}$ to predict only context words within distance $r^{\prime}$. In this case, the probability of each $w_{t+i}$ with $0<|i|\leq r$ being used for prediction is $P(w_{t+i})=P(|i|\leq r^{\prime})=\frac{r-|i|+1}{r}$ (8) From Eq. 8, the probability of each $w_{t+i}$ being used for prediction decreases linearly as the distance $|i|$ increases. Therefore, context words closer to $w_{t}$ obtain larger effective weights than those farther away. One major problem with this dynamic window size strategy is the frequent and irregular changes in window size for each center word resulting from random selection, which ruin the training balance across words and reduce modeling performance.
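The linear decay in Eq. 8 can be verified by enumerating the equally likely window sizes $r^{\prime}\in\{1,\dots,r\}$. A small check, not from the paper's code:

```python
from fractions import Fraction

def sample_prob(i, r):
    """P(w_{t+i} is used) when the window size r' is drawn uniformly from
    {1, ..., r}: P(|i| <= r'), which Eq. (8) gives as (r - |i| + 1) / r."""
    favorable = sum(1 for rp in range(1, r + 1) if abs(i) <= rp)
    return Fraction(favorable, r)

r = 5
probs = [sample_prob(i, r) for i in range(1, r + 1)]
# probs decreases linearly with distance: 1, 4/5, 3/5, 2/5, 1/5
```

The nearest context word (distance 1) is always used, while the farthest (distance $r$) is used with probability $1/r$.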
For example, suppose the window sizes selected for two center words $w_{t_{1}}$ and $w_{t_{2}}$ are 1 and $r$, respectively; then the number of predictions made by $w_{t_{2}}$ is $r$ times larger than that of $w_{t_{1}}$, causing the model to pay special attention to some center words while neglecting the training of others, which leads to an unreasonable allocation of computation and a decrease in model performance. We propose the Epoch-based Dynamic Window Size (EDWS) to solve this problem: it removes the random selection while preserving the dynamicity by gradually increasing the window size with the number of training epochs. In our specific implementation, we use three different window sizes with a ratio of $1:2:3$ to represent the beginning, middle, and end of the training process. The smallest window size appears first, then the middle and largest ones, each running for the same number of epochs. Therefore, the window size $r^{\prime}_{k}$ in the $k$th epoch is given by $r^{\prime}_{k}=\lceil\frac{3k}{K}\rceil\frac{r}{3}$ (9) where $K$ is the total number of epochs and $r$ is the maximum window size; both are multiples of 3 in this case, but can take other values under a different implementation. The EDWS strategy has two major advantages over the original dynamic window size. First, EDWS eliminates the irregularity introduced by random selection at its source. Second, EDWS applies the window sizes in ascending order, forming a gradual learning process from nearby to distant contexts, which aligns with general learning principles. Given these advantages, EDWS is theoretically more effective than the original dynamic window size strategy, as demonstrated through experiments in later parts of this paper.
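The schedule in Eq. 9 can be sketched as follows (function and variable names are our own):

```python
import math

def edws_window(k, K, r):
    """Window size in epoch k (1-based) from Eq. (9): ceil(3k/K) * r/3.
    Assumes K and r are multiples of 3, as in the paper's implementation."""
    return math.ceil(3 * k / K) * r // 3

K, r = 6, 15
schedule = [edws_window(k, K, r) for k in range(1, K + 1)]
# Three phases of equal length with window sizes in the ratio 1:2:3
```

With $K=6$ and $r=15$, the schedule is $5, 5, 10, 10, 15, 15$: deterministic, monotonically non-decreasing, and identical for every center word within an epoch.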
## 4 Experiments

Model | Window Size | Semantic | Syntactic | Total
---|---|---|---|---
CBOW | 5 | 960 | 2987 | 3947
 | 10 | 1317 | 2938 | 4255
 | 15 | 1687 | 2814 | 4501
 | 20 | 1756 | 2867 | 4623
Skip-gram | 5 | 1995 | 2590 | 4585
 | 10 | 2179 | 2673 | 4852
 | 15 | 2666 | 2368 | 5034
 | 20 | 2710 | 2429 | 5139

Table 1: Performance of CBOW and Skip-gram with different window sizes. Because some (2392 out of 19544) analogical reasoning questions involve words that do not appear in the vocabulary of the training dataset, the number of correct answers is reported rather than the accuracy. In this section, we first find an appropriate maximum window size via several pre-experiments. Then, we test the effectiveness of LFW on CBOW and EDWS on Skip-gram, each compared with its original model. ### 4.1 Datasets We use two corpora constructed from English Wikipedia for training: enwik9 and text8 (datasets and the text preprocessing script are from http://mattmahoney.net/dc/textdata.html). The enwik9 dataset contains the first $10^{9}$ bytes of English Wikipedia, with about 120 million words after being processed by Matt Mahoney’s text preprocessing script, and text8 contains the first $10^{8}$ bytes of the preprocessed enwik9, with about 17 million words. We discard words that appear no more than 5 times in each dataset to focus on more valuable words while reducing computation overhead, forming two vocabularies of sizes 194,377 and 63,643. The enwik9 dataset is much larger than text8, so we conduct pre-experiments on text8 to determine the hyperparameter choice, i.e., the window size. We then conduct comparative experiments on enwik9 with the chosen hyperparameter to demonstrate the effectiveness of our proposed methods. For evaluation, we use the analogical reasoning task dataset proposed in [12] to measure the quality of the trained word embeddings.
By evaluating the embeddings’ analogical inference ability, this task reflects the embeddings’ mastery of language and forces a more precise distribution of the trained word embeddings in the vector space to meet the requirements of linguistic relations. There are 8,869 semantic and 10,675 syntactic questions in the test set in total. The semantic questions are divided into five categories, while the syntactic questions are divided into nine categories, as shown in Table 4. An example of these questions is finding the word that has the same relative relationship to the word “woman” as the word “king” has to the word “man”. The question is answered by finding the word embedding closest to $vector(``king")-vector(``man")+vector(``woman")$ in cosine distance. The quality of the word embeddings is then measured by the total accuracy over all questions in this task. ### 4.2 Window Size Choice We list the results of CBOW and Skip-gram with different window sizes in Table 1. Since the dynamic window size strategy was proposed as part of the original design of Skip-gram, we use this strategy for all original Skip-gram models in the experiments, where the window size denotes the maximum window size for random selection. We set the word vector dimension to a relatively small value of 128 in this step to reduce the cost of the pre-experiments. Consistent with the results reported in [4], we find that model performance improves significantly with window size up to a window size of 15; beyond that, the correlation becomes less significant. Considering the impact of larger window sizes on computation, we choose a window size of 15 for the formal experiments.
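The analogy evaluation described in Section 4.1 can be sketched as follows; the toy embeddings are fabricated for illustration and are not from any trained model:

```python
import numpy as np

def solve_analogy(emb, a, b, c):
    """Answer 'a is to b as c is to ?': find the word whose embedding has the
    highest cosine similarity to emb[b] - emb[a] + emb[c], excluding a, b, c."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -2.0
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        sim = float(vec @ target / np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Hand-built toy embeddings with a consistent "royalty" offset (illustrative):
emb = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
    "apple": np.array([0.0, 3.0]),
}
answer = solve_analogy(emb, "man", "king", "woman")  # expected: "queen"
```

Excluding the three query words from the candidates is standard for this benchmark; accuracy is the fraction of questions whose nearest neighbor is the expected word.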
### 4.3 Experimental Results

Model | Semantic | Syntactic | Total
---|---|---|---
CBOW | 27.26% | 45.70% | 37.34%
LFW CBOW Eq.3 | **48.39%** | **56.24%** | **52.68%**
LFW CBOW Eq.4 | 47.32% | 55.46% | 51.77%
LFW CBOW Eq.5 | 44.51% | 55.77% | 50.66%
LFW CBOW Eq.6 | 43.14% | 54.75% | 49.48%

Table 2: Performance of LFW

Model | Semantic | Syntactic | Total
---|---|---|---
Skip-gram | 58.43% | 29.63% | 42.70%
EDWS Skip-gram | **61.60%** | **31.59%** | **45.21%**

Table 3: Performance of EDWS

In this step, we conduct experiments on the enwik9 dataset to show our methods’ effectiveness for Word2Vec. We set the word vector dimension to 600, following the configuration in [16]. Table 2 and Table 3 show positive experimental results for both proposed methods, where bold font indicates the highest accuracy in each column. The LFW methods following Eq.3, Eq.4, Eq.5, and Eq.6 improve the overall accuracy of CBOW by 15.3%, 14.4%, 13.3%, and 12.1%, respectively, and EDWS improves that of Skip-gram by more than 2.5%. The results suggest that formula Eq.3 models the relationship between weight and distance best among all proposed formulas. This may be because: 1) there is no significant difference between the relationships of the left and right contexts to the center word, and using the same parameters on both sides reduces overfitting; 2) the closest context words have a significantly higher influence on the center word than the others, while the influence of the remaining words decreases relatively slowly with distance, which is better modeled by a power function that decreases sharply from distance 1 to 2 and gently at larger distances. To give an intuitive impression of the relationship between weights and distances obtained using power and exponential functions, Figure 2 shows the curves formed by the weight-distance relationships learned from the enwik9 dataset when using Eq.3 (power-law decay) and Eq.5 (exponential decay).
The curves in the figure take their points at integer distances within the window size, and the sum of the weights for each curve is normalized to 1 for ease of comparison. The behavior of these curves supports our analysis of the success of Eq.3. We therefore suggest using Eq.3 as the formula for LFW and in future context-modeling efforts to achieve the largest performance improvement. In their best versions, our methods exceed the state-of-the-art accuracy improvements of 13.5% on CBOW and 1.8% on Skip-gram achieved by the independent learnable weights presented in [16]. Additionally, the LFW method consumes less than 15% extra training time, a large improvement over the 72% (computed from its 42% slower speed) consumed by the method in [16]. The test results and the improvements of LFW Eq.3 (the best version), detailed for each category, are shown in Table 4 to better illustrate its effectiveness. Our proposed method brings significant relative improvements to nearly every category. The experimental results prove the effectiveness of both our methods.
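The 72% figure above follows from a simple conversion of slower speed into extra time: if training runs at 58% of the baseline speed, each pass takes $1/0.58\approx 1.72$ times as long. A quick check:

```python
# Reported in [16]: the independent-weights method trains at a 42% slower speed.
# Converting slower *speed* into extra *time*: factor = 1 / (1 - 0.42) ~= 1.72.
slowdown = 0.42
extra_time_pct = (1.0 / (1.0 - slowdown) - 1.0) * 100.0  # ~72% extra time
```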
Figure 2: The curves of normalized weights for power-law decay and exponential decay

| Categories | CBOW | LFW CBOW Eq.3 | $\Delta$ Acc (%) |
|---|---|---|---|
| Semantic | 27.26% (2418/8869) | 48.39% (4292/8869) | 21.13% (77.50%) |
| capital-common-countries | 80.04% (405/506) | 92.49% (468/506) | 12.45% (15.56%) |
| capital-world | 31.06% (1405/4524) | 62.07% (2808/4524) | 31.01% (99.86%) |
| currency | 3.00% (26/866) | 3.93% (34/866) | 0.92% (30.77%) |
| city-in-state | 15.08% (372/2467) | 28.70% (708/2467) | 13.62% (90.32%) |
| family | 41.50% (210/506) | 54.15% (274/506) | 12.65% (30.48%) |
| Syntactic | 45.70% (4879/10675) | 56.24% (6004/10675) | 10.54% (23.06%) |
| adjective-to-adverb | 11.49% (114/992) | 17.34% (172/992) | 5.85% (50.88%) |
| opposite | 17.49% (142/812) | 31.03% (252/812) | 13.55% (77.46%) |
| comparative | 65.62% (874/1332) | 76.73% (1022/1332) | 11.11% (16.93%) |
| superlative | 21.75% (244/1122) | 38.86% (436/1122) | 17.11% (78.69%) |
| present-participle | 44.22% (467/1056) | 45.17% (477/1056) | 0.95% (2.14%) |
| nationality-adjective | 75.05% (1200/1599) | 84.24% (1347/1599) | 9.19% (12.25%) |
| past-tense | 41.60% (649/1560) | 45.38% (708/1560) | 3.78% (9.09%) |
| plural | 62.31% (830/1332) | 81.68% (1088/1332) | 19.37% (31.08%) |
| plural-verbs | 41.26% (359/870) | 57.70% (502/870) | 16.44% (39.83%) |
| Total | 37.34% (7297/19544) | 52.68% (10296/19544) | 15.34% (41.10%) |

Table 4: Performance of LFW Eq.3 on each category of the analogical reasoning task. $\Delta$ indicates the absolute and relative performance improvement of LFW Eq.3 compared to the original CBOW.

## 5 Conclusion

In this paper, we propose two novel methods, LFW and EDWS, to improve the performance of Word2Vec. For CBOW, we introduce distance-related weights calculated by a prior formula with a small number of learnable parameters; the best version can be widely adopted in NLP text modeling tasks and in research on how the influence and relevance between words change with distance.
For Skip-gram, we introduce an epoch-based dynamic window size method that eliminates the irregularity introduced by a dynamic window size at its source. Experiments show that both methods effectively improve the performance of their corresponding models, surpassing state-of-the-art approaches to incorporating distance information into Word2Vec. For future work, we will focus on combining the two methods to further improve model performance, for example by specifying the running epochs of different window sizes based on the learned formula weights at different distances. Research in this area should remain a valuable direction.

## References

* [1] Aoumeur, N.E., Li, Z., Alshari, E.M.: Improving the polarity of text through word2vec embedding for primary classical arabic sentiment analysis. Neural Processing Letters 55(3), 2249–2264 (2023)
* [2] Bengio, Y., Ducharme, R., Vincent, P., Jauvin, C.: A neural probabilistic language model. Journal of Machine Learning Research 3, 1137–1155 (2003)
* [3] Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent dirichlet allocation. Journal of Machine Learning Research 3(Jan), 993–1022 (2003)
* [4] Chang, C.Y., Lee, S.J., Lai, C.C.: Weighted word2vec based on the distance of words. In: 2017 International Conference on Machine Learning and Cybernetics (ICMLC). vol. 2, pp. 563–568. IEEE (2017)
* [5] Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by latent semantic analysis. Journal of the American Society for Information Science 41(6), 391–407 (1990)
* [6] Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., Makhoul, J.: Fast and robust neural network joint models for statistical machine translation. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 1370–1380 (2014)
* [7] Elman, J.L.: Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning 7, 195–225 (1991)
* [8] Hinton, G.E., et al.: Learning distributed representations of concepts. In: Proceedings of the Eighth Annual Conference of the Cognitive Science Society. vol. 1, p. 12. Amherst, MA (1986)
* [9] Karani, D.: Introduction to word embedding and word2vec. Towards Data Science 1 (2018)
* [10] Ling, W., Dyer, C., Black, A.W., Trancoso, I.: Two/too simple adaptations of word2vec for syntax problems. In: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 1299–1304 (2015)
* [11] Mallik, A., Kumar, S.: Word2vec and lstm based deep learning technique for context-free fake news detection. Multimedia Tools and Applications 83(1), 919–940 (2024)
* [12] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
* [13] Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems 26 (2013)
* [14] Morin, F., Bengio, Y.: Hierarchical probabilistic neural network language model. In: International Workshop on Artificial Intelligence and Statistics. pp. 246–252. PMLR (2005)
* [15] Nugraha, M.R.A., Sibaroni, Y.: Classification of depression expressions on twitter using ensemble learning with word2vec. Inform: Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi 9(1), 67–74 (2024)
* [16] Qiu, L., Cao, Y., Nie, Z., Yu, Y., Rui, Y.: Learning word representation considering proximity and ambiguity. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 28 (2014)
* [17] Rakshit, P., Sarkar, A.: A supervised deep learning-based sentiment analysis by the implementation of word2vec and glove embedding techniques. Multimedia Tools and Applications pp. 1–34 (2024)
* [18] Schakel, A.M., Wilson, B.J.: Measuring word significance using distributed representations of words. arXiv preprint arXiv:1508.02297 (2015)
* [19] Schwenk, H., Gauvain, J.L.: Training neural network language models on very large corpora. In: Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. pp. 201–208 (2005)
* [20] Shi, Y., Zhang, W.Q., Cai, M., Liu, J.: Efficient one-pass decoding with nnlm for speech recognition. IEEE Signal Processing Letters 21(4), 377–381 (2014)
* [21] Turian, J., Ratinov, L., Bengio, Y.: Word representations: a simple and general method for semi-supervised learning. In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. pp. 384–394 (2010)
# Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example

Kwan Yun∗ Youngseo Kim∗ Kwanggyoon Seo Chang Wook Seo Junyong Noh KAIST, Visual Media Lab

###### Abstract

We introduce DiffSketch, a method for generating a variety of stylized sketches from images. Our approach focuses on selecting representative features from the rich semantics of deep features within a pretrained diffusion model. This novel sketch generation method can be trained with one manual drawing. Furthermore, efficient sketch extraction is ensured by distilling the trained generator into a streamlined extractor. We select denoising diffusion features through analysis and integrate these selected features with VAE features to produce sketches. Additionally, we propose a sampling scheme for training models using a conditional generative approach. Through a series of comparisons, we verify that the distilled DiffSketch not only outperforms existing state-of-the-art sketch extraction methods but also surpasses diffusion-based stylization methods in the task of extracting sketches.

Figure 1: Results of DiffSketch and $\text{DiffSketch}_{distilled}$, trained with one example. The left sketches were generated by DiffSketch, while the right sketches were extracted from images using $\text{DiffSketch}_{distilled}$.

∗ These authors contributed equally to this work

## 1 Introduction

Sketching is performed in the initial stage of the artistic creation of a drawing, serving as a foundational process for both conceptualizing and conveying artistic intentions. It also serves as a preliminary representation that visualizes the core structure and content of the eventual artwork. As sketches can exhibit distinct styles despite their basic form composed of simple lines, many studies in computer vision and graphics have attempted to train models for automatically extracting stylized sketches [55, 25, 2, 5, 43] that differ from abstract lines [30, 51, 54].
The majority of current sketch extraction approaches utilize image-to-image translation techniques to produce high-quality results. These approaches typically require a large dataset when training an image translation model from scratch, making them hard to personalize for downstream applications such as sketch auto-colorization [6, 17, 66, 63] or sketch-based editing [24, 67, 44, 33]. On the other hand, recent research has explored the utilization of diffusion model [36, 40] features for downstream tasks [60, 16, 64, 50]. Features derived from pretrained diffusion models are known to contain rich semantic and spatial information [50, 60], which helps training with limited data [3]. Previous studies have utilized features extracted from a subset of layers, certain timesteps, or fixed intervals. Unfortunately, these hand-selected features often miss much of the information generated during the entire diffusion process. To this end, we propose DiffSketch, a new method that extracts representative features from a pretrained diffusion model and trains a sketch extraction model with a single example. For feature extraction from the denoising process, we statistically analyze the features and select those that best represent the full feature information of the denoising process. Our new generator aggregates the features from multiple timesteps, fuses them with VAE features, and decodes these fused features. The way we train the generator with synthetic features differs from previous diffusion-based stylization methods in that our method is specifically designed for sketch extraction. While most diffusion-based stylization methods adapt the original pretrained diffusion model by swapping features [11, 50] or by inverting a style into a certain prompt [10, 39], these techniques do not provide fine control over the style of the sketch, making them unsuitable for extracting sketches in a desired style.
In contrast, DiffSketch trains a generator model from scratch specifically for sketch extraction of a desired style. In addition to the newly proposed model architecture, we introduce a method for effective sampling during training. It is easy to train a network with data that share semantic information similar to the ground truth data. However, relying solely on such data for training hinders full utilization of the capacity provided by the diffusion model. Therefore, we adopt a new sampling method to ensure training with diverse examples while enabling effective training. Finally, we distill our network into a streamlined image-to-image translation network for improved inference speed and efficient memory usage. The resulting $\text{DiffSketch}_{distilled}$ is the final network that performs the sketch extraction task. The contributions can be summarized as follows:

* We propose DiffSketch, a novel method that utilizes features from a pretrained diffusion model to generate sketches, learning from one manual sketch example.
* Through analysis, we select the representative features during the diffusion process and utilize the VAE features as fine-detail input to the sketch generator.
* We propose a new sampling method to train the model effectively with synthetic data.

## 2 Related Work

### 2.1 Sketch Extraction

At its core, sketch extraction relies on edge detection. Edge detection serves as the foundation not only for sketch extraction but also for tasks like object detection and segmentation [65, 1]. Initial edge detection studies primarily focused on identifying edges based on abrupt variations in color or brightness [4, 55]. Although these techniques are direct and efficient, requiring no extensive datasets to train on, they often produce outputs with artifacts such as scattered dots or lines. To make extracted sketches more authentic, learning-based strategies have been introduced.
These strategies excel at identifying object borders or rendering lines in distinct styles [57, 58, 25, 22, 21]. Chan et al. [5] took a step beyond prior techniques by incorporating the depth and semantic information of images to procure superior-quality sketches. In a more recent development, Ref2sketch [2] permits extracting stylized sketches that follow the style of a reference sketch through paired training. Semi-Ref2sketch [43] adopted contrastive learning for semi-supervised training. All of these methods share the same limitation: they require a large amount of sketch data for training, which is hard to gather. Due to this data scarcity, training a sketch extraction model is generally challenging. To address this challenge, our method is designed to train a sketch generator using just one manual drawing.

### 2.2 Diffusion Features for Downstream Task

Diffusion models [12, 31] have shown cutting-edge results in tasks related to generating images conditioned on a text prompt [36, 40, 35]. There have been attempts to analyze the features for utilization in downstream tasks such as segmentation [3, 60, 16], image editing [50], and finding dense semantic correspondences [26, 64, 48]. Most earlier studies chose a specific subset of features for their own downstream tasks. Recently, Luo et al. [26] proposed an aggregator that learns features from all layers using equally sampled timesteps. We advance a step further by analyzing and selecting the features from multiple timesteps that represent the overall features. We also propose a two-stage aggregation network and a feature-fusing decoder that utilizes additional information from the VAE to generate finer details.

### 2.3 Deep Features for Sketch Extraction

Most recent sketch extraction methods utilize the deep features of a pretrained model for sketch extraction training [2, 43, 61, 62].
While the approach of utilizing deep features from a pretrained classifier [14, 68] is widely used to measure perceptual similarity, vision-language models such as CLIP [34] are used to measure semantic similarity [5, 51]. These methods use the features indirectly, comparing them for loss calculation during training, instead of using them directly to generate a sketch. Unlike previous approaches, we directly use the denoising diffusion features, which contain rich information, to extract sketches for the first time.

## 3 Diffusion Features

Figure 2: Analysis of sampled features. PCA is applied to DDIM-sampled features from different classes. (a): features colored by human-labeled classes. (b): features colored by denoising timesteps.

During a backward diffusion process, a latent or image with noise repeatedly invokes a UNet [37] to reduce the noise. The UNet produces several intermediate features with different shapes. This collection of features contains rich information about texture and semantics, which can be used to generate an image in various domains. For instance, features from the lower to intermediate layers of the UNet reveal global structures and semantic regions, while features from higher layers exhibit fine, high-frequency information [50, 26]. Furthermore, features become more fine-grained over timesteps [11]. As these features carry different information depending on their layer and timestep, it is important to select diverse features to fully utilize the information they provide.

### 3.1 Diffusion Features Selection

Here, we first present a method for selecting features by analysis. Our approach involves selecting representative features from all the denoising timesteps and building our novel sketch generator, $G_{sketch}$, to extract a sketch from an image by learning from a single example.
To perform this analysis, we first randomly sampled 1,000 images and collected all the features from multiple layers and timesteps during Denoising Diffusion Implicit Model (DDIM) sampling with a total of 50 steps [47]. We conducted principal component analysis (PCA) on these features from multiple classes and all timesteps to examine the distribution of features with respect to their semantics and timesteps. The PCA results are visualized in Figure 2. For our experiments, we manually classified the sampled images and their corresponding features into 17 classes by human perception, where each class contains more than 5 images. As illustrated by the left graphs in Figure 2 (a), features from the same class tend to have similar characteristics, which serves as additional evidence for the previous finding that features contain semantic information [64, 3, 60]. There is also a smooth trajectory across timesteps, as shown in Figure 2 (b). Therefore, selecting features from a hand-crafted interval can be more beneficial than using a single feature, as it provides richer information, as previously suggested [26]. Upon further examination, we observe that features tend to start at a similar point in their initial timesteps ($t\approx 50$) and diverge thereafter (cyan box). In addition, during the initial steps, nearby values do not differ much compared to those in the middle (black box), while the final features exhibit distinct values even though they lie on the same trajectory (orange box). These findings guide the selection of representative features. As we aim to capture the most informative features across timesteps instead of using all of them, we first conducted a K-means clustering analysis [13] with the within-cluster sum of squares (WCSS) distance to determine the number of representative clusters. One way to compute K-means clusters with the WCSS distance is to use the elbow method.
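The elbow computation just mentioned is easy to reproduce. The sketch below runs it on synthetic 2D blobs standing in for the PCA-projected diffusion features (for which, unlike the authors' features, the elbow is unambiguous), using a simplified deterministic K-means:

```python
import numpy as np

def kmeans(X, k, iters=100):
    # deterministic farthest-point initialization, then Lloyd iterations
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def wcss(X, labels, centers):
    # within-cluster sum of squares: total squared distance to assigned centers
    return float(((X - centers[labels]) ** 2).sum())

# toy stand-ins for PCA-projected features: three tight, well-separated blobs
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(m, 0.3, size=(20, 2))
                    for m in ([0, 0], [5, 5], [10, 0])])

curve = [wcss(X, *kmeans(X, k)) for k in range(1, 7)]

# elbow = k where the WCSS curve bends most (largest second difference)
drops = [curve[i] - curve[i + 1] for i in range(len(curve) - 1)]
elbow = 2 + int(np.argmax([drops[i] - drops[i + 1] for i in range(len(drops) - 1)]))
```

On this toy data the second-difference heuristic recovers the planted number of clusters; on real high-dimensional features the curve can flatten without a clear bend, which motivates the alternative criteria discussed next.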
However, we could not identify a clear visual elbow when 30 PCA components were used. Therefore, we used a combination of the Silhouette Score (SS) [38] and the Davies-Bouldin Index (DBI) [7]. For all features from each sampled image, we chose the first $K$ that matched both the $k^{\prime}$-th highest SS and the $k^{\prime}$-th lowest DBI. From this process, we chose $K=13$, although this value may vary with the number of diffusion sampling steps. We select the representative features from the center of each cluster and use them as input to our sketch generation network. To verify that the selected features indeed offer better representation than those selected at equal timesteps or at random, we calculated the minimum Euclidean distance from each projected feature to the selected 13 features across 1,000 images. We found that our method yields the minimum distance (18,615.6) compared to equal-timestep selection (19,004.9) and random selection (23,957.2). More explanations are provided in the supplementary material.

### 3.2 Diffusion Features Aggregation

Inspired by feature aggregation networks for downstream tasks [60, 26], we build a two-level aggregation network and a feature-fusing decoder (FFD), both of which constitute our new sketch generator ${G}_{sketch}$. The architectures of $G_{sketch}$ and the FFD are shown in Figure 4 (b) and (d), respectively. The diffusion features $f_{l,t}$, generated at layer $l$ and timestep $t$, are passed through the representative feature gate $G^{*}$. They are then upsampled to a certain resolution by $U_{md}$ and $U_{tp}$ and passed through a bottleneck layer $B_{l}$, after which they are assigned mixing weights $w$. The second aggregation network receives the first fused feature $F_{fst}$ as an additional input feature.
$$F_{fst}=\sum_{t=0}^{T}\sum_{l=1}^{l_{md}}w_{l,t}\cdot B_{l}(U_{md}(G^{*}(f_{l,t}))) \quad (1)$$

$$F_{fin}=\sum_{t=0}^{T}\sum_{l=l_{md}+1}^{L}w_{l,t}\cdot B_{l}(U_{tp}(G^{*}(f_{l,t})))+\sum_{l=l_{md}+1}^{L}w_{l}\cdot B_{l}(U_{tp}(F_{fst}))$$

Here, $L$ is the total number of UNet layers, while $l_{md}$ indicates the middle layer; these are set to 12 and 9, respectively. The bottleneck layer $B_{l}$ is shared across timesteps. $T$ is the total number of timesteps. $F_{fst}$ denotes the first-level aggregated features and $F_{fin}$ denotes the final aggregated features. These two levels of aggregation allow us to utilize the features in a memory-efficient manner by mixing the features sequentially, first at a lower resolution and then at a higher resolution.

### 3.3 VAE Decoder Features

Unlike recent applications of diffusion features, where semantic correspondences matter more than high-frequency details, sketch generation utilizes both semantic information and high-frequency details such as texture. As shown in Figure 3, VAE decoder features contain high-frequency details such as hair and wrinkles. From this observation, we designed our network to utilize VAE features following the aggregation of UNet features. Extended visualizations are provided in the supplementary material.

Figure 3: Visualization of features from the UNet and VAE in lower- and higher-resolution layers. The lower-resolution layers are the first layers, while the higher-resolution layers are the 11th for the UNet and the 9th for the VAE.

We utilize all the VAE features from the residual blocks to build the FFD. The aggregated features $F_{fin}$ and the VAE features are fused together to generate the output sketch. Specifically, in fusing step $i$, VAE features with the same resolution are passed through a channel reduction layer followed by a convolution layer.
These processed features are concatenated with the previously fused feature $x_{i}$, and the result is passed through the fusion layer to output $x_{i+1}$. For the first step ($i=0$), $x_{0}$ is $F_{fin}$. All features in the same step have the same resolution. We denote the number of total features at step $i$ as $N$ without a subscript for simplicity. This process is shown in Figure 4 (d) and can be expressed as follows:

$$x_{i+1}=\text{FUSE}\left[\left\{\sum_{n=1}^{N}\text{Conv}(\text{CH}(v_{i,n}))\right\}+x_{i}\right] \quad (2)$$

$$\hat{I}_{sketch}=\text{OUT}\left[\left\{\sum_{n=1}^{N}\text{Conv}(\text{CH}(v_{M,n}))\right\}+x_{M}+{I}_{source}\right]$$

where CH is the channel reduction layer, Conv denotes the convolution layers, FUSE is the fusion layer, OUT is the final convolution layer applied before outputting $\hat{I}_{sketch}$, and $\sum$ and addition represent concatenation along the channel dimension. Only at the last step ($i=M$) is the source image $I_{source}$ also concatenated to generate the output sketch.

## 4 DiffSketch

DiffSketch learns to generate a pair of image and sketch through the process described below, which is also shown in Figure 4.

1. First, the user generates an image using a prompt with Stable Diffusion (SD) [36] and draws a corresponding sketch while its diffusion features $F$ are kept.
2. The diffusion features $F$, the corresponding image $I_{source}$, and the drawn sketch $I_{sketch}$ constitute a triplet to train the sketch generator $G_{sketch}$ with directional CLIP guidance.
3. With the trained $G_{sketch}$, paired images and sketches can be generated with a condition. These become the input for the distilled network for fast sketch extraction.

In the following subsections, we describe the structure of the sketch generator $G_{sketch}$ (Sec. 4.1), its loss functions (Sec. 4.2), and the distilled network (Sec. 4.4).

Figure 4: Overview of DiffSketch.
The UNet features generated during the denoising process are fed to the aggregation networks and fused with the VAE features to generate a sketch corresponding to the image that Stable Diffusion generates.

### 4.1 Sketch Generator

Our sketch generator $G_{sketch}$ is built to utilize the features from the denoising diffusion process performed by the UNet and the VAE, as described in Secs. 3.2 and 3.3. $G_{sketch}$ takes the representative features from the UNet as input, aggregates them, and fuses them with the VAE decoder features $v_{i,n}$ to synthesize the corresponding sketch $\hat{I}_{sketch}$. Unlike other image-to-image translation-based sketch extraction methods in which the network takes an image as input [2, 5, 43], our method accepts multiple deep features that have different spatial resolutions and channel counts.

### 4.2 Objectives

To train $G_{sketch}$, we utilize the following loss function:

$$L=L_{\text{rec}}+\lambda_{\text{across}}L_{\text{across}}+\lambda_{\text{within}}L_{\text{within}} \quad (3)$$

where $\lambda_{\text{across}}$ and $\lambda_{\text{within}}$ are balancing weights. $L_{\text{across}}$ and $L_{\text{within}}$ are directional CLIP losses proposed in Mind-the-Gap (MTG) [69], where $L_{\text{within}}$ preserves the direction across the domain by enforcing that the difference between $I_{samp}$ and $I_{source}$ is similar to that between $I_{sampsketch}$ and $I_{sketch}$ in CLIP embedding space. Similarly, $L_{\text{across}}$ enforces that the difference between $I_{sampsketch}$ and $I_{samp}$ is similar to that between $I_{sketch}$ and $I_{source}$. $L_{\text{rec}}$ enforces that the sketch generated from one known feature $F$ is similar to the ground truth sketch $I_{sketch}$. While MTG uses an MSE loss for pixel-wise reconstruction, we use an L1 distance to avoid blurry sketch results, which is important in the generation of stylized sketches.
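The two directional losses just described can be sketched in a few lines. The cosine-based form below is our assumption of the usual directional CLIP loss; the CLIP encoder is abstracted away, with plain random vectors standing in for embeddings:

```python
import numpy as np

def direction(a, b):
    # unit direction from embedding b to embedding a
    d = a - b
    return d / (np.linalg.norm(d) + 1e-8)

def directional_clip_loss(e_samp, e_source, e_sampsketch, e_sketch):
    # L_within: direction (I_samp - I_source) should match (I_sampsketch - I_sketch)
    # L_across: direction (I_sampsketch - I_samp) should match (I_sketch - I_source)
    # each loss is 1 - cosine similarity of the two directions
    within = 1.0 - direction(e_samp, e_source) @ direction(e_sampsketch, e_sketch)
    across = 1.0 - direction(e_sampsketch, e_samp) @ direction(e_sketch, e_source)
    return within, across

rng = np.random.default_rng(0)
e_source, e_sketch = rng.normal(size=(2, 512))  # stand-ins for CLIP embeddings
shift = rng.normal(size=512)                    # a shared "domain shift"

# a consistent pair: both images and both sketches move by the same shift
e_samp, e_sampsketch = e_source + shift, e_sketch + shift
within, across = directional_clip_loss(e_samp, e_source, e_sampsketch, e_sketch)

# an inconsistent sketch embedding should incur a much larger within-loss
e_bad = rng.normal(size=512)
within_bad, _ = directional_clip_loss(e_samp, e_source, e_bad, e_sketch)
```

When the image pair and sketch pair move in the same direction in embedding space, both losses vanish; an unrelated sketch embedding drives the loss toward 1, which is the behavior the training objective exploits.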
Our $L_{\text{rec}}$ can be expressed as follows:

$$L_{\text{rec}}=\lambda_{\text{L1}}L_{\text{L1}}+\lambda_{\text{LPIPS}}L_{\text{LPIPS}}+\lambda_{\text{CLIPsim}}L_{\text{CLIPsim}} \quad (4)$$

where $\lambda_{\text{L1}}$, $\lambda_{\text{LPIPS}}$, and $\lambda_{\text{CLIPsim}}$ are balancing weights. $L_{\text{CLIPsim}}$ calculates semantic similarity as a cosine distance, $L_{\text{LPIPS}}$ [68] captures perceptual similarity, and $L_{\text{L1}}$ calculates the pixel-wise reconstruction error. More details can be found in Sec. 5.1.

### 4.3 Sampling Scheme for Training

Our method uses one source image and its corresponding sketch as the only ground truth when guiding the sketch style, using the direction of CLIP embeddings. Therefore, our losses rely on a well-constructed CLIP manifold. When the domains of the two images $I_{source}$ and $I_{samp}$ differ greatly, the confidence in the directional CLIP loss generally becomes low (experiment details are provided in the supplementary material). To fully utilize the capacity of the diffusion model and produce sketches in diverse domains, however, it is important to train the model on diverse examples. To ensure learning from diverse examples without decreasing the CLIP loss confidence, we propose a novel sampling scheme: condition diffusion sampling for training (CDST). We envision that this sampling can be useful when training a model with a conditional generator. This method initially samples data $I_{samp}$ from one known condition $C$ and gradually shifts the sampling distribution toward a random one via a diffusion algorithm while training the network.
The condition at iteration $iter$ ($0\leq iter\leq S$) can be described as follows:

$$\alpha_{iter}=\sqrt{1-\frac{iter}{S}},\qquad\beta_{iter}=\sqrt{\frac{iter}{S}}, \quad (5)$$

$$C_{iter}=\frac{\alpha_{iter}}{\alpha_{iter}+\beta_{iter}}C+\frac{\beta_{iter}}{\alpha_{iter}+\beta_{iter}}D_{SD},$$

where $D_{SD}$ represents the distribution of the pretrained SD, and $S$ indicates the total diffusion duration during training.

### 4.4 Distillation

Once the sketch generator $G_{sketch}$ is trained, DiffSketch can generate pairs of images and sketches in the trained style. This generation can be performed either randomly or with a specific condition. Due to the nature of the denoising diffusion model, however, in which the result is refined through the denoising process, long processing times and high memory usage are required. Moreover, when extracting sketches from images, quality can be degraded by the inversion process. Therefore, to perform image-to-sketch extraction efficiently while ensuring high-quality results, we train $\text{DiffSketch}_{distilled}$ using Pix2PixHD [52]. To train $\text{DiffSketch}_{distilled}$, we extract 30k pairs of image and sketch samples using our trained DiffSketch, adhering to CDST. Additionally, we employ regularization to ensure that the ground truth sketch $I_{sketch}$ can be generated and discriminated effectively during the training of $\text{DiffSketch}_{distilled}$. With this trained model, sketches can be extracted from images in a given style much more quickly than with the original DiffSketch.

Table 1: Quantitative results of the ablation study with LPIPS and SSIM. Best scores are denoted in bold.
| Methods | anime-informative LPIPS↓ | anime-informative SSIM↑ | HED LPIPS↓ | HED SSIM↑ | XDoG LPIPS↓ | XDoG SSIM↑ | Average LPIPS↓ | Average SSIM↑ |
|---|---|---|---|---|---|---|---|---|
| Ours | 0.2054 | 0.6835 | **0.2117** | **0.5420** | **0.1137** | 0.6924 | **0.1769** | **0.6393** |
| Non-representative features 1 | 0.2154 | 0.6718 | 0.2383 | 0.5137 | 0.1221 | 0.6777 | 0.1919 | 0.6211 |
| Non-representative features 2 | 0.2042 | 0.6869 | 0.2260 | 0.5281 | 0.1194 | 0.6783 | 0.1832 | 0.6311 |
| One timestep features (t=0) | 0.2135 | 0.6791 | 0.2251 | 0.5347 | 0.1146 | **0.6962** | 0.1844 | 0.6367 |
| W/O CDST | **0.2000** | **0.6880** | 0.2156 | 0.5341 | 0.1250 | 0.6691 | 0.1802 | 0.6304 |
| W/O L1 | 0.2993 | 0.3982 | 0.2223 | 0.5011 | 0.1203 | 0.6547 | 0.2140 | 0.5180 |
| FFD W/O VAE features | 0.2650 | 0.5044 | 0.2650 | 0.4061 | 0.2510 | 0.3795 | 0.2603 | 0.4300 |

## 5 Experiments

### 5.1 Implementation Details

We implemented DiffSketch and trained the generator $G_{sketch}$ on an Nvidia V100 GPU for 1,200 iterations. When training DiffSketch, we applied CDST with $S$ in Eq. 5 set to 1,000. The model was trained with a fixed learning rate of 1e-4. The balancing weights $\lambda_{\text{across}}$, $\lambda_{\text{within}}$, $\lambda_{\text{L1}}$, $\lambda_{\text{LPIPS}}$, and $\lambda_{\text{CLIPsim}}$ are fixed at 1, 1, 30, 15, and 30, respectively. $\text{DiffSketch}_{distilled}$ was trained on two A6000 GPUs using the same architecture and parameters as its original paper, except for the output channel, which we set to one. We also applied regularization every 16 iterations. $\text{DiffSketch}_{distilled}$ was trained with 30,000 pairs sampled from DiffSketch with CDST ($S=30,000$). LPIPS [68] and SSIM [53] were used as evaluation metrics in both the ablation study and the comparison with baselines. LPIPS was used to calculate perceptual similarity with a pretrained classifier, and SSIM was calculated for the structural similarity of sketch images.

### 5.2 Datasets

For training, DiffSketch requires a sketch corresponding to an image generated from SD.
To facilitate a numerical comparison, we established ground truth for the given images. Specifically, three distinct styles were employed for quantitative evaluation: 1) HED [59] utilizes nested edge detection and is one of the most widely used edge detection methods. 2) XDoG [56] takes an algorithmic approach, using a difference of Gaussians to extract sketches. 3) Informative-anime [5] employs informative learning; it is the state of the art among single-modal sketch extraction methods and is trained on the Anime Colorization dataset [18], which consists of 14,224 sketches. For qualitative evaluation, we added hand-drawn sketches in two more styles. For testing, we employed the test set from the BSDS500 dataset [29] and also randomly sampled an additional 1,000 images from the test set of the Common Objects in Context (COCO) dataset [23]. As a result, our training set consisted of 3 sketches, and the test dataset consisted of 3,600 image-sketch pairs (1,200 pairs for each style). The two hand-drawn sketch styles were used only for the perceptual study because there is no ground truth to compare against.

### 5.3 Ablation Study

Figure 5: Visual examples of the ablation study. Ours generates higher-quality results with details, such as the face separated from the hair region, compared to the alternatives.

Figure 6: Visualization of the additional ablation: Ours was trained and sampled with CDST, whereas W/O CDST was trained and sampled randomly.

We conducted an ablation study on each component of our method against the baselines, as shown in Table 1. Experiments were performed to verify the contribution of each component: feature selection, CDST, losses, and the FFD. To perform the ablation study, we randomly sampled 100 images, extracted sketches with HED, XDoG, and Anime-informative, and paired them with all 100 images. All seeds were fixed to generate sketches from the same samples. The ablation study was conducted as follows.
For Non-representative features, we randomly selected features from the denoising timesteps while keeping the number of timesteps equal to ours (13). We performed this random selection and analysis twice. For the one-timestep feature, we used only the features from the final timestep $t=0$. To produce a result without CDST, we used random text prompts to guide the diffusion sampling process during training. For the alternative loss approach, we replaced the L1 loss with the L2 loss for pixel-level reconstruction, as proposed in MTG. To evaluate the effect of the FFD, we produced sketches after removing the VAE features. The quantitative and qualitative results of the ablation study are shown in Table 1 and Figure 5, respectively. Ours achieved the highest average scores on both indices. Both Non-representative feature variants achieved overall low scores, indicating that representative feature selection helps obtain rich information. Similarly, using one-timestep features achieved lower scores than ours on average, showing the importance of including diverse features. W/O CDST scored lower than ours on both the HED and XDoG styles. W/O L1 and FFD W/O VAE features performed the worst, due to blurry and blocky output, respectively; the blocky results stem from the lack of fine detail from the VAE.

#### Condition Diffusion Sampling for Training

While we tested on randomly generated images for quantitative evaluation, our CDST can be applied both to training DiffSketch and to sampling for training $\text{DiffSketch}_{distilled}$. Therefore, we performed an additional ablation study on CDST, comparing Ours (trained and sampled with CDST) with W/O CDST (trained and sampled randomly). As shown in Figure 6, the outline of the sketch is clearly reproduced, following the style, when CDST is used.

### 5.4 Comparison with Baselines

We compared our method with 5 different alternatives, including state-of-the-art sketch extraction methods [2, 43] and diffusion-based methods [39, 19, 9].
Ref2sketch [2] and Semi-Ref2sketch [43] are methods specifically designed to extract sketches in the style of a reference, using a large network pretrained on diverse sketches in a supervised (Ref2sketch) or semi-supervised (Semi-Ref2sketch) manner. DiffuseIT [19] is designed for image-to-image translation by disentangling style and content. DreamBooth [39] finetunes a Stable Diffusion model to generate personalized images, while Textual Inversion [10] optimizes an additional text embedding to generate a personalized concept for a style or object. For DreamBooth and Textual Inversion, DDIM inversion was conducted to extract sketches. Table 2 presents the results of the quantitative evaluation on the BSDS500 and COCO datasets in a one-shot setting. Overall, ours achieved the best scores. While Semi-Ref2sketch scored higher on some SSIM scores, that method relies on a large sketch dataset for training, whereas ours requires only one sketch. Figure 7 presents visual results produced by the different methods. While Semi-Ref2sketch and Ref2sketch generated sketches of higher quality than the other baselines, they do not faithfully follow the style of the reference sketches, especially for dense styles. Diffusion-based methods sometimes overfit to the style image (DiffuseIT) or change the content of the images (DreamBooth, Textual Inversion). $\text{DiffSketch}_{distilled}$ generated superior results compared to these baselines, effectively maintaining both style and content.

Figure 7: Qualitative comparison with alternative sketch extraction methods.

Table 2: Quantitative comparison of different methods on the BSDS500 and COCO datasets.
BSDS500:

| Methods | anime LPIPS↓ | anime SSIM↑ | HED LPIPS↓ | HED SSIM↑ | XDoG LPIPS↓ | XDoG SSIM↑ | average LPIPS↓ | average SSIM↑ |
|---|---|---|---|---|---|---|---|---|
| $\text{Ours}_{distilled}$ | 0.21746 | 0.49343 | 0.22706 | 0.59314 | 0.14280 | 0.64874 | 0.19577 | 0.57844 |
| Ref2sketch | 0.33621 | 0.46932 | 0.41993 | 0.31448 | 0.57096 | 0.13095 | 0.44237 | 0.30492 |
| Semi-Ref2sketch | 0.23916 | 0.50972 | 0.39675 | 0.34200 | 0.50447 | 0.30918 | 0.38013 | 0.38697 |
| DiffuseIT | 0.48365 | 0.29789 | 0.49217 | 0.19104 | 0.57335 | 0.11030 | 0.51639 | 0.19974 |
| DreamBooth | 0.80608 | 0.30149 | 0.74550 | 0.18523 | 0.72326 | 0.19465 | 0.75828 | 0.22712 |
| Textual Inversion | 0.82789 | 0.26373 | 0.77098 | 0.16416 | 0.64662 | 0.21953 | 0.74850 | 0.21581 |

COCO:

| Methods | anime LPIPS↓ | anime SSIM↑ | HED LPIPS↓ | HED SSIM↑ | XDoG LPIPS↓ | XDoG SSIM↑ | average LPIPS↓ | average SSIM↑ |
|---|---|---|---|---|---|---|---|---|
| $\text{Ours}_{distilled}$ | 0.17634 | 0.36021 | 0.20039 | 0.36093 | 0.14806 | 0.38319 | 0.17493 | 0.36811 |
| Ref2sketch | 0.32142 | 0.50517 | 0.37764 | 0.37230 | 0.56012 | 0.16835 | 0.41973 | 0.34861 |
| Semi-Ref2sketch | 0.21337 | 0.64732 | 0.32920 | 0.39487 | 0.47974 | 0.31894 | 0.34077 | 0.45371 |
| DiffuseIT | 0.46527 | 0.36092 | 0.47905 | 0.24611 | 0.56360 | 0.14595 | 0.50264 | 0.25099 |
| DreamBooth | 0.76399 | 0.30517 | 0.72278 | 0.22066 | 0.67909 | 0.21655 | 0.72195 | 0.24746 |
| Textual Inversion | 0.81458 | 0.29168 | 0.78835 | 0.19952 | 0.63215 | 0.22074 | 0.74503 | 0.23731 |

### 5.5 Perceptual Study

We conducted a user study to evaluate the different sketch extraction methods on human perception. We recruited 45 participants, who completed a survey using test images from the two datasets, processed into five different styles. Each participant was presented with a total of 20 sets of source image, target sketch style, and resulting sketch, and was asked to choose the sketch that best follows the given style while preserving the content of the source image.
Because the results should not depend on the demographic distribution, we did not restrict participants to a particular group of people, unlike previous sketch studies [2, 43, 5]. As shown in Table 3, our method received the highest scores when compared with the alternative methods. Ours outperformed the diffusion-based methods by a large margin and even received a higher preference rating than the specialized sketch extraction method that was trained on a large sketch dataset.

Table 3: Results from the user perceptual study given a style example and the source image. The percentage indicates the selection frequency.

| Methods | User Score |
|---|---|
| Ours | 68.67% |
| Ref2sketch | 6.00% |
| Semi-Ref2sketch | 18.56% |
| DiffuseIT | 0.22% |
| DreamBooth | 0.00% |
| Textual Inversion | 0.22% |

## 6 Limitation and Conclusion

We proposed DiffSketch, a novel method that trains a sketch generator using representative features and extracts sketches in diverse styles. For the first time, we performed sketch extraction from the features of a diffusion model and demonstrated that our method outperforms previous state-of-the-art sketch extraction methods. The ability to extract sketches in diverse styles, trained with a single example, will have various use cases not only for artistic purposes but also for personalizing sketch-to-image retrieval and sketch-based image editing. We built our generator network specialized for generating sketches by fusing aggregated features with the features from a VAE decoder. Consequently, our method works well with diverse sketches, including dense sketches and outlines. However, because our method does not directly employ a loss function that compares stroke styles, it fails to generate highly abstract sketches or pointillism. One possible research direction could involve incorporating a new sketch style loss that does not require additional sketch data, such as penalizing based on stroke similarity in close-ups.
Although we focused on sketch extraction, our analysis of selecting representative features and the proposed training scheme are not limited to the domain of sketches. Extracting representative features holds potential to improve applications leveraging diffusion features, including semantic segmentation, visual correspondence, and depth estimation. We believe this research direction promises to broaden the impact and utility of diffusion feature-based applications.

## References

* Arbelaez et al. [2010] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. _IEEE transactions on pattern analysis and machine intelligence_ , 33(5):898–916, 2010.
* Ashtari et al. [2022] Amirsaman Ashtari, Chang Wook Seo, Cholmin Kang, Sihun Cha, and Junyong Noh. Reference based sketch extraction via attention mechanism. _ACM Transactions on Graphics (TOG)_ , 41(6):1–16, 2022.
* Baranchuk et al. [2021] Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. _arXiv preprint arXiv:2112.03126_ , 2021.
* Canny [1986] John Canny. A computational approach to edge detection. _IEEE Transactions on pattern analysis and machine intelligence_ , (6):679–698, 1986.
* Chan et al. [2022] Caroline Chan, Frédo Durand, and Phillip Isola. Learning to generate line drawings that convey geometry and semantics. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 7915–7925, 2022.
* Ci et al. [2018] Yuanzheng Ci, Xinzhu Ma, Zhihui Wang, Haojie Li, and Zhongxuan Luo. User-guided deep anime line art colorization with conditional adversarial networks. In _Proceedings of the 26th ACM international conference on Multimedia_ , pages 1536–1544, 2018.
* Davies and Bouldin [1979] David L Davies and Donald W Bouldin. A cluster separation measure.
_IEEE transactions on pattern analysis and machine intelligence_ , (2):224–227, 1979. * Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on_ , pages 248–255. IEEE, 2009. * Gal et al. [2022a] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. _arXiv preprint arXiv:2208.01618_ , 2022a. * Gal et al. [2022b] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. _arXiv preprint arXiv:2208.01618_ , 2022b. * Hertz et al. [2022] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. _arXiv preprint arXiv:2208.01626_ , 2022. * Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in neural information processing systems_ , 33:6840–6851, 2020. * Hotelling [1933] Harold Hotelling. Analysis of a complex of statistical variables into principal components. _Journal of educational psychology_ , 24(6):417, 1933. * Johnson et al. [2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In _Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14_ , pages 694–711. Springer, 2016. * Karras et al. [2019] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In _CVPR_ , 2019. * Khani et al. [2023] Aliasghar Khani, Saeid Asgari Taghanaki, Aditya Sanghi, Ali Mahdavi Amiri, and Ghassan Hamarneh. 
Slime: Segment like me. _arXiv preprint arXiv:2309.03179_ , 2023. * Kim et al. [2019] Hyunsu Kim, Ho Young Jhoo, Eunhyeok Park, and Sungjoo Yoo. Tag2pix: Line art colorization using text tag with secat and changing loss. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 9056–9065, 2019. * Kim [2018] Taebum Kim. Anime sketch colorization pair. https://www.kaggle.com/ktaebum/anime-sketch-colorization-pair, 2018. * Kwon and Ye [2023] Gihyun Kwon and Jong Chul Ye. Diffusion-based image translation using disentangled style and content representation. In _The Eleventh International Conference on Learning Representations_ , 2023. * Levina and Bickel [2001] Elizaveta Levina and Peter Bickel. The earth mover’s distance is the mallows distance: Some insights from statistics. In _Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001_ , pages 251–256. IEEE, 2001. * Li et al. [2017] Chengze Li, Xueting Liu, and Tien-Tsin Wong. Deep extraction of manga structural lines. _ACM Transactions on Graphics (SIGGRAPH 2017 issue)_ , 36(4):117:1–117:12, 2017. * Li et al. [2019] Mengtian Li, Zhe Lin, Radomir Mech, Ersin Yumer, and Deva Ramanan. Photo-sketching: Inferring contour drawings from images. In _2019 IEEE Winter Conference on Applications of Computer Vision (WACV)_ , pages 1403–1412. IEEE, 2019. * Lin et al. [2015] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. Microsoft coco: Common objects in context, 2015. * Liu et al. [2022] Feng-Lin Liu, Shu-Yu Chen, Yukun Lai, Chunpeng Li, Yue-Ren Jiang, Hongbo Fu, and Lin Gao. Deepfacevideoediting: Sketch-based deep editing of face videos. _ACM Transactions on Graphics_ , 41(4):167, 2022. * lllyasviel [2017] lllyasviel. sketchkeras. https://github.com/lllyasviel/sketchKeras, 2017. * Luo et al. 
[2023] Grace Luo, Lisa Dunlap, Dong Huk Park, Aleksander Holynski, and Trevor Darrell. Diffusion hyperfeatures: Searching through time and space for semantic correspondence. _arXiv preprint arXiv:2305.14334_ , 2023. * Mardia [1970] Kanti V Mardia. Measures of multivariate skewness and kurtosis with applications. _Biometrika_ , 57(3):519–530, 1970. * Mardia [1974] Kanti V Mardia. Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. _Sankhyā: The Indian Journal of Statistics, Series B_ , pages 115–128, 1974. * Martin et al. [2001] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In _Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001_ , pages 416–423. IEEE, 2001. * Mo et al. [2021] Haoran Mo, Edgar Simo-Serra, Chengying Gao, Changqing Zou, and Ruomei Wang. General virtual sketching framework for vector line art. _ACM Transactions on Graphics (TOG)_ , 40(4):1–14, 2021. * Nichol and Dhariwal [2021] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In _International Conference on Machine Learning_ , pages 8162–8171. PMLR, 2021. * Oquab et al. [2023] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_ , 2023. * Portenier et al. [2018] Tiziano Portenier, Qiyang Hu, Attila Szabo, Siavash Arjomand Bigdeli, Paolo Favaro, and Matthias Zwicker. Faceshop: Deep sketch-based face image editing. _arXiv preprint arXiv:1804.08972_ , 2018. * Radford et al. 
[2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pages 8748–8763. PMLR, 2021. * Ramesh et al. [2021] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In _International Conference on Machine Learning_ , pages 8821–8831. PMLR, 2021. * Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 10684–10695, 2022. * Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18_ , pages 234–241. Springer, 2015. * Rousseeuw [1987] Peter J Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. _Journal of computational and applied mathematics_ , 20:53–65, 1987. * Ruiz et al. [2023] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 22500–22510, 2023. * Saharia et al. [2022] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. 
_Advances in Neural Information Processing Systems_ , 35:36479–36494, 2022. * Schuhmann et al. [2021] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs, 2021. * Schuhmann et al. [2022] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. _Advances in Neural Information Processing Systems_ , 35:25278–25294, 2022. * Seo et al. [2023] Chang Wook Seo, Amirsaman Ashtari, and Junyong Noh. Semi-supervised reference-based sketch extraction using a contrastive learning framework. _ACM Transactions on Graphics (TOG)_ , 42(4):1–12, 2023. * Seo et al. [2022] Junyoung Seo, Gyuseong Lee, Seokju Cho, Jiyoung Lee, and Seungryong Kim. Midms: Matching interleaved diffusion models for exemplar-based image translation. _arXiv preprint arXiv:2209.11047_ , 2022. * Shapiro and Wilk [1965] Samuel Sanford Shapiro and Martin B Wilk. An analysis of variance test for normality (complete samples). _Biometrika_ , 52(3/4):591–611, 1965. * sharpei pups [2014] sharpei pups. 6.5 weeks old sharpei puppies. https://www.youtube.com/watch?v=plIyQg6llp8, 2014. Accessed: 23-11-2023. * Song et al. [2020] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. _arXiv preprint arXiv:2010.02502_ , 2020. * Tang et al. [2023] Luming Tang, Menglin Jia, Qianqian Wang, Cheng Perng Phoo, and Bharath Hariharan. Emergent correspondence from image diffusion. _arXiv preprint arXiv:2306.03881_ , 2023. * TheSaoPauloSeries [2013] TheSaoPauloSeries. São paulo city mini-documentary: (full hd) the são paulo series. https://www.youtube.com/watch?v=A3pBJTTjwCM, 2013. Accessed: 23-11-2023. * Tumanyan et al. 
[2023] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 1921–1930, 2023. * Vinker et al. [2022] Yael Vinker, Ehsan Pajouheshgar, Jessica Y Bo, Roman Christian Bachmann, Amit Haim Bermano, Daniel Cohen-Or, Amir Zamir, and Ariel Shamir. Clipasso: Semantically-aware object sketching. _ACM Transactions on Graphics (TOG)_ , 41(4):1–11, 2022. * Wang et al. [2018] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018. * Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_ , 13(4):600–612, 2004. * Willett et al. [2023] Nora S Willett, Fernando de Goes, Kurt Fleischer, Mark Meyer, and Chris Burrows. Stylizing ribbons: Computing surface contours with temporally coherent orientations. _IEEE Transactions on Visualization and Computer Graphics_ , 2023. * Winnemöller [2011] Holger Winnemöller. Xdog: advanced image stylization with extended difference-of-gaussians. In _Proceedings of the ACM SIGGRAPH/eurographics symposium on non-photorealistic animation and rendering_ , pages 147–156, 2011. * Winnemöller et al. [2012] Holger Winnemöller, Jan Eric Kyprianidis, and Sven C Olsen. Xdog: An extended difference-of-gaussians compendium including advanced image stylization. _Computers & Graphics_, 36(6):740–753, 2012. * Xiang et al. [2021] Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, and Xiaohui Shen. Anime2sketch: A sketch extractor for anime arts with deep networks. https://github.com/Mukosame/Anime2Sketch, 2021. * Xie and Tu [2015a] Saining Xie and Zhuowen Tu. 
Holistically-nested edge detection. In _Proceedings of the IEEE international conference on computer vision_ , pages 1395–1403, 2015a. * Xie and Tu [2015b] Saining Xie and Zhuowen Tu. Holistically-nested edge detection, 2015b. * Xu et al. [2023] Jiarui Xu, Sifei Liu, Arash Vahdat, Wonmin Byeon, Xiaolong Wang, and Shalini De Mello. Open-vocabulary panoptic segmentation with text-to-image diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 2955–2966, 2023. * Yi et al. [2019] Ran Yi, Yong-Jin Liu, Yu-Kun Lai, and Paul L Rosin. Apdrawinggan: Generating artistic portrait drawings from face photos with hierarchical gans. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 10743–10752, 2019. * Yi et al. [2020] Ran Yi, Yong-Jin Liu, Yu-Kun Lai, and Paul L Rosin. Unpaired portrait drawing generation via asymmetric cycle mapping. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 8217–8225, 2020. * Yuan and Simo-Serra [2021] Mingcheng Yuan and Edgar Simo-Serra. Line art colorization with concatenated spatial attention. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 3946–3950, 2021. * Zhang et al. [2023a] Junyi Zhang, Charles Herrmann, Junhwa Hur, Luisa Polania Cabrera, Varun Jampani, Deqing Sun, and Ming-Hsuan Yang. A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. _arXiv preprint arXiv:2305.15347_ , 2023a. * Zhang et al. [2015] Kaihua Zhang, Lei Zhang, Kin-Man Lam, and David Zhang. A level set approach to image segmentation with intensity inhomogeneity. _IEEE transactions on cybernetics_ , 46(2):546–557, 2015. * Zhang et al. [2018a] Lvmin Zhang, Chengze Li, Tien-Tsin Wong, Yi Ji, and Chunping Liu. Two-stage sketch colorization. _ACM Transactions on Graphics (TOG)_ , 37(6):1–14, 2018a. * Zhang et al. 
[2023b] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 3836–3847, 2023b.
* Zhang et al. [2018b] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 586–595, 2018b.
* Zhu et al. [2022] Peihao Zhu, Rameen Abdal, John Femiani, and Peter Wonka. Mind the gap: Domain gap control for single shot domain adaptation for generative adversarial networks. In _International Conference on Learning Representations_ , 2022.

Supplementary Material

Figure 8: Visualization of WCSS values according to the number of clusters used for K-means clustering. The left plots show the WCSS of the features from a randomly sampled image, while the right plot shows the average WCSS of the features from 1,000 randomly sampled images.

## Overview

This supplementary material consists of five sections. Sec. A describes implementation details. Sec. B provides additional details and findings on diffusion feature selection. Sec. C presents extended details of the VAE decoder features. Sec. D contains the results of additional experiments on CDST. Lastly, Sec. E presents additional qualitative results with various style sketches.

## A. Implementation Details

#### DiffSketch

DiffSketch leverages Stable Diffusion v1.4 sampled with DDIM [47], pretrained on the LAION-5B [42] dataset, which produces images of resolution 512 $\times$ 512. With the pretrained Stable Diffusion, we use a total of $T=50$ time steps for sampling. The training of DiffSketch was performed for 1,200 iterations, which required less than 3 hours on an Nvidia V100 GPU. For the training using HED [59], we concatenated the first two layers with the first three layers to stylize the sketch.
In the case of XDoG [55], we used the Gary Grossi style.

#### $\text{DiffSketch}_{distilled}$

$\text{DiffSketch}_{distilled}$ was developed to conduct sketch extraction efficiently with the streamlined generator. The training of $\text{DiffSketch}_{distilled}$ was performed for 10 epochs on 30,000 sketch-image pairs generated from DiffSketch, following CDST. The training of $\text{DiffSketch}_{distilled}$ required approximately 5 hours on two Nvidia A6000 GPUs. The inference times of DiffSketch and $\text{DiffSketch}_{distilled}$ were 4.74 seconds and 0.0139 seconds, respectively, when tested on an Nvidia A5000 GPU with images of the same resolution.

#### Comparison with Baselines

For the baselines, the settings used in our study were based on the official code provided by the authors and information obtained from their respective papers. For both Ref2sketch [2] and Semi-Ref2sketch [43], we used the given checkpoint, the official pre-trained model provided by the authors. For DiffuseIT [19], we also used the official code and checkpoint provided by the authors, in which the diffusion model was trained on the ImageNet [8] dataset rather than FFHQ [15], because our comparison is not constrained to faces. For DreamBooth [39] and Textual Inversion [10], we used DDIM inversion [47] to invert the source image to the latent code of Stable Diffusion.

## B. Diffusion Features Selection

To conduct K-means clustering for diffusion feature selection, we first employed the elbow method and visualized the results. However, a distinct elbow was not visually apparent, as shown in Figure 8. The left six plots show WCSS values from randomly selected images out of our 1,000 test images. All six plots show similar patterns, making it hard to select a definitive elbow, as stated in the main paper. The right plot, which exhibits a similar pattern, shows the average WCSS over all 1,000 images.
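The WCSS ("elbow") sweep described above can be reproduced with a standard K-means inertia computation. Below is a minimal pure-Python sketch on toy 2-D points; the actual inputs are high-dimensional diffusion features, so the data here is purely illustrative.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_wcss(points, k, iters=50, seed=0):
    """Run Lloyd's k-means and return the within-cluster sum of squares (WCSS)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        # Recompute centers (keep the old center if a cluster went empty).
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return sum(min(dist2(p, c) for c in centers) for p in points)

# Toy data: three well-separated 2-D clusters standing in for the features.
pts = [(random.Random(i).gauss(m, 0.3), random.Random(i + 99).gauss(m, 0.3))
       for m in (0.0, 5.0, 10.0) for i in range(30)]
wcss_curve = [kmeans_wcss(pts, k) for k in range(1, 8)]
# WCSS keeps dropping as k grows; without a sharp elbow the curve alone
# cannot pick k, motivating the silhouette / Davies-Bouldin criteria.
```

On real features (as in Figure 8) the curve decreases smoothly, which is exactly why the elbow heuristic proved inconclusive here.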
Therefore, we chose to use the Silhouette score [38] and the Davies-Bouldin index [7], two of the most widely used numerical methods for choosing the optimal number of clusters. However, these are two different methods, and their results do not always match each other. We visualized the results and found cases where the two methods contradict each other, as shown in Figure 9. Therefore, we chose the number of clusters that first matches the $i^{\text{th}}$ highest Silhouette score and the $i^{\text{th}}$ lowest Davies-Bouldin index simultaneously. This process of choosing the optimal number of clusters can be written as follows:

Algorithm 1 Finding the Optimal Number of Clusters

1: $MAX\_clusters = Total\_time\_steps/2$
2: $sil\_indices \leftarrow \text{sorted}(\text{range}(MAX\_clusters), \text{key}=\lambda k: silhouette\_scores[k], \text{reverse}=True)$
3: $db\_indices \leftarrow \text{sorted}(\text{range}(MAX\_clusters), \text{key}=\lambda k: db\_scores[k], \text{reverse}=False)$
4: for $i \leftarrow 0$ to $MAX\_clusters$ do
5:  if $sil\_indices[i]$ in $db\_indices[:i+1]$ then
6:   $k\_optimal = sil\_indices[i] + 1$
7:   break
8:  end if
9: end for

We conducted this process twice with two different numbers of PCA components (10 and 30), yielding the results shown in Figure 10. The averages (13.26 and 13.34) and standard deviations (0.69 and 0.69) were calculated. As the mode value with both PCA components was 13, and the rounded average was also 13, we chose our optimal k to be 13. Using this number of clusters, we chose the representative feature of each cluster as the one nearest to the cluster center.

Figure 9: Visualization of contradicting results of Silhouette scores and Davies-Bouldin indices on five different images.

From this process, we ended up with the following t values: [0, 3, 8, 12, 16, 21, 25, 28, 32, 35, 39, 43, 47]. To verify whether an optimal number of clusters chosen per image can really be applied globally, we compared our selected features against the baselines.
These baselines included sampling at equal time intervals (t = [i*4+1 for i in range(0, 13)]) and randomly selecting 13 values. We calculated the minimum Euclidean distance from each feature and confirmed that our method resulted in the minimum distance across 1,000 randomly sampled images, as shown in Table 4.

Table 4: Sum of the minimum distances from all features.

| Method | Euclidean Distance |
|---|---|
| Ours | 18,615.6 |
| Equal time steps | 19,004.9 |
| Random sample | 23,957.2 |

Figure 10: Histograms of the optimal k value with different numbers of PCA components.

Figure 11: Additional analysis of sampled features. PCA is applied to DDIM-sampled features from different classes. Top: features colored by human-labeled classes. Bottom: features colored by denoising timesteps.

In the main paper, we found several key insights through the visualization of features within the manually selected classes, which we summarize here. First, semantically similar images lead to similar, although not identical, trajectories. Second, features in the initial stage of the diffusion process (when t is approximately 50) retain similar information despite significant differences in the resulting images. Third, features in the middle stage of the diffusion process (when t is around 25) exhibit larger differences between adjacent features in their time steps. Lastly, the feature at the final time step (t=0) possesses distinctive information, varying significantly from previous values. This is also evident in the additional visualization presented in Figure 11. Our automatically selected features indicate a prioritization of the final feature (t=0), and selection was made more from the middle stages than from the initial steps (t=[21,25,28] versus t=[43,47]). Our findings offer some guidance for manually selecting features across time steps, especially when memory is constrained.
The order of preference would be the last feature (t=0), then a middle one (t near 25), and then the middle-to-final time steps, while features from the initial steps are generally preferred less. For instance, when selecting four features from 50 time steps, a possible selection could be t=[0, 12, 25, 37].

### B.2 Features From Additional Models

While we focused on T=50 DDIM sampling, for generalization, we examined different intervals (T=25, T=100) and a different model. For these experiments, we randomly sampled 100 images. While our main experiments were conducted with manually classified images, here we utilized DINOv2 [32], which was contrastively trained in a self-supervised manner and has learned visual semantics. With DINOv2, we separated the data into 15 different clusters and followed the process described in the main paper to plot the features. Here, we used 15 images from each cluster to calculate the PCA axes, whereas we used 17 classes in the main experiments. The results, shown in Figure 12 and Figure 13, indicate that the same conclusions can be drawn even with different sampling intervals: the last feature exhibits a distinct value, while the features from the initial time steps show similar values. In addition, we also tested a different model, Stable Diffusion v2.1, which produces 768$\times$768 images. Following the same process, we randomly sampled 100 images, clustered them with DINOv2, and plotted the features as shown in Figure 14. This result shows that the same conclusions can be drawn even with a different model at a different resolution, demonstrating the scalability of our analysis.

Figure 12: Additional analysis of sampled features. PCA is applied to 25 steps of DDIM-sampled features with different clusters. Top: features colored by DINOv2 clusters. Bottom: features colored by denoising timesteps.

Figure 13: Additional analysis of sampled features. PCA is applied to 100 steps of DDIM-sampled features with different clusters.
Top: features colored by DINOv2 clusters. Bottom: features colored by denoising timestep.

Figure 14: Additional analysis of Stable Diffusion v2.1 sampled features. PCA is applied to 50 steps of DDIM-sampled features with different clusters. Top: features colored by DINOv2 clusters. Bottom: features colored by denoising timestep.

## C. VAE Decoder Features

In the proposed model architecture, VAE decoder features are fused with the aggregation network features for FFD. Figure 15 shows a visualization of the VAE features. We used a set of 20 generated face images and extracted features from different decoder layers of the UNet and the VAE decoder at the last time step (t=0), similar to PNP [50]. We observe that the VAE decoder yields higher-frequency details than the UNet decoder. While the UNet decoder features contain semantic information, the VAE decoder features capture finer details such as hair, wrinkles, and small letters.

Figure 15: Extended visualization of features from the UNet and VAE. (a) UNet decoder features at lower resolution (layer 1), intermediate resolution (layer 5), and higher resolution (layer 11). (b) VAE decoder features at lower resolution (layer 1), intermediate resolution (layer 6), and higher resolution (layer 9).

## D. Condition Diffusion Sampling for Training

### D.1 Rationale Behind CDST

An underlying assumption of CDST is that, for a directional CLIP loss, two images from a similar domain ($I_{source}$ and $I_{samp}$ in the main paper) lead to higher confidence than two images from different domains. To examine this, we performed a confidence score test using 4SKST [43], which consists of four different sketch styles paired with color images. 4SKST is suitable for this test because it contains images from two different domains, photos and anime images, in four different styles. We manually separated the data into photos and anime images since domain labels were not provided.
Here, we computed a confidence score to determine whether the directional CLIP loss is more reliable when the compared source images are in the same domain. We performed a test with three settings, measuring cosine similarity between images $I_{A}$ (photo) and $I_{B}$ (anime) from different domains together with the corresponding sketches $S_{A}$ and $S_{B}$. All images were encoded into the CLIP embedding space. We employed two similarity scores, $Sim_{within}$ and $Sim_{across}$, in the same manner as the main paper (Sec. 4.2). We calculated the similarity of the features within the photo domain, within the anime domain, and across the two domains. The similarity can be expressed as follows:

$\displaystyle Sim(X,Y)={\cos(\overrightarrow{I_{X}I_{Y}}\cdot\overrightarrow{S_{X}S_{Y}})+\cos(\overrightarrow{I_{X}S_{X}}\cdot\overrightarrow{I_{Y}S_{Y}})\over{N}}$ (6)

where $\cos(a\cdot b)$ is the cosine similarity, $N$ is the total number of cosine terms, and $X, Y$ correspond to the images in each domain. With these computed similarities, the confidence score between domain A and domain B can be written as follows, where $Sim(ALL,ALL)$ denotes the average similarity over all images:

$\displaystyle\textit{confidence}(A,B)={{Sim(A,B)}\over{Sim(ALL,ALL)}}\times 100$ (7)

Table 5: Confidence scores on 4SKST with four different styles.

Similarity | Style1 | Style2 | Style3 | Style4 | Average
---|---|---|---|---|---
confidence(Anime, Anime) | 104.2608 | 102.8716 | 108.2026 | 101.3530 | 104.1720
confidence(Photo, Photo) | 101.9346 | 98.8005 | 102.4516 | 100.5453 | 100.9330
confidence(Photo, Anime) | 94.5036 | 94.0189 | 98.1867 | 92.3874 | 94.7742

Table 5 shows the confidence test results on the four sketch styles. For all four styles, computing the directional CLIP loss within the same domain produced higher confidence than computing it across domains.
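A minimal numpy sketch of Eq. (6) and Eq. (7) for a single pair of (image, sketch) embeddings; the toy vectors are stand-ins for CLIP embeddings and the function names are ours, not from the paper's code:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sim(i_x, i_y, s_x, s_y):
    """Eq. (6): average of the cosine between the image-to-image and
    sketch-to-sketch directions, and between the two image-to-sketch
    directions, for one pair of examples (so N = 2 here)."""
    terms = [cosine(i_y - i_x, s_y - s_x), cosine(s_x - i_x, s_y - i_y)]
    return sum(terms) / len(terms)

def confidence(sim_ab, sim_all):
    """Eq. (7): domain similarity normalized by the average similarity
    over all images, scaled to percent."""
    return sim_ab / sim_all * 100.0

# Toy low-dimensional embeddings: both sketches are displaced from their
# images by the same direction, so the directional similarity is maximal.
rng = np.random.default_rng(0)
i_a = rng.normal(size=8)
i_b = i_a + rng.normal(size=8)
shift = rng.normal(size=8)
s_a, s_b = i_a + shift, i_b + shift
```

In the toy setup `sim(i_a, i_b, s_a, s_b)` evaluates to 1, since both direction pairs are parallel by construction.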
Accordingly, we propose the sampling scheme CDST, which trains the generator within the same domain at the initial stage of training, leading to higher confidence, and then widens its capacity in later iterations of training.

### D.2 Additional Experiment on CDST

In the main paper, we used $D_{SD}$ for CDST. However, the distribution of the conditions of a pretrained Stable Diffusion network is not known. Therefore, we approximated $D_{SD}$ by randomly sampling 1,000 text prompts from LAION-400M [41], a subset of the text-image pairs the SD model was trained on. We then tokenized and embedded these prompts, following the preprocessing of the pretrained SD model. We conducted PCA on these 1,000 sampled embeddings to extract 512 principal components. We then checked the normality of the sampled embeddings along all 512 principal component axes using the Shapiro-Wilk test [45] at a significance level of $\alpha=5\%$. As a result, 214 components rejected the null hypothesis of normality, indicating that the marginals cannot all be assumed univariate normal. Next, we conducted the Mardia test [27, 28] on the same 1,000 samples, which accounts for skewness and kurtosis, to check whether the distribution is multivariate normal. The test failed to reject the null hypothesis of normality at a significance level of $\alpha=5\%$. Therefore, we modeled $D_{SD}$ as a multivariate normal distribution for sampling during training. We also examined whether our estimated distribution ($D_{SD}$) is similar to the ground-truth embedding distribution of LAION-400M. For verification, we sampled 100k embeddings from the embedded LAION-400M as a ground-truth subset. We sampled the same number of embeddings from the multivariate normal distribution (ours), from a univariate normal distribution fit to each axis, and from a uniform distribution between the per-axis min and max of the sampled LAION-400M embeddings as baselines.
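Once $D_{SD}$ is accepted as approximately multivariate normal, training-time condition sampling reduces to estimating a mean and covariance from the embedded prompts and drawing from the fitted Gaussian. A minimal numpy sketch with synthetic embeddings (dimensions reduced for illustration; not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 1,000 tokenized-and-embedded prompt vectors
# (the real embeddings are 512-dimensional after PCA; 8 dims for brevity).
embeddings = rng.normal(loc=0.5, scale=2.0, size=(1000, 8))

# Model D_SD as a multivariate normal: estimate mean and covariance ...
mu = embeddings.mean(axis=0)
cov = np.cov(embeddings, rowvar=False)

# ... then draw new condition embeddings for use during training.
samples = rng.multivariate_normal(mu, cov, size=100)
```

Sampling from the fitted Gaussian is what makes the comparison against the univariate-normal and uniform baselines in Table 6 well defined.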
We used the Earth mover's distance (EMD) [20] and found that the multivariate normal distribution led to the lowest distance, as shown in Table 6.

$\displaystyle M_{ij}=\lVert{{dist}}_{i}-{{dist_{GT}}}_{j}\rVert_{2},$ (8)
$\displaystyle a_{i}=\frac{1}{{{len(dist)}}},\quad b_{j}=\frac{1}{{{len(dist_{GT})}}},$
$\displaystyle W({{dist}},{{dist_{GT}}})={\text{EMD}}(a,b,M).$

This result does not prove that $D_{SD}$ is multivariate normal, and its difference from the univariate normal baseline is marginal. However, it is sufficient for our use of condition diffusion sampling during training.

Table 6: Distance from the ground-truth embeddings.

Method | EMD
---|---
Multivariate normal (Ours) | 244.22
Univariate normal per axis | 244.31
Uniform distribution | 1480.57

## E. Qualitative Results

We present additional results from the baseline comparisons in Figures 16 and 17, which compare $\text{DiffSketch}_{distilled}$ with the baseline methods on the COCO dataset [23] and the BSDS500 dataset [29], respectively. In addition, we provide visual examples of video sketch extraction on diverse domains, including buildings, nature, and animals [46, 49], using $\text{DiffSketch}_{distilled}$ in Figure 18 and the supplementary video.

Figure 16: Qualitative comparison with alternative sketch extraction methods on the COCO dataset.

Figure 17: Qualitative comparison with alternative sketch extraction methods on the BSDS500 dataset.

Figure 18: Qualitative examples of video sketch extraction.
# Deep Learning-based Action Detection in Untrimmed Videos: A Survey

Elahe Vahdani and Yingli Tian∗

E. Vahdani is with the Department of Computer Science, The Graduate Center, The City University of New York, NY, 10016. E-mail<EMAIL_ADDRESS>Y. Tian is with the Department of Electrical Engineering, The City College, and the Department of Computer Science, The Graduate Center, The City University of New York, NY, 10031<EMAIL_ADDRESS>∗Corresponding author. This material is based upon work supported by the National Science Foundation under award number IIS-2041307.

###### Abstract

Understanding human behavior and activity facilitates the advancement of numerous real-world applications and is critical for video analysis. Despite the progress of action recognition algorithms on trimmed videos, the majority of real-world videos are lengthy and untrimmed, with sparse segments of interest. The task of temporal activity detection in untrimmed videos aims to localize the temporal boundaries of actions and classify the action categories. The temporal activity detection task has been investigated under full and limited supervision settings, depending on the availability of action annotations. This paper provides an extensive overview of deep learning-based algorithms that tackle temporal action detection in untrimmed videos with different supervision levels, including fully-supervised, weakly-supervised, unsupervised, self-supervised, and semi-supervised. In addition, this paper reviews advances in spatio-temporal action detection, where actions are localized in both the temporal and spatial dimensions. Moreover, the commonly used action detection benchmark datasets and evaluation metrics are described, and the performance of the state-of-the-art methods is compared. Finally, real-world applications of temporal action detection in untrimmed videos and a set of future directions are discussed.
###### Index Terms:

Action Understanding, Temporal Action Detection, Untrimmed Videos, Deep Learning, Full and Limited Supervision.

## 1 Introduction

This paper provides a comprehensive overview of temporal action detection. This task aims to detect the start and end of action instances in long untrimmed videos and predict the action categories. Temporal action detection is crucial for many video analysis applications such as sports analysis, autonomous driving, anomaly detection in surveillance cameras, and understanding instructional videos. Learning with limited supervision is a scheme where annotations of actions are unavailable or only partially available during the training phase. Because annotating long untrimmed videos is very time-consuming, designing action detection methods with limited supervision has become very popular. This survey reviews temporal action detection methods with full and limited supervision signals.

Figure 1: Temporal action detection aims to localize action instances in time and recognize their categories. The first row demonstrates an example of the action “long jump” detected in an untrimmed video from the THUMOS14 dataset [1]. The second row is an example of an untrimmed video including several action instances of interest with various lengths.

### 1.1 Motivation

Social networks and digital cameras have led to a substantial amount of video and media content produced by individuals each day. Hence, video understanding and analysis continues to be one of the essential research subjects in computer vision. While deep learning has accomplished remarkable performance in many computer vision tasks, video understanding is still far from ideal. Action understanding in particular, as a vital element of video analysis, facilitates the advancement of numerous real-world applications. For instance, collaborative robots need to recognize how the human partner completes the job to cope with variations in the task [2].
Sports analysis systems must comprehend game actions to generate commentaries on live activities [3]. Autonomous vehicles demand an understanding of the maneuvers performed by surrounding cars and pedestrians [4]. In this paper, we define trimmed videos as pre-segmented video clips that each contain only one action instance. In other words, the context of the action, i.e., the moments before or after the action, is not included in the video. Therefore, action detection in trimmed videos only needs to classify the action categories, without the need to detect starting and ending timestamps. Recognizing actions in trimmed videos has many applications in video surveillance, robotics, and medical diagnosis [5], and has achieved excellent performance in recent years [6, 7, 8]. However, the majority of videos in the wild, i.e., recorded in unconstrained environments, are naturally untrimmed. Untrimmed videos are lengthy unsegmented videos that may include several action instances, the moments before or after each action, and the transition from one action to another. The action instances in one video can belong to several action classes and have different durations. Temporal activity detection in untrimmed videos aims to localize the action instances in time and recognize their categories. This task is considerably more complicated than action recognition, which merely seeks to classify the categories of trimmed video clips. Fig. 1 shows an example of temporal activity detection in an untrimmed video recorded in a stadium. The first row demonstrates the detection of the action “long jump” in the temporal domain, where the start and end times of the action are localized. The goal is to detect only the actions of interest, i.e., actions that belong to a predefined set of action classes. The temporal intervals of other activities that do not belong to this set of actions are called temporal background.
For example, the segments right before or right after the action “long jump” may belong to other diverse activities such as crowd cheering in the stadium. In some cases, the frames right before or right after an action are visually very similar to the start or end of the action, which makes the localization of action intervals very challenging. Another challenge (as shown in the second row of Fig. 1) is that action instances may occur at any time in the video and have various durations, lasting from less than a second to several minutes [9]. Temporal action detection mainly targets activities of high-level semantics and videos with a sparse set of actions (e.g., actions only cover $30\%$ of the frames in [10]). However, in some cases the goal is to predict action labels at every frame of the video. In such cases, the task is referred to as temporal action segmentation, which targets fine-grained actions and videos with a dense occurrence of actions ($93\%$ of the frames in [11]). One can convert between a given segmentation and a set of detected instances in the temporal domain by simply adding or removing temporal background segments [12]. Temporal action detection, like object detection, belongs to the family of detection problems. Both of these problems aim to localize the instances of interest, i.e., action intervals in the temporal domain versus object bounding boxes in the spatial domain, Fig. 2 (a and c). When targeting fine-grained actions, temporal action detection (segmentation) is similar to semantic segmentation, as both aim to classify every single instance, i.e., frames in the temporal domain versus pixels in the spatial domain, Fig. 2 (b and d). As a result, many techniques for temporal action detection and segmentation are inspired by advancements in object detection and semantic segmentation [13, 14, 15]. Action detection has drawn much attention in recent years and has broad applications in video analysis tasks.
As surveillance cameras are increasingly deployed in many places, the demand for anomaly detection has also surged. Anomalous events such as robberies or accidents occur less frequently than normal activities, and it can be very time-consuming for humans to detect such events. Therefore, automatic detection of suspicious events is of great advantage. With the growing popularity of social media, many people follow online tutorials and instructional videos to learn how to perform a task such as “changing the car tire” properly for the first time. Instructional videos are usually untrimmed and include several steps of the main task, e.g., “jack up the car” and “put on the tire” for changing the tire. Automatic segmentation of these videos into the main action steps can facilitate and optimize the learning process. Another application is in sports video analysis, to localize the salient actions and highlights of a game and analyze the strategies of specific teams. Furthermore, action detection plays a critical role in self-driving cars, which must analyze the behavior of pedestrians, cyclists, and other surrounding vehicles to make safe autonomous decisions.

Figure 2: Task relations: (a) Temporal detection of the action “Long Jump” on THUMOS14 [1]. (b) Temporal detection (segmentation) of fine-grained actions shown by different colors in a “making pancakes” video on Breakfast [11]. (c) and (d) Results from [16] on PASCAL [17].

### 1.2 Taxonomy

To the best of our knowledge, this is the first comprehensive survey describing deep learning-based algorithms for activity detection in untrimmed videos with different supervision levels. We describe the fully-supervised methods in Section 2.3 and methods with limited supervision (weakly-supervised, unsupervised, self-supervised, and semi-supervised) in Section 2.4. Section 3 summarizes action detection benchmark datasets, evaluation metrics, and a performance comparison between the state-of-the-art methods.
Finally, Section 4 discusses the most common real-world applications of action detection and possible future directions. We provide a brief introduction to the tasks here. Temporal action detection aims to find the precise temporal boundaries and labels of action instances in untrimmed videos. Depending on annotation availability in the training set, temporal action detection can be studied in the following settings (also listed in Table I).

* Fully-supervised action detection: Temporal boundaries and labels of action instances are available for training.
* Weakly-supervised action detection: Only the video-level labels of action instances are available. The order of action labels may or may not be provided.
* Unsupervised action detection: There are no annotations for the action instances.
* Semi-supervised action detection: The data is split into a small set $S_{1}$ and a large set $S_{2}$. The videos in $S_{1}$ are fully annotated (as in fully-supervised) while the videos in $S_{2}$ are either not annotated (unsupervised) or only annotated with video-level labels (as in weakly-supervised).
* Self-supervised action detection: A pretext task is defined to extract information from the data in an unsupervised setting by leveraging its structure. Then, this information is used to improve the performance on temporal action detection (the downstream task), which can be supervised, unsupervised, or semi-supervised.
* Action detection with limited supervision: Limited supervision is the opposite of full supervision, where the annotations are unavailable or partially available. In this paper, we define limited supervision to include the weakly-supervised, unsupervised, self-supervised, and semi-supervised settings as defined above.

TABLE I: Main categories of the temporal action detection task with different supervision levels in the training set. “✓” indicates “available”; “✗” indicates “unavailable”; and $\ast$ indicates “partially available”.
Supervision Level | Action Temporal Boundaries | Action Labels
---|---|---
Fully-supervised | ✓ | ✓
Weakly-supervised | ✗ | ✓
Unsupervised | ✗ | ✗
Semi-supervised | $\ast$ | $\ast$
Self-supervised | ✓ / $\ast$ / ✗ | ✓ / $\ast$ / ✗

## 2 Temporal Action Detection Methods

We begin this section by introducing important technical terms in Section 2.1. Given an input video, video feature encoding is necessary to extract representative visual features of the video (discussed in Section 2.2). Action detection methods with full supervision are described in Section 2.3, and action detection methods with limited supervision are reviewed in Section 2.4.

### 2.1 Term Definition

To facilitate reading the subsequent sections, we define common terms, scores, and loss functions here.

###### Definition 1.

Temporal action detection. This task aims to find the precise temporal boundaries and categories of action instances in untrimmed videos. The annotation of an input video is denoted by ${\Psi}_{g}$ and includes a set of action instances as follows:

${\Psi}_{g}=\{{\varphi}_{n}=(t_{s,n},t_{e,n},l_{n})\}_{n=1}^{N},$ (1)

where $N$ is the number of action instances, and ${\varphi}_{n}$ is the $n$-th action instance. The start time, end time, and label of ${\varphi}_{n}$ are denoted by $t_{s,n}$, $t_{e,n}$, and $l_{n}$, respectively. Label $l_{n}$ belongs to the set $\{1,\cdots,C\}$, where $C$ is the number of action classes of interest in the whole dataset. The annotation ${\Psi}_{g}$ can be fully, partially, or not available for the videos of the training set.

###### Definition 2.

Temporal proposals. The temporal regions of an input video that are likely to contain an action are called temporal proposals. Each temporal proposal $P_{n}$ is an interval identified by a starting time $t_{s,n}$, an ending time $t_{e,n}$, and a confidence score $c_{n}$. The confidence score is the predicted probability that the interval contains an action.
Proposal $P_{n}$ can be formulated as:

$P_{n}=(t_{s,n},t_{e,n},c_{n}).$ (2)

###### Definition 3.

Temporal IoU (tIoU). This is the ratio of temporal intersection over union between two temporal intervals. It is often measured between a predicted proposal (interval $I_{p}$) and its closest ground-truth action instance (interval $I_{g}$), formulated as:

$tIoU(I_{p},I_{g})=\frac{I_{p}\cap I_{g}}{I_{p}\cup I_{g}}.$ (3)

###### Definition 4.

Temporal proposal labeling. The label of proposal $I_{p}$ is determined by the ground-truth action instance $I_{g}$ that has the maximum tIoU with $I_{p}$. Let us denote the class label of $I_{g}$ by $c$. Then, depending on a predefined threshold $\sigma$, the proposal is declared positive (true positive) with label $c$ if $tIoU\geq\sigma$. Otherwise, it is negative (a false positive). Also, if a ground-truth action instance is matched with several proposals, only the proposal with the highest confidence score is accepted as a true positive, and the others are declared false positives.

###### Definition 5.

Precision and recall for proposal generation. Precision is the ratio of true positive proposals to the total number of predicted proposals. Precision must be high to avoid producing excessively many irrelevant proposals. Recall is the ratio of true positive proposals to the total number of ground-truth action instances. Recall must be high to avoid missing ground-truth instances.

###### Definition 6.

Actionness score. The actionness score at a temporal position is the probability of occurrence of an action instance at that time. This score is often denoted by $a_{t}\in[0,1]$ for time $t$.

###### Definition 7.

Startness and endness scores. The startness (endness) score at a temporal position is the probability of the start (end) of an action instance at that time.

###### Definition 8.

Action completeness score. The maximum tIoU between a candidate proposal and the ground-truth action instances is called the action completeness of that proposal.
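Definitions 3 and 4 can be sketched directly (a minimal illustration; the function names and threshold value are ours):

```python
def t_iou(pred, gt):
    """Temporal IoU (Eq. 3) between two intervals given as (start, end)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def label_proposal(proposal, gt_instances, sigma=0.5):
    """Definition 4: take the ground-truth instance with maximum tIoU;
    assign its label if tIoU >= sigma, otherwise mark the proposal negative."""
    best = max(gt_instances, key=lambda g: t_iou(proposal, g[:2]))
    return best[2] if t_iou(proposal, best[:2]) >= sigma else None

gt = [(10.0, 20.0, "long jump")]
pos = label_proposal((12.0, 21.0), gt)   # tIoU = 8/11, above threshold
neg = label_proposal((0.0, 2.0), gt)     # no overlap, below threshold
```

The same tIoU function underlies the precision/recall definitions and the action completeness score that follow.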
It was shown in [18] that incomplete proposals with low tIoU against ground-truth intervals can still have high classification scores. Therefore, action completeness must be considered to evaluate and rank the predicted proposals.

###### Definition 9.

Action classification score. Generated temporal proposals are fed to action classifiers to produce a probability distribution over all action classes. This can be represented by the vector $(p^{1},\cdots,p^{C})$, where $p^{i}$ is the probability of action class $i$ and $C$ is the number of classes. For a fair comparison, researchers utilize classifiers from earlier works, such as the SCNN classifier [19] and UntrimmedNet [19], [20]. They uniformly sample a constant number of frames from the video segment and feed them to ConvNets such as C3D [21], two-stream CNNs [22], or temporal segment networks [23]. In some cases, the recognition scores of sampled frames are aggregated with top-k pooling or a weighted sum to yield the final prediction.

### 2.2 Video Feature Encoding

Untrimmed videos are often lengthy, up to several minutes long, so it is difficult to directly input the entire video to a visual encoder for feature extraction given the limits of computational resources. For instance, popular video feature extractors such as 3D-CNNs can only operate on short clips spanning about 4 seconds. A common strategy for video representation is to partition the video into equally sized temporal intervals called snippets, and then apply a pre-trained visual encoder over each snippet. Formally, given an input video $X$ with $l$ frames, a sequence $S$ of snippets with regular duration $\sigma$ is generated, where

$S=\{s_{n}\}_{n=1}^{l_{s}},\quad l_{s}=\frac{l}{\sigma},$ (4)

and $s_{n}$ is the $n$-th snippet. Then, each snippet is fed to a pre-trained visual encoder such as a two-stream network [22], C3D [21], or I3D [24] for feature extraction.
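The snippet partitioning of Eq. (4) can be sketched as follows, with a toy stand-in for the pre-trained encoder (real systems use two-stream, C3D, or I3D networks; the helper names here are ours):

```python
import numpy as np

def partition_into_snippets(frames, sigma=16):
    """Split a video of l frames into l/sigma equal-length snippets (Eq. 4)."""
    l = len(frames)
    return [frames[i:i + sigma] for i in range(0, l - sigma + 1, sigma)]

def encode_snippet(snippet):
    """Hypothetical stand-in for a pre-trained visual encoder that maps
    one snippet to one feature vector (here, just a frame average)."""
    return np.asarray(snippet, dtype=float).mean(axis=0)

frames = np.arange(64 * 3).reshape(64, 3)          # 64 toy "frames" of dim 3
snippets = partition_into_snippets(frames, sigma=16)
features = np.stack([encode_snippet(s) for s in snippets])   # (l/sigma, d)
```

The resulting sequence of per-snippet features is the input to all of the proposal generation methods described next.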
In the two-stream network [22], snippet $s_{n}$, which is centered at the $t_{n}$-th frame of the video, has an RGB frame $x_{t_{n}}$ and a stacked optical flow $o_{t_{n}}$ derived around the center frame. The RGB frame $x_{t_{n}}$ is fed to the spatial network, ResNet [25], extracting feature vector $f_{S,n}$. The optical flow $o_{t_{n}}$ is fed to the temporal network, BN-Inception [26], extracting feature $f_{T,n}$. The spatial and temporal features, $f_{S,n}$ and $f_{T,n}$, are concatenated to represent the visual feature $f_{n}$ for snippet $s_{n}$. Similarly, in I3D [24], a stack of RGB and optical flow frames from each snippet $s_{n}$ is fed to the I3D network, extracting spatial and temporal feature vectors $f_{S,n}$ and $f_{T,n}$, which are concatenated to create feature $f_{n}$. In C3D [21], the frames of each snippet $s_{n}$ are directly fed to a 3D-CNN architecture to capture spatio-temporal information, extracting feature vector $f_{n}$.

### 2.3 Action Detection with Full Supervision

In fully-supervised action detection, the annotation (${\Psi}_{g}$ in Eq. (1)) of temporal boundaries and labels of action instances is provided for each video of the training set. During inference, the goal is to find the temporal boundaries of action instances and predict their labels. A main step in action detection is temporal proposal generation, which identifies the temporal intervals of the video that are likely to include action instances. Fully-supervised temporal proposal generation methods can be categorized into anchor-based and anchor-free. Anchor-based methods generate action proposals by assigning dense and multi-scale intervals with pre-defined lengths at each temporal position of the video (Section 2.3.1). Anchor-free methods often predict action boundary confidence or actionness scores at temporal positions of the video, and employ a bottom-up grouping strategy to match pairs of start and end (Section 2.3.2).
There are also several methods that combine the advantages of anchor-free and anchor-based proposal generation (Section 2.3.3). After generating the proposals, rich features must be extracted from them to evaluate their quality. Section 2.3.4 reviews common loss functions used during training for proposal evaluation. Section 2.3.5 discusses modeling long-range dependencies to capture the relation between video segments in untrimmed videos and improve action localization. Finally, Section 2.3.6 summarizes spatio-temporal action detection methods.

#### 2.3.1 Anchor-based Proposal Generation and Evaluation

Anchor-based methods, also known as top-down methods, generate temporal proposals by assigning dense and multi-scale intervals with pre-defined lengths to uniformly distributed temporal locations in the input video. Formally, given a video with $T$ frames, $\frac{T}{\sigma}$ temporal positions, known as anchors, are uniformly sampled every $\sigma$ frames. Then, several temporal windows with different durations $\{d_{1},d_{2},\cdots,d_{n}\}$ are centered around each anchor as initial temporal proposals. The proposal lengths (the $d_{i}$'s) must have a wide range to align with action instances of various lengths, which can last from less than a second to several minutes in untrimmed videos [9]. Visual encoders and convolution layers are then applied to the temporal proposals for feature extraction, and the features are used to evaluate the quality of the proposals and adjust their boundaries (Section 2.3.4).

Figure 3: Anchor-based methods assign multi-scale intervals with pre-defined lengths at uniformly distributed temporal positions.

##### 2.3.1.1 Feature Extraction of Multi-scale Proposals

As mentioned earlier, temporal proposals have very diverse time spans to align with action instances.
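The multi-scale anchor assignment that produces these variable-length proposals can be sketched as follows (the stride and duration values are illustrative, not from any specific method):

```python
def generate_anchors(num_frames, sigma=8, durations=(16, 32, 64, 128)):
    """Assign multi-scale intervals with pre-defined lengths to uniformly
    spaced temporal positions (anchor-based proposal generation)."""
    proposals = []
    for center in range(0, num_frames, sigma):   # anchors every sigma frames
        for d in durations:
            start, end = center - d / 2, center + d / 2
            if start >= 0 and end <= num_frames:  # keep in-bounds windows only
                proposals.append((start, end))
    return proposals

anchors = generate_anchors(256)
```

Every anchor position contributes one candidate per duration, so the proposal set densely covers short and long actions alike; the classifier and regressor then score and refine these candidates.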
However, fixed-size features must be extracted from each proposal to be fed to fully connected layers for proposal evaluation (action classification and regression). Here, we review different strategies to extract fixed-size features from proposals of different lengths.

Sampling and Feature Concatenation: Shou et al. in SCNN [19] uniformly sampled a fixed number of frames from each proposal and fed them to a visual encoder for feature extraction. This is not computationally efficient because there are many overlapping proposals, and overlapping segments are processed multiple times. To address this problem, Gao et al. in Turn-Tap [27] and CBR [28] decomposed the video into non-overlapping equal-length units and extracted the features of each unit only once. Different numbers of consecutive units are grouped together at each anchor unit to generate multi-scale proposals. To obtain the proposal features, the features of all units are concatenated. Using this approach, the proposal features are computed from unit-level features, which are calculated only once. However, neither concatenating the features within each proposal nor sampling frames leads to rich feature extraction.

3D RoI Pooling: This approach extracts fixed-size features from multi-scale proposals using 3D RoI pooling. Specifically, an input feature volume of size $l\times h\times w$ ($l$ for the temporal dimension, $h$ for height, and $w$ for width) is divided into $l_{s}\times h_{s}\times w_{s}$ sub-volumes (where $l_{s}$, $h_{s}$, and $w_{s}$ are fixed), and max pooling is performed inside each sub-volume. Therefore, proposals of various lengths generate output volume features of the same size, $d\times l_{s}\times h_{s}\times w_{s}$, where $d$ is the channel dimension. The idea of 3D RoI pooling for action detection is an extension of the 2D RoI pooling for object detection in Faster R-CNN [29].
This idea was first introduced in R-C3D [30] and used in other frameworks such as AGCN [31] and AFNet [32]. The limitation of this approach is that the multi-scale proposals at each location share the same receptive field, which may be too small or too large for some anchor scales.

##### 2.3.1.2 Receptive Field Alignment with Proposal Span

To address the variation in action duration, multi-scale anchors are assigned to each temporal location of the video. Before receptive field alignment, multi-scale anchors at any position share the same receptive field size. This is problematic because, if the receptive field is too small or too large with respect to the anchor size, the extracted feature may not contain sufficient information or may include too much irrelevant information. Here, we review strategies to align the receptive field size with the proposal span.

Multi-tower Network: TAL-Net [13] proposed a multi-tower network composed of several temporal ConvNets, each responsible for a certain anchor size. The receptive field of each anchor segment is then aligned with its temporal span using dilated temporal convolutions. This idea was also used in TSA-Net [33]. However, assigning pre-defined temporal intervals limits the accuracy of the generated proposals.

Temporal Feature Pyramid Network: In a temporal feature pyramid network (TFPN), predictions are produced from feature maps at multiple resolutions. This idea was first introduced in the Single Shot Detector (SSD) [34] for object detection and then extended to the temporal domain for action detection in SSAD [35] and $\text{S}^{3}\text{D}$ [36]. They proposed an end-to-end network where the lower-level feature maps, with higher resolution and smaller receptive fields, are responsible for detecting short action instances, while the top layers, with lower resolution and larger receptive fields, detect long action instances.
For each feature map cell, several anchor segments of multiple scales are considered around the center and fed to convolutional layers for evaluation. The limitation of this approach is that lower layers in the pyramid are unaware of high-level semantic information, while top layers lack sufficient detail, so both fail to localize actions accurately.

U-shaped Temporal Feature Pyramid Network: To mitigate the problems with regular TFPNs, a U-shaped TFPN architecture was designed to connect high-level and low-level features. This idea was first introduced in U-Net [37], FPN [38], and DSSD [39] for object detection and then generalized to the temporal domain in MGG [40], PBRNet [41], RapNet [42], C-TCN [43], and MLTPN [44]. The video representation features are extracted using off-the-shelf feature extractors. Then, temporal convolution and max pooling layers are applied to reduce the temporal dimension and increase the receptive field size. This is followed by temporal deconvolution layers for upscaling. Finally, high-level features are combined with the corresponding low-level features through lateral connections between the convolutional and deconvolutional layers. U-shaped TFPNs have drawn much attention recently and have achieved state-of-the-art results on the temporal action detection task.

#### 2.3.2 Anchor-free Proposal Generation and Evaluation

Anchor-free methods employ a bottom-up grouping strategy for proposal generation based on predicted boundary probabilities or actionness scores at temporal positions of the video. Anchor-free methods are capable of generating proposals with precise boundaries and flexible durations because the proposal lengths are not predefined.

##### 2.3.2.1 Proposal Generation with Actionness Scores

Zhao et al. in SSN [18] proposed to identify continuous temporal regions with high actionness scores (def 6) as proposals (known as TAG proposals).
Continuous temporal regions are grouped using a classic watershed algorithm [45] applied to the 1D signal formed by the complemented actionness values. The proposals are fed to a temporal pyramid for feature extraction and proposal evaluation. This feature extraction process, however, is too simple to capture rich features.

##### 2.3.2.2 Proposal Generation with Boundary Scores

These methods predict three probability signals: actionness (def 6), startness, and endness scores (def 7). They generate temporal proposals by matching the temporal positions that are likely to be the start or end of an action (peaks of the startness and endness signals). In BSN [46], proposal features are constructed by concatenating a fixed number of points sampled from the actionness scores (def 6) by linear interpolation. BSN ignores global information for actions with blurred boundaries, causing unreliable confidence scores. Also, the proposal features are too weak to capture enough temporal context, and feature construction and confidence evaluation are performed for each proposal separately, which is inefficient. BMN [47] explores the global context by simultaneously evaluating all proposals end-to-end. It constructs a feature map by aggregating the features of all proposals together; the feature map is fed to convolution layers to evaluate all proposals simultaneously. The advantage of this approach is that it extracts rich features and temporal context for each proposal and exploits the context of adjacent proposals. Proposal evaluation is also very fast during inference. However, BMN uses the same method as BSN [46] to generate boundary probabilities (start and end), which ignores global information for actions with blurred boundaries. DBG [48] simultaneously evaluates all proposals to explore global context and extract rich features, similar to BMN [47].
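The peak-matching scheme shared by these boundary-based methods can be sketched in a few lines of NumPy (a simplified illustration with our own function names and thresholds; the actual methods score each candidate pair with learned confidence maps):

```python
import numpy as np

def peaks(p, tau=0.5):
    """Temporal positions that are local maxima above a threshold."""
    return [t for t in range(1, len(p) - 1)
            if p[t] >= tau and p[t] >= p[t - 1] and p[t] >= p[t + 1]]

def match_boundaries(start_prob, end_prob, max_dur=50):
    """Pair every high-probability start with every later
    high-probability end (within a duration limit) to form
    candidate proposals."""
    return [(s, e) for s in peaks(start_prob) for e in peaks(end_prob)
            if 0 < e - s <= max_dur]

start_p = np.array([0.1, 0.9, 0.2, 0.1, 0.7, 0.1, 0.1])
end_p   = np.array([0.1, 0.1, 0.2, 0.8, 0.1, 0.9, 0.1])
print(match_boundaries(start_p, end_p))       # [(1, 3), (1, 5), (4, 5)]
```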
Moreover, instead of only exploiting local information to predict boundary probabilities (probability of start and end), DBG proposed to employ global proposal-level features.

Figure 4: Anchor-free proposal generation with boundary matching. These methods predict action boundary probabilities at uniformly distributed temporal positions and match the start and end points with high probabilities as proposals.

In order to model the relations between the boundary and action content of temporal proposals, BC-GNN [49] proposed a graph neural network where the boundaries and content of proposals are taken as the nodes and edges of the graph, and their features are updated through graph operations. The updated edges and nodes are then used to predict boundary probabilities and content confidence scores to generate proposals. A2Net [50] and AFSD [51] adopt an anchor-free mechanism, where the network predicts the distance to the temporal boundaries for each temporal location in the feature sequence. AFSD [51] also proposes a novel boundary refinement strategy for precise temporal localization.

#### 2.3.3 Anchor-based and Anchor-free Combination

Anchor-based methods consider segments of various lengths as initial proposals at regularly distributed temporal positions of the video. However, because the segment sizes are designed beforehand, they cannot accurately predict the temporal boundaries of actions. Also, because the duration of action instances varies from seconds to minutes, covering all ground-truth instances with anchor-based methods is computationally expensive. Anchor-free methods predict action boundary confidence or actionness scores at all temporal positions of the video and employ a bottom-up grouping strategy to match pairs of starts and ends. Anchor-free methods are capable of generating proposals with precise boundaries and flexible durations. However, in some cases they only exploit local context to extract the boundary information.
Therefore, they are sensitive to noise, likely to produce incomplete proposals, and fail to yield robust detection results. Several methods such as CTAP [52], MGG [40], PBRNet [41], and RapNet [42] balance the trade-offs between anchor-based and anchor-free approaches for proposal generation. CTAP [52] designed a complementary filter, applied to the initial proposals, to generate the probability of each proposal being detected by the anchor-free TAG method [18] (defined in 2.3.2). The original use of complementary filtering is to estimate a signal given two noisy measurements, where one of them is mostly high-frequency (precise but not stable), similar to TAG proposals, and the other is mostly low-frequency (stable but not precise), similar to sliding-window proposals. Also, several temporal feature pyramid networks (TFPN, defined in 2.3.1) such as MGG [40], PBRNet [41], and RapNet [42] generate coarse segment proposals of various lengths with a TFPN (anchor-based) and simultaneously predict fine-level frame actionness (anchor-free). The advantage of this idea is that the segment boundaries of proposals can be adjusted with frame actionness information during inference.

#### 2.3.4 Common Loss Functions for Proposal Evaluation

After generating the temporal proposals, rich features are extracted from the proposals to evaluate their quality. Several convolutional layers are applied to the features to predict an actionness score (def 6), completeness score (def 8), and classification score (def 9), and to adjust the temporal boundaries of the proposals. Here, we review common loss functions that are used during training to supervise these predicted scores and evaluate the quality of proposals.

###### Definition 10.

Actionness loss. This is a binary cross-entropy loss that classifies the temporal proposals as action or background.
Given $N$ proposals, this loss is defined as: $L_{\text{act}}=-\frac{1}{N}\sum_{i=1}^{N}\big[b_{i}\log(a_{i})+(1-b_{i})\log(1-a_{i})\big],$ (5) where $a_{i}$ is the predicted actionness score (def 6), and $b_{i}\in\\{0,1\\}$ is the binary ground-truth label for the $i$-th proposal. If the proposal is positive (def 4), then $b_{i}=1$; otherwise, $b_{i}=0$.

###### Definition 11.

Action completeness loss. Given $N$ proposals, the completeness loss is defined as: $L_{\text{com}}=\frac{1}{N_{\text{pos}}}\sum_{i=1}^{N}d(c_{i},g_{i})\cdot[l_{i}>0],$ (6) where $c_{i}$ is the predicted action completeness score (def 8) for the $i$-th proposal, and $g_{i}$ is the ground-truth action completeness score. $d$ is a distance metric, often the $L_{2}$ or smooth $L_{1}$ loss. $l_{i}$ is the label of the $i$-th proposal, and the condition $[l_{i}>0]$ implies that action completeness is only considered for positive proposals (def 4). $N_{\text{pos}}$ is the number of positive proposals in each mini-batch.

###### Definition 12.

Action overlap loss. This is another variation of the action completeness loss, which rewards proposals with higher temporal overlap with the ground truths, and is defined as follows: $\mathcal{L}_{overlap}=\frac{1}{N_{\text{pos}}}\sum_{i}\frac{1}{2}\cdot\Big{(}\frac{(p^{l_{i}}_{i})^{2}}{(g_{i})^{\alpha}}-1\Big{)}\cdot[l_{i}>0],$ (7) where $p_{i}$ is the classification probability vector over action labels for the $i$-th proposal, and $p^{l_{i}}_{i}$ is the probability of action class $l_{i}$. The other notations are the same as in $L_{\text{com}}$ (def 11), and $\alpha$ is a hyper-parameter.

###### Definition 13.

Action classification loss.
This is the classification (cross-entropy) loss, where the probability distribution is over all action classes as well as the temporal background: $L_{\text{cls}}=-\frac{1}{N}\sum_{i=1}^{N}\log(p^{l_{i}}_{i}),$ (8) where $l_{i}\in\\{0,1,\cdots,C\\}$ is the label of the $i$-th proposal, and $p^{l_{i}}_{i}$ is the probability of class $l_{i}$.

###### Definition 14.

Action regression loss. To adjust the temporal boundaries of proposals, the start and end offsets of the proposals are predicted and supervised by a regression loss: $L_{\text{reg}}=\frac{1}{N_{\text{pos}}}\sum_{i=1}^{N}\left(|o_{s,i}-o^{\star}_{s,i}|+|o_{e,i}-o^{\star}_{e,i}|\right)\cdot[l_{i}>0],$ (9) where $o_{s,i}$ is the difference between the start coordinate of the $i$-th proposal and the start coordinate of the closest ground-truth action instance, and $o^{\star}_{s,i}$ is the predicted offset. Similarly, $o_{e,i}$ and $o^{\star}_{e,i}$ are the ground-truth and predicted offsets for the end coordinate of the $i$-th proposal. The condition $[l_{i}>0]$ implies that boundary adjustment is only considered for positive proposals (def 4).

#### 2.3.5 Modeling Long-range Dependencies

As mentioned earlier, untrimmed videos are often lengthy and must be partitioned into shorter clips for feature extraction. Processing these shorter clips independently can lead to a loss of temporal or semantic dependencies between video segments. Therefore, several tools such as recurrent neural networks, graph convolutions, attention mechanisms, and transformers are used to capture these dependencies. The advantage of modeling dependencies is the ability to refine the temporal boundaries of proposals, or to predict their action category or action completeness, given information from neighboring proposals.

##### 2.3.5.1 Recurrent Neural Networks

RNNs are used for sequence modeling and are capable of capturing long-term dependencies in videos. Buch et al. in Sst [53] and SS-TAD [54] used RNNs for action detection.
They partition the video into non-overlapping equal-length segments and feed each segment to a visual encoder for feature extraction. At time $t$, the visual feature $f_{t}$ and the hidden state of the previous time step ($h_{t-1}$) are fed to a Gated Recurrent Unit (GRU)-based architecture to produce the hidden state $h_{t}$. This hidden state is then fed to fully connected layers to evaluate multi-scale proposals at time $t$ by producing actionness scores (def 6). In an earlier work, Yuan et al. in PSDF [55] captured motion information over multiple resolutions and utilized RNNs to improve inter-frame consistency. Yeung et al. learned decision policies for an RNN-based agent [56], and later proposed an LSTM model to process multiple input frames with a temporal attention mechanism [57]. LSTMs are also used in other frameworks such as [58], [59], [60] to evaluate temporal proposals. The advantage of using RNNs is that the hidden state at time $t$ encodes the information from previous time steps, which is useful for capturing temporal dependencies. However, RNNs cannot encode very long videos, as the hidden vector saturates after a number of time steps.

Figure 5: Capturing temporal dependencies in untrimmed videos with RNNs. The hidden state at time $t$, $h_{t}$, encodes the information from previous time steps. This figure is regenerated from [53].

##### 2.3.5.2 Graph Models

A full action often consists of several sub-actions that may independently be detected in several overlapping proposals. Based on this observation, Zeng et al. in PGCN [61] captured proposal-proposal relations by applying graph convolutional networks (GCNs). They constructed a graph where the nodes are the proposals. The edges connect highly overlapping proposals as well as disjoint but nearby proposals to provide contextual information. The edge weights model the relations between the proposals by measuring the cosine similarity of their features.
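A proposal graph of this kind, together with one feature-aggregation step, can be sketched as follows (a hedged NumPy illustration with our own names and a toy IoU matrix; PGCN additionally learns the aggregation weights):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def proposal_gcn_layer(X, iou, iou_thresh=0.5):
    """One graph-convolution update over proposal features X [N, d]:
    edges connect proposals with high temporal overlap, weighted by
    the cosine similarity of their features."""
    N = len(X)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j and iou[i, j] > iou_thresh:
                A[i, j] = cosine(X[i], X[j])
    A += np.eye(N)                            # self-loops
    A /= A.sum(axis=1, keepdims=True)         # row-normalise the weights
    return A @ X                              # aggregate neighbour features

X = np.random.rand(4, 8)                      # 4 proposals, 8-d features
iou = np.array([[1, .7, 0, 0], [.7, 1, .6, 0], [0, .6, 1, 0], [0, 0, 0, 1]])
out = proposal_gcn_layer(X, iou)
assert out.shape == X.shape
```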
Through graph convolutions, the feature of each proposal is updated by aggregating information from the other proposals. The updated features are then used to predict action categories and completeness, and to refine the boundaries.

Figure 6: Modeling proposal-proposal relations with graph convolutional networks (GCNs), where the nodes are the proposals and the edges model the relations between proposals. The feature of proposal $p_{3}$ is influenced by the features of proposals $p_{1},p_{2}$, and $p_{4}$. Image is reproduced from PGCN [61].

Li et al. in AGCN [31] proposed an attention-based GCN to model the inter- and intra-dependencies of the proposals. Intra-attention learns the long-range dependencies among pixels inside each action proposal, and inter-attention learns the adaptive dependencies among the proposals to adjust imprecise boundaries. Bai et al. in BC-GNN [49] proposed a graph neural network to model the relations between the boundaries and action content of temporal proposals. Xu et al. proposed G-TAD [62] to capture the relations between different snippets of the input video. They constructed a graph where the nodes are temporal segments of the video and the edges are either temporal or semantic. The temporal edges are pre-defined according to the snippets’ temporal order, while the semantic edges are dynamically updated between the nodes according to their feature distance. The temporal and semantic context of the snippets is aggregated using graph convolutions. All possible pairs of start and end with a duration within a specific range are considered to generate the proposals. To evaluate each proposal, the temporal and semantic features of the corresponding sub-graph are extracted. Chang et al. in ATAG [63] also designed an adaptive GCN, similar to G-TAD [62], to capture local temporal context, where the graph nodes are the snippets and the edges model the relations between snippets.
The temporal context is then captured through graph convolutions, where the feature of each snippet is influenced and updated by the features of the other snippets. VSGN [64] builds a graph on video snippets similar to G-TAD [62], but also exploits correlations between cross-scale snippets. It proposes a cross-scale graph pyramid network that aggregates features across scales and progressively enhances the features of the original and magnified scales at multiple network levels.

##### 2.3.5.3 Transformers

Some action instances in a video have non-sequential dependencies, meaning that they are related but are separated by other events in the video. Also, some action instances may overlap in their temporal extents. Based on these observations, Nawhal et al. in AGT [65] proposed an encoder-decoder transformer to capture non-linear temporal structure by reasoning over videos as non-sequential entities. Their encoder generates a context graph where the nodes are initially video-level features and the interactions among nodes are modeled as learnable edge weights. Positional information for each node is provided using learnable positional encodings. Their decoder learns the interactions between the context graph (a latent representation of the input video) and graph-structured query embeddings (latent representations of the action queries). Tan et al. in RTD-Net [66] proposed a relaxed transformer to directly generate action proposals without the need for human prior knowledge in the careful design of anchor placement or boundary-matching mechanisms. The transformer encoder models long-range temporal context and captures inter-proposal relationships from a global view to precisely localize action instances. They also argued that snippet features in a video change very slowly, and that direct employment of self-attention in transformers can lead to over-smoothing.
Therefore, they customized the encoder with a boundary-attentive architecture to enhance the discrimination capability of action boundaries. Chang et al. in ATAG [63] designed an augmented transformer to mine long-range temporal context for noisy action instance localization. The snippet-level features generated by the transformer are used to classify the snippets as action or background under the supervision of a binary classification loss. Throughout this process, the transformer learns to capture long-term dependencies at the snippet level.

#### 2.3.6 Spatio-temporal Action Detection

Spatio-temporal action detection aims to localize action instances in both space and time, and to recognize the action labels. In the fully-supervised setting of this task, the temporal boundaries of action instances at the video level, the spatial bounding boxes of actions at the frame level, and the action labels are provided during training and must be detected during inference. Fig. 7 shows an example of this task. The start and end of the action “long jump” are detected in the temporal domain. Also, the bounding box of the actor performing the action is detected in each frame in the spatial domain.

Figure 7: Spatio-temporal activity detection task: the action “long jump” is localized in time and space. In addition to the temporal interval of the action, the bounding box of the person performing it is detected in each frame.

##### 2.3.6.1 Frame-level Action Detection

Early methods [67, 68] were based on extensions of the sliding window scheme, requiring strong assumptions such as a cuboid shape, i.e., a fixed spatial extent of the actor across frames. Later, advancements in object detection inspired frame-level action detection methods to recognize human action classes at the frame level. In the first stage, action proposals are produced by a region proposal algorithm or densely sampled anchors, and in the second stage, the proposals are used for action classification and localization refinement.
Hundreds of action proposals are extracted per video given low-level cues, such as super-voxels [69, 70] or dense trajectories [71, 72, 73], and the proposals are then classified to localize actions. After detecting the action regions in the frames, some methods [74, 75, 76, 77, 78, 79, 80, 81] use optical flow to capture motion cues. They employ linking algorithms to connect the frame-level bounding boxes into spatio-temporal action tubes. Gkioxari et al. [74] used a dynamic programming approach to link the resulting per-frame detections. The cost function of the dynamic programming is based on the detection scores of the boxes and the overlap between detections of consecutive frames. Weinzaepfel et al. [79] replaced the linking algorithm with a tracking-by-detection method. Then, the two-stream Faster R-CNN was introduced by [76, 78]. Saha et al. [78] fuse the scores of both streams based on the overlap between the appearance and motion detections. Peng et al. [76] combine proposals extracted from the two streams and then classify and regress them with fused RGB and multi-frame optical flow features. They also use multiple regions inside each action proposal and then link the detections across the video based on spatial overlap and classification score. Another group of methods [82, 83, 84] relies on an actionness measure, i.e., a pixel-wise probability of containing any action. To estimate actionness, they use low-level cues such as optical flow [84], CNNs with a two-stream architecture [83], or RNNs [83]. They extract action tubes by thresholding the actionness scores [82] or by using a maximum set coverage [84]. The output is a rough localization of the action, as it is based on noisy pixel-level maps. The main disadvantage of these methods is that the temporal property of videos is not fully exploited, as the detection is performed on each frame independently. Effective temporal modeling is crucial, as a number of actions are only identifiable when temporal context information is available.
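The dynamic-programming linking step described above can be sketched as follows (a minimal NumPy illustration with our own names; the link score here is detection score plus box IoU with the previous frame, in the spirit of Gkioxari et al.'s cost function):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def link_detections(boxes, scores):
    """Viterbi-style linking of per-frame detections into the
    highest-scoring action tube."""
    T = len(boxes)
    best, back = [scores[0].copy()], []
    for t in range(1, T):
        cur, ptr = [], []
        for bj, sj in zip(boxes[t], scores[t]):
            cand = [best[t - 1][i] + sj + box_iou(bi, bj)
                    for i, bi in enumerate(boxes[t - 1])]
            cur.append(max(cand)); ptr.append(int(np.argmax(cand)))
        best.append(cur); back.append(ptr)
    path = [int(np.argmax(best[-1]))]          # backtrack the best tube
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

boxes = [
    [[0, 0, 10, 10], [50, 50, 60, 60]],        # frame 0: two detections
    [[1, 1, 11, 11], [50, 50, 60, 60]],        # frame 1
]
scores = [[0.9, 0.1], [0.5, 0.5]]
print(link_detections(boxes, scores))          # [0, 0]: high-score, overlapping track
```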
##### 2.3.6.2 Clip-level Action Detection

As mentioned earlier, temporal modeling is necessary for accurate action localization. Here, we discuss methods that exploit temporal information by performing action detection at the clip (i.e., a short video snippet) level. Kalogeiton et al. [85] proposed the action tubelet detector (ACT-detector), which takes as input a sequence of frames and outputs action categories and regressed tubelets, i.e., sequences of bounding boxes with associated scores. The tubelets are then linked to construct action tubes (sequences of bounding boxes of an action). Gu et al. [86] further demonstrate the importance of temporal information by using longer clips and taking advantage of I3D pre-trained on a large-scale video dataset [24]. In order to generate action proposals, they extend 2D region proposals to 3D by replicating them over time, assuming that the spatial extent is fixed within a clip. However, this assumption is violated for action tubes with large spatial displacement over time, in particular when the clip is long or involves rapid movement of actors or camera. Thus, using long cuboids directly as action proposals is not optimal, since they introduce extra noise for action classification. Yang et al. [87] perform action detection at the clip level and then link the detections to build action tubes across the video. They employ a multi-step optimization process to progressively refine the initial proposals. Other methods [6], [88] exploit human proposals coming from pretrained image detectors and replicate them in time to build straight spatio-temporal tubes.

##### 2.3.6.3 Modeling Spatio-temporal Dependencies

Understanding human actions usually requires understanding the people and objects around them. Therefore, state-of-the-art methods model the relations between actors and contextual information such as other people and other objects.
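Actor-context modeling of this kind is often realized as attention pooling over context features. A minimal sketch (our own names and toy shapes; the published methods use learned projections and multi-head attention):

```python
import numpy as np

def actor_context_attention(actor, context):
    """Attention pooling of context features (other people/objects)
    for one actor: weights are a softmax over dot-product similarity,
    and the pooled context augments the actor representation."""
    scores = context @ actor / np.sqrt(len(actor))
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return np.concatenate([actor, w @ context])  # actor + attended context

actor = np.random.rand(16)                    # one actor's RoI feature
context = np.random.rand(5, 16)               # 5 context regions
out = actor_context_attention(actor, context)
assert out.shape == (32,)
```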
Some methods use graph-structured networks [89, 90] and attention mechanisms [91, 88, 92] to aggregate contextual information from other people and objects in the video. Wu et al. [88] provided long-term supportive information that enables video models to better understand the present. They designed a long-term feature bank and a feature bank operator (FBO) that computes interactions between the short-term and long-term features. They integrate information over a long temporal support, lasting minutes or even the whole video. Girdhar et al. [91] proposed a transformer-style architecture to weight actors with features from the context around them. Tomei et al. [93] employed self-attention to encode people and object relationships in a graph structure, and use the spatio-temporal distance between proposals. Ji et al. proposed Action Genome [94] to model action-object interactions by decomposing actions into spatio-temporal scene graphs. Ulutan et al. [92] suggested combining actor features with every spatio-temporal region in the scene to produce attention maps between the actor and the context. Pan et al. [95] proposed a relational reasoning module to capture the relation between two actors based on their respective relations with the context. Tomei et al. [96] proposed a graph-based framework to learn high-level interactions between people and objects, in both space and time. Spatio-temporal relationships are learned through self-attention on a multi-layer graph structure that can connect entities from consecutive clips, thus considering long-range spatial and temporal dependencies.

### 2.4 Action Detection with Limited Supervision

Fully supervised action detection requires the full annotation of temporal boundaries and action labels for all action instances in training videos, which is very time-consuming and costly.
To eliminate the need for exhaustive annotations in the training phase, researchers have in recent years explored the design of efficient models that require limited ground-truth annotations. We discuss weakly-supervised methods in Section 2.4.1, and other learning methods with limited supervision (unsupervised, semi-supervised, and self-supervised) are described in Section 2.4.2.

#### 2.4.1 Weakly-supervised Action Detection

The weakly-supervised learning scheme requires only coarse-grained or noisy labels during the training phase. Following the work of [97], weakly-supervised action detection in the common setting requires only the video-level labels of actions during training, while the temporal boundaries of action instances are not needed. During testing, both the labels and the temporal boundaries of actions are predicted. In the remainder of this section, weakly-supervised action detection refers to this setting. There are also other weak signals utilized for action detection, such as the order of actions [98], [99], [100], [101], the frequency of action labels [102], and the total number of events in each video [103]. A common strategy in weakly-supervised action detection is to use an attention mechanism to focus on discriminative snippets and combine salient snippet-level features into a video-level feature. The attention scores are used to localize the action regions and eliminate irrelevant background frames. There are two main strategies to extract attention signals from videos: first, class-specific attention approaches, where attention scores are generated from class activation sequences (def 15) for each action class (Section 2.4.1.2); second, class-agnostic attention approaches, where attention scores are class-agnostic and are extracted from the raw data (Section 2.4.1.3). We discuss these two attention strategies in this section.

##### 2.4.1.1 Term Definition

To facilitate reading this section, we provide definitions of frequently used terms.

###### Definition 15.
Temporal class activation maps (T-CAM). For a given video, the T-CAM is a matrix, denoted by $A$, which represents the possibility of activities at each temporal position. Matrix $A$ has $n_{c}$ rows (the total number of action classes) and $T$ columns (the number of temporal positions in the video). The value of cell $A[c,t]$ is the activation of class $c$ at temporal position $t$. Formally, $A$ is calculated by: $A=WX\oplus b,$ (10) where $X\in{\rm I\\!R}^{d\times T}$ is a video-level feature matrix, and $d$ is the feature dimension. Also, $W\in{\rm I\\!R}^{n_{c}\times d}$ and $b\in{\rm I\\!R}^{n_{c}}$ are learnable parameters, and $\oplus$ is addition with broadcasting.

###### Definition 16.

Class-specific attention scores. In a given video, the class-specific attention score is the occurrence probability of action class $c$ at temporal position $t$, denoted by $a[c,t]$. Formally, $a[c,t]$ is computed by normalizing the activation of class $c$ over the temporal dimension: $a[c,t]=\frac{\text{exp}(A[c,t])}{\sum_{t=1}^{T}\text{exp}(A[c,t])},$ (11) where $A$ is the T-CAM (def 15), and $T$ is the number of temporal positions. Therefore, row $a_{c}$ is the probability distribution of the occurrence of class $c$ over the video length.

###### Definition 17.

Class-agnostic attention score. In a given video, the class-agnostic attention score, denoted by $\lambda_{t}$, is the occurrence probability of any action of interest at temporal position $t$, regardless of the action class. The attention vector for all temporal positions of the video is denoted by $\lambda$.

###### Definition 18.

Attention-based aggregated features. The video-level foreground and background features are generated using temporal pooling of embedded features weighted by attention scores. Class-specific features are defined based on the class-specific attention scores $a_{c}$ (def 16) for each class $c$, while class-agnostic features are defined based on the class-agnostic attention vector $\lambda$ (def 17).
The aggregated foreground feature is dominated by feature vectors with high attention, which represent actions, while the background feature is dominated by features with low attention. $T$ is the video length and $X$ is the video feature matrix. These features are formulated as follows:

| | Foreground | Background |
|---|---|---|
| Class-specific | $f_{c}=Xa_{c}$ | $b_{c}=\frac{1}{T-1}X(\mathbb{1}-a_{c})$ |
| Class-agnostic | $f=\frac{1}{T}X\lambda$ | $b=\frac{1}{T}X(\mathbb{1}-\lambda)$ |

##### 2.4.1.2 Class-specific Attention for Action Localization

The class-specific attention module computes the attention weight $a[c,t]$ (def 16) for all action classes $c$ and all temporal positions $t$ in each video. The attention scores attend to the portions of the video where an activity of a certain category occurs. Therefore, video segments with attention scores higher than a threshold are localized as action parts. The class-specific attention module is used in [104], [105], [102], [106] to localize the temporal boundaries of action instances.

Class-specific attention learning with MIL: In the general scheme of MIL (multiple-instance learning), training instances are arranged in sets, called bags, and a label is provided for the entire bag [107]. In the context of weakly-supervised temporal action detection, each video is treated as a bag of action instances, and the video-level action labels are provided. In order to compute the loss for each bag (a video in this task), each video should be represented using a single confidence score per category. The confidence score for each category is computed as the average of the top $k$ activation scores over the temporal dimension for that category. In a given video, let $\\{t^{c}_{1},t^{c}_{2},\cdots,t^{c}_{k}\\}$ be the $k$ temporal positions with the highest activation scores for class $c$.
Then, the video-level class-wise confidence score $s^{c}$ for class $c$ is defined as: $s^{c}=\frac{1}{k}\sum_{l=1}^{k}A[c,t^{c}_{l}],$ (12) where $A[c,t^{c}_{l}]$ is the activation (def 15) of class $c$ at temporal position $t^{c}_{l}$. Then, the probability mass function (PMF) over action classes is computed by applying a softmax function to the $s^{c}$ scores over the class dimension: $p^{c}=\frac{\exp{(s^{c})}}{\sum_{c=1}^{n_{c}}\exp{(s^{c})}},$ (13) where $n_{c}$ is the number of action classes. The MIL loss is a cross-entropy loss applied over all videos and all action classes. For video $i$ and action class $c$, $p^{c}_{i}$ is the class-wise probability score, and $y^{c}_{i}$ is a normalized ground-truth binary label. The MIL loss is defined as: $L_{MIL}=\frac{1}{n}\sum_{i=1}^{n}\sum_{c=1}^{n_{c}}-y^{c}_{i}\log(p^{c}_{i}),$ (14) where $n$ is the total number of videos. The MIL loss supervises class-wise probability scores, which are computed based on the activation scores $A[c,t]$. Therefore, MIL learns the activation scores and T-CAM (def 15) for each video and is used in W-TALC [105], Action Graphs [108], UNet [104], and Actionbytes [109].

Class-specific attention learning with CASL: The CASL (co-activity similarity loss) was initially introduced in W-TALC [105] and then inspired other works such as Deep Metric [106], Action Graphs [108], WOAD [110], and Actionbytes [109]. The main idea is that, for a pair of videos containing the same action classes, the foreground features of the two videos should be more similar to each other than the foreground feature of one video is to the background feature of the other. For a pair of videos with indices $m$ and $n$ that include action class $c$, the foreground features are denoted by $f^{m}_{c}$, $f^{n}_{c}$ and the background features by $b^{m}_{c}$, $b^{n}_{c}$ (def 18).
Then CASL is defined based on a ranking hinge loss as follows:

$\begin{split}L^{mn}_{c}&=\frac{1}{2}\\{\max\big{(}0,d(f^{m}_{c},f^{n}_{c})-d(f^{m}_{c},b^{n}_{c})+\delta\big{)}\\\ &+\max\big{(}0,d(f^{m}_{c},f^{n}_{c})-d(b^{m}_{c},f^{n}_{c})+\delta\big{)}\\},\end{split}$ (15)

where $d$ is a metric (e.g., cosine similarity) to measure the degree of similarity between two feature vectors and $\delta$ is a margin parameter. The average of $L^{mn}_{c}$ is computed over all video pairs that include action class $c$. This loss trains the class-specific attention scores $a_{c}$, as the foreground and background features $f_{c}$ and $b_{c}$ are defined based on $a_{c}$ (def 18). Islam et al. in Deep Metric [106] replaced the metric $d$ with a class-specific metric $D_{c}$ defined for each class $c$. Rashid et al. in Action Graphs [108] applied a GCN to transform each temporal segment’s feature representation into a weighted average of its neighbors. The updated features are then used in CASL for localization. The advantage of this GCN is to model temporal dependencies, cluster semantically similar time segments, and push dissimilar segments apart.

Class-specific attention learning with center loss: The center loss, first introduced in [111], learns class-specific centers and penalizes the distance between features and their class centers. Narayan et al. in 3C-Net [102] employed the center loss to enhance feature discriminability and reduce intra-class variations. For each video $i$ and each action class $c$, the center loss computes the distance (L2 norm) between the class-specific foreground feature $f^{i}_{c}$ (def 18) and the cluster center feature $z_{c}$ as follows:

$\mathcal{L}_{center}=\frac{1}{N}\sum_{i}\sum_{c:y^{i}(c)=1}\left\lVert f^{i}_{c}-z_{c}\right\rVert^{2}_{2},$ (16)

where the cluster center feature $z_{c}$ is updated during training. Here, $N$ is the total number of videos, and the condition $y^{i}(c)=1$ checks whether action class $c$ occurs in video $i$.
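The top-$k$ MIL scoring and loss of Eqs. (12)-(14) reduce to a few lines of code. The sketch below is a minimal single-video illustration in plain Python; the function names and toy activation values are hypothetical, not taken from any of the cited implementations:

```python
import math

def video_confidence(A, k):
    """Eq. (12): average of the top-k activations over time, per class."""
    return [sum(sorted(row, reverse=True)[:k]) / k for row in A]

def softmax(s):
    """Eq. (13): PMF over action classes (max subtracted for stability)."""
    m = max(s)
    e = [math.exp(v - m) for v in s]
    z = sum(e)
    return [v / z for v in e]

def mil_loss(A, y, k):
    """Eq. (14) for a single video: cross-entropy between the class PMF
    and the normalized ground-truth label vector y."""
    p = softmax(video_confidence(A, k))
    return -sum(yc * math.log(pc) for yc, pc in zip(y, p))

# Toy T-CAM: 2 classes x 6 temporal positions; class 0 is present.
A = [[0.9, 0.8, 0.1, 0.0, 0.7, 0.1],
     [0.1, 0.2, 0.1, 0.0, 0.2, 0.1]]
y = [1.0, 0.0]
loss = mil_loss(A, y, k=3)
```

Because only the top-$k$ positions contribute to the video-level score, the gradient of the loss reaches only those positions, which is the limitation noted for MIL-only training.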
##### 2.4.1.3 Class-agnostic Attention for Action Localization

The class-agnostic attention module computes the attention vector $\lambda$ (def 17) directly from raw data by applying fully connected and ReLU layers over the video features, followed by a sigmoid function to scale the attention weights to $[0,1]$. Learning class-agnostic attention weights is used in many methods such as RPN [112], BG modeling [113], AutoLoc [114], CleanNet [115], DGAM [116], STPN [117], BaSNet [118], MAAN [119], and CMCS [120].

Class-agnostic attention learning with cross-entropy: The video-level class-agnostic foreground and background features $f$ and $b$ (def 18) are fed to a classification module and supervised with a cross-entropy loss:

$p_{fg}[c]=\frac{\exp{(w_{c}\cdot f)}}{\sum_{i=0}^{C}\exp{(w_{i}\cdot f)}},\quad\mathcal{L}_{fg}=-\log(p_{fg}[y]),$ (17)

where the $w_{c}$ are the weights of the classification module, $C$ is the number of action classes in the entire dataset, and $y$ is the label of the action that occurs in the video. Label $0$ represents the background class. Similarly, $\mathcal{L}_{bg}$ is defined for $p_{bg}$, which is a softmax applied over the product of the background feature $b$ and the classification weights. This loss trains the attention vector $\lambda$ through the class-agnostic features $f$ and $b$ (def 18), and is used in STPN [117].

Class-agnostic attention learning with clustering loss: Nguyen et al. in BG modeling [113] propose a method to separate foreground and background using a clustering loss that penalizes the discriminative capacity of background features. The class-agnostic foreground and background features $f$ and $b$ (def 18) are encouraged to be distinct using a clustering loss:

$z_{f}=\frac{\exp(uf)}{\exp(uf)+\exp(vf)}\ ,\ z_{b}=\frac{\exp(vb)}{\exp(ub)+\exp(vb)},$ (18)

$\mathcal{L}_{cluster}=-\log{z_{f}}-\log{z_{b}},$ (19)

where $u,v\in\mathbb{R}^{d}$ are trainable parameters.
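As a concrete illustration, the clustering loss of Eqs. (18)-(19) reduces to a two-way softmax per feature; the sketch below uses hypothetical toy features and parameters (plain Python, not any cited implementation):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def clustering_loss(f, b, u, v):
    """Eqs. (18)-(19): u acts as a 'foreground' direction and v as a
    'background' direction; z_f classifies f as foreground, z_b classifies
    b as background, and the loss pushes f and b into different clusters."""
    z_f = math.exp(dot(u, f)) / (math.exp(dot(u, f)) + math.exp(dot(v, f)))
    z_b = math.exp(dot(v, b)) / (math.exp(dot(u, b)) + math.exp(dot(v, b)))
    return -math.log(z_f) - math.log(z_b)

# Toy d=2 example: separated f/b give a lower loss than identical f/b.
u, v = [1.0, 0.0], [0.0, 1.0]
separated = clustering_loss([2.0, 0.0], [0.0, 2.0], u, v)
collapsed = clustering_loss([1.0, 1.0], [1.0, 1.0], u, v)
# separated < collapsed
```

Well-separated foreground and background features yield a strictly lower loss than identical ones, which is the gradient signal that drives them apart.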
The attention $\lambda$ is thus trained by separating the class-agnostic features $f$ and $b$ (def 18).

Class-agnostic attention learning with prototypes: The prototypical network, introduced in [121] for the classification task, represents each class as a prototype and matches each instance with the prototype of highest similarity. During training, semantically related prototypes are pushed closer together than unrelated prototypes. Huang et al. in RPN [112] proposed a prototype learning scheme for action localization. For temporal position $t$ and action class $c$, the similarity score $s_{t,c}$ between feature $x_{t}$ and prototype $p_{c}$ is computed, and the similarity vector $s_{t}$ consists of $s_{t,c}$ for all classes. Then the similarity vector $s_{t}$ is fused with the attention score $\lambda_{t}$ into a video-level score $\hat{s}$:

$s_{t,c}=-\left\lVert x_{t}-p_{c}\right\rVert^{2}_{2}\ \ ,\ \ \hat{s}=\sum_{t=1}^{T}\lambda_{t}s_{t}.$ (20)

The score $\hat{s}$ is supervised by a classification loss with respect to the video-level labels, which trains the attention scores $\lambda_{t}$.

Class-agnostic attention learning with CVAE: DGAM [116] aims to separate actions from context frames by imposing different attentions on different features using a generative model, the conditional VAE (CVAE) [122]. Formally, the objective of DGAM is:

$\max_{\lambda\in[0,1]}\underbrace{\log p(y|X,\lambda)}_{\text{term 1}}+\underbrace{\log p(X|\lambda)}_{\text{term 2}},$ (21)

where $X$ denotes the features, $y$ is the video-level label, and $\lambda$ is the attention signal. Term 1 encourages high discriminative capability of the foreground feature $f$ and punishes any discriminative capability of the background feature $b$. Term 2 is approximated by a generative model which forces the feature representation $X$ to be accurately reconstructed from the attention $\lambda$ using the CVAE.
By maximizing this conditional probability with respect to the attention, the frame-wise attention is optimized by imposing different attentions on different features, leading to the separation of action and context frames.

##### 2.4.1.4 Direct Action Proposal Generation

Many methods [117], [105], [123], [104] localize actions by applying thresholds on attention scores. The disadvantage of thresholding is that the snippets are treated independently and their temporal relations are neglected. Also, thresholding may not be robust to noise in the class activation maps. Shou et al. [114] in AutoLoc directly predict the temporal boundary of each action instance. A localization branch is designed to directly predict the action boundaries (inner boundaries). The outer boundaries are also obtained by inflating the inner boundaries. Knowing that a video includes action class $c$, an outer-inner-contrastive (OIC) loss is applied on the activation scores of action $c$. The OIC loss computes the average activation in the outer area minus the average activation in the inner area, to encourage high activations inside and penalize high activations outside, because a complete action clip should look different from its neighbours. Liu et al. [115] proposed CleanNet to exploit temporal contrast for action localization. A contrast score is generated by summing the action, starting, and ending scores for each action proposal. The action localization is trained by maximizing the average contrast score of the proposals, which penalizes fragmented short proposals and promotes completeness and continuity in action proposals.

##### 2.4.1.5 Action Completeness Modeling

Previous methods used random hiding and iterative removal to enforce action completeness. Singh et al. in Hide-and-seek [123] force the model to see different parts of the video by randomly masking different regions of the videos in each training epoch.
However, randomly hiding frames does not always guarantee the discovery of new parts, and it also disrupts the training process. Zhong et al. in Step-by-step erasion [124] trained a series of classifiers iteratively to find complementary parts, by erasing the predictions of predecessor classifiers from the input videos. The major drawback of this approach is the extra time and computational expense of training multiple classifiers. Zeng et al. [125] propose an iterative-winners-out strategy that selects the most discriminative action instances in each training iteration and hides them in the next iteration. Liu et al. in CMCS [120] proposed to enforce multiple branches in parallel to discover complementary pieces of an action. Each branch generates a different class activation map (def 15). A diversity loss (introduced in [126]) is imposed on the class activation maps, computing cosine similarities between pairs of branches for all action categories. Minimizing the diversity loss encourages the branches to produce activations on different action parts.

#### 2.4.2 Unsupervised, Semi-supervised, and Self-supervised

Although weakly-supervised action detection has been extensively studied in recent years, fewer articles address the action detection task in unsupervised, semi-supervised, or self-supervised settings; these are briefly reviewed here.

##### 2.4.2.1 Unsupervised Action Detection

Unsupervised learning does not need any human-annotated labels during training. Sener et al. [127] introduced an iterative approach which alternates between discriminative learning of the appearance of sub-activities from visual features and generative modeling of the temporal structure of sub-activities. Kukleva et al. [128] proposed a combination of temporal encoding (generated using a frame time stamp prediction network) and Viterbi decoding for consistent frame-to-cluster assignment. Gong et al.
in ACL [129] used only the total count of unique actions that appear in the video set as the supervisory signal. They propose a two-step clustering and localization iterative procedure. The clustering step provides noisy pseudo-labels for the localization step, and the localization step provides temporal co-attention models to improve the clustering performance.

##### 2.4.2.2 Self-supervised Action Detection

Self-supervised learning refers to training with pseudo labels, where the pseudo labels are automatically generated for a pre-defined pretext task without involving any human annotations. Chen et al. in SSTDA [130] proposed a self-supervised temporal domain adaptation method to address the spatio-temporal variations (different people performing the tasks in different styles) in action segmentation. They designed two self-supervised auxiliary tasks, binary and sequential domain prediction, to jointly align local and global embedded feature spaces across domains. The binary domain prediction task predicts a single domain for each frame-level feature, and the sequential domain prediction task predicts the permutation of domains for an untrimmed video; both are trained adversarially with a gradient reversal layer (GRL) [131, 132]. Jain et al. in Actionbytes [109] use only short trimmed videos during training and train an action localization network with cluster assignments as pseudo-labels to segment long untrimmed videos into interpretable fragments (called ActionBytes). They adopt a self-supervised iterative approach for training boundary-aware models from short videos by decomposing a trimmed video into ActionBytes and generating pseudo-labels to train a CNN to localize ActionBytes within videos.

TABLE II: The benchmark datasets for temporal and spatio-temporal action detection.
Dataset | Activity Type | # Videos | # Action Categories | Avg Video Length (sec) | # Action Instances (avg per video) | Multi-label (# labels per frame)
---|---|---|---|---|---|---
THUMOS [1] | Sports | 413 | 20 | 212 | 15.5 | No
MultiTHUMOS [57] | Sports | 413 | 65 | 212 | 97 | Yes
ActivityNet [133] | Human Activities | 19,994 | 200 | 115 | 1.54 | No
HACS Segment [134] | Human Activities | 50K | 200 | 156 | 2.8 | No
Charades [135] | Daily Activities | 9,848 | 157 | 30 | 6.75 | Yes
Breakfast [11] | Cooking | 1712 | 48 | 162 | 6 | No
50Salads [136] | Cooking | 50 | 17 | 384 | 20 | No
MPII Cooking 2 [137] | Cooking | 273 | 59 | 356 | 51.6 | No
COIN [138] | Daily Activities | 11,827 | 180 | 142 | 3.9 | No
AVA [86] | Movies | 437 | 80 | 900 | 3361.5 | Yes

##### 2.4.2.3 Semi-supervised Action Detection

In the semi-supervised setting, a small number of videos are fully annotated with the temporal boundaries of actions and class labels, while a large number of videos are either unlabeled or include only video-level labels. Ji et al. [139] employ a fully supervised framework, known as BSN [46], to exploit the small set of labeled data. They encode the input video into a feature sequence and apply sequential perturbations (time warping and time masking [140]) to it. Then, the student proposal model takes this perturbed sequence as input, while the teacher model predicts directly on the original feature sequence. In the end, the student model is jointly optimized with a supervised loss applied to the labeled videos and a consistency loss applied to all videos.

## 3 Datasets and Evaluation

In this section, we describe the datasets collected for action detection and the evaluation metrics for this task.

### 3.1 Datasets

Gaidon et al. [141, 142] introduced the problem of temporally localizing actions in untrimmed videos, focusing on limited actions such as “drinking and smoking” [67] and “open door and sitdown” [143].
Later, researchers built the following datasets, which include a large number of untrimmed videos with multiple action categories and complex background information. Some of these datasets target activities of high-level semantics (such as sports) while others include fine-grained activities (such as cooking). The details are summarized in Table II.

$\bullet$ THUMOS14 [1] is the most widely used dataset for temporal action localization. There are $220$ and $213$ videos for training and testing, with temporal annotations in $20$ classes. Action instances are rather sparsely distributed through the videos, and about $70\%$ of all frames are labeled as background. The average number of action instances per video is $15.5$ (and $1.1$ for distinct action instances). The maximum number of distinct actions per video is $3$.

$\bullet$ MultiTHUMOS [57] has the same set of videos as THUMOS14 [1], but extends the latter from $20$ action classes with $0.3$ labels per frame to $65$ classes with $1.5$ labels per frame. The average number of distinct action classes in a video is $10.5$ (compared to $1.1$ in THUMOS14), making it a more challenging multi-label dataset. The maximum number of distinct actions per video is $25$.

$\bullet$ ActivityNet [133] has two versions, v1.2 and v1.3. The former contains $9,682$ videos in $100$ classes, while the latter, which is a superset of v1.2 and was used in the ActivityNet Challenge 2016, contains $19,994$ videos in $200$ classes. In each version, the dataset is divided into three disjoint subsets (training, validation, and testing) in a 2:1:1 ratio.

$\bullet$ HACS [134] includes $504K$ untrimmed videos retrieved from YouTube, where each video is strictly shorter than $4$ minutes. HACS Clips consists of $1.5M$ annotated clips of $2$-second duration, and HACS Segments contains $139K$ action segments densely annotated in $50K$ untrimmed videos spanning $200$ action categories.
$\bullet$ Charades [135] consists of $9,848$ videos recorded by Amazon Mechanical Turk users based on provided scripts. This dataset contains videos with multiple actions and involves daily life activities from $157$ classes, performed by $267$ people from three continents. Over $15\%$ of the videos have more than one person.

$\bullet$ Breakfast [11] includes $1712$ videos of breakfast preparation activities performed by $52$ subjects. The videos were recorded in $18$ different kitchens and belong to $10$ different types of breakfast activities (such as fried egg or coffee), which consist of $48$ different fine-grained actions. Each video contains $6$ action instances on average, and only $7\%$ of the frames are background.

$\bullet$ 50Salads [136] contains $50$ videos of salad preparation activities performed by $25$ subjects, with $17$ distinct action classes. On average, each video contains $20$ action instances and is $6.4$ minutes long.

$\bullet$ MPII Cooking 2 [137] consists of $273$ videos with about $2.8$ million frames. There are $59$ action classes, and about $29\%$ of the frames are background. The dataset provides a fixed split into a train and test set, separating $220$ videos for training.

$\bullet$ COIN [138], [144] contains $180$ tasks, $11,827$ videos, and $46,354$ annotated segments. The videos are collected from YouTube in $12$ domains (e.g., vehicles, gadgets, etc.) related to daily activities.

$\bullet$ AVA [86] is designed for spatio-temporal action detection and consists of $437$ videos, where each video is a $15$-minute segment taken from a movie. Each person appearing in a test video must be detected in each frame, and the multi-label actions of the detected person must be predicted correctly. The action label space contains $80$ atomic action classes, but results are often reported on the most frequent $60$ classes.
### 3.2 Evaluation Metrics

Here, we discuss the metrics designed to evaluate the performance of proposal generation, temporal action detection, and spatio-temporal action detection.

Temporal Action Proposal Generation. For this task, Average Recall (AR) with multiple IoU thresholds is usually used as the evaluation metric. Most methods use the IoU threshold set $[0.5:0.05:0.95]$ on ActivityNet-1.3 [133] and $[0.5:0.05:1.0]$ on THUMOS14 [1]. To evaluate the relation between recall and the number of proposals, most methods evaluate AR against the Average Number of proposals (AN) on both datasets, denoted AR@AN. On ActivityNet-1.3, the area under the AR vs. AN curve (AUC) is also used as a metric, where AN varies from $0$ to $100$.

Temporal Action Detection. For this task, mean Average Precision (mAP) is used as the evaluation metric, where Average Precision (AP) is calculated for each action class. On ActivityNet-1.3 [133], mAP with IoU thresholds $\\{0.5,0.75,0.95\\}$ and average mAP over the IoU threshold set $[0.5:0.05:0.95]$ are often used. On THUMOS14 [1], mAP with IoU thresholds $\\{0.3,0.4,0.5,0.6,0.7\\}$ is used.

Spatio-temporal Action Detection. Two metrics are frequently used for this task. First, frame-AP measures the area under the precision-recall curve of the detections for each frame. A detection is correct if the intersection-over-union with the ground truth at that frame is greater than a threshold and the action label is correctly predicted. Second, video-AP measures the area under the precision-recall curve of the action tube predictions. A tube is correct if the mean per-frame intersection-over-union with the ground truth across the frames of the video is greater than a threshold and the action label is correctly predicted.

### 3.3 Performance Analysis

Action detection results of the state-of-the-art methods on the THUMOS14 [1] and ActivityNet [133] datasets are compared by mAP (%) in Tables III and IV, respectively.
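All of the detection metrics above hinge on a one-dimensional temporal IoU between a predicted segment and a ground-truth segment. A minimal sketch, with hypothetical segments in seconds:

```python
def temporal_iou(seg_a, seg_b):
    """1-D intersection-over-union between two (start, end) segments."""
    (s1, e1), (s2, e2) = seg_a, seg_b
    inter = max(0.0, min(e1, e2) - max(s1, s2))
    union = (e1 - s1) + (e2 - s2) - inter
    return inter / union if union > 0 else 0.0

def is_correct_detection(pred, gt, thresh=0.5):
    """True positive at threshold `thresh` if the tIoU with the ground-truth
    segment clears the threshold (the class label is assumed to match)."""
    return temporal_iou(pred, gt) >= thresh

# A 4s prediction vs. a 5s ground truth: overlap 3s, union 6s -> tIoU 0.5.
tiou = temporal_iou((2.0, 6.0), (3.0, 8.0))
```

Sweeping `thresh` over the sets listed above, and averaging AP per class, yields the mAP numbers reported in the tables.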
The methods are categorized into fully-supervised, weakly-supervised, semi-supervised, self-supervised, and unsupervised (US). We also summarize the advantages and limitations of fully-supervised methods and methods with limited supervision in Tables V and VI.

TABLE III: Action detection results of the state-of-the-art on the testing set of THUMOS-14, measured by mAP (%) at tIoU thresholds.

Supervision | Method | 0.3 | 0.4 | 0.5 | 0.6 | 0.7
---|---|---|---|---|---|---
Fully supervised | Yeung et al. [56] | 36.0 | 26.4 | 17.1 | - | -
| SMS [145] | 36.5 | 27.8 | 17.8 | - | -
| SCNN [19] | 36.3 | 28.7 | 19 | - | -
| Sst [53] | - | - | 23.0 | - | -
| CDC [14] | 40.1 | 29.4 | 23.3 | 13.1 | 7.9
| SSAD [35] | 43 | 35 | 24.6 | - | -
| TCN [146] | - | 33.3 | 25.6 | 15.9 | 9.0
| TURN TAP [27] | 44.1 | 34.9 | 25.6 | - | -
| R-C3D [30] | 44.8 | 35.6 | 28.9 | - | -
| SS-TAD [54] | 45.7 | - | 29.2 | - | 9.6
| SSN [18] | 51.9 | 41.0 | 29.8 | - | -
| CTAP [52] | - | - | 29.9 | - | -
| CBR [28] | 50.1 | 41.3 | 31.0 | 19.1 | 9.9
| S3D [36] | 47.9 | 41.2 | 32.6 | 23.3 | 14.3
| DBS [15] | 50.6 | 43.1 | 34.3 | 24.4 | 14.7
| BSN [46] | 53.5 | 45.0 | 36.9 | 28.4 | 20.0
| MGG [40] | 53.9 | 46.8 | 37.4 | 29.5 | 21.3
| AGCN [31] | 57.1 | 51.6 | 38.6 | 28.9 | 17.0
| GTAN [147] | 57.8 | 47.2 | 38.8 | - | -
| BMN [47] | 56.0 | 47.4 | 38.8 | 29.7 | 20.5
| SRG [148] | 54.5 | 46.9 | 39.1 | 31.4 | 22.2
| DBG [48] | 57.8 | 49.4 | 39.8 | 30.2 | 21.7
| G-TAD [62] | 54.5 | 47.6 | 40.2 | 30.8 | 23.4
| BC-GNN [49] | 57.1 | 49.1 | 40.4 | 31.2 | 23.1
| BSN++ [149] | 59.9 | 49.5 | 41.3 | 31.9 | 22.8
| TAL-Net [13] | 53.2 | 48.5 | 42.8 | 33.8 | 20.8
| TSA-Net [33] | 55.8 | 52.0 | 44.1 | 33.0 | 21.8
| BU [150] | 53.9 | 50.7 | 45.4 | 38.0 | 28.5
| A2Net [50] | 58.6 | 54.1 | 45.5 | 32.5 | 17.2
| ATAG [63] | 62.0 | 53.1 | 47.3 | 38.0 | 28.0
| Lianli et al. [151] | 66.4 | 58.4 | 48.8 | 36.7 | 25.5
| PGCN [61] | 63.6 | 57.8 | 49.1 | - | -
| TadTR [152] | 62.4 | 57.4 | 49.2 | 37.8 | 26.3
| AFNet [32] | 63.4 | 58.5 | 49.5 | 36.9 | 23.5
| AGT [65] | 65.0 | 58.1 | 50.2 | - | -
| PBRNet [153] | 58.5 | 54.6 | 51.3 | 41.8 | 29.5
| RTD-Net [66] | 68.3 | 62.3 | 51.9 | 38.8 | 23.7
| C-TCN [43] | 68.0 | 62.3 | 52.1 | - | -
| VSGN [64] | 66.7 | 60.4 | 52.4 | 41.0 | 30.4
| MLTPN [44] | 66.0 | 62.6 | 53.3 | 37.0 | 21.2
| TSP [154] | 69.1 | 63.3 | 53.5 | 40.4 | 26.0
| DaoTAD [155] | 62.8 | 59.5 | 53.8 | 43.6 | 30.1
| AFSD [51] | 67.3 | 62.4 | 55.5 | 43.7 | 31.1
| SP-TAD [156] | 69.2 | 63.3 | 55.9 | 45.7 | 33.4
| Liu et al. [157] | 68.9 | 64.0 | 56.9 | 46.3 | 31.0
Weakly supervised | Hide-Seek [123] | 19.5 | 12.7 | 6.8 | - | -
| UNet [104] | 28.2 | 21.1 | 13.7 | - | -
| Step-by-step [124] | 31.1 | 22.5 | 15.9 | - | -
| STPN [117] | 35.5 | 25.8 | 16.9 | 9.9 | 4.3
| MAAN [119] | 41.1 | 30.6 | 20.3 | 12 | 6.9
| AutoLoc [114] | 35.8 | 29 | 21.2 | 13.4 | 5.8
| W-TALC [105] | 40.1 | 31.1 | 22.8 | - | 7.6
| STAR [158] | 48.7 | 34.7 | 23 | - | -
| CMCS [120] | 41.2 | 32.1 | 23.1 | 15 | 7
| AdapNet [159] | 41.09 | 31.61 | 23.65 | 14.53 | 7.75
| Cleannet [115] | 37 | 30.9 | 23.9 | 13.9 | 7.1
| TSM [160] | 39.5 | 31.9 | 24.5 | 13.8 | 7.1
| 3C-Net [102] | 40.9 | 32.3 | 24.6 | - | 7.7
| Shen et al. [161] | 44 | 34.4 | 25.5 | 15.2 | 7.2
| Action Graphs [108] | 47.3 | 36.4 | 26.1 | - | -
| BG modeling [113] | 46.6 | 37.5 | 26.8 | 17.6 | 9
| BaSNet [118] | 44.6 | 36 | 27 | 18.6 | 10.4
| RPN [112] | 48.2 | 37.2 | 27.9 | 16.7 | 8.1
| TSCN [162] | 47.8 | 37.7 | 28.7 | 19.4 | 10.2
| DGAM [116] | 46.8 | 38.2 | 28.8 | 19.8 | 11.4
| ECM [163] | 46.5 | 38.2 | 29.1 | 19.5 | 10.9
| Deep Metric [106] | 46.8 | - | 29.6 | - | 9.7
| A2CL-PT [164] | 48.1 | 39.0 | 30.1 | 19.2 | 10.6
| EM-MIL [165] | 45.5 | 36.8 | 30.5 | 22.7 | 16.4
| Lee et al. [166] | 46.9 | 39.2 | 30.7 | 20.8 | 12.5
| ASL [167] | 51.8 | - | 31.1 | - | 11.4
| Huang et al. [168] | 49.1 | 40.0 | 31.4 | 18.8 | 10.6
| Ding et al. [169] | 48.2 | 39.7 | 31.6 | 22.0 | 13.8
| CoLA [170] | 51.5 | 41.9 | 32.2 | 22.0 | 13.1
| Acsnet [171] | 51.4 | 42.7 | 32.4 | 22.0 | 11.7
| Lee et al. [172] | 52.3 | 43.4 | 33.7 | 22.9 | 12.1
| ACM-Net [173] | 55.0 | 44.6 | 34.6 | 21.8 | 10.8
| D2-Net [174] | 52.3 | 43.4 | 36.0 | - | -
Semi supervised | TTC-Loc [175] | 52.8 | 44.4 | 35.9 | 24.7 | 13.8
| Ji et al. [139] | 53.4 | 45.2 | 37.2 | 29.5 | 20.5
Self supervised | Actionbytes [109] | 43.0 | 35.8 | 29.0 | - | 9.5
| Gong et al. [176] | 50.8 | 42.2 | 32.9 | 21.0 | 10.1
US | ACL [129] | 39.6 | 32.9 | 25.0 | 16.7 | 8.9

TABLE IV: Action detection results of the state-of-the-art on the validation set of ActivityNet (V is the version), measured by mAP (%) at tIoU thresholds. $\star$ indicates utilization of a weaker feature extractor (UNet [104]).

Supervision | Method | V | 0.5 | 0.75 | 0.95 | Average
---|---|---|---|---|---|---
Fully supervised | R-C3D [30] | 1.3 | 26.8 | - | - | 12.7
| AFNet [32] | 1.3 | 36.1 | 17.8 | 5.2 | 18.6
| TAL-Net [13] | 1.3 | 38.23 | 18.30 | 1.30 | 20.22
| TCN [146] | 1.3 | 37.49 | 23.47 | 4.47 | 23.58
| CDC [14] | 1.3 | 45.3 | 26.0 | 0.2 | 23.8
| SSN [18] | 1.3 | 39.12 | 23.48 | 5.49 | 23.98
| DBS [15] | 1.3 | 43.2 | 25.8 | 6.1 | 26.1
| A2Net [50] | 1.3 | 43.55 | 28.69 | 3.7 | 27.75
| MLTPN [44] | 1.3 | 44.86 | 28.96 | 4.30 | 28.27
| SRG [148] | 1.3 | 46.53 | 29.98 | 4.83 | 29.72
| BSN [46] | 1.3 | 46.45 | 29.96 | 8.02 | 30.03
| BU [150] | 1.3 | 43.47 | 33.91 | 9.21 | 30.12
| AGCN [31] | 1.3 | - | - | - | 30.4
| RTD-Net [66] | 1.3 | 47.21 | 30.68 | 8.61 | 30.83
| Lianli et al. [151] | 1.3 | 47.01 | 30.52 | 8.21 | 30.88
| C-TCN [43] | 1.3 | 47.6 | 31.9 | 6.2 | 31.1
| PGCN [61] | 1.3 | 48.26 | 33.16 | 3.27 | 31.11
| TadTR [152] | 1.3 | 49.08 | 32.58 | 8.49 | 32.27
| SP-TAD [156] | 1.3 | 50.06 | 32.92 | 8.44 | 32.99
| BMN [47] | 1.3 | 50.07 | 34.78 | 8.29 | 33.85
| Liu et al. [157] | 1.3 | 50.02 | 34.97 | 6.57 | 33.99
| G-TAD [62] | 1.3 | 50.36 | 34.60 | 9.02 | 34.09
| BC-GNN [49] | 1.3 | 50.56 | 34.75 | 9.37 | 34.26
| GTAN [147] | 1.3 | 52.61 | 34.14 | 8.91 | 34.31
| AFSD [51] | 1.3 | 52.4 | 35.3 | 6.5 | 34.4
| ATAG [63] | 1.3 | 50.92 | 35.35 | 9.71 | 34.68
| BSN++ [149] | 1.3 | 51.27 | 35.70 | 8.33 | 34.88
| PBRNet [153] | 1.3 | 53.96 | 34.97 | 8.98 | 35.01
| VSGN [64] | 1.3 | 52.38 | 36.01 | 8.37 | 35.07
| TSP [154] | 1.3 | 51.26 | 37.12 | 9.29 | 35.81
Weakly supervised (V=1.2) | UNet⋆ [104] | 1.2 | 7.4 | 3.2 | 0.7 | 3.6
| Step-by-step [124] | 1.2 | 27.3 | 14.7 | 2.9 | 15.6
| AutoLoc⋆ [114] | 1.2 | 27.3 | 15.1 | 3.3 | 16.0
| TSM [160] | 1.2 | 28.3 | 17.0 | 3.5 | 17.1
| Action Graphs [108] | 1.2 | 29.4 | - | - | -
| W-TALC [105] | 1.2 | 37.0 | - | - | 18.0
| EM-MIL [165] | 1.2 | 37.4 | - | - | 20.3
| Cleannet [115] | 1.2 | 37.1 | 20.3 | 5.0 | 21.6
| 3C-Net [102] | 1.2 | 37.2 | - | - | 21.7
| Deep Metric [106] | 1.2 | 35.2 | - | - | -
| CMCS [120] | 1.2 | 36.8 | 22.0 | 5.6 | 22.4
| Shen et al. [161] | 1.2 | 36.9 | 23.1 | 3.4 | 22.8
| RPN [112] | 1.2 | 37.6 | 23.9 | 5.4 | 23.3
| TSCN [162] | 1.2 | 37.6 | 23.7 | 5.7 | 23.6
| BaSNet [118] | 1.2 | 38.5 | 24.2 | 5.6 | 24.3
| DGAM [116] | 1.2 | 41.0 | 23.5 | 5.3 | 24.4
| Acsnet [171] | 1.2 | 41.0 | 23.5 | 5.3 | 24.4
| ECM [163] | 1.2 | 41.0 | 24.9 | 6.5 | 25.5
| ASL [167] | 1.2 | 40.2 | - | - | 25.8
| Lee et al. [172] | 1.2 | 41.2 | 25.6 | 6.0 | 25.9
| D2-Net [174] | 1.2 | 42.3 | 25.5 | 5.8 | 26.0
| CoLA [170] | 1.2 | 42.7 | 25.7 | 5.8 | 26.1
| Ding et al. [169] | 1.2 | 41.7 | 26.7 | 6.3 | 26.4
Weakly supervised (V=1.3) | STPN [117] | 1.3 | 29.3 | 16.9 | 2.6 | -
| STAR [158] | 1.3 | 31.1 | 18.8 | 4.7 | -
| AdapNet [159] | 1.3 | 33.61 | 18.75 | 3.40 | 21.97
| MAAN [119] | 1.3 | 33.7 | 21.9 | 5.5 | -
| BG modeling [113] | 1.3 | 36.4 | 19.2 | 2.9 | -
| A2CL-PT [164] | 1.3 | 36.8 | 22.0 | 5.2 | 22.5
| Huang et al. [168] | 1.3 | 36.5 | 22.8 | 6.0 | 22.9
| ACM-Net [173] | 1.3 | 40.1 | 24.2 | 6.2 | 24.6
Semi supervised | TTC-Loc [175] | 1.2 | 40.6 | 3.6 | 5.3 | 24.5
Self supervised | Actionbytes [109] | 1.2 | 39.4 | - | - | -
| Gong et al. [176] | 1.2 | 45.5 | 27.3 | 5.4 | 27.6
US | ACL [129] | 1.2 | 35.2 | 21.4 | 3.1 | 21.1

#### 3.3.1 Fully-supervised Methods

Proposal Generation. Anchor-free methods such as SSN [18], BSN [46], BMN [47], DBG [48], BC-GNN [49], BU [150], BSN++ [149], A2Net [50], and AFSD [51] achieved superior results compared with anchor-based methods such as Yeung et al. [56], SMS [145], TCN [146], SCNN [19], TURN TAP [27], CBR [28], and CDC [14]. This is because anchor-free methods generate temporal action proposals with more flexibility and more precise temporal boundaries. Some methods, such as CTAP [52], MGG [40], PBRNet [41], SRG [148], and RapNet [42], combine the advantages of anchor-based and anchor-free methods and attain higher results.

Proposal Feature Extraction. R-C3D [30] and AFNet [32] employ 3D RoI pooling for feature extraction and obtained low results on ActivityNet due to the lack of receptive-field alignment with the proposal span. TAL-Net [13] and TSA-Net [33] employ a multi-tower network and achieve higher performance compared with 3D RoI pooling methods.
The methods of SSAD [35], S3D [36], MGG [40], PBRNet [153], MLTPN [44], C-TCN [43], RapNet [42], SP-TAD [156], and DaoTAD [155] employ a temporal feature pyramid to extract features from actions of different durations, and achieved superior performance.

Modeling Long-term Dependencies. Sst [53] and SS-TAD [54], which are RNN-based methods, achieve relatively lower results as they cannot generate flexible proposals. PGCN [61], G-TAD [62], BC-GNN [49], AGCN [31], ATAG [63], and VSGN [64] are graph models that capture dependencies between proposals or video segments. Among them, VSGN [64] achieved the best performance by exploiting correlations between cross-scale snippets (original and magnified) and aggregating their features with a graph pyramid network. AGT [65], RTD-Net [66], ATAG [63], and TadTR [152] use transformers to model long-range dependencies. Among them, RTD-Net [66] achieved the best results (on THUMOS14) by customizing the encoder with a boundary-attentive architecture to enhance the discrimination capability of action boundaries.

There are also two state-of-the-art (SOTA) methods that do not belong to the aforementioned categories. TSP [154] proposed a novel supervised pretraining paradigm for clip features, and improved the performance of SOTA methods using features trained with the proposed pretraining strategy. Liu et al. [157] leverage temporal aggregation to improve the feature discriminative power of each snippet and enhance the feature coherence within a single instance.

TABLE V: Summary of fully-supervised methods for temporal action detection. $(+)$ and $(-)$ denote the advantages and disadvantages.

Objective | Category | Methods | Advantages and Limitations
---|---|---|---
Proposal Generation | Anchor-based | SCNN [19], CBR [28], Turn-Tap [27], CDC [14] | \+ Efficiently generate multi-scale proposals; use global info of all anchors to generate reliable confidence scores. \- Proposals are not temporally flexible and precise.
| Anchor-free | TAG [18], BSN [46], BMN [47], DBG [48], BC-GNN [49], BU [150], A2Net [50], AFSD [51], BSN++ [149] | \+ Generate proposals with flexible duration. \+ Global context for proposal evaluation (in BMN, DBG). \+ Global context for proposal generation (in DBG). \- Proposal evaluation is not efficient in some cases. \- Distorting the information of short actions due to down-scaling.
| Anchor-based + Anchor-free | CTAP [52], MGG [40], PBRNet [41], RapNet [42] | \+ Combining the advantages of anchor-based and anchor-free. \- Not modeling long-range dependencies.
Proposal Feature Extraction | 3D RoI pooling | R-C3D [30], AFNet [32] | \+ Fast feature extraction from multi-scale proposals. \- Proposal features may include insufficient or irrelevant info because of receptive-field misalignment.
| Multi-tower Network | TAL-Net [13], TSA-Net [33] | \+ Alignment of receptive field to proposal span to extract rich features from proposals. \- Pre-defined temporal intervals limit the accuracy of proposals.
| TFPN | SSAD [35], S3D [36], MGG [40], C-TCN [43], MLTPN [44], PBRNet [41], A2Net [50], AFSD [51], RapNet [42], SP-TAD [156], DaoTAD [155] | \+ Feature pyramids to detect different scales of actions. \+ Refine the proposal boundaries from coarse to fine (in MGG, PBRNet, and RapNet). \+ Combination with an anchor-free pipeline for flexible and precise proposal generation (A2Net, AFSD). \- No modeling of temporal dependencies in most cases.
Modeling Long-term Dependencies | RNNs | Sst [53], SS-TAD [54] | \+ Modeling long-term dependencies for proposal generation. \- Proposals are not flexible and precise.
| Graphs | PGCN [61], G-TAD [62], BC-GNN [49], AGCN [31], ATAG [63], VSGN [64] | \+ Modeling temporal dependencies between proposals or video segments for proposal generation and refinement. \- Proposal generation is inefficient, or temporal dependencies are used only for proposal refinement.
| Transformer | AGT [65], RTD-Net [66], ATAG [63], TadTR [152] | \+ Modeling non-linear temporal structure and inter-proposal relationships for proposal generation. \- High parametric complexity.

#### 3.3.2 Methods with Limited Supervision

Action Localization with Class-specific Attention. UNet [104] is supervised with the MIL loss, which is not strong enough to predict accurate attention scores. The methods of W-TALC [105], Action Graphs [108], and Deep Metric [106] all target action-background separation by employing a co-activity similarity loss. 3C-Net [102] applied a center loss on video-level aggregated features to enhance feature discriminability. Deep Metric [106] outperforms W-TALC [105], Action Graphs [108], and 3C-Net [102] by defining a class-specific metric for each action category.

Action Localization with Class-agnostic Attention. STPN [117] proposed to learn attention through class-agnostic features but has low performance, as the cross-entropy loss alone does not train accurate attention signals. BG modeling [113] used a clustering loss to separate action from background. BG modeling [113] and BaSNet [118] force all background frames to belong to one specific class, which is not desirable as they do not share any common semantics. RPN [112] and Huang et al. [168] increase inter-class separateness by pushing action (or sub-action) features to their prototypes. Huang et al. [168] outperforms RPN [112] by modeling the relations between the sub-actions of each action. DGAM [116] addressed the action-context confusion by imposing different attentions on different features with a generative model. EM-MIL [165] employed Expectation-Maximization to capture complete action instances and outperformed DGAM [116] on the THUMOS14 dataset.

Direct Action Localization. AutoLoc [114] and CleanNet [115] regress the intervals of action instances for proposal generation, instead of performing hard thresholding.
They obtained lower performance than most recent methods, as they neither model action completeness nor address action-context confusion.

Action Completeness Modeling. The methods of CMCS [120], Hide-and-Seek [123], and Step-by-step [124] target action completeness, and CMCS [120] achieves superior performance. This is because Hide-and-Seek [123] and Step-by-step [124] do not guarantee the discovery of new parts by randomly hiding or removing different video regions. In contrast, CMCS [120] employs a diversity loss to enforce the model to discover complementary action parts. ACL [129] is an unsupervised method that only uses the total count of unique actions appearing in the video, yet it still achieves performance comparable to some weakly-supervised methods such as 3C-Net [102]. Gong et al. [176] is a self-supervised method that attained state-of-the-art results on ActivityNet-1.2 among methods with limited supervision, confirming the advantages of self-supervised learning. Recent state-of-the-art weakly-supervised methods such as D2-Net [174] achieve performance comparable to the semi-supervised methods of Ji et al. [139] and TTC-Loc [175]. This is especially interesting because D2-Net [174] does not use temporal annotations of actions at all, while Ji et al. [139] and TTC-Loc [175] use temporal annotations for at least a small percentage of videos in the dataset.

TABLE VI: Summary of methods with limited supervision for temporal action detection. $(+)$ and $(-)$ denote the advantages and disadvantages.

Objective | Category | Method | Advantages and Limitations
---|---|---|---
Localization with Class-specific Attention | MIL Loss | UNet [104], W-TALC [105], Action Graphs [108], BaSNet [118], 3C-Net [102], ActionBytes [109] | + Learns temporal class activation maps. - MIL loss alone does not predict accurate attention scores. - Only supervises the temporal positions with the highest activation scores.
Co-activity Similarity Loss (CASL) | W-TALC [105], Action Graphs [108], DM [106], ActionBytes [109] | + Action-background separation and reduced intra-class variations. - Action-context confusion is not addressed. - Does not model action completeness.
Center Loss | 3C-Net [102] | + Reduces intra-class variations by pushing action features to class centers. - Imprecise attention signal; supervises video-level aggregated features.
Localization with Class-agnostic Attention | CE Loss | STPN [117], RPN [112], BG modeling [113] | + Learns attention through class-agnostic features. - CE loss alone does not train accurate attention signals.
Clustering Loss | RPN [112], BG modeling [113] | + Separates foreground and background features. - Forces all background frames to belong to one specific class, though they do not share any common semantics.
Prototype Learning | RPN [112], Huang et al. [168] | + Inter-class separation by pushing action (or sub-action) features to their prototypes. - Action-context confusion is not addressed.
Generative Model | DGAM [116], EM-MIL [165] | + Conditional VAE / Expectation-Maximization to separate actions from context frames and capture complete action instances. - Does not model temporal dependencies and relations between sub-actions.
Direct Localization | Action-boundary Contrast | AutoLoc [114], CleanNet [115] | + Regresses the intervals of action instances for proposal generation instead of performing hard thresholding. - Does not model action completeness.
Action Completeness Modeling | Masking | Hide-and-Seek [123], Step-by-step [124] | + Randomly hides or removes different video regions to see different action parts. - Does not guarantee the discovery of new parts.
Diversity Loss | CMCS [120] | + Enforces the model to discover complementary pieces of an action. - Does not model the relation between sub-actions.
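The MIL objective that underlies the class-specific attention methods above (e.g., UNet [104], W-TALC [105]) can be sketched as top-k temporal pooling of per-segment class scores followed by a video-level classification loss. The following minimal sketch illustrates the idea; the function name, the sigmoid cross-entropy form, and the fixed k are illustrative assumptions, not the exact formulation of any one paper:

```python
import math

def mil_video_loss(segment_scores, video_labels, k=8):
    """Sketch of a multiple-instance-learning (MIL) video loss.

    segment_scores: per-segment class scores, a list of T rows of length C.
    video_labels:   multi-hot video-level labels, length C.
    Per class, the top-k segment scores are averaged into a video-level
    score (top-k temporal pooling); a sigmoid cross-entropy against the
    video-level labels then supervises only those peak positions.
    """
    T = len(segment_scores)
    C = len(video_labels)
    k = min(k, T)
    video_scores, loss = [], 0.0
    for c in range(C):
        # top-k temporal pooling for class c
        topk = sorted(row[c] for row in segment_scores)[-k:]
        v = sum(topk) / k
        video_scores.append(v)
        p = 1.0 / (1.0 + math.exp(-v))  # sigmoid video-level probability
        eps = 1e-8
        loss -= (video_labels[c] * math.log(p + eps)
                 + (1 - video_labels[c]) * math.log(1 - p + eps))
    return video_scores, loss / C
```

The two limitations listed in the table are visible in this sketch: only the k peak positions per class influence the loss, and nothing constrains the scores of the remaining segments, which is why an MIL loss alone yields inaccurate attention.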
## 4 Discussions

In this section, we describe real-world applications of temporal action detection and introduce several directions for future work in this domain.

### 4.1 Applications

Temporal action detection has numerous real-world applications, as most videos in practice are untrimmed and contain only a sparse set of actions. In this section, we describe several applications: understanding instructional videos, anomaly detection in surveillance videos, action spotting in sports, and action detection in self-driving cars.

#### 4.1.1 Action Localization in Instructional Videos

With the rising popularity of social media and video-sharing sites such as YouTube, people worldwide upload numerous instructional videos in diverse categories. Millions of people watch these tutorials to learn new tasks such as "making pancakes" or "changing a flat tire." The analysis of instructional videos has drawn increasing attention in recent years, leading to the introduction of several tasks, including step localization and action segmentation [177]. Psychological studies have shown that simplifying and segmenting a video into smaller steps (sub-actions) is a more effective way to learn a new task [144, 178]. For example, the task of "making pancakes" can be segmented into action steps such as "add the eggs," "pour the mixture into the pan," and "heat a frying pan." Many datasets, such as EPIC-Kitchens [179] and the INRIA Instructional Videos Dataset [180], are designed to study action localization and action anticipation. Both step localization and action segmentation are directly related to action detection: step localization localizes the start and end points of a series of steps and recognizes their labels, while action segmentation performs frame-level labeling.
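Matching predicted steps or actions to ground truth in these tasks conventionally relies on the temporal intersection-over-union (tIoU) between [start, end] intervals; a minimal sketch:

```python
def temporal_iou(a, b):
    """Temporal IoU between two (start, end) intervals (seconds or frames)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))  # overlap length
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter        # combined span
    return inter / union if union > 0 else 0.0
```

A predicted interval is then typically counted as correct when its tIoU with a ground-truth interval exceeds a threshold such as 0.5, which is the basis of the mAP@tIoU metrics reported throughout this survey.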
#### 4.1.2 Anomaly Detection in Surveillance Videos

Surveillance cameras are increasingly deployed in public places, monitoring areas of interest to ensure security. With the stream of data from these cameras, video analysis and anomaly detection research has grown. Anomalies are significant deviations of scene entities from normal behavior [181, 182]; fighting, traffic accidents, burglary, and robbery are a few examples. Compared to normal activities, anomalous events occur rarely. Therefore, intelligent computer vision algorithms are required to detect anomalous events automatically and avoid wasting time and labor. In some methods, anomaly detection models are trained only on normal behavior to learn the distribution of normal patterns; these models identify anomalous activities by their dissimilarity to that distribution [183, 184]. In other cases, both normal and anomalous videos are used during training to predict high anomaly scores automatically [185, 186]. In many real-time applications, the system must detect anomalous events as soon as each video frame arrives, based only on past and current data; for instance, an intelligent video surveillance application designed to raise an alarm when suspicious activity is detected. To this end, online action detection algorithms have been developed that accumulate historical observations and predicted future information to analyze current events [187, 188, 189, 190, 191].

#### 4.1.3 Action Spotting in Sports

Professional analysts use sports videos to investigate the strategies in a game, examine new players, and generate meaningful statistics. To analyze the videos, they watch many broadcasts to spot the highlights within a game, which is a time-consuming and costly process. Fortunately, automated sports analytics methods developed in the computer vision field can facilitate the understanding of sports broadcasts.
In recent years, many automated methods have been proposed to help localize the salient actions of a game. They produce statistics of events within a game by analyzing either camera shots or semantic information. Human activity localization in sports videos is studied in [192, 193, 194, 195]; salient game actions are identified in [196, 197]; and automatic game highlight identification and summarization are performed in [198, 199, 200, 201, 202]. Moreover, action spotting, the task of temporally localizing human-induced events, has become popular in soccer broadcasts [3, 203], and some methods aim to automatically detect goals, penalties, corner kicks, and card events [204]. Action detection algorithms can inspire many of the tasks mentioned above.

#### 4.1.4 Action Detection in Autonomous Driving

With the rapid development and advancement of cars and other vehicles in urban transportation, autonomous driving has attracted increasing attention in recent decades. The cameras mounted on self-driving cars capture a real-time stream of videos that must be processed with online algorithms. The car should be aware of the surrounding environment and spot road users, including pedestrians, cyclists, and other vehicles, to make safe autonomous decisions. It should also detect and anticipate road users' activities, such as moving away, moving towards, and crossing the road, as well as anomalous events, in real time to adjust its speed and handle the situation. Therefore, spatio-temporal action localization algorithms need to be developed to guarantee the safety of self-driving cars [205]. Yao et al. [206] proposed a traffic anomaly detection method with a when-where-what pipeline to detect, localize, and recognize anomalous events from egocentric videos. To improve the detection and prediction of pedestrian movements, Rasouli et al.
[4] studied how pedestrian behavior depends on various factors, such as the demographics of the pedestrians, traffic dynamics, and environmental conditions. Moreover, Mahadevan et al. [207] proposed an immersive VR-based mixed-traffic pedestrian simulator to examine pedestrian behavior in street-crossing tasks.

### 4.2 Future work

Weakly-supervised action localization in untrimmed videos has drawn much research attention, as it requires only video-level labels during training instead of exhaustive annotation of temporal boundaries. Consequently, knowledge transfer from publicly available trimmed videos is a promising trend to make up for the coarse-grained video-level annotations in weakly-supervised settings. Nevertheless, domain-adaptation schemes must bridge the domain gap between trimmed and untrimmed videos to transfer robust and reliable knowledge. Only a few methods have explored knowledge transfer from trimmed videos [109], [159], [208], [209], but we expect to see more in the future.

In recent years, zero-shot learning (ZSL) has emerged as a rising trend in visual recognition, because it is challenging to collect a large number of samples for each class during training. ZSL transfers knowledge from seen classes with sufficiently many instances in order to generalize models to unseen classes with no training samples. The task of zero-shot temporal activity detection (ZSTAD) was introduced in [210] to generalize the applicability of action detection methods to newly emerging or rare events that are not included in the training set. ZSTAD is highly challenging because each untrimmed video in the testing set may contain multiple novel action classes that must be localized and detected. It is worth mentioning that activity detection with few-shot learning has recently been explored in [109], [211], [212], [213], [214], [215].
The advancement of both zero-shot and few-shot action detection is anticipated in the near future.

## 5 Conclusion

Action detection schemes have expedited progress in many real-world applications, such as instructional video analysis, anomaly detection in surveillance videos, sports analysis, and autonomous driving. The advancement of learning methods with limited supervision has facilitated action detection by removing the costly need to annotate the temporal boundaries of actions in long videos. This survey has extensively studied recently developed deep learning-based methods for action detection from different aspects, including fully-supervised schemes, methods with limited supervision, benchmark datasets, performance analysis, applications, and future directions. The performance analysis and future directions are summarized to inspire the design of new and efficient action detection methods that serve the computer vision community.

## References

* [1] Y.-G. Jiang, J. Liu, A. R. Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar, “Thumos challenge: Action recognition with a large number of classes,” 2014. * [2] F. Rea, A. Vignolo, A. Sciutti, and N. Noceti, “Human motion understanding for selecting action timing in collaborative human-robot interaction,” _Front. Robot. AI_ , vol. 6, p. 58, 2019. * [3] A. Cioppa, A. Deliege, S. Giancola, B. Ghanem, M. V. Droogenbroeck, R. Gade, and T. B. Moeslund, “A context-aware loss function for action spotting in soccer videos,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 13 126–13 136. * [4] A. Rasouli and J. K. Tsotsos, “Autonomous vehicles that interact with pedestrians: A survey of theory and practice,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 21, no. 3, pp. 900–918, 2019. * [5] S. Herath, M. Harandi, and F. Porikli, “Going deeper into action recognition: A survey,” _Image and vision computing_ , vol. 60, pp. 4–21, 2017.
* [6] C. Feichtenhofer, H. Fan, J. Malik, and K. He, “Slowfast networks for video recognition,” in _Proceedings of the IEEE international conference on computer vision_ , 2019, pp. 6202–6211. * [7] D. Ghadiyaram, D. Tran, and D. Mahajan, “Large-scale weakly-supervised pre-training for video action recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 12 046–12 055. * [8] H. Duan, Y. Zhao, Y. Xiong, W. Liu, and D. Lin, “Omni-sourced webly-supervised learning for video recognition,” _arXiv preprint arXiv:2003.13042_ , 2020. * [9] H. Idrees, A. R. Zamir, Y.-G. Jiang, A. Gorban, I. Laptev, R. Sukthankar, and M. Shah, “The thumos challenge on action recognition for videos “in the wild”,” _Computer Vision and Image Understanding_ , vol. 155, pp. 1–23, 2017. * [10] A. Gorban, H. Idrees, Y.-G. Jiang, A. R. Zamir, I. Laptev, M. Shah, and R. Sukthankar, “Thumos challenge: Action recognition with a large number of classes,” 2015. * [11] H. Kuehne, A. Arslan, and T. Serre, “The language of actions: Recovering the syntax and semantics of goal-directed human activities,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2014, pp. 780–787. * [12] C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager, “Temporal convolutional networks for action segmentation and detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 156–165. * [13] Y.-W. Chao, S. Vijayanarasimhan, B. Seybold, D. A. Ross, J. Deng, and R. Sukthankar, “Rethinking the faster r-cnn architecture for temporal action localization,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 1130–1139. * [14] Z. Shou, J. Chan, A. Zareian, K. Miyazawa, and S.-F.
Chang, “Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 5734–5743. * [15] Z. Gao, L. Wang, Q. Zhang, Z. Niu, N. Zheng, and G. Hua, “Video imprint segmentation for temporal action detection in untrimmed videos,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 33, 2019, pp. 8328–8335. * [16] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 3431–3440. * [17] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes challenge 2012 (voc2012) results (2012),” in _URL http://www.pascal-network.org/challenges/VOC/voc2011/workshop/index.html_ , 2011. * [18] Y. Zhao, Y. Xiong, L. Wang, Z. Wu, X. Tang, and D. Lin, “Temporal action detection with structured segment networks,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 2914–2923. * [19] Z. Shou, D. Wang, and S.-F. Chang, “Temporal action localization in untrimmed videos via multi-stage cnns,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 1049–1058. * [20] Y. Xiong, L. Wang, Z. Wang, B. Zhang, H. Song, W. Li, D. Lin, Y. Qiao, L. Van Gool, and X. Tang, “Cuhk & ethz & siat submission to activitynet challenge 2016,” _arXiv preprint arXiv:1608.00797_ , 2016. * [21] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 4489–4497. * [22] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in _Advances in neural information processing systems_ , 2014, pp. 568–576. * [23] L.
Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, “Temporal segment networks: Towards good practices for deep action recognition,” in _European conference on computer vision_. Springer, 2016, pp. 20–36. * [24] J. Carreira and A. Zisserman, “Quo vadis, action recognition? a new model and the kinetics dataset,” in _proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 6299–6308. * [25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778. * [26] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” _arXiv preprint arXiv:1502.03167_ , 2015. * [27] J. Gao, Z. Yang, K. Chen, C. Sun, and R. Nevatia, “Turn tap: Temporal unit regression network for temporal action proposals,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 3628–3636. * [28] J. Gao, Z. Yang, and R. Nevatia, “Cascaded boundary regression for temporal action detection,” _arXiv preprint arXiv:1705.01180_ , 2017. * [29] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in neural information processing systems_ , 2015, pp. 91–99. * [30] H. Xu, A. Das, and K. Saenko, “R-c3d: Region convolutional 3d network for temporal activity detection,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 5783–5792. * [31] J. Li, X. Liu, Z. Zong, W. Zhao, M. Zhang, and J. Song, “Graph attention based proposal 3d convnets for action detection.” in _AAAI_ , 2020, pp. 4626–4633. * [32] G. Chen, C. Zhang, and Y. Zou, “Afnet: Temporal locality-aware network with dual structure for accurate and fast action detection,” _IEEE Transactions on Multimedia_ , 2020. * [33] G. Gong, L. Zheng, and Y. 
Mu, “Scale matters: Temporal scale aggregation network for precise action localization in untrimmed videos,” in _2020 IEEE International Conference on Multimedia and Expo (ICME)_. IEEE, 2020, pp. 1–6. * [34] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in _European conference on computer vision_. Springer, 2016, pp. 21–37. * [35] T. Lin, X. Zhao, and Z. Shou, “Single shot temporal action detection,” in _Proceedings of the 25th ACM international conference on Multimedia_ , 2017, pp. 988–996. * [36] D. Zhang, X. Dai, X. Wang, and Y.-F. Wang, “S3d: single shot multi-span detector via fully 3d convolutional networks,” _arXiv preprint arXiv:1807.08069_ , 2018. * [37] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical image computing and computer-assisted intervention_. Springer, 2015, pp. 234–241. * [38] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 2117–2125. * [39] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, “Dssd: Deconvolutional single shot detector,” _arXiv preprint arXiv:1701.06659_ , 2017. * [40] Y. Liu, L. Ma, Y. Zhang, W. Liu, and S.-F. Chang, “Multi-granularity generator for temporal action proposal,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 3604–3613. * [41] Q. Liu and Z. Wang, “Progressive boundary refinement network for temporal action detection.” * [42] J. Gao, Z. Shi, G. Wang, J. Li, Y. Yuan, S. Ge, and X. Zhou, “Accurate temporal action proposal generation with relation-aware pyramid network.” in _AAAI_ , 2020, pp. 10 810–10 817. * [43] X. Li, T. Lin, X. Liu, C. Gan, W. Zuo, C. Li, X. Long, D. He, F. Li, and S. 
Wen, “Deep concept-wise temporal convolutional networks for action localization,” _arXiv preprint arXiv:1908.09442_ , 2019. * [44] X. Wang, C. Gao, S. Zhang, and N. Sang, “Multi-level temporal pyramid network for action detection,” in _Chinese Conference on Pattern Recognition and Computer Vision (PRCV)_. Springer, 2020, pp. 41–54. * [45] J. B. Roerdink and A. Meijster, “The watershed transform: Definitions, algorithms and parallelization strategies,” _Fundamenta informaticae_ , vol. 41, no. 1, 2, pp. 187–228, 2000. * [46] T. Lin, X. Zhao, H. Su, C. Wang, and M. Yang, “Bsn: Boundary sensitive network for temporal action proposal generation,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 3–19. * [47] T. Lin, X. Liu, X. Li, E. Ding, and S. Wen, “Bmn: Boundary-matching network for temporal action proposal generation,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 3889–3898. * [48] C. Lin, J. Li, Y. Wang, Y. Tai, D. Luo, Z. Cui, C. Wang, J. Li, F. Huang, and R. Ji, “Fast learning of temporal action proposal via dense boundary generator.” in _AAAI_ , 2020, pp. 11 499–11 506. * [49] Y. Bai, Y. Wang, Y. Tong, Y. Yang, Q. Liu, and J. Liu, “Boundary content graph neural network for temporal action proposal generation,” _arXiv preprint arXiv:2008.01432_ , 2020. * [50] L. Yang, H. Peng, D. Zhang, J. Fu, and J. Han, “Revisiting anchor mechanisms for temporal action localization,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 8535–8548, 2020. * [51] C. Lin, C. Xu, D. Luo, Y. Wang, Y. Tai, C. Wang, J. Li, F. Huang, and Y. Fu, “Learning salient boundary feature for anchor-free temporal action localization,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 3320–3329. * [52] J. Gao, K. Chen, and R. Nevatia, “Ctap: Complementary temporal action proposal generation,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 
68–83. * [53] S. Buch, V. Escorcia, C. Shen, B. Ghanem, and J. Carlos Niebles, “Sst: Single-stream temporal action proposals,” in _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , 2017, pp. 2911–2920. * [54] S. Buch, V. Escorcia, B. Ghanem, L. Fei-Fei, and J. C. Niebles, “End-to-end, single-stream temporal action detection in untrimmed videos,” 2019. * [55] J. Yuan, B. Ni, X. Yang, and A. A. Kassim, “Temporal action localization with pyramid of score distribution features,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 3093–3102. * [56] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei, “End-to-end learning of action detection from frame glimpses in videos,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 2678–2687. * [57] S. Yeung, O. Russakovsky, N. Jin, M. Andriluka, G. Mori, and L. Fei-Fei, “Every moment counts: Dense detailed labeling of actions in complex videos,” _International Journal of Computer Vision_ , vol. 126, no. 2-4, pp. 375–389, 2018. * [58] V. Escorcia, F. C. Heilbron, J. C. Niebles, and B. Ghanem, “Daps: Deep action proposals for action understanding,” in _European Conference on Computer Vision_. Springer, 2016, pp. 768–784. * [59] B. Singh, T. K. Marks, M. Jones, O. Tuzel, and M. Shao, “A multi-stream bi-directional recurrent neural network for fine-grained action detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 1961–1970. * [60] S. Ma, L. Sigal, and S. Sclaroff, “Learning activity progression in lstms for activity detection and early detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 1942–1950. * [61] R. Zeng, W. Huang, M. Tan, Y. Rong, P. Zhao, J. Huang, and C. 
Gan, “Graph convolutional networks for temporal action localization,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 7094–7103. * [62] M. Xu, C. Zhao, D. S. Rojas, A. Thabet, and B. Ghanem, “G-tad: Sub-graph localization for temporal action detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 10 156–10 165. * [63] S. Chang, P. Wang, F. Wang, H. Li, and J. Feng, “Augmented transformer with adaptive graph for temporal action proposal generation,” _arXiv preprint arXiv:2103.16024_ , 2021. * [64] C. Zhao, A. Thabet, and B. Ghanem, “Video self-stitching graph network for temporal action localization,” _arXiv preprint arXiv:2011.14598_ , 2020. * [65] M. Nawhal and G. Mori, “Activity graph transformer for temporal action localization,” _arXiv preprint arXiv:2101.08540_ , 2021. * [66] J. Tan, J. Tang, L. Wang, and G. Wu, “Relaxed transformer decoders for direct action proposal generation,” _arXiv preprint arXiv:2102.01894_ , 2021. * [67] I. Laptev and P. Pérez, “Retrieving actions in movies,” in _2007 IEEE 11th International Conference on Computer Vision_. IEEE, 2007, pp. 1–8. * [68] L. Cao, Z. Liu, and T. S. Huang, “Cross-dataset action detection,” in _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_. IEEE, 2010, pp. 1998–2005. * [69] M. Jain, J. Van Gemert, H. Jégou, P. Bouthemy, and C. G. Snoek, “Action localization with tubelets from motion,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2014, pp. 740–747. * [70] D. Oneata, J. Revaud, J. Verbeek, and C. Schmid, “Spatio-temporal object detection proposals,” in _European conference on computer vision_. Springer, 2014, pp. 737–752. * [71] W. Chen and J. J. Corso, “Action detection by implicit intentional motion clustering,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 3298–3306. * [72] J. C. Van Gemert, M. Jain, E. 
Gati, C. G. Snoek _et al._ , “Apt: Action localization proposals from dense trajectories.” in _BMVC_ , vol. 2, 2015, p. 4. * [73] M. M. Puscas, E. Sangineto, D. Culibrk, and N. Sebe, “Unsupervised tube extraction using transductive learning and dense trajectories,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 1653–1661. * [74] G. Gkioxari and J. Malik, “Finding action tubes,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 759–768. * [75] R. Hou, C. Chen, and M. Shah, “Tube convolutional neural network (t-cnn) for action detection in videos,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 5822–5831. * [76] X. Peng and C. Schmid, “Multi-region two-stream r-cnn for action detection,” in _European conference on computer vision_. Springer, 2016, pp. 744–759. * [77] G. Singh, S. Saha, M. Sapienza, P. H. Torr, and F. Cuzzolin, “Online real-time multiple spatiotemporal action localisation and prediction,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 3637–3646. * [78] S. Saha, G. Singh, M. Sapienza, P. H. Torr, and F. Cuzzolin, “Deep learning for detecting multiple space-time action tubes in videos,” _arXiv preprint arXiv:1608.01529_ , 2016. * [79] P. Weinzaepfel, Z. Harchaoui, and C. Schmid, “Learning to track for spatio-temporal action localization,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 3164–3172. * [80] Z. Yang, J. Gao, and R. Nevatia, “Spatio-temporal action detection with cascade proposal and location anticipation,” _arXiv preprint arXiv:1708.00042_ , 2017. * [81] Y. Ye, X. Yang, and Y. Tian, “Discovering spatio-temporal action tubes,” _Journal of Visual Communication and Image Representation_ , vol. 58, pp. 515–524, 2019. * [82] Z. Li, K. Gavrilyuk, E. Gavves, M. Jain, and C. G. 
# Rare collapse of fermionic quasiparticles upon coupling to local bosons Piotr Wrzosek Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, PL-02093 Warsaw, Poland Adam Kłosiński Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, PL-02093 Warsaw, Poland Krzysztof Wohlfeld Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, PL-02093 Warsaw, Poland Cliò Efthimia Agrapidis<EMAIL_ADDRESS>Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, PL-02093 Warsaw, Poland ###### Abstract We study the stability of the fermionic quasiparticle in a fermion-boson model on a Bethe lattice, with fermions interacting with local bosons by Peierls coupling. We solve the problem by mapping it onto a non-interacting chain with site-dependent potential. We show that, despite a finite number of bosonic excitations costing zero energy, it is hard for the quasiparticle to completely collapse. The quasiparticle disappearance becomes easier with an increase in: (i) the total number of bosons with zero energy, and (ii) the relative strength of the coupling between bosons and fermions. The postulated model can, among other things, be applied to study systems in which fermions are introduced into antiferromagnetic (or antiferro-orbital) domains surrounded by ferromagnetic (or ferro-orbital) ordered states. This might take place in the overdoped cuprates or upon doping manganese or vanadium oxides. Finally, we show how this model leads to an in-depth understanding of the onset of quasiparticles in the 1D and 2D $t$-$J^{z}$ model. ## I Introduction One of the most standard approaches to tackle a quantum many-body system is to approximate it by weakly-interacting long-lived quasiparticles [1]. This is a very successful picture as destroying a quasiparticle turns out to be a far more complex task than naively expected. 
One of the first physical systems posited to exhibit quasiparticle decay was 4He: already in 1959 Pitaevskii suggested that the phononic quasiparticle was allowed to decay into a two-roton continuum [2, 3]. Yet, it is nowadays believed that the quasiparticle does not enter the two-roton continuum in 4He [3] and instead the decay is exponentially avoided due to strong interactions [4, 5]. A somewhat similar situation takes place in some non-collinearly ordered magnets with small spin. Here, the decay of magnon quasiparticles is also not observed [6, 4]. It is then only for large spins and weaker magnon interactions that the magnon decay takes place and can be observed [7, 8]. Nevertheless, these decays do not seem to overdamp the magnons in realistic spin models [9]. To search for the total quasiparticle collapse, often referred to as non-Fermi liquid or ‘unparticle’ physics [10, 11] and realised in the high-temperature superconducting cuprates (cf. [12, 13]), one has to go to more exotic models. These typically concern fermions coupled to a gapless mobile boson with a very particular type of fermion-boson coupling [14]. The latter occurs when the gapless bosons are Goldstone modes and the coupling to fermions does not vanish in the limit of low energy-momentum transfer [15, 14]. However, as suggested in Ref. [14], such a coupling is relatively rare in nature (its best realisation being a nematic Fermi liquid). Another, probably more common, route is to explore interactions between massless gauge bosons [16] and fermions [17, 18, 19, 20], which can lead to non-Fermi liquid behavior in two dimensions (2D). Such couplings may play a vital role in quantum Hall systems [21], spin liquids [22] or quantum critical systems such as heavy fermions [23]. In this paper we explore yet another route to quasiparticle extinction. To this end, we adhere to a fermion-boson model with a Peierls-type coupling between the two particle species.
The primary difference with respect to all of the cases above is that the bosons in our model are immobile, i.e., local, and the lattice translational symmetry is explicitly broken. On the other hand, also for the here-studied model, it is important that a finite number of bosons can become massless (cost zero energy). While such a model is interesting per se, we believe it to be of relevance to, for instance, systems in which fermions are introduced into antiferromagnetic (or antiferro-orbital) domains surrounded by ferromagnetic (or ferro-orbital) ordered states. This might take place in overdoped cuprates [24, 25, 26, 27, 28, 29, 30] or upon doping manganese or vanadium oxides [31, 32, 33, 34, 35]. All of the results obtained in the paper follow from exact analytical diagonalisation of the Hamiltonian. This enables us to address the issue of quasiparticle stability in an unbiased way. The main result is that a complete quasiparticle collapse is in general possible in the fermion-boson model – though only in a small portion of the model parameter space, making it a rare occurrence. The quasiparticle disappearance is allowed once a certain number of local bosonic excitations cost zero energy. This decay becomes easier with an increase in: (i) the total number of bosons with zero energy, and (ii) the relative strength of the coupling between bosons and fermions. Finally, we uncover an interesting relation between the model introduced in this paper and the well-studied problem of a mobile hole introduced in the one-dimensional (1D) or 2D Ising antiferromagnet – as given by the 1D or 2D $t$–$J^{z}$ model [36, 37, 15, 38, 39, 40, 41, 42, 43, 44, 45, 46]. It turns out that these models, which always support a quasiparticle solution, are a specific realisation of the class of fermion-boson models considered in this paper. This means that the quasiparticle that the hole in the Ising antiferromagnet always forms is a special, rather than generic, outcome within this class of models.
The paper is organised as follows: in Sec. II we introduce the fermion-boson Hamiltonian and the methods used to perform the calculations, including mapping the interacting model to a non-interacting chain with the same spectral properties; in Sec. III we study the effect of the coupling to the impurity-like bosons, i.e. bosons which all cost zero energy except for one particular site, on the appearance of a quasiparticle; then in Sec. IV we extend our study to string-like local bosons, i.e. bosons which have finite energy for a whole range of sites. Within Secs. III and IV, we also show how our results relate to the $t$-$J^{z}$ model on a 1D and 2D Bethe lattice, respectively. Lastly, we discuss the results and draw our conclusions in Sec. V.

## II Models

### II.1 Fermion-boson model

We consider an interacting Hamiltonian on a Bethe lattice with coordination number $z$, $\displaystyle\mathcal{H}=-t$ $\displaystyle\sum_{\langle i,j\rangle}\left[h_{i}^{\dagger}h_{j}\left(a_{i}+a_{j}^{\dagger}\right)\right.+\left.h_{j}^{\dagger}h_{i}\left(a_{j}+a_{i}^{\dagger}\right)\right]$ (1) $\displaystyle+\sum_{i}J_{i}a_{i}^{\dagger}a_{i},$ where $h_{i}$ are fermion annihilation operators and $a_{i}$ are hard-core boson annihilation operators at site $i$, $t$ is the Peierls coupling between fermions and bosons, and $J_{i}$ is the on-site boson potential. The model lives in a restricted Hilbert space including states with the constraint $\displaystyle n_{a_{i}}+n_{h_{i}}\leq 1,$ (2) with $n_{a_{i}}$ and $n_{h_{i}}$ being the number operators at site $i$ for bosons and fermions, respectively. Figure 1: Example of the distance labelling on a Bethe lattice with $z=3$. The numbering corresponds to the distance $d_{i}$ from the root of the Bethe lattice, labelled as $0$. We chose one arbitrary site for the root of the Bethe lattice. We label this site with index 0 and we will call it the origin of the lattice.
Let $d_{i}$ stand for the distance of site $i$ to the origin of the lattice, measured as the number of edges in the graph of the lattice that separate site $i$ from site $0$, as shown in Fig. 1. With this, we impose a geometrical constraint on the shape of the on-site boson potentials $J_{i}$ that renders equivalent all the branches of the Bethe lattice (starting at site 0), $d_{i}=d_{j}\Rightarrow J_{i}=J_{j}.$ (3) In this work we are interested in the single-fermion spectral function $\mathcal{A}(\omega)=-\frac{1}{\pi}\text{Im}\mathcal{G}(\omega+i\delta),$ (4) where the local single-particle Green’s function $\mathcal{G}$ is given by: $\mathcal{G}(\omega+i\delta)=\left\langle\varnothing\left|h_{0}\frac{1}{\omega-\mathcal{H}+i\delta}h^{\dagger}_{0}\right|\varnothing\right\rangle,$ (5) where $\lvert\varnothing\rangle$ denotes the vacuum state for the particles in Eq. (1) and the fermion $h$ is created at site $0$ of the Bethe lattice. Here the local Green’s function describes the motion of a single fermion governed by Hamiltonian (1): a single fermion is added at the site labelled 0 of the Bethe lattice, propagates via Hamiltonian (1), and is finally annihilated at site 0. Note that we choose to probe the system using the local Green’s function, for the model we consider is anisotropic in real space (through, among other features, the site-dependent boson energies $J_{i}$).

### II.2 Mapping to a non-interacting model

Since we are interested in the spectral function (4), we can restrict ourselves to the subspace of states that contribute to it [see below]. Within this subspace, the above model in Eq. (1) can be mapped to a single spinless fermion moving on an otherwise empty 1D chain with nearest-neighbour hopping and an external potential.
The hopping amplitudes are equal everywhere other than between the 0-th site at the center of the chain and its two neighbors, and the chain is placed in a site-dependent external potential that is symmetric around the 0-th site. The mapping is done via $V_{i}=\sum_{d_{n}=0}^{i-1}J_{n}\quad{\rm and}\quad\left\\{\begin{array}[]{c}\tau_{0}=t\sqrt{z}\\\ \tau=t\sqrt{z-1}\end{array}\right.$ (6) and the non-interacting Hamiltonian is given by $\displaystyle H$ $\displaystyle=\sum^{\infty}_{i=-\infty}\;\left[-\tau\left(c^{\dagger}_{i+1}c_{i}+\text{h.c.}\right)+\;V_{|i|}\;n_{i}\right]$ (7) $\displaystyle+\left(\tau-\frac{\tau_{0}}{\sqrt{2}}\right)\left(c^{\dagger}_{1}c_{0}+c_{0}^{\dagger}c_{-1}+\text{h.c.}\right),$ where $c^{\dagger}_{i}(c_{i})$ creates (annihilates) a spinless fermion at site $i$, $n_{i}=c^{\dagger}_{i}c_{i}$ is the fermion density operator at site $i$, $\tau$ and $\tau_{0}$ are the hopping amplitudes and $V_{|i|}$ is an external potential taken to be symmetric around the origin $i=0$, where the fermion is originally introduced in the system. Without loss of generality we assume that the hopping amplitude $\tau>0$. Since we are interested in the single-fermion spectral function (4), we need to define the relevant quantity also for the non-interacting model: $\displaystyle A(\omega)=-\frac{1}{\pi}\text{Im}{G}(\omega+i\delta),$ (8) where the corresponding single-particle Green’s function is $G(\omega+i\delta)=\left\langle\varnothing\left|c_{0}\frac{1}{\omega-{H}+i\delta}c^{\dagger}_{0}\right|\varnothing\right\rangle.$ (9) Now, $\lvert\varnothing\rangle$ denotes the vacuum for $c$ fermions, i.e. the empty 1D chain. Model (7-8) is an effective model resulting from the mapping of model (1-4), but it can also be seen as a model describing a fermion in a 1D crystal (or optical lattice) with a particular pattern of impurities giving rise to the potential $V_{|i|}$, cf. [47] for a recent work on a related problem.
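As a concrete illustration of the mapping (6), here is a minimal sketch in Python (the function names and parameter values are ours, chosen for illustration only):

```python
import math

def chain_parameters(t, z, J_of_distance, n_sites):
    """Map the Bethe-lattice fermion-boson model (1) onto the effective
    1D chain (7) via Eq. (6): tau0 = t*sqrt(z), tau = t*sqrt(z - 1),
    and V_i is the accumulated boson energy along a path of length i."""
    tau0 = t * math.sqrt(z)
    tau = t * math.sqrt(z - 1)
    # V_i = sum_{d=0}^{i-1} J_d  [Eq. (6)]; the empty sum gives V_0 = 0
    V = [sum(J_of_distance(d) for d in range(i)) for i in range(n_sites)]
    return tau0, tau, V

# Impurity-like boson energies of Eq. (18): J/2 at the origin, zero elsewhere
tau0, tau, V = chain_parameters(1.0, 3, lambda d: 0.5 if d == 0 else 0.0, 5)
```

For this impurity-like choice the chain potential comes out as $V = [0, J/2, J/2, \ldots]$, i.e. exactly the point potential of Eq. (19) with $V = J/2$.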
### II.3 Solution

Our goal is to calculate the spectral function defined in Eq. (8). To this end we choose the basis $\mathcal{B}$ in the following manner: We start with the initial state $\lvert 0\rangle\equiv c^{\dagger}_{0}\lvert\varnothing\rangle$, which corresponds to a spinless fermion located at site $i=0$. From this state we construct states ‘reachable’ by the propagator $1/(\omega-{H}+i\delta)$ by repeated application of the Hamiltonian on our initial state, namely $H^{n}\lvert 0\rangle$, for some $n\in\mathbb{N}$. This leads to the basis $\mathcal{B}$ defined as $\begin{split}\lvert 0\rangle&=c^{\dagger}_{0}\lvert\varnothing\rangle,\\\ \lvert i\rangle&=\frac{1}{\sqrt{2}}\left(c^{\dagger}_{-i}+c^{\dagger}_{i}\right)\lvert\varnothing\rangle.\end{split}$ (10) In this basis, the Hamiltonian (7) takes the form $H=\sum_{i,j}h_{i,j}|i\rangle\langle j|$. Note that we can restrict the Hilbert subspace to $\mathcal{B}$, since all neglected states, i.e. the anti-symmetric ones $\lvert i\rangle_{-}=\frac{1}{\sqrt{2}}\left(c^{\dagger}_{-i}-c^{\dagger}_{i}\right)\lvert\varnothing\rangle$, give zero contribution to the spectral function [48]. An essential feature of the introduced basis $\mathcal{B}$ is that the Hamiltonian matrix $[h_{i,j}]$ becomes tridiagonal, $[h_{i,j}]=\begin{bmatrix}V_{0}&-\tau_{0}&&&\\\ -\tau_{0}&V_{1}&-\tau&&\\\ &-\tau&V_{2}&-\tau&\\\ &&-\tau&V_{3}&\ddots\\\ &&&\ddots&\ddots\\\ \end{bmatrix}.$ (11) This yields a simple formula for the Green’s function in the form of a continued fraction, $G(\omega)=\frac{1}{\omega-V_{0}-\Sigma(\omega)}.$ (12) Let us denote $\displaystyle\Gamma(\omega)$ $\displaystyle=\frac{\tau^{2}}{\omega-V_{n}-\Gamma(\omega)}$ (13) $\displaystyle\Omega_{i<n}=$ $\displaystyle\omega-V_{i}\quad\text{and}\quad\Omega_{n}=\frac{\tau^{2}}{\Gamma(\omega)},$ (14) where $i\geq 0$ denotes the distance from the origin.
Then, we can write the expression for the self-energy for a generic symmetric potential centered around $i=0$: $\displaystyle\Sigma(\omega)=\frac{\tau_{0}^{2}}{\Omega_{1}-\frac{\tau^{2}}{\Omega_{2}-\frac{\tau^{2}}{\Omega_{3}-\ldots}}}=\frac{\tau_{0}^{2}}{\tau^{2}}\operatornamewithlimits{\mathcal{K}}_{i=1}^{n}\left(\frac{\tau^{2}}{\Omega_{i}}\right).$ (15) In what follows, we are interested in the properties of the state contributing to the spectral function $A(\omega)$ at the lowest energy (denoted as $\omega_{\mathrm{QP}}$). In particular, the central question is whether this state is a discrete one (i.e. a bound state) or is part of a continuum. As in the fermion-boson model this question corresponds to the issue of quasiparticle stability, we denote the spectral weight carried by the discrete state at $\omega_{\mathrm{QP}}$ with the quasiparticle spectral weight $a_{\mathrm{QP}}$. In terms of the quasiparticle energy $\omega_{\mathrm{QP}}$, this is given by: $\displaystyle a_{\mathrm{QP}}$ $\displaystyle(\tau,\tau_{0},V)=\lim_{\omega\to\omega_{\mathrm{QP}}}\left(\omega-\omega_{\mathrm{QP}}\right)\;G(\omega)=$ (16) $\displaystyle=\lim_{\omega\to\omega_{\mathrm{QP}}}\frac{1}{1-\frac{d}{d\omega}\Sigma(\omega)}$ Hence, to obtain the expression for the quasiparticle residue we calculate the derivative of the self-energy with respect to the frequency $\omega$, $\displaystyle\frac{d}{d\omega}\Sigma(\omega)$ $\displaystyle=\frac{\tau_{0}^{2}}{\tau^{2}}\frac{d}{d\omega}\operatornamewithlimits{\mathcal{K}}_{i=1}^{n}\left(\frac{\tau^{2}}{\Omega_{i}}\right)=$ (17) $\displaystyle=\frac{\tau_{0}^{2}}{\tau^{4}}\sum_{j=1}^{n}\left(-1\right)^{j+1}\prod_{i=1}^{j}\left[\operatornamewithlimits{\mathcal{K}}_{l=i}^{n}\left(\frac{\tau^{2}}{\Omega_{l}}\right)\right]^{2}\frac{d\Omega_{j}}{d\omega},$ where $\frac{d}{d\omega}\Omega_{i<n}=1$ and $\frac{d}{d\omega}\Omega_{n}=\frac{d}{d\omega}\left(\frac{\tau^{2}}{\Gamma(\omega)}\right)$.
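The continued fraction (15) truncates naturally in a numerical evaluation. A sketch in Python; the truncation depth, broadening $\delta$, and the convention of padding the potential tail with its last entry are our illustrative choices, not from the paper:

```python
import math

def spectral_function(omega, tau0, tau, V, delta=0.05, depth=500):
    """A(w) = -(1/pi) Im G(w + i*delta), with G given by the continued
    fraction of Eqs. (12) and (15) truncated after 'depth' levels.
    Beyond len(V) the potential is padded with its last entry."""
    w = omega + 1j * delta
    sigma = 0.0 + 0.0j
    for i in range(depth, 0, -1):  # evaluate the innermost level first
        Vi = V[i] if i < len(V) else V[-1]
        # only the outermost numerator carries tau0^2 [cf. Eq. (15)]
        hop2 = tau0 ** 2 if i == 1 else tau ** 2
        sigma = hop2 / (w - Vi - sigma)
    G = 1.0 / (w - V[0] - sigma)
    return -G.imag / math.pi
```

The sum rule $\int A(\omega)\,d\omega = 1$ gives a quick sanity check that the chosen truncation depth is sufficient.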
Note that $\omega_{\mathrm{QP}}$ in (16) depends on the specific form of the potential $V_{|i|}$, so we cannot provide a general form for this quantity at this point.

## III Impurity-like bosons

We consider a class of fermion-boson models with impurity-like bosons. To this end, we assume that the site-dependent bosonic energy $J_{n}$ in Eq. (1) takes the form $J_{n}=\begin{cases}J/2&\mbox{if }d_{n}=0\\\ 0&\mbox{if }d_{n}\neq 0\end{cases},$ (18) i.e., all but one boson in the Bethe lattice are massless. This may look like a rather unphysical regime – however, as shown below, it may find its realisation in a number of physical systems. The above case corresponds to considering a point potential at site $n=0$ (i.e. at the position where we introduce the spinless fermion in the system), $V_{n}=\begin{cases}0&\mbox{if }n=0\\\ V&\mbox{if }n\neq 0\end{cases},$ (19) in the non-interacting model given in Eq. (7). The other parameters of the two Hamiltonians, namely the coordination number $z$ in (1) and, correspondingly, the $\tau_{0}/\tau$ ratio in (7), are left free. Since the two models are equivalent, we present the solution in terms of the easier-to-solve non-interacting model (7).

### III.1 Solution

Figure 2: The quasiparticle spectral weight $a_{\mathrm{QP}}(V)$ for different values of ${\tau_{0}}/{\tau}$ as a function of the depth of the point potential $V$ [cf. Eq. (19)]. Note that the lines start at $V>0$ for any ${\tau_{0}}/{\tau}<\sqrt{2}$. Dashed lines show values of $\tau_{0}/\tau$ for which the mapping to the Bethe lattice coordination number $z$ leads to a fractional value of the latter. In what follows we leave the ratio $\tau_{0}/\tau$ as a parameter in the system. However, we assume that $\tau_{0}\leq\sqrt{2}\tau$. This is due to an earlier study [48] which shows that larger values of $\tau_{0}$ always stabilise a quasiparticle solution. In fact, $\tau_{0}/\tau=\sqrt{2}$ is a limiting case which corresponds to a non-interacting isotropic chain [see Eq.
(7)] and, as shown below, is the case for which the quasiparticle solution is always present, independently of the value of the point potential. The above assumption leads to the following expression for the self-energy (15), $\Sigma(\omega)=\left\\{\begin{array}[]{lr}\frac{\tau_{0}^{2}}{\tau^{2}}\left(\frac{\omega-V}{2}+\sqrt{\left(\frac{\omega-V}{2}\right)^{2}-\tau^{2}}\right),&\>\omega\leq V\\\ \frac{\tau_{0}^{2}}{\tau^{2}}\left(\frac{\omega-V}{2}-\sqrt{\left(\frac{\omega-V}{2}\right)^{2}-\tau^{2}}\right),&\>\omega>V\\\ \end{array}\right..$ (20) Moreover, following the general solution presented above in Sec. II.3, we calculate the analytic formula for the quasiparticle spectral weight $a_{\mathrm{QP}}$ for the potential given in Eq. (19). In terms of the quasiparticle energy $\omega_{\mathrm{QP}}$ we have $\displaystyle a_{\mathrm{QP}}$ $\displaystyle(\tau,\tau_{0},V)=\lim_{\omega\to\omega_{\mathrm{QP}}}\left(\omega-\omega_{\mathrm{QP}}\right)\;G(\omega)=$ (21) $\displaystyle=\lim_{\omega\to\omega_{\mathrm{QP}}}\frac{1}{1-\frac{d}{d\omega}\Sigma(\omega)}=$ $\displaystyle=\left[1-\frac{1}{2}\frac{\tau_{0}^{2}}{\tau^{2}}\left(1+\frac{\omega_{\mathrm{QP}}-V}{\sqrt{(\omega_{\mathrm{QP}}-V)^{2}-4\tau^{2}}}\right)\right]^{-1},$ where $\omega_{\mathrm{QP}}=\left\\{\begin{array}[]{lr}\frac{\tau_{0}^{2}}{2}\left(\frac{V}{\tau_{0}^{2}-\tau^{2}}-\frac{\sqrt{V^{2}+4\tau_{0}^{2}-4\tau^{2}}}{|\tau_{0}^{2}-\tau^{2}|}\right)&\tau_{0}>\tau>0\\\ -\frac{\tau^{2}}{V}&\tau_{0}=\tau>0\\\ \frac{\tau_{0}^{2}}{2}\left(\frac{V}{\tau_{0}^{2}-\tau^{2}}+\frac{\sqrt{V^{2}+4\tau_{0}^{2}-4\tau^{2}}}{|\tau_{0}^{2}-\tau^{2}|}\right)&0<\tau_{0}<\tau\\\ 0&\tau_{0}=0\end{array}\right..$ (22) All of the above solutions come from the $\omega\leq V$ branch in (20). We plot the quasiparticle weight $a_{\mathrm{QP}}$ as a function of $V/\tau$ for different values of $\tau_{0}/\tau$ in Fig.
2 and notice that the value of the potential $V$ at which the quasiparticle weight becomes finite shifts away from zero as $\tau_{0}/\tau$ is tuned away from $\sqrt{2}$. We then determine for which depth of the potential $V$ the quasiparticle weight $a_{\mathrm{QP}}$ becomes finite as the hopping ratio $\tau_{0}/\tau$ changes. This leads to the definition of the critical value $V^{*}$, $V^{*}(\tau,\tau_{0})=\left\\{\begin{array}[]{lr}\frac{2\tau^{2}-\tau_{0}^{2}}{|\tau|}&\tau_{0}>0\\\ -\infty&\tau_{0}=0\\\ \end{array}\right..$ (23) We can then also calculate the spectral function (8) by using the self-energy in Eq. (20) and substituting in Eq. (12). We show an example of the spectral function for the model (7)-(19) with $\tau_{0}/\tau=\sqrt{3/2}$ in Fig. 3. From Eq. (23), we find $V^{*}=0.5\tau$. Indeed, no discrete energy branch is visible for $V\leq V^{*}$, $V^{*}$ being the critical value of the point potential for which the discrete state appears. Figure 3: (a) Spectral function $A(\omega)$ as a function of the point-potential depth $V/\tau$ [cf. Eq. (19)] with ${\tau_{0}}/{\tau}=\sqrt{3/2}$. The right panel shows the spectral function $A(\omega)$ at $V=0.5\tau$: for this value no discrete energy peak is visible, since for $\tau_{0}/\tau=\sqrt{3/2}$ this corresponds to the critical value $V^{*}$ and the quasiparticle appears only for $V>0.5\tau$. (b) Schematic representation of the model at the vertical cut shown in (a) with ${\tau_{0}}/{\tau}=\sqrt{3/2}$ and $V=0.5\tau$. The different hopping around the origin is drawn in red, the gray area represents the shape of the potential, the black filled circle represents the moving fermion and the dashed circle its starting position. Let us now go back to the fermion-boson interacting Hamiltonian (1). We are considering a boson potential as given by (18), while we can vary the Bethe lattice coordination number $z$.
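The closed forms (21)-(23) are straightforward to evaluate directly. A minimal sketch in Python (the function name and the handling of the $\tau_{0}=\tau$ limit as $\omega_{\mathrm{QP}}=-\tau^{2}/V$ are our choices):

```python
import math

def quasiparticle_weight(tau, tau0, V):
    """a_QP for the point potential of Eq. (19), following Eqs. (21)-(23).
    Returns 0 at or below the critical depth V* = (2*tau^2 - tau0^2)/|tau|."""
    if V <= (2 * tau ** 2 - tau0 ** 2) / abs(tau):
        return 0.0  # no discrete state below V* [Eq. (23)]
    if tau0 == tau:
        w_qp = -tau ** 2 / V  # tau0 -> tau limit of the general branch
    else:
        eps = tau0 ** 2 - tau ** 2
        root = math.sqrt(V ** 2 + 4 * eps)
        sign = -1.0 if eps > 0 else 1.0
        w_qp = 0.5 * tau0 ** 2 * (V / eps + sign * root / abs(eps))  # Eq. (22)
    x = w_qp - V
    # Eq. (21): residue of G at the discrete pole
    return 1.0 / (1.0 - 0.5 * (tau0 / tau) ** 2 * (1.0 + x / math.sqrt(x ** 2 - 4 * tau ** 2)))
```

For $\tau_{0}/\tau=\sqrt{3/2}$ this reproduces $V^{*}=0.5\tau$, consistent with the cut shown in Fig. 3, and at $V=\tau$ the weight evaluates to roughly $0.366$.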
Substituting (6) in (23), we find the critical value of the boson potential $J^{*}$, $J^{*}(t,z)=\frac{t(z-2)}{\sqrt{z-1}}.$ (24) Note that $z$ can only take integer values equal to or larger than two, so that only part of the parameter space available in the non-interacting chain (7) is accessible. Condition (24) implies that, also for the interacting fermion-boson model on a Bethe lattice with coordination number $z$ (1), there are two distinct regimes: one with a well-defined fermionic quasiparticle for $J>J^{*}$, and one with no quasiparticle for $J\leq J^{*}$.

### III.2 Special limit

Figure 4: (a) Spectral function $A(\omega)$ as a function of the potential depth $V/\tau$ [cf. Eq. (19)] for the ‘special limit’ $\tau_{0}/\tau=\sqrt{2}$. The right panel shows the spectral function $A(\omega)$ at $V=0.5\tau$: for this value there is a clear discrete energy state split from a continuum, since for $\tau_{0}/\tau=\sqrt{2}$ we have the critical $V^{*}=0$ and the quasiparticle appears for any $V>0$. (b) Schematic representation of the model at the vertical cut shown in (a) with ${\tau_{0}}/{\tau}=\sqrt{2}$ and $V=0.5\tau$. In this case, all of the hoppings in the non-interacting chain are equivalent. The gray area represents the shape of the potential, the black filled circle represents the moving fermion and the dashed circle its starting position. Fig. 2 shows the existence of a special value $\tau_{0}/\tau=\sqrt{2}$ for which the discrete state exists for any value $V>0$. We then consider the spectral function $A(\omega)$ for this case (see Fig. 4). Indeed, a discrete energy branch is present for all values $V>0$. As shown in Fig. 4(b), taking $\tau_{0}/\tau=\sqrt{2}$ corresponds to considering a chain with equal hoppings on all bonds, i.e., the hopping from the origin is not distinct from the others anymore.
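For integer coordination numbers, Eq. (24) reduces to a one-liner; a short sketch (the function name is ours):

```python
import math

def J_star(t, z):
    """Critical boson energy J* of Eq. (24) for a Bethe lattice with
    integer coordination number z >= 2; a quasiparticle exists for J > J*."""
    if z < 2:
        raise ValueError("Bethe-lattice coordination number must be >= 2")
    return t * (z - 2) / math.sqrt(z - 1)
```

In particular, $z=2$ gives $J^{*}=0$, consistent with the special limit $\tau_{0}/\tau=\sqrt{2}$ in which the quasiparticle is present for any $J>0$, while $z=3$ gives $J^{*}=t/\sqrt{2}$.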
In terms of the fermion-boson model (1), this translates to the existence of a special value of the coordination number $z$ at which the quasiparticle is present for any value $J>0$: $z=2$ (corresponding to $\tau_{0}/\tau=\sqrt{2}$). This corresponds to a 1D interacting fermion-boson model with a boson point potential. The latter system is equivalent to a well-known interacting model: the $t$-$J^{z}$ chain with a single hole (see below).

### III.3 Physical realization

Figure 5: Dynamics of a single hole on a 1D $t$-$J^{z}$ chain (Bethe lattice with $z=2$). The black solid circle represents the hole in the current state, the dashed circle is the hole at its initial creation site. The cartoon shows the system after a few hole hoppings have occurred. $z=2$ case We have seen how a critical value of the relative strength of the fermion-boson coupling $J^{*}/t$ determines the existence or suppression of a quasiparticle and we have determined a critical case for the coordination number $z=2$ at which the quasiparticle exists for any $J/t>0$. For this special case, we now show how we recover the physics of one of the most studied interacting models in condensed matter physics: the $t$-$J^{z}$ chain with a single hole. Indeed, we will show how its spectral function and that of the fermion-boson model (1) (with $z=2$) are the same.
The Hamiltonian of the 1D $t\text{-}J^{z}$ model with a single hole reads $\mathcal{H}_{t\text{-}J^{z}}=-t\sum_{\langle i,j\rangle,\sigma}\left(\tilde{c}_{i\sigma}^{{\dagger}}\tilde{c}_{j\sigma}+\mathrm{H.c.}\right)+J^{z}\sum_{\langle i,j\rangle}\left(S_{i}^{z}S_{j}^{z}-\tfrac{1}{4}\,\tilde{n}_{i}\tilde{n}_{j}\right),$ (25) where $\tilde{c}_{i\sigma}$ annihilates an electron at site $i$ in the constrained Hilbert space without double occupancy, $t$ is the nearest neighbor hopping parameter, $S^{z}_{i}$ is the $z$-component of the spin-1/2 operator at site $i$, $\tilde{n}_{i}=\tilde{c}_{i\sigma}^{\dagger}\tilde{c}_{i\sigma}$ is the electron density operator at site $i$ and $J^{z}$ is the antiferromagnetic exchange coupling. Rather than using the electron and spin operators, we can rewrite the Hamiltonian (25) in terms of hole and magnon operators [49]. The transformation starts with the rotation of all spins on one sublattice, which in turn allows for the introduction of holes and magnons in terms of the following transformations: $\displaystyle\tilde{c}_{i\uparrow}^{\dagger}$ $\displaystyle=h_{i}P_{i},$ $\displaystyle\tilde{c}_{i\uparrow}=P_{i}h_{i}^{\dagger},$ (26) $\displaystyle\tilde{c}_{i\downarrow}^{\dagger}$ $\displaystyle=h_{i}a_{i}^{\dagger},$ $\displaystyle\tilde{c}_{i\downarrow}=h_{i}^{\dagger}a_{i},$ $\displaystyle S_{i}^{z}$ $\displaystyle=\frac{1}{2}-a_{i}^{\dagger}a_{i}-\frac{1}{2}h_{i}^{\dagger}h_{i},$ $\displaystyle\tilde{n}_{i}=1-h_{i}^{\dagger}h_{i},$ $\displaystyle P_{i}$ $\displaystyle=\sqrt{1-a_{i}^{\dagger}a_{i}}$ where $h_{i}$ is a fermion operator that annihilates a hole and $a_{i}$ is a bosonic operator annihilating a magnon at site $i$.
The kinetic energy now reads: $\displaystyle\mathcal{H}_{t}=\mathcal{P}\Big{\\{}-t\sum_{\langle i,j\rangle}$ $\displaystyle\left[h_{i}^{\dagger}h_{j}\left(a_{i}+a_{j}^{\dagger}\right)+h_{j}^{\dagger}h_{i}\left(a_{j}+a_{i}^{\dagger}\right)\right]\Big{\\}}\mathcal{P},$ (27) with $\mathcal{P}$ being the global action of the projection operators $P_{i}$ and acting by projecting out states that do not satisfy $n_{a_{i}}+n_{h_{i}}\leq 1$. The potential energy reads $\displaystyle\mathcal{H}_{J^{z}}$ $\displaystyle=E_{0}+\frac{J^{z}}{2}\sum_{\langle i,j\rangle}\left[a_{i}^{\dagger}a_{i}+a_{j}^{\dagger}a_{j}-2a_{i}^{\dagger}a_{i}a_{j}^{\dagger}a_{j}\right.+$ (28) $\displaystyle+\left.h_{i}^{\dagger}h_{i}+h_{j}^{\dagger}h_{j}-h_{i}^{\dagger}h_{i}a_{j}^{\dagger}a_{j}-h_{j}^{\dagger}h_{j}a_{i}^{\dagger}a_{i}-h_{i}^{\dagger}h_{i}h_{j}^{\dagger}h_{j}\right].$ Note that this transformation is exact, provided that one considers initial states with no more than two magnons per site, a subset of the full bosonic Hilbert space. Of course, all eigenstates of (25) belong to this subspace and any state in this subspace is confined to it when time evolution is applied. It is straightforward to check that, up to a constant energy shift, the Green’s function of a single hole is exactly the same for the $t$-$J^{z}$ Hamiltonian $\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$, given by Eqs. (27) and (28), and for the interacting boson-fermion model $\mathcal{H}$ in Eq. (1): $\displaystyle\mathcal{G}(\omega+i\delta)$ $\displaystyle=\left\langle\varnothing\left|h_{0}\frac{1}{\omega^{\prime}+i\delta-\mathcal{H}_{t}-\mathcal{H}_{J^{z}}}h^{\dagger}_{0}\right|\varnothing\right\rangle$ $\displaystyle=\left\langle\varnothing\left|h_{0}\frac{1}{\omega+i\delta-\mathcal{H}}h^{\dagger}_{0}\right|\varnothing\right\rangle,$ (29) where $|\varnothing\rangle$ denotes the vacuum state for holes and magnons, $\omega^{\prime}=\omega-E$. To show the validity of the equivalence in Eq. 
(III.3), let us start with the hole operators $h_{i}$. For a single hole, the hole-hole interaction term $h_{i}^{\dagger}h_{i}h_{j}^{\dagger}h_{j}$ vanishes. Moreover, the summation over the hole number operators $h_{i}^{\dagger}h_{i}$ yields a constant. Thus these terms can be absorbed into the constant energy shift $E$ included in $\omega^{\prime}$. For other terms, the equivalence depends on the chosen initial state $\lvert 0\rangle=h^{{\dagger}}_{0}\lvert\varnothing\rangle$. Here, magnons can only be created by the moving hole and there is no separate magnon dynamics. If the initial state contains magnons or if magnons are allowed to move, Eq. (III.3) need not hold. States belonging to linear combinations arising from repeated application of the Hamiltonian, $\mathcal{H}_{t}^{n}\lvert 0\rangle$ with $\mathcal{H}_{t}$ given by Eq. (27) for any $n$, are reachable from the initial state $\lvert 0\rangle$ by the resolvent in Eq. (III.3). On the other hand, any reachable state belongs to $\mathcal{H}_{t}^{n}\lvert 0\rangle$ for some $n$. Hence, a chain of magnons connects the hole with the site where it was initially created (see Fig. 5). Moreover, there are no other magnons in this state, since magnons can only be created by the hole dynamics. Now we need to consider three distinct cases for magnon chains of length $n=0$, $n=1$ and $n>1$. For $n=0$, the magnon number operator gives $0$ and thus $\mathcal{H}=\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$ up to a shift by a constant energy $E$. For $n=1$, only the hole-magnon interaction and the cost of the creation of a single magnon have to be taken into account (the hole cost can be incorporated into the constant energy shift $E$).
It is easy to check that the cost of creating a magnon in a 1D chain is ${J^{z}}$, and that there will always be exactly one hole-magnon interaction (even for longer chains), contributing an energy $-{J^{z}}/{2}$, so that the total energy contribution of creating the first magnon is ${J^{z}}/{2}$; for states with one magnon we thus again recover $\mathcal{H}=\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$. Creating more magnons in the 1D chain costs no energy, since the energy required to create a magnon is exactly cancelled by the magnon-magnon interaction between neighboring bosons [45]. Hence, we have shown that indeed $\mathcal{H}=\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$ for the whole class of reachable states. Indeed, it is well known that a fermionic quasiparticle appears for any value $J^{z}>0$ in the $t$-$J^{z}$ chain with a single hole, in agreement with our result that a quasiparticle is present for any $J\neq 0$ in the fermion-boson model (1) for $z=2$ with a bosonic point potential acting on the site at which the hole is originally introduced, as given in (18) (see Fig. 5), with the equivalence $J\equiv J^{z}$. Thus, the spectral function shown in Fig. 4 for the limiting value $\tau_{0}/\tau=\sqrt{2}$ is the same as that of the 1D $t$-$J^{z}$ model with a single hole, with $V/\tau$ replaced by $J/2t$. $z>2$ case For a point potential acting on a Bethe lattice with $z>2$, we can draw a parallel to lightly doped systems with mobile charges. In fact, when doping ions are introduced into an atomic crystal, they affect the crystal structure so that an effective potential is introduced at the site where the hole originates. This corresponds to $J_{n}$ given by (18).
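The magnon bookkeeping above (and its generalization to arbitrary coordination number $z$ discussed in Sec. IV.3) can be verified with a few lines of arithmetic. The sketch below is our own and simply adds up the per-term costs quoted in the text: $zJ^{z}/2$ per created magnon, a single hole-magnon contact of $-J^{z}/2$, and $-J^{z}$ per magnon-magnon bond along the string.

```python
def string_energy(m, z, Jz=1.0):
    """Potential energy of a hole trailing a string of m magnons
    (hole-number terms are absorbed into the constant shift E)."""
    if m == 0:
        return 0.0
    creation = m * z * Jz / 2        # each magnon disturbs z AF bonds
    hole_magnon = -Jz / 2            # exactly one hole-magnon contact
    magnon_magnon = -(m - 1) * Jz    # neighboring magnons along the string
    return creation + hole_magnon + magnon_magnon

# z = 2 (1D chain): the first magnon costs Jz/2, further magnons are free,
# so the potential is flat and a quasiparticle exists for any Jz > 0.
print([string_energy(m, z=2) for m in range(4)])  # [0.0, 0.5, 0.5, 0.5]

# z = 3 (Bethe lattice): the first magnon costs Jz = (Jz/2)(z-1), and each
# further magnon adds (z-2)*Jz/2, i.e. a linearly growing string potential,
# cf. Eqs. (32)-(33).
print([string_energy(m, z=3) for m in range(4)])  # [0.0, 1.0, 1.5, 2.0]
```

The flat potential for $z=2$ and the linear growth for $z=3$ reproduce the two regimes discussed in the text.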
If we consider the quasi-2D Bethe lattice as a first approximation for 2D systems, we can claim that in lightly doped systems, with dopants sparse and distant enough not to affect each other, a quasiparticle will not always stabilize when impurities are introduced into fermionic systems; rather, its stability will depend on the strength of the local potential around the impurity. A similar situation arises in the intermediate state of resonant inelastic X-ray scattering (RIXS) on transition metal oxides (TMOs) [50, 51]. Here, the introduction of a core hole results in a $3d^{10}$ configuration on the $d$ orbitals, which can be treated as a mobile hole. In turn, this mobile hole is affected by the core-hole potential on the site where it was originally introduced. For TMOs with relatively small magnetic exchange, our model (1) would then be a good first approximation of the fundamental physics in the intermediate state of RIXS, meaning a quasiparticle would stabilize only for large enough values of the core-hole potential. ## IV String-like bosons In this section, we consider a different shape of the bosonic potential $J_{n}$, such that not one but several bosons have finite energy. More specifically, we choose a model in which a finite number of bosons, clustered around the central site of the Bethe lattice, have the same nonzero energy. Without loss of generality, we fix the coordination number $z=3$ in (1). This choice corresponds to a string-like potential on the Bethe lattice and to setting the hopping ratio $\tau_{0}/\tau=\sqrt{3/2}$ in the non-interacting model (7). Thus, we consider the on-site boson energies $J_{n}$ in Eq. (1) as given by $J_{n}=\begin{cases}J&\mbox{if }d_{n}=0\\ \frac{J}{2}&\mbox{if }0<d_{n}<l\\ 0&\mbox{if }d_{n}\geq l\end{cases},$ (30) which translates in the non-interacting Hamiltonian Eq.
(7) to: $V_{n}=\begin{cases}0&\mbox{if }n=0\\ V+\frac{V}{2}(n-1)&\mbox{if }0<|n|<l\\ V+\frac{V}{2}(l-1)&\mbox{if }|n|\geq l\end{cases}.$ (31) Again, we present the solution in terms of the non-interacting model (7) with (31). ### IV.1 Solution Figure 6: (a) Spectral function $A(\omega)$ for a string-like potential [cf. (31)] and varying half-width of the potential well $l$, for ${\tau_{0}}/{\tau}=\sqrt{3/2}$ and potential parameter $V=0.5\tau$. A transition from a solution with no quasiparticles to one with multiple quasiparticles is observed. (b) Vertical cuts of panel (a) at different values of the potential half-width $l$: (b1) $l=1$, no discrete-state peak is present; (b2) $l=6$, two discrete-state peaks are present in the spectrum as well as a continuum; (b3) $l=12$, several discrete peaks are visible as well as a higher-energy continuum of states; (b4) $l=18$, several discrete peaks make up the spectrum, similar to the known ladder spectrum. (c) Schematic of the model in a string potential with defects [cf. (31)]. The different hopping around the origin is drawn in red, the gray area shows the altered string potential, the black filled circle represents the moving fermion and the dashed circle its starting position; we explicitly show the half-width of the potential $l$. We plot the spectral function for the string-like potential (31) and $\tau_{0}/\tau=\sqrt{3/2}$ in Fig. 6. As the relevant parameter in this case we use the half-width of the potential $l$, while the potential parameter is set to $V=0.5\tau$. To better show the progression of the spectral function $A(\omega)$ as a function of $l$, we present $A(\omega)$ for four values of $l$ in Fig. 6(b). For $l=1$ (Fig. 6(b1)), no discrete state is present. Note that this is the same situation shown in the vertical cut of Fig. 3(a), since $l=1$ corresponds to a point potential. Now we focus on increasing $l$.
At $l=6$, two distinct discrete states are already present, together with a relatively large continuum of states (cf. Fig. 6(b2)). When $l=12$, the spectral function shows several consecutive peaks, but a continuum persists at larger energies $\omega/\tau$ (cf. Fig. 6(b3)). Finally, for $l=18$, we see a spectrum composed only of consecutive sharp peaks, i.e., discrete states, similar to a ladder spectrum. This trend persists as $l$ goes to infinity. It is then natural to consider the special limit of a full string potential covering the whole lattice. ### IV.2 Special Limit Figure 7: Spectral function $A(\omega)$ in a string potential [cf. (33)] for $\tau_{0}/\tau=\sqrt{3/2}$. The right panel shows the spectral function $A(\omega)$ at $V=0.5\tau$: the spectral function takes the shape of the so-called ladder spectrum and contains only discrete (i.e., quasiparticle-like) peaks. (b) Schematic of the model at the vertical cut drawn in (a). The different hopping around the origin is drawn in red, the gray area shows the string potential, the black filled circle represents the moving fermion and the dashed circle its starting position. Note that the string potential completely covers the infinite chain. We now consider the limit $l\to\infty$. This results in a finite on-site bosonic energy $J_{n}$ for every site $n$ on the Bethe lattice in Eq. (1): $J_{n}=\begin{cases}J&\mbox{if }d_{n}=0\\ \frac{J}{2}&\mbox{if }d_{n}>0,\end{cases}$ (32) i.e., all bosons are massive. This situation corresponds to a potential $V_{n}$ in the non-interacting Hamiltonian Eq. (7) having a perfect string (i.e., discrete linear) character: $V_{n}=\begin{cases}0&\mbox{if }n=0\\ V+\frac{V}{2}(n-1)&\mbox{if }n\neq 0.\end{cases}$ (33) In Fig. 7, we show the spectral function for such a model as a function of the potential parameter $V/\tau$. There is an important difference between the physics depicted in Fig. 6 and Fig. 7.
In the first case, we plot the spectral function in terms of the half-width of the potential $l$, i.e., in terms of how many sites are actually affected by it, and we fix the potential parameter to $V=0.5\tau$. In the second case, we show the evolution of the spectral function as a function of the potential energy parameter $V/\tau$, similarly to what was done previously for the case of a point potential. Indeed, we observe that this case is very different from the previously considered case of a point potential, since there exists at least one discrete energy state for any value of $V$. Moreover, as the potential increases, more discrete states appear in the system, in contrast to the point potential case where only one discrete state would stabilize. The stark contrast to the case of impurity-like bosons [cf. Sec. III] is underlined when we fix the value of $V$ and plot $A(\omega)$: a structure with several consecutive delta-like peaks appears [right panel of Fig. 7(a)]. This structure is the well-known ladder spectrum of [36, 15]. In the language of the fermion-boson model this means that for any value of the finite bosonic energy $J$ the spectral function of model (1) with (32) consists solely of quasiparticles, cf. [36, 15]. Note that at least one discrete state emerges for any value of $V$, in contrast with the finite string potential case of Fig. 6, for which, at any fixed value of $V$, we have a continuum until a critical value $l^{*}$ is reached, meaning the discrete state appears only when the potential ‘covers’ enough sites around the origin, where the fermion is introduced. This critical number depends on the parameter $V$ as well as on the hopping ratio $\tau_{0}/\tau$. In terms of the interacting fermion-boson model, this translates to the possibility of having quasiparticle decay when only some bosons around the origin of the Bethe lattice are massive, with the rest being massless.
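The $l$-dependence discussed above can be cross-checked numerically with a small diagonalization. The sketch below is our own, and it assumes a particular realization of the effective model (7): a one-sided string chain with sites $n\geq 0$ (Eq. (31) is written symmetrically in $|n|$; the two sides are equivalent), hopping $\tau$ on all bonds except the first one, which carries $\tau_{0}$, and on-site potential $V_{n}$ from Eq. (31). For $V=0.5\tau$ and $\tau_{0}/\tau=\sqrt{3/2}$, the ground state of the point potential ($l=1$) is extended, while for a wide string well ($l=18$) it is bound near the origin, consistent with Figs. 6(b1) and 6(b4).

```python
import numpy as np

def effective_chain(L, l, V=0.5, tau=1.0, tau0=np.sqrt(1.5)):
    # Finite truncation (L sites) of the one-sided effective chain:
    # site n carries V_n from Eq. (31); the bond out of the origin
    # carries tau0, all other bonds carry tau. The one-sided geometry
    # and the placement of tau0 are our assumptions.
    n = np.arange(L)
    Vn = np.where(n == 0, 0.0,
         np.where(n < l, V + 0.5 * V * (n - 1), V + 0.5 * V * (l - 1)))
    H = np.diag(Vn)
    hop = -tau * np.ones(L - 1)
    hop[0] = -tau0
    H += np.diag(hop, 1) + np.diag(hop, -1)
    return H

def origin_weight_of_ground_state(L, l):
    vals, vecs = np.linalg.eigh(effective_chain(L, l))
    return vecs[0, 0] ** 2  # |<n=0|ground state>|^2

w_point = origin_weight_of_ground_state(400, l=1)    # extended: ~1/L
w_string = origin_weight_of_ground_state(400, l=18)  # bound: O(0.1)
print(w_point, w_string)
```

Increasing the truncation length $L$ leaves `w_string` essentially unchanged while `w_point` keeps shrinking, the usual finite-size signature of a bound versus an extended state.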
Nonetheless, to reach the ladder spectrum form when $l$ is finite, the potential needs to cover a relatively large number of lattice sites. Hence, it is rather easy to destabilise the ladder spectrum and recover some kind of continuum of states. In the language of the interacting fermion-boson model this implies that, whereas in the case presented in Fig. 7 we are tuning the strength of the boson-fermion coupling $t/J$ (strictly speaking $J/t$ in Fig. 7), in the case of Fig. 6 we are tuning the number of massless bosons $\propto l$ present in our system. ### IV.3 Physical realization Figure 8: Dynamics of a single hole on a Bethe lattice with $z=3$. The black solid circle represents the hole in the current state, the dashed circle is the hole at its initial creation site. The cartoon shows the system after a few hole hoppings have occurred. Perfect string potential The spectrum in Fig. 7 immediately brings to mind that of the $t$-$J^{z}$ model on a Bethe lattice with $z>2$ [36, 15]. While the relation between the fermion-boson model in Eq. (1) and the $t$-$J^{z}$ model on a Bethe lattice was studied earlier in [40, 45, 46], for completeness let us describe it in detail here. It is straightforward to generalize Eq. (25) to any $z>2$. The subsequent considerations also apply, especially the equivalence of the Green’s functions shown in Eq. (III.3), with the shape of the bosonic potential as given in (32). More generally, the equality (III.3) holds true for any coordination number $z$ and the choice of $J_{n}$ $\begin{split}J_{0}&=\frac{J^{z}}{2}(z-1),\\ J_{n\neq 0}&=\frac{J^{z}}{2}(z-2)d_{n}\end{split}$ (34) where the site $n=0$ is the site at which the hole is originally introduced in our lattice (see Fig. 8).
While the considerations about the energy shift due to the presence of a hole stay the same, generalizing the energy equivalence for the cases when magnons are present in (III.3) requires some attention, since we are now considering a Bethe lattice with $z>2$, which has more neighboring sites. However, the analysis of states reachable through $\mathcal{H}_{t}$ still holds, so that the motion of the hole on a Bethe lattice still results in a chain of magnons connecting the hole to the site where it was initially created. Again, there are no other magnons in the system, since only the hole dynamics can result in magnon creation. Thus, the magnons and the hole form a 1D-like chain on a branch of the Bethe lattice. It is again necessary to consider three distinct cases of magnon chains of length $m=0$, $m=1$ and $m>1$. For the $m=0$ case, the same conclusions as in the $z=2$ case apply, so that $\mathcal{H}=\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$. However, we now generalize the energy cost of creating a magnon to include any value of $z$. This cost is found to be ${zJ^{z}}/{2}$. Again, there will always be exactly one hole-magnon interaction (even for longer effective chains), contributing an energy $-{J^{z}}/{2}$. Thus, setting the cost of the first magnon to $J_{0}=({J^{z}}/{2})(z-1)$, we see that also for states with one magnon we have $\mathcal{H}=\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$. For $m>1$ we need to take into account the magnon-magnon interaction. Since the magnons form a 1D-like chain, the number of such interactions will be $m-1$ and each one of them will contribute an energy $-J^{z}$, cf. [46]. This lowers the energy of every magnon beyond the first one, so that choosing $J_{n>0}=({J^{z}}/{2})(z-2)d_{n}$ we show that indeed $\mathcal{H}=\mathcal{H}_{t}+\mathcal{H}_{J^{z}}$ for the whole class of reachable states. Note that, when $z=3$, we recover (32). We can conclude that Fig.
7 exactly shows the spectral function for a single hole moving on a Bethe lattice with $z=3$ hosting the $t$-$J^{z}$ model, as depicted in Fig. 8. String potential with defects The most accurate realisation of a fermion-boson system with bosons subject to a string potential with defects, as considered in Eq. (30), is as follows. Let us assume that we have a system consisting of two kinds of subsystems: an Ising ferromagnet and an Ising antiferromagnet. Next, we put these two subsystems on a Bethe lattice in such a way that the antiferromagnet surrounds the origin of the Bethe lattice and the ferromagnet starts $l$ sites away from the origin. Finally, if we probe such a system with the local fermion Green’s function, by putting a mobile hole at the origin of the Bethe lattice, then the problem is described exactly by model (1)-(4) with the potential (30). This is because: (i) the hole in the antiferromagnetic subsystem is described by (1) with the constant bosonic energies $J_{n}$ given by Eq. (32) (see the discussion immediately above); (ii) the hole in the ferromagnetic subsystem can move freely without introducing spin flips, which in the language of model (1) means that the hole excites magnons of zero energy, i.e. $J_{n}=0$ for $d_{n}\geq l$, on a Bethe lattice. Figure 9: An example of an antiferromagnetic domain of size $d=3$ embedded in a ferromagnet on a Bethe lattice with $z=3$. The aforementioned ferromagnetic-antiferromagnetic system can be realised by considering antiferromagnetic domains of size $\propto l$ immersed in a ferromagnet, as sketched in Fig. 9. Such a situation may, to some extent, occur in the overdoped cuprates. In this case the tendency to ferromagnetism upon doping [24, 25, 26, 27] and the diminishing antiferromagnetic correlations with doping [28, 29] may, in the first-order quantum phase transition scenario [24] or the phase separation scenario [30], locally lead to the onset of antiferromagnetic domains within the ferromagnetic background.
Naturally, there are several differences between the cuprate models (such as the doped $t$–$J$ or Hubbard models) and the model considered here: the main ones concern the Heisenberg (rather than Ising) spin exchange, the finite number of holes (rather than a single hole), and the 2D square lattice (rather than a Bethe lattice with $z>2$). Nevertheless, we believe that these differences are not important enough to fully hinder the applicability of the current study to the cuprate problem. Thus, the interesting insight gained from the problem under study here is that, if the antiferromagnetic domains are very small and the system is ‘almost’ ferromagnetic, a fermion inserted into the antiferromagnetic subsystem may not form a well-defined quasiparticle (especially if the antiferromagnetic exchange is small). On the other hand, if the antiferromagnetic domains are relatively large (compared to the ferromagnetic ones), the quasiparticle may be stabilised relatively easily. This counterintuitive conclusion can be tested experimentally by STM studies of the overdoped cuprates. A somewhat similar situation can occur in correlated systems with orbital degrees of freedom and incommensurate filling, such as doped manganites or vanadates [31, 32, 33, 34, 35]. In this case hole doping induces a transition from the alternating orbital to the ferro-orbital state. Thus one may expect that upon moderate doping, and assuming the onset of phase separation, domains with alternating orbital order would become surrounded by the ferro-orbital ordered state. Then, depending on the relative size of such alternating orbital domains, the quasiparticle decay may (small domains) or may not (large domains) happen. Note that in the orbital case we are closer to model Eq. (1) with (30) than in the cuprate case described above, since the orbital degrees of freedom in correlated systems interact in an almost Ising manner [52]. One can also think of other realisations of such a string potential with defects.
This may for instance occur once a hole is introduced on the antiferromagnetic side of a ferromagnetic-antiferromagnetic interface (or, alternatively, on the alternating orbital side of a ferro-orbital/alternating-orbital interface). Here, however, a detailed investigation is needed, for the topology of such a problem is quite different: in this case one of the subsystems is not entirely surrounded by the other one. Nevertheless, the intuition gained from the current study suggests that also in this case the quasiparticle could decay if the size of the ferromagnetic (or ferro-orbital) subsystem is significantly larger than the antiferromagnetic (alternating orbital) one. Finally, one may also speculate that there exist other physical systems with fermions coupled to zero-energy bosons in one subsystem and to bosonic excitations of finite energy in another. This might be the case in a rather exotic situation in which one of the subsystems contains bosons condensed in real space and the other one is a ‘normal’ state with bosonic excitations costing finite energy. ## V Discussion & Conclusions In this work, we have considered an interacting fermion-boson model with Peierls coupling between fermions and local bosons. We have solved it analytically by mapping it onto an impurity-like non-interacting chain. The fermion-boson coupling introduced in model (1) can take several forms, but we have restricted ourselves to two important cases: impurity-like and string-like bosons (future studies of other cases are encouraged). We have shown how in these two cases it is possible to destabilize the fermionic quasiparticle either by tuning the relative strength of the fermion-boson coupling or by increasing the number of zero-energy bosons: When considering impurity-like bosons in Sec.
III, one can tune the relative strength of the fermion-boson coupling $t/J$ such that no quasiparticle appears in the system for $J\leq J^{*}$ (for fixed $t=1$), with $J^{*}$ depending on the Bethe lattice coordination number $z$. On the other hand, for $J>J^{*}$ the quasiparticle is stabilised. Overall, the critical value $J^{*}$ increases with the coordination number $z$, but $z=2$ is a limiting case with $J^{*}=0$, i.e., we always obtain a quasiparticle solution (see below). Note that for the impurity-like bosons the number of zero-energy bosons cannot be tuned (it equals the number of lattice sites minus one, for the bosons have a hard core). Thus, it is interesting to observe that, despite the coupling to an overall huge number of massless bosons, a fermionic quasiparticle is still stable once $J>J^{*}$. For string-like bosons as in Sec. IV, the situation is more complex, for we can tune here both the number of zero-energy bosons as well as the coupling between fermions and bosons. First, just as for the impurity-like bosons, an increase in the relative fermion-boson coupling strength $t/J$ can destabilize the quasiparticle. Second, the stability of a quasiparticle depends on the number of bosons with zero energy. On the one hand, if there are no zero-energy bosons, the hole is affected by a discrete linear potential (string potential), since each created boson costs energy. In this case all of the eigenstates of the system are of quasiparticle type and we obtain the so-called ladder spectrum (see below). On the other hand, the ladder spectrum of the perfect string potential is rather fragile, since a finite number of zero-energy bosons will result in a decrease in the number of quasiparticles, the emergence of an energy continuum and, at a critical number of these bosons, the onset of a completely incoherent spectrum. The latter takes place for a relatively large number of massless bosons and strong fermion-boson coupling $t/J$.
Lastly, we have mapped the fermionic Green’s function of the fermion-boson model in specific limits to that of a single hole in the $t$-$J^{z}$ model. We have shown that: (i) the fermion-boson model with impurity-like bosons and coordination number $z=2$ corresponds to the 1D $t$–$J^{z}$ model; (ii) the fermion-boson model with string-like bosons subject to a perfect string potential and with coordination number $z>2$ corresponds to the quasi-2D (Bethe lattice) $t$–$J^{z}$ model. Note that these two particular limits carry a quasiparticle solution for any finite value of the model parameters, as discussed above and as well known from the extensive $t$–$J^{z}$ model literature, cf. [36, 37, 15, 38, 39, 40, 41, 42, 43, 44, 45, 46]. Finally, it is possible to slightly modify these particular limits of the fermion-boson model so as to destabilise the quasiparticle solution (e.g., in 1D one can scale the first hopping of the hole in the $t$–$J^{z}$ model; in 2D one can add an Ising ferromagnetic interface next to the Ising antiferromagnet). Nevertheless, it is a stunning observation of this work that the parameter range in which quasiparticle decay happens is small compared to the range in which the quasiparticle is stable, and this despite the ubiquitous presence of zero-energy bosons in the system. Examples of boson-fermion systems in which the quasiparticle collapse may happen include systems with a low concentration of impurities and mobile fermions (for instance, fermions created in the intermediate state of a RIXS experiment). Another example concerns fermions introduced into antiferromagnetic domains immersed in a ferromagnetic background, or into alternating orbital domains immersed in a ferro-orbital background. Rather counterintuitively, the quasiparticle extinction in these cases is more likely once the ferro-ordered state dominates. Such a situation might take place in overdoped cuprates, or in other doped transition metal oxides with orbital degrees of freedom (e.g.
manganites or vanadates). Further experimental and theoretical studies are needed to verify the latter proposal. ## Acknowledgements This work was supported by Narodowe Centrum Nauki (NCN, Poland) under Project Nos. 2016/22/E/ST3/00560 and 2021/40/C/ST3/00177. For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. The code to reproduce the data and figures presented in this manuscript is available at Ref. [53]. ## References * Venema _et al._ [2016] L. Venema, B. Verberck, I. Georgescu, G. Prando, E. Couderc, S. Milana, M. Maragkou, L. Persechini, G. Pacchioni, and L. Fleet, The quasiparticle zoo, Nat. Phys. 12, 1085 (2016). * Pitaevskii [1959] L. P. Pitaevskii, Properties of the spectrum of elementary excitations near the disintegration threshold of the excitations, JETP 9, (1959). * Glyde [2017] H. R. Glyde, Excitations in the quantum liquid 4He: A review, Reports on Progress in Physics 81, 014501 (2017). * Verresen _et al._ [2019] R. Verresen, R. Moessner, and F. Pollmann, Avoided quasiparticle decay from strong quantum interactions, Nat. Phys. 15, 750 (2019). * Gaveau and Schulman [1995] B. Gaveau and L. S. Schulman, Limited quantum decay, J. Phys. A Math. Theor 28, 7359 (1995). * Ito _et al._ [2017] S. Ito, N. Kurita, H. Tanaka, S. Ohira-Kawamura, K. Nakajima, S. Itoh, K. Kuwahara, and K. Kakurai, Structure of the magnetic excitations in the spin-1/2 triangular-lattice Heisenberg antiferromagnet Ba3CoSb2O9, Nat. Comm. 8, 235 (2017). * Chernyshev and Zhitomirsky [2009] A. L. Chernyshev and M. E. Zhitomirsky, Spin waves in a triangular lattice antiferromagnet: Decays, spectrum renormalization, and singularities, Phys. Rev. B 79, 144416 (2009). * Oh _et al._ [2013] J. Oh, M. D. Le, J. Jeong, J.-h. Lee, H. Woo, W.-Y. Song, T. G. Perring, W. J. L. Buyers, S.-W. Cheong, and J.-G. 
Park, Magnon breakdown in a two dimensional triangular lattice heisenberg antiferromagnet of multiferroic ${\mathrm{lumno}}_{3}$, Phys. Rev. Lett. 111, 257202 (2013). * Zhitomirsky and Chernyshev [2013] M. E. Zhitomirsky and A. L. Chernyshev, Colloquium: Spontaneous magnon decays, Rev. Mod. Phys. 85, 219 (2013). * Phillips [2006] P. Phillips, Mottness, Annals of Physics 321, 1634 (2006). * Zaanen [2019] J. Zaanen, Planckian dissipation, minimal viscosity and the transport in cuprate strange metals, SciPost Phys. 6, 061 (2019). * Chen _et al._ [2019] S.-D. Chen, M. Hashimoto, Y. He, D. Song, K.-J. Xu, J.-F. He, T. P. Devereaux, H. Eisaki, D.-H. Lu, J. Zaanen, and Z.-X. Shen, Incoherent strange metal sharply bounded by a critical doping in Bi2212, Science 366, 1099 (2019). * Wahlberg _et al._ [2021] E. Wahlberg, R. Arpaia, G. Seibold, M. Rossi, R. Fumagalli, E. Trabaldo, N. B. Brookes, L. Braicovich, S. Caprara, U. Gran, G. Ghiringhelli, T. Bauch, and F. Lombardi, Restored strange metal phase through suppression of charge density waves in underdoped YBa2Cu3O7-δ, Science 373, 1506 (2021). * Watanabe and Vishwanath [2014] H. Watanabe and A. Vishwanath, Criterion for stability of goldstone modes and fermi liquid behavior in a metal with broken symmetry, PNAS 111, 16314 (2014). * Kane _et al._ [1989] C. L. Kane, P. A. Lee, and N. Read, Motion of a single hole in a quantum antiferromagnet, Phys. Rev. B 39, 6880 (1989). * Powell [2020] B. J. Powell, Emergent particles and gauge fields in quantum matter, Contemporary Physics 61, 96 (2020), https://doi.org/10.1080/00107514.2020.1832350 . * Lee and Nagaosa [1992] P. A. Lee and N. Nagaosa, Gauge theory of the normal state of high-${T}_{c}$ superconductors, Phys. Rev. B 46, 5621 (1992). * Altshuler _et al._ [1994] B. L. Altshuler, L. B. Ioffe, and A. J. Millis, Low-energy properties of fermions with singular interactions, Phys. Rev. B 50, 14048 (1994). * Chakravarty _et al._ [1995] S. Chakravarty, R. E. Norton, and O. F. 
Syljuåsen, Transverse gauge interactions and the vanquished Fermi liquid, Phys. Rev. Lett. 74, 1423 (1995). * Lee [2009] S.-S. Lee, Low-energy effective theory of Fermi surface coupled with U(1) gauge field in $2+1$ dimensions, Phys. Rev. B 80, 165102 (2009). * Halperin _et al._ [1993] B. I. Halperin, P. A. Lee, and N. Read, Theory of the half-filled landau level, Phys. Rev. B 47, 7312 (1993). * Lee and Lee [2005] S.-S. Lee and P. A. Lee, U(1) gauge theory of the Hubbard model: Spin liquid states and possible application to $\kappa$-$(\mathrm{BEDT}$-$\mathrm{TTF}{)}_{2}{\mathrm{cu}}_{2}(\mathrm{CN}{)}_{3}$, Phys. Rev. Lett. 95, 036403 (2005). * Gegenwart _et al._ [2008] P. Gegenwart, Q. Si, and F. Steglich, Quantum criticality in heavy-fermion metals, Nat. Phys. 4, 186 (2008). * Kopp _et al._ [2007] A. Kopp, A. Ghosal, and S. Chakravarty, Competing ferromagnetism in high-temperature copper oxide superconductors, PNAS 104, 6123 (2007). * Jia _et al._ [2014] C. J. Jia, E. A. Nowadnick, K. Wohlfeld, Y. F. Kung, C.-C. Chen, S. Johnston, T. Tohyama, B. Moritz, and T. P. Devereaux, Persistent spin excitations in doped antiferromagnets revealed by resonant inelastic light scattering, Nat. Comm. 5, 10.1038/ncomms4314 (2014). * Santoso _et al._ [2017] I. Santoso, W. Ku, T. Shirakawa, G. Neuber, X. Yin, M. Enoki, M. Fujita, R. Liang, T. Venkatesan, G. A. Sawatzky, A. Kotlov, S. Yunoki, M. Rübhausen, and A. Rusydi, Unraveling local spin polarization of zhang-rice singlet in lightly hole-doped cuprates using high-energy optical conductivity, Phys. Rev. B 95, 165108 (2017). * Ong _et al._ [2022] B. L. Ong, K. Jayaraman, C. Diao, T. J. Whitcher, A. Jain, H. Hung, M. B. H. Breese, E. S. Tok, and A. Rusydi, Anomalous ferromagnetism of quasiparticle doped holes in cuprate heterostructures revealed using resonant soft x-ray magnetic scattering, Nat. Comm. 13, 10.1038/s41467-022-31885-1 (2022). * Lee _et al._ [2006] P. A. Lee, N. Nagaosa, and X.-G. 
# Lightweight Portrait Matting via Regional Attention and Refinement

Yatao Zhong, Microsoft <EMAIL_ADDRESS>
Ilya Zharkov, Microsoft <EMAIL_ADDRESS>

###### Abstract

We present a lightweight model for high resolution portrait matting. The model does not use any auxiliary inputs such as trimaps or background captures and achieves real time performance for HD videos and near real time for 4K. Our model is built upon a two-stage framework with a low resolution network for coarse alpha estimation followed by a refinement network for local region improvement. However, a naive implementation of the two-stage model suffers from poor matting quality if not utilizing any auxiliary inputs. We address the performance gap by leveraging the vision transformer (ViT) as the backbone of the low resolution network, motivated by the observation that the tokenization step of ViT can reduce spatial resolution while retaining as much pixel information as possible. To inform local regions of the context, we propose a novel cross region attention (CRA) module in the refinement network to propagate the contextual information across the neighboring regions. We demonstrate that our method achieves superior results and outperforms other baselines on three benchmark datasets while using only $1/20$ of the FLOPs of the existing state-of-the-art model.

## 1 Introduction

Image matting is one of the most studied topics in computer vision. Formally, a matting problem is formulated as

$I=\alpha F+(1-\alpha)B.$ (1)

The goal is to solve for the alpha matte $\alpha$, but the foreground $F$ and background $B$ are also unknown. Therefore, this is a highly under-constrained problem, which oftentimes requires some priors. One commonly used prior is a user provided trimap [11, 1, 3, 4, 25, 29, 19, 14, 18, 6], where each pixel is categorized as “definite foreground”, “definite background” or “unknown”.
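As a concrete illustration of Eq. 1, the minimal sketch below (pure Python, with toy pixel values of our own choosing, not from the paper) composites a foreground over a background given an alpha matte; matting is the inverse problem of recovering $\alpha$ from $I$ alone:

```python
def composite(alpha, fg, bg):
    """Per-pixel matting equation: I = alpha * F + (1 - alpha) * B."""
    return [a * f + (1.0 - a) * b for a, f, b in zip(alpha, fg, bg)]

# A 1-D strip of pixels: alpha=1 keeps the foreground, alpha=0 keeps the
# background, and fractional alpha blends the two (e.g., around hair).
alpha = [1.0, 0.5, 0.0]
fg = [200.0, 200.0, 200.0]
bg = [10.0, 10.0, 10.0]
print(composite(alpha, fg, bg))  # [200.0, 105.0, 10.0]
```

Given only the left-hand side $I$, infinitely many $(\alpha, F, B)$ triples explain it, which is why priors such as trimaps are needed.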
However, trimaps require user interaction and are time-consuming to obtain, and are hence difficult to deploy in a fully automated system. Another recently proposed prior is an additional background image [23]. However, capturing a second image under the same conditions (e.g., lighting and shadow) is not always possible, and the background image is only useful if it is well aligned with the input image. There have also been efforts to remove all auxiliary inputs and predict the alpha mattes directly from input images [2, 30, 21, 12]. Approaches of this type are typically learning based and have been demonstrated to perform reasonably well even without any priors provided.

Figure 1: An overview of our method. 1) An input image is first tokenized before being fed to a low resolution network that consists of a ViT backbone and a decoder. 2) Coarse alpha is upsampled to full resolution and concatenated with the input image. 3) Regions of uncertainty are selected from the estimated trimap and cropped from the concatenated RGBA image. 4) Cropped regions run through a refinement network that features a cross region attention (CRA) module to obtain the refined alpha.

Nevertheless, all of these methods operate at full resolution, making them extremely compute-intensive and impractical to deploy in real applications for high resolution portrait matting (e.g., HD and 4K). In this work, we aim at reducing the computation while also retaining the matting quality. We present a lightweight model that estimates the alpha matte directly from the image without any user interactions or auxiliary inputs such as trimaps or background captures. Our model is built upon a prior observation that a portrait alpha matte is dominated by “definite foreground” ($\alpha=1$) and “definite background” ($\alpha=0$), which can be obtained by upsampling the estimated alpha matte from a low resolution model. Only a few uncertain regions around the boundaries ($0<\alpha<1$) need to be refined.
Therefore, the proposed model consists of two stages: an initial stage for low resolution alpha estimation and a second stage for full resolution refinement. Fig. 1 gives an overview of our method. However, we find that naively adopting the two-stage framework leads to inferior results due to the missing auxiliary inputs, which we intentionally eliminate. To address the performance gap, we leverage the vision transformer (ViT) as the backbone in the low resolution model. As opposed to image downsampling, image tokenization in the ViT is a better choice for reducing spatial resolution because it does not lose pixel information. Since the refinement network operates on extracted local regions, to inform it of the context, a straightforward design choice would be to reuse the upsampled features from the low resolution network [15]. However, this adds to the compute budget by doing upsampling at high resolution. Therefore, we opt for an inverted process by first extracting local regions followed by gathering the context. To recover the contextual information, we propose a novel cross region attention (CRA) module, which propagates the information across the $k$ nearest neighbors of each region through multi-head attention with a learnable lookup table for relative positional encoding. We demonstrate that, with all the aforementioned designs, our model outperforms other baselines by a large margin on the P3M and PPM datasets. We also show that our model is able to retain the matting quality using only $1/20$ of the FLOPs of the existing state-of-the-art model [12]. In summary, our work has the following contributions.

* • We leverage the tokenization step of ViT to reduce spatial resolution while retaining the full pixel information for coarse alpha estimation.
* • We invert the order of computing contextual features and extracting local regions to avoid feature upsampling at high resolution, saving computation.
* • We propose a novel cross region attention (CRA) module to capture the contextual information across the $k$ nearest neighbors of each region.
* • We conduct extensive experiments to demonstrate the effectiveness and efficiency of our model: achieving state-of-the-art performance while using minimal FLOPs.

## 2 Related Work

Traditional matting. Traditional matting algorithms [11, 7, 1, 3, 4, 25] are derived from the matting equation Eq. 1. The goal is to solve for the alpha matte $\alpha$, but at the same time one needs to also solve for the foreground $F$ and background $B$. Since this is an ill-posed problem with only the observed image $I$ being provided, a common practice is to use trimaps as constraints. [4] formulates the problem in a Bayesian framework and solves it using maximum a posteriori (MAP) estimation. [25] formulates matting as a problem of solving Poisson equations using the matte gradient field. Both end up being iterative solutions. [11] proposes the first closed form solution, but their method is memory and compute intensive because of the involvement of a large sparse linear system. [7] accelerates [11] by using large kernel Laplacians and adaptive kernel sizes obtained from KD-tree segmentation on trimaps. Other works [3, 1] improve [11] by removing the local color line model assumption. They use global pixel affinities to propagate alpha values in trimaps from known regions to unknown regions.

Learning based matting. With the advance in deep learning, many recent approaches [29, 19, 14, 18] have shifted to a learning based paradigm, where a model takes the image and trimap as input and learns to predict the alpha matte. DIM [29] is the pioneer that leverages deep neural networks in the task of image matting. AlphaGAN [19] improves [29] by training the model with an adversarial loss. GCA [14] introduces a guided contextual attention module by computing the correlation between unknown regions.
IndexNet [18] utilizes learned indices in the decoder for upsampling to guide the matte generation.

Trimap-free matting. There have also been efforts trying to eliminate the dependence on trimaps. [23, 15] propose to capture an additional background image as an auxiliary input for image matting. SHM [2] predicts the alpha matte by fusing a self-learned trimap and a raw alpha matte. LF [30] employs a similar fusion concept by estimating the foreground and background probability maps and blending them with self-learned weights. HATT [21] uses a spatial and channel-wise attention module to integrate low level and high level features. P3M-Net [12] adopts a multi-task framework by predicting trimaps and alpha mattes at multiple resolutions and uses a stack of integration modules to exchange feature information.

## 3 Method

The proposed two-stage framework (shown in Fig. 1) proceeds as follows. The low resolution network predicts a coarse alpha matte and a coarse trimap. Next we extract uncertain regions using the coarse trimap and crop the selected regions from the input image and upsampled coarse alpha. The cropped patches then pass through the refinement network to obtain the refined alpha patches. Finally, we replace the refined alpha patches back in the upsampled coarse alpha to complete the full alpha. Below we illustrate each of the steps and explain our design choices. For brevity of writing, we use the notation $\mathfrak{R}_{x}$ to refer to a resolution at $\frac{1}{x}$ of the full resolution.

### 3.1 ViT as Low Resolution Backbone

Different from prior works that use additional inputs such as background captures [15, 23] or trimaps [29, 19, 14, 18, 6], our model tackles portrait matting without any auxiliary inputs, which is significantly more challenging. In fact, as we will show in the experiments, simply adopting a CNN architecture results in poor matting quality if not given auxiliaries.
We argue that, even for coarse alpha estimation, a higher input resolution could be beneficial to the overall quality. Nonetheless, higher resolution inevitably adds more computation. This motivates us to ask how we can improve quality without resorting to an increased compute budget. We therefore propose to transform an image $I\in\mathbb{R}^{H\times W\times C}$ to a grid of non-overlapping patches of $\mathbb{R}^{\frac{H}{P}\times\frac{W}{P}\times P^{2}C}$, where $\frac{H}{P}\times\frac{W}{P}$ corresponds to the grid resolution and $P^{2}C$ is the channel dimension. This is often referred to as pixel-unshuffle or space-to-depth. As opposed to downsampling, pixel-unshuffle preserves the original pixel information when reducing the spatial resolution.

Table 1: Results of different downsampling and pixel-unshuffle strategies. $d$ is the downsampling rate and $p$ is the pixel-unshuffle patch size. We evaluate the models on the P3M-500 test data. For more details about the test data, please refer to Sec. 4.1.

| No. | Method | FLOPs | SAD (NP) | Grad (NP) | SAD (P) | Grad (P) |
|---|---|---|---|---|---|---|
| A | Resnet-50, $d$=4 | 24.7G | 14.25 | 12.60 | 14.11 | 14.69 |
| B | Resnet-50, $d$=2 | 59.9G | 13.43 | 11.77 | 12.05 | 12.58 |
| C | Resnet-50, $p$=4 | 28.2G | 15.21 | 12.71 | 16.10 | 15.22 |
| D | Swin-T, $d$=2, $p$=8 | 18.6G | 10.89 | 10.72 | 10.39 | 12.73 |

To verify the advantage of pixel-unshuffle over downsampling, we test several models with different pixel-unshuffle and downsampling strategies and evaluate them on a benchmark dataset. We summarize their SAD (sum of absolute difference) and Grad (gradient difference) in Tab. 1, where (NP) and (P) denote the P3M-500-NP and P3M-500-P test subsets respectively. Resnet-50 [8, 9] is used as the low resolution backbone for models A, B and C. A sequence of upsampling and conv layers is used in the decoder to keep the low resolution output at $\mathfrak{R}_{8}$. All models share the same refinement stage, which we will discuss later in Sec. 3.2 and 3.3.
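The pixel-unshuffle (space-to-depth) transform described above is simple to sketch. The following is an illustrative pure-Python version for a single-channel image (the actual model applies it across channels on tensors; the function name and list-of-lists representation are ours):

```python
def pixel_unshuffle(img, p):
    """Rearrange an H x W image into an (H/p) x (W/p) grid of cells with
    p*p channels each. Unlike downsampling, no pixel is discarded."""
    h, w = len(img), len(img[0])
    assert h % p == 0 and w % p == 0, "spatial size must be divisible by p"
    grid = []
    for gy in range(h // p):
        row = []
        for gx in range(w // p):
            # Flatten the p x p spatial patch into the channel dimension.
            patch = [img[gy * p + dy][gx * p + dx]
                     for dy in range(p) for dx in range(p)]
            row.append(patch)
        grid.append(row)
    return grid

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # a toy 4x4 image
out = pixel_unshuffle(img, 2)  # 2x2 grid, 4 channels per cell
print(out[0][0])  # [0, 1, 4, 5]
```

Every input value appears exactly once in the output, which is the sense in which spatial resolution is reduced "while retaining the full pixel information".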
Comparing A and B, we can see that increasing the resolution of the low resolution network improves the accuracy. Nevertheless, this requires considerably more compute budget. To retain accuracy without adding more computation, we resort to pixel-unshuffle (C). However, we see that C underperforms A. This seemingly contradicts the hypothetical strength of using pixel-unshuffle, but we argue that the performance drop, in fact, can be explained by the usage of large kernels. Many prior works [8, 9, 24, 26, 31] have demonstrated the success of using small kernels because they help preserve the locality and translation invariance of CNNs. Large kernels break these nice properties, making CNNs suffer from poor generalization. In the case of C in Tab. 1, Resnet-50 already starts with a relatively large kernel (7$\times$7). When used with pixel-unshuffle with a patch size of 4, the effective kernel size of the first layer becomes 28$\times$28, which is obviously too large to be applied to a CNN. Due to the limitations of CNNs with large kernels, we opt for ViT as the low resolution backbone. ViT is naturally a better choice for low resolution prediction because the first step of ViT — image tokenization — is equivalent to pixel-unshuffle, which can effectively reduce the spatial resolution while retaining the full pixel information. Specifically, we choose Swin-T [17] as the low resolution backbone. As shown in Tab. 1, model D uses a downsampling rate of 2 followed by pixel-unshuffle with a patch size of 8, which results in a $\times$16 reduction in resolution. With the proposed design principle, model D achieves a remarkable improvement in terms of both accuracy and FLOPs.

### 3.2 Refinement Stage

Modern neural network architectures for dense prediction tasks typically rely on a pyramid of upsampled features for global information.
Similarly, for a refinement network to receive contextual information, the most straightforward idea would be to upsample the features to input resolution before extracting the refinement regions. This way the cropped regions are informed of their context. However, upsampling at high resolution (e.g. HD or 4K) is both memory and compute intensive. We propose an alternative to eliminate the heavy feature upsampling. Our refinement stage avoids reusing any deep features from the low resolution network. The low resolution network only predicts a coarse alpha matte and a coarse trimap. Like a traditional trimap, the predicted trimap has three classes: “definite foreground”, “definite background” and “uncertain”. We encode it as a 3-channel softmax output. Since no ground truth trimaps are available at training time, we apply morphological operations with heuristics to create the target trimaps from the ground truth alpha mattes. At inference time, we select the pixels predicted as “uncertain” as the regions of interest to be refined. In the refinement stage, we first upsample the coarse alpha matte to full resolution. This op is lightweight compared to the heavy feature upsampling. The upsampled alpha matte is concatenated with the input image to form a 4-channel RGBA image. With the selected “uncertain” pixels from the trimap, we locate the corresponding 8$\times$8 regions in the RGBA image, which are cropped and fed to a tiny refinement network consisting of an encoder and a decoder. At last, we replace the respective 8$\times$8 regions in the upsampled alpha with the refined crops to obtain the final alpha. Fig. 3(a) visualizes how the cropped regions run through the entire refinement stage. ### 3.3 Cross Region Attention The aforementioned refinement stage only receives 8$\times$8 local regions as input, inevitably losing the context. Thus a mechanism is needed to recover the context after region extraction. 
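Before turning to that mechanism, the region-selection step above can be sketched in a few lines. This is an illustrative pure-Python version under our own assumptions: the coarse trimap is given as a grid of class labels at $\mathfrak{R}_{8}$ (0=background, 1=foreground, 2=uncertain; the paper encodes it as a 3-channel softmax, so labels here would come from an argmax), and each grid cell maps to an 8$\times$8 region at full resolution:

```python
def uncertain_regions(trimap_grid, region=8):
    """Return full-resolution top-left corners of the regions whose coarse
    trimap cell is 'uncertain' (class 2). Only these regions are refined."""
    coords = []
    for gy, row in enumerate(trimap_grid):
        for gx, cls in enumerate(row):
            if cls == 2:
                coords.append((gy * region, gx * region))
    return coords

# A toy 3x3 coarse trimap: a ring of uncertainty around a foreground core.
trimap = [[0, 2, 0],
          [2, 1, 2],
          [0, 2, 0]]
print(uncertain_regions(trimap))  # [(0, 8), (8, 0), (8, 16), (16, 8)]
```

The number of refined regions thus adapts to the image content: a fuzzy hairline produces many uncertain cells, while a crisp body contour produces few.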
We therefore propose a novel cross region attention (CRA) module to capture the contextual information across the neighboring regions. CRA is inspired by multi-head attention [27, 5, 17], but instead of consuming a sequence or regular grid of tokens, it operates on the $k$ nearest neighbors (KNN) of a central token. Our proposed mechanism inverts the order of context collection and region extraction and has the advantage of eliminating the heavy feature upsampling as discussed in Sec. 3.2. Below we use “region” to refer to “token” since an extracted region is effectively a token.

KNN extraction. After identifying all “uncertain” regions from the trimap, for each region, we find the closest $k$ regions as its KNNs (under the metric of Euclidean distance). We do pairwise comparisons at training time, but employ a KD-tree as a faster search algorithm at inference time. The left part of Fig. 3(a) visualizes the locations of a region’s KNNs.

Figure 2: An illustration of relative positional bias encoding. This is an example with search range $s$=3, which ends up with a 5$\times$5 search window and a lookup table of 25 biases for in-range positions and one extra bias for any out-of-range positions. The orange square denotes the center, which is encoded with the bias at the 13th slot. The yellow square denotes an in-range sample encoded with the 6th slot. The purple squares are two out-of-range samples, whose positional biases are given by the last table entry.

Relative positional bias. The KNNs can potentially be scattered anywhere around the central region and distributed on a non-regular grid, so we need a way to encode their relative positions. We define a search range $s$ on the image space. The relative positions on each axis are supposed to lie in $[-s+1,s-1]$. This ends up with $(2s-1)^{2}$ possible relative positions within the search range. We encode the relative positions with a learnable lookup table $P\in\mathbb{R}^{(2s-1)^{2}+1}$.
The first $(2s-1)^{2}$ entries encode all possible in-range positions. The last entry encodes any out-of-range positions. Each entry is used as the relative positional bias $B$ in the attention formula: $\text{Attention}(Q,K,V)=\text{Softmax}(\frac{QK^{T}}{\sqrt{d}}+B)V\text{,}$ (2) where $d$ is the feature dimension; $Q$, $K$ and $V$ are the query, key and value respectively. Fig. 2 illustrates how the relative positional biases are encoded with a lookup table.

Cross region attention. After we obtain the features of all extracted regions from the refinement network’s encoder, we locate the KNNs of each region and query their relative positional biases from the lookup table $P$. Let $f_{i}\in\mathbb{R}^{d}$ denote the feature of region $i$ and $\\{i_{1},i_{2},\cdots,i_{k}\\}$ denote the $k$ nearest neighbors of region $i$. For each region $i$, we feed the features $[f_{i},f_{i_{1}},f_{i_{2}},\cdots,f_{i_{k}}]\in\mathbb{R}^{(k+1)\times d}$ of this region and its KNNs, along with their relative positional biases $B\in\mathbb{R}^{k+1}$, to two consecutive attention blocks, shown in Fig. 3(b). Note that the second block does not need to do pairwise attention $QK^{T}$ across all $k+1$ regions. Instead, it only computes the attention between the central region and its KNNs by $Q_{0}K^{T}$, where $Q_{0}$ is the query of the central region $i$. Finally, the output feature goes to the refinement network’s decoder to obtain the refined alpha matte of region $i$.

Figure 3: A visualization of the refinement stage. Each $\square$ corresponds to a region of 8$\times$8 pixels at full resolution. The example uses 8 nearest neighbors and a search range $s=3$ for relative positional bias. (a) visualizes the KNNs of a central region and the identification of in-range and out-of-range neighbors. A central region obtains its contextual features by aggregating the features from its KNNs through the cross region attention (CRA) module.
(b) shows the structure of the attention block in CRA.

### 3.4 Training

We train the low resolution network and refinement network end-to-end at the same time. During training, we apply various data augmentation strategies such as horizontal flipping, cropping and affine transformation as well as color adjustment in hue, saturation and brightness. Since the Swin-T backbone inherently uses an effective output stride of 32 and a window size of 7 for its window attention, with the original implementation [17] the input image size is expected to be a multiple of $7\times 32=224$. We make a modification to accommodate input images of arbitrary sizes (but no less than 224$\times$224) by padding any intermediate feature maps with zeros if their spatial sizes are not already divisible by 7 and masking out the padded regions when computing the attention. At training time, all images are resized to 896$\times$896, but at inference time, the model accepts images of arbitrary resolutions. We refer readers to the supplementary material for a full list of training losses.

Table 2: Quantitative results on the P3M-500 test data. $\dagger$ indicates that a trimap is used. (NP) and (P) denote the P3M-500-NP and P3M-500-P subsets respectively.

| Method | GFLOPS | SAD (NP) | SAD-T (NP) | Grad (NP) | Conn (NP) | SAD (P) | SAD-T (P) | Grad (P) | Conn (P) |
|---|---|---|---|---|---|---|---|---|---|
| DIM† [29] | 791.6 | 5.32 | 5.32 | 4.70 | 7.70 | 4.89 | 4.89 | 4.48 | 9.68 |
| P3M-Net [12] | 364.9 | 11.23 | 7.65 | 10.35 | 12.51 | 8.73 | 6.89 | 8.22 | 13.88 |
| MODNet [10], 512x512 input | 15.7 | 20.20 | 12.48 | 16.83 | 18.41 | 30.08 | 12.22 | 19.73 | 28.61 |
| MODNet [10], fullres input | 103.2 | 63.74 | 13.56 | 25.75 | 62.69 | 95.47 | 13.70 | 37.28 | 94.86 |
| BGMv2 [15], Resnet-50 | 26.5 | 16.72 | 7.55 | 13.00 | 15.39 | 15.70 | 7.23 | 15.54 | 14.71 |
| BGMv2 [15], Resnet-101 | 33.9 | 15.66 | 7.72 | 12.42 | 14.65 | 13.90 | 7.23 | 14.69 | 13.13 |
| Ours | 19.0 | 10.60 | 6.83 | 10.78 | 9.77 | 10.04 | 6.44 | 12.65 | 9.41 |

## 4 Experiments

### 4.1 Experiment Setup

Datasets. We benchmark on two datasets: P3M-10k [12] and PPM-100 [10].
P3M-10k is by far the largest human portrait matting dataset and contains 10421 high-resolution in-the-wild images with annotated alpha mattes. For privacy reasons, all faces in the images have been blurred. As shown in [12], training on images with blurred faces does not degrade the model performance. Instead, it may even help the model generalize better. We use the provided 9421 images with blurred faces for training, the 500 images with blurred faces for the privacy-preserving test, and the remaining 500 normal images (without face blurring) for the non-privacy test. Following [12], we denote the two test subsets from P3M-10k as P3M-500-P (privacy-preserving) and P3M-500-NP (non-privacy). Compared to P3M-10k, PPM-100 is a smaller dataset curated specifically for evaluation purposes.

Baselines. We compare our model with the state-of-the-art trimap-free method P3M-Net [12]. As a reference, we also include DIM [29], a commonly adopted trimap-based baseline, in the experiments. We also compare with MODNet [10] and BGMv2 [15], which are lightweight models designed for fast inference. The original BGMv2 relies on an additional image of the background. To make it a fair comparison, we retrain a modified version by eliminating the background capture. Note that MODNet is designed for 512$\times$512 input images while we target higher resolutions such as HD and 4K. Therefore, we use two strategies to accommodate MODNet in our test scenario – we either use 512$\times$512 input and upsample the output to full resolution or run the model on full resolution directly.

Default model. Our default model uses Swin-T [17] as the low resolution network. To reduce the input image size for coarse alpha estimation, we apply a patch size of 16 for pixel-unshuffle, which is equivalent to a $\times$16 reduction in spatial resolution.
Because the refinement network operates on 8$\times$8 regions, we let the low resolution network’s decoder produce a 4-channel output at $\mathfrak{R}_{16}$ and append a pixel-shuffle layer at the end to increase the resolution from $\mathfrak{R}_{16}$ to $\mathfrak{R}_{8}$. This way, a pixel at $\mathfrak{R}_{8}$ is equivalent to an 8$\times$8 region at the full resolution. For CRA, we use 8 nearest neighbors and employ a search range of 4 for relative positional encoding.

Evaluation metrics. We follow previous works in using the sum of absolute difference (SAD), the gradient difference (Grad) and the connectivity error (Conn) as the evaluation metrics. Conn is used as a way to measure the degree of connectivity, the intuition behind which is that unconnected components are more visually distracting when they are further away from the dominant connected components in the image [22]. We also report SAD within the transition area (a.k.a. the “uncertain” region in a trimap), denoted as SAD-T. FLOPs are used as an indicator of compute budget. Since the FLOPs of BGMv2 [15] and our method depend on image content, we report the mean FLOPs over multiple inferences with an average of 1.63M pixels per inference.

Figure 4: Estimated coarse trimaps and refined alpha mattes. Trimaps are upsampled to full resolution for visualization.

Figure 5: Qualitative results. On the left and right are respectively the results of P3M-500-P and P3M-500-NP. Zoom in for more details.

### 4.2 Results

We visualize the intermediate coarse trimaps and the final alpha mattes in Fig. 4. One can see how accurately our model adapts the number of regions to be refined. For example, the low resolution network predicts more “uncertain” regions (shown in green) around fuzzy hair in the trimaps while refraining from doing so around the contour of the body. More qualitative results are shown in Fig. 5. Quantitative results on the P3M test data are presented in Tab. 2.
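For reference, the SAD and SAD-T metrics described above reduce to sums of per-pixel absolute differences. A minimal pure-Python sketch on flattened mattes (toy values and function names are ours; published implementations may scale the sum by a constant):

```python
def sad(pred, gt):
    """Sum of absolute differences between predicted and ground truth alpha."""
    return sum(abs(p - g) for p, g in zip(pred, gt))

def sad_t(pred, gt, trimap):
    """SAD restricted to the transition ('uncertain') area of a trimap
    (0 = background, 1 = foreground, 2 = uncertain)."""
    return sum(abs(p - g) for p, g, t in zip(pred, gt, trimap) if t == 2)

pred   = [0.0, 0.4, 1.0, 0.9]
gt     = [0.0, 0.5, 1.0, 1.0]
trimap = [0,   2,   1,   2]
print(sad(pred, gt))            # ~0.2 (all error lies in the two uncertain pixels)
print(sad_t(pred, gt, trimap))  # ~0.2
```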
Our model outperforms all baselines by a large margin on all metrics while using the least amount of computation. Compared to the previous state-of-the-art method P3M-Net, our model achieves competitive results with nearly $1/20$ of the FLOPs. We are only slightly behind P3M-Net on Grad (for both P3M-500-NP and P3M-500-P) and SAD (for P3M-500-P) while obtaining state-of-the-art results on all other metrics. In the auxiliary-free setting, BGMv2 does not retain the good performance reported by [15] due to the lack of an additional background capture as input. Note that DIM has the best numbers for all metrics, but it is not directly comparable to other models because it takes a trimap as an auxiliary input. We include it here merely for reference purposes as it is one of the most widely adopted methods for comparison. It is worth noting that MODNet is originally designed and trained for 512x512 images. When running on high resolution input, not only does it incur degraded quality, but it also increases its GFLOPS from 15.7 to 103.2. On the other hand, our model is super lightweight and can generate high quality full resolution mattes with only 19 GFLOPS.

Numeric results on the PPM-100 test data are shown in Tab. 3. Since PPM-100 does not have training data, we use models trained on the P3M-10k data for evaluation. Our model is superior to others on all metrics except being slightly worse than P3M-Net on SAD-T. P3M-Net achieves competitive results on P3M-500-NP and P3M-500-P, but its performance drops significantly when evaluated on PPM-100. We believe this is because of the domain gap between the training and test sets. Some major differences we observe between the two datasets are image resolution and imaging quality. Images in PPM-100 are of higher resolution but have worse imaging quality. This explains the overall performance degradation for all models on PPM-100. However, our model is more robust to this domain gap and achieves the best results on PPM-100.
Table 3: Quantitative results on the PPM-100 dataset.

| Method | SAD | SAD-T | Grad | Conn |
|---|---|---|---|---|
| P3M-Net [12] | 142.74 | 43.06 | 57.02 | 139.89 |
| MODNet [10], 512x512 | 104.35 | 65.42 | 68.56 | 96.45 |
| MODNet [10], fullres | 324.07 | 68.97 | 77.42 | 319.70 |
| BGMv2 [15], Resnet-50 | 193.40 | 49.39 | 61.49 | 185.52 |
| BGMv2 [15], Resnet-101 | 159.44 | 50.67 | 59.41 | 149.79 |
| Ours | 90.28 | 45.06 | 50.69 | 84.09 |

Table 4: FPS and GFLOPS for HD and 4K inputs. All models are evaluated with a single Nvidia Quadro RTX 6000 GPU. An empty entry means we fail to evaluate the model due to an out-of-memory error.

| Method | FPS (HD) | GFLOPS (HD) | FPS (4K) | GFLOPS (4K) |
|---|---|---|---|---|
| DIM [29] | 5.0 | 1007.1 | - | - |
| P3M-Net [12] | 9.2 | 463.5 | - | - |
| MODNet [10] | 15.0 | 123.4 | - | - |
| BGMv2 [15], Resnet-50 | 57.4 | 32.7 | 23.7 | 128.6 |
| BGMv2 [15], Resnet-101 | 45.8 | 42.2 | 17.8 | 166.8 |
| Ours, w/ CRA | 54.9 | 21.2 | 19.5 | 74.6 |
| Ours, w/o CRA | 71.2 | 19.4 | 26.4 | 70.7 |

Table 5: Quantitative results for the ablation study. (NP) and (P) denote the P3M-500-NP and P3M-500-P subsets respectively.

| No. | Refinement Method | CRA | SAD (NP) | SAD-T (NP) | Grad (NP) | Conn (NP) | SAD (P) | SAD-T (P) | Grad (P) | Conn (P) |
|---|---|---|---|---|---|---|---|---|---|---|
| E | [15] | NA | 11.68 | 7.92 | 12.90 | 11.11 | 11.81 | 7.37 | 15.19 | 11.44 |
| F | Ours | ✗ | 11.71 | 7.28 | 11.72 | 10.88 | 10.69 | 6.90 | 13.74 | 10.06 |
| G | Ours | ✓ | 10.60 | 6.83 | 10.78 | 9.77 | 10.04 | 6.44 | 12.65 | 9.41 |

| No. | Search Range | KNN | SAD (NP) | SAD-T (NP) | Grad (NP) | Conn (NP) | SAD (P) | SAD-T (P) | Grad (P) | Conn (P) |
|---|---|---|---|---|---|---|---|---|---|---|
| I | 2 | 8 | 11.58 | 6.82 | 11.04 | 10.76 | 10.52 | 6.48 | 13.06 | 9.87 |
| H | 3 | 8 | 10.74 | 6.85 | 10.78 | 9.91 | 10.41 | 6.46 | 12.76 | 9.77 |
| G | 4 | 8 | 10.60 | 6.83 | 10.78 | 9.77 | 10.04 | 6.44 | 12.65 | 9.41 |
| J | 8 | 8 | 10.60 | 6.79 | 10.87 | 9.79 | 10.46 | 6.56 | 13.08 | 9.88 |
| K | 4 | 4 | 11.11 | 6.96 | 11.20 | 10.28 | 10.40 | 6.54 | 13.04 | 9.81 |
| L | 4 | 16 | 10.77 | 6.80 | 10.93 | 9.95 | 9.40 | 6.33 | 12.59 | 8.84 |

Figure 6: Qualitative results of real-world HD videos. The red box shows some of the typical failure cases.

### 4.3 Real Application Performance

We test on real-world HD videos from [23, 15] and show the qualitative results in Fig. 6.
Please refer to the supplementary material for more video results. We profile all models on HD and 4K inputs and compare their FPS and GFLOPS in Tab. 4. As shown in the table, our models use the least amount of computation and achieve competitive frame rates. For DIM, P3M-Net and MODNet running on full resolution, we fail to profile their performance on 4K input due to the massive memory footprint required. On the other hand, our models yield real time performance for HD input and near real time for 4K. It is also worth noting that our model with CRA, even with fewer GFLOPS, runs slightly slower than the Resnet-50-backboned BGMv2. This is because most modern deep learning frameworks do not have well optimized transformer operators. As shown in [20], more than 2x speedup is possible with optimized native CUDA kernels. Our current implementation of Swin-T and CRA utilizes generic PyTorch functions to compute the multi-head attention. We believe a similar improvement at inference time is possible with an optimized implementation.

### 4.4 Ablation Study

In this section, we demonstrate the effectiveness of the proposed refinement stage, the CRA module and their associated hyperparameters.

Refinement stage. We compare the proposed refinement stage with that of BGMv2 [15] by fixing the low resolution backbone. As shown in Tab. 5, model G (our default) and model E differ only by the refinement stage. Model G outperforms model E on all metrics, demonstrating the effectiveness of the proposed refinement stage. We also observe similar results by comparing model A (from Tab. 1) with BGMv2 Resnet-50 (from Tab. 2). Both models use Resnet-50 as the low resolution network, but they differ by the refinement stage. Model A surpassing BGMv2 Resnet-50 on all evaluated metrics, again, demonstrates the advantage of the proposed refinement stage.

Cross region attention. The CRA lies at the core of our method.
We show its individual impact by removing it from our model, resulting in model F in Tab. 5. Model G achieves better results than model F, demonstrating the effectiveness of CRA.

Search range. The search range is used in the CRA module to identify in-range and out-of-range neighbors for relative positional encoding (Sec. 3.3). Because all out-of-range neighbors share the same relative positional bias (Fig. 2), a smaller search range forces more neighbors out of range, making the model less discriminative with respect to neighboring regions. In Tab. 5, we list three models (G, H, and I) with different search ranges. As the search range increases, we see a trend of improved performance on the majority of the evaluated metrics. However, we also observe from model J that a large search range of 8 does not boost performance any further. We believe this is because the search window becomes too large: given the search range $s$, the search window size is $(2s-1)\times(2s-1)$, so a range of 8 yields a 15$\times$15 window. At most 8 out of 225 positional biases are then queried from the lookup table, leaving the rest untouched. Since only a small percentage of the biases is queried and optimized during each gradient propagation, the bias lookup table ends up sub-optimal, degrading the quality of the model.

KNNs. We study the impact of the KNNs by varying their number. As the number of KNNs increases (K $\rightarrow$ G $\rightarrow$ L) in Tab. 5, the overall performance of the model improves on the majority of the evaluated metrics. The reason is twofold. First, more KNNs provide more contextual information, which helps the model learn. Second, more KNNs benefit the training of the bias lookup table: as discussed above, they improve the chances of positional biases being queried and optimized during training.
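The trade-off between search range and bias coverage is simple arithmetic; the snippet below (helper names are ours, not from the paper) computes, for a search range $s$ and $K$ nearest neighbors, the search-window size and the fraction of positional biases that can be queried per region:

```python
def window_size(s):
    # Given the search range s, the search window spans (2s-1) x (2s-1) positions.
    side = 2 * s - 1
    return side * side

def queried_fraction(s, k):
    # At most k of the window's positional biases are queried per region,
    # so a large window combined with few KNNs leaves most biases untrained.
    return k / window_size(s)

for s in (2, 3, 4, 8):
    print(s, window_size(s), round(queried_fraction(s, 8), 3))
```

For $s=8$ and $K=8$ this gives a 225-entry window with only about 3.6% of the biases queried per step, matching the sub-optimal lookup table discussed above.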
## 5 Failure cases and Future Work

When there is high-contrast texture in the extracted regions, the refinement network finds it difficult to identify the correct foreground and background. As shown in Fig. 6, the text on the whiteboard is supposed to be background, but it is perfectly (and incorrectly) segmented as foreground. Also, the refinement network can only improve the quality of local boundaries; false predictions in the original low-resolution matte cannot be recovered. For example, the chair in Fig. 6 is a false positive from the first stage, and the refinement network cannot undo this false prediction. Currently our model is trained with only a limited amount of data (9421 images from the P3M dataset [12]), which is far from robust enough for real-world applications. Because the refinement network consumes only an upsampled matte and does not rely on any intermediate features from the first stage, we believe it is possible to train the low-resolution network and the refinement network separately to improve the overall robustness of our method. The abundant low-resolution segmentation data [16, 32, 13] can be leveraged to train the coarse model, while the high-resolution matting data plus an unlimited number of synthetic humans [28] can be used to train the refinement stage. We leave this as future work.

## 6 Conclusion

We present a new lightweight two-stage method for high-resolution portrait matting. At the heart of our method is a ViT-backboned low-resolution network for coarse alpha estimation and a novel cross region attention (CRA) module in the second stage for local refinement. We verify that using pixel-unshuffle rather than downsampling has the advantage of preserving original pixel information and that a ViT is naturally a good choice for that purpose.
We demonstrate the effectiveness of the proposed low-resolution network, refinement stage, and CRA module, and analyze the individual impact of several key hyperparameters. Through extensive experiments, we show the superiority of our method over the previous state of the art in terms of accuracy, FPS, and FLOPS.

## References

* [1] Yagiz Aksoy, Tunc Ozan Aydin, and Marc Pollefeys. Designing effective inter-pixel information flow for natural image matting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 29–37, 2017.
* [2] Quan Chen, Tiezheng Ge, Yanyu Xu, Zhiqiang Zhang, Xinxin Yang, and Kun Gai. Semantic human matting. In Proceedings of the 26th ACM international conference on Multimedia, pages 618–626, 2018.
* [3] Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. Knn matting. IEEE transactions on pattern analysis and machine intelligence, 35(9):2175–2188, 2013.
* [4] Yung-Yu Chuang, Brian Curless, David H Salesin, and Richard Szeliski. A bayesian approach to digital matting. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, volume 2, pages II–II. IEEE, 2001.
* [5] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [6] Marco Forte and François Pitié. $f$, $b$, alpha matting. arXiv preprint arXiv:2003.07711, 2020.
* [7] Kaiming He, Jian Sun, and Xiaoou Tang. Fast matting using large kernel matting laplacian matrices. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2165–2172. IEEE, 2010.
* [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. * [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer, 2016. * [10] Zhanghan Ke, Jiayu Sun, Kaican Li, Qiong Yan, and Rynson WH Lau. Modnet: Real-time trimap-free portrait matting via objective decomposition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 1140–1147, 2022. * [11] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE transactions on pattern analysis and machine intelligence, 30(2):228–242, 2007. * [12] Jizhizi Li, Sihan Ma, Jing Zhang, and Dacheng Tao. Privacy-preserving portrait matting. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3501–3509, 2021. * [13] Jianshu Li, Jian Zhao, Yunchao Wei, Congyan Lang, Yidong Li, Terence Sim, Shuicheng Yan, and Jiashi Feng. Multi-human parsing in the wild. arXiv preprint arXiv:1705.07206, 2017. * [14] Yaoyi Li and Hongtao Lu. Natural image matting via guided contextual attention. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11450–11457, 2020. * [15] Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L Curless, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Real-time high-resolution background matting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8762–8771, 2021. * [16] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014. * [17] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 
Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
* [18] Hao Lu, Yutong Dai, Chunhua Shen, and Songcen Xu. Indices matter: Learning to index for deep image matting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3266–3275, 2019.
* [19] Sebastian Lutz, Konstantinos Amplianitis, and Aljosa Smolic. Alphagan: Generative adversarial networks for natural image matting. arXiv preprint arXiv:1807.10088, 2018.
* [20] Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, and Christian Puhrsch. A BetterTransformer for fast transformer inference. https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/.
* [21] Yu Qiao, Yuhao Liu, Xin Yang, Dongsheng Zhou, Mingliang Xu, Qiang Zhang, and Xiaopeng Wei. Attention-guided hierarchical structure aggregation for image matting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13676–13685, 2020.
* [22] Christoph Rhemann, Carsten Rother, Jue Wang, Margrit Gelautz, Pushmeet Kohli, and Pamela Rott. A perceptually motivated online benchmark for image matting. In 2009 IEEE conference on computer vision and pattern recognition, pages 1826–1833. IEEE, 2009.
* [23] Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Background matting: The world is your green screen. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2291–2300, 2020.
* [24] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [25] Jian Sun, Jiaya Jia, Chi-Keung Tang, and Heung-Yeung Shum. Poisson matting. In ACM SIGGRAPH 2004 Papers, pages 315–321. 2004.
* [26] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015. * [27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. * [28] Erroll Wood, Tadas Baltrušaitis, Charlie Hewitt, Sebastian Dziadzio, Thomas J Cashman, and Jamie Shotton. Fake it till you make it: face analysis in the wild using synthetic data alone. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3681–3691, 2021. * [29] Ning Xu, Brian Price, Scott Cohen, and Thomas Huang. Deep image matting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2970–2979, 2017. * [30] Yunke Zhang, Lixue Gong, Lubin Fan, Peiran Ren, Qixing Huang, Hujun Bao, and Weiwei Xu. A late fusion cnn for digital matting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7469–7478, 2019. * [31] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2472–2481, 2018. * [32] Jian Zhao, Jianshu Li, Yu Cheng, Li Zhou, Terence Sim, Shuicheng Yan, and Jiashi Feng. Understanding humans in crowded scenes: Deep nested adversarial learning and a new benchmark for multi-human parsing. arXiv preprint arXiv:1804.03287, 2018.
Yukawa Institute for Theoretical Physics, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan
Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako 351-0198, Japan

# Calculating composite-particle spectra in Hamiltonian formalism and demonstration in 2-flavor QED${}_{1+1\text{d}}$

Etsuko Itou, Akira Matsumoto, and Yuya Tanizaki

itou(at)yukawa.kyoto-u.ac.jp, akira.matsumoto(at)yukawa.kyoto-u.ac.jp, yuya.tanizaki(at)yukawa.kyoto-u.ac.jp

###### Abstract

We consider three distinct methods to compute the mass spectrum of gauge theories in the Hamiltonian formalism: (1) the correlation-function scheme, (2) the one-point-function scheme, and (3) the dispersion-relation scheme. The first one corresponds to the conventional Euclidean method in Monte Carlo simulations. The second one uses the boundary effect to efficiently compute the mass spectrum. The third one constructs the excited states and fits their energy using the dispersion relation with selected quantum numbers. Each method has its pros and cons, and we clarify such properties in their applications to the mass spectrum of the $2$-flavor massive Schwinger model at $m/g=0.1$ and $\theta=0$ using the density-matrix renormalization group (DMRG). We note that the multi-flavor Schwinger model at small mass $m$ is a strongly-coupled field theory even after bosonization, and thus it deserves first-principle numerical calculations. All these methods mostly agree and identify the stable particles: the pions $\pi_{a}$ ($J^{PG}=1^{-+}$), the sigma meson $\sigma$ ($J^{PG}=0^{++}$), and the eta meson $\eta$ ($J^{PG}=0^{--}$). In particular, we find that the mass of the $\sigma$ meson is lighter than twice the pion mass, and thus $\sigma$ is stable against the decay process $\sigma\to\pi\pi$.
This is consistent with the analytic prediction using the WKB approximation, and, remarkably, our numerical results are very close to the WKB-based formula relating the pion and sigma-meson masses, $M_{\sigma}/M_{\pi}=\sqrt{3}$.

Preprint: YITP-23-98, RIKEN-iTHEMS-Report-23

## 1 Introduction

In recent years, numerical simulations of quantum field theories (QFTs) in the Hamiltonian formalism have attracted a lot of attention, motivated by the rapid progress of quantum computing technology and also by the development of tensor network techniques. These methods rely on different disciplines from that of Monte Carlo simulations for conventional lattice gauge theories, and thus they are expected to give complementary frameworks. One of the remarkable features is that these methods do not rely on importance sampling, and thus we may be able to circumvent the issue of sign problems. With this motivation in mind, we investigate methods to calculate the mass spectrum of gauge theories in the Hamiltonian formalism. When studying strongly-coupled QFTs, we often encounter situations where the fundamental degrees of freedom defining the theory do not appear in the low-energy spectrum. Quantum chromodynamics (QCD) is a notable, successful example of such phenomena: quarks and gluons are confined inside color-singlet hadrons, and this explains the physics of the strong interaction. Monte Carlo simulations nicely predict the hadron spectrum FlavourLatticeAveragingGroupFLAG:2021npn and also the physics at finite temperature Borsanyi:2013bia ; HotQCD:2014kol in sign-problem-free regions. Of course, we are currently very far from reproducing such tremendous achievements of Monte Carlo simulations, and thus it is important to develop the counterparts of those calculational techniques in the Hamiltonian formalism.
In this work, we consider three independent methods to compute the mass spectrum of the $2$-flavor massive Schwinger model using the density-matrix renormalization group (DMRG):

* correlation-function scheme,
* one-point-function scheme, and
* dispersion-relation scheme.

The first one corresponds to the conventional method using Euclidean (or spatial) correlators as in Monte Carlo simulations. The second one uses the boundary effect to efficiently compute the mass spectrum, which is partly motivated by applications of the Friedel oscillations PhysRevB.54.13495 ; SHIBATA19971024 . The third one constructs the excited states and fits their energy using the dispersion relation with selected quantum numbers (see, e.g., Refs. Pirvu_2012 ; Haegeman_2012 ; Haegeman_2013 for similar analyses in spin systems). Each method has its pros and cons, and especially the last one is specific to the Hamiltonian formalism. Our purpose is to clarify their properties in concrete studies of the $2$-flavor massive Schwinger model. The Schwinger model is $1+1$d quantum electrodynamics (QED${}_{1+1\text{d}}$) Schwinger:1962tp , and it is a strongly-coupled theory like $4$d QCD: the fundamental fermions are confined because of the linear Coulomb potential, and the low-lying states are composite states like mesons. Despite its strong-coupling nature, one can calculate many nontrivial aspects using analytical methods Lowenstein:1971fc ; Casher:1974vf ; Coleman:1975pw ; Coleman:1976uz ; Manton:1985jm ; Hetrick:1988yg ; Jayewardena:1988td ; Sachs:1991en ; Adam:1993fc ; Adam:1994by ; Hetrick:1995wq ; Narayanan:2012du ; Narayanan:2012qf ; Lohmayer:2013eka ; Tanizaki:2016xcu , and this theory has been used as a benchmark to test new computational methods in previous studies (see, e.g., Refs.
Banuls:2013jaa ; Banuls:2015sta ; Banuls:2016lkq ; Buyens:2016ecr ; Buyens:2016hhu ; Banuls:2016gid ; Funcke:2019zna ; Chakraborty:2020uhf ; Honda:2021aum ; Honda:2021ovk ; Honda:2022edn ; Tomiya:2022chr ; Funcke:2023lli ; Dempsey:2023gib ; Kharzeev:2020kgc ; deJong:2021wsd ; Nguyen:2021hyk ; Nagano:2023uaq for numerical studies of the Schwinger model with tensor networks and/or quantum simulations). The mass spectrum of this model was also studied numerically with the Monte Carlo method Fukaya:2003ph , including the region with nonzero $\theta$ angles via the reweighting technique. As other Monte Carlo-based studies, dual-variable formulations have been developed that successfully eliminate the sign problem of $1+1$d U(1) gauge theories Gattringer:2015nea ; Gattringer:2015baa ; Gattringer:2018dlw . There is also a numerical approach with light-cone quantization in the Hamiltonian formalism Harada:1993va using the so-called Tamm-Dancoff approximation. The low-energy mass spectrum of the 2-flavor Schwinger model with a theta term has been studied analytically using the bosonization technique Coleman:1976uz ; Hetrick:1995wq . We note, however, that the low-energy effective theory is strongly coupled if $0<m/g\ll 1$ even after bosonization, and the details of the prediction rely on some approximations that are not fully justifiable. It is thus physically nontrivial whether those analytic predictions are reproduced by first-principle numerical computations when we go into the details beyond qualitative aspects. We performed the DMRG computations using the C++ library ITensor itensor to obtain the mass spectrum with the above three methods. We find that all three methods mostly agree and identify the stable particles, pions $\pi_{a}$ ($J^{PG}=1^{-+}$), sigma meson $\sigma$ ($J^{PG}=0^{++}$), and eta meson $\eta$ ($J^{PG}=0^{--}$), where $J$ denotes the isospin quantum number, and $P$ and $G$ denote the parity and $G$-parity, respectively.
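As an aside, the correlation-function scheme extracts a mass from the exponential decay of a spatial correlator, $C(x)\propto e^{-Mx}$. The fit itself can be sketched with a simple log-linear least-squares regression on synthetic data (this is an illustration of the general technique, not the paper's actual analysis code):

```python
import math

def fit_mass(xs, cs):
    """Least-squares fit of log C(x) = a - M*x; returns the extracted mass M."""
    ys = [math.log(c) for c in cs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den  # slope of log C(x) is -M

# Synthetic correlator with mass M = 0.5 in lattice units.
xs = list(range(1, 11))
cs = [2.0 * math.exp(-0.5 * x) for x in xs]
print(round(fit_mass(xs, cs), 6))  # recovers 0.5
```

In practice one would restrict the fit window to separations where excited-state and boundary contaminations are negligible.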
In particular, we observe that the mass of the $\sigma$ meson is lighter than twice the pion mass, and thus $\sigma$ is stable against the decay process $\sigma\to\pi\pi$. This implies that $\sigma$ is a stable particle, not a $\pi\pi$ resonance, which is a notable difference compared with $4$d QCD. This is consistent with the analytic prediction based on the WKB approximation of the Abelian bosonized description, and our numerical results are very close to the WKB-based formula relating the pion and sigma-meson masses, $M_{\sigma}/M_{\pi}=\sqrt{3}$. Let us emphasize that this poses an interesting theoretical question of why the semiclassical prediction works so well even outside its valid regime. This paper is organized as follows. In Section 2, we review the continuum 2-flavor Schwinger model and the bosonization analysis focusing on the mass spectrum. In Section 3, we introduce the lattice formulation of the Hamiltonian and define some observables. In Section 4.1, we briefly explain our simulation method, the DMRG algorithm. Section 4.2 shows our setup of the simulation. In Section 5, we present our simulation results for the three methods. Section 6 is devoted to the conclusion and discussion. Appendix A shows the explicit form of the Hamiltonian and the observables in the spin representation for DMRG. In Appendix B, we test the validity of the charge conjugation operator on the lattice in the 1-flavor Schwinger model. In Appendix C, we discuss the assignment of the flavor index for constructing the MPS. In Appendix D, we investigate how the truncation of the bond dimension affects the correlation function in the massless 1-flavor Schwinger model.

## 2 Review of the 2-flavor Schwinger model

In this work, we study the 2-flavor Schwinger model, which is $(1+1)$-dimensional quantum electrodynamics (QED${}_{1+1\text{d}}$) with $N_{f}=2$ species of Dirac fermion.
The Lagrangian density with the Minkowski metric $\eta_{\mu\nu}=\mathrm{diag}(1,-1)$ is given by

$\mathcal{L}=-\frac{1}{4g^{2}}F_{\mu\nu}F^{\mu\nu}+\frac{\theta}{4\pi}\epsilon_{\mu\nu}F^{\mu\nu}+\sum_{f=1}^{N_{f}}\left[i\bar{\psi}_{f}\gamma^{\mu}\left(\partial_{\mu}+iA_{\mu}\right)\psi_{f}-m\bar{\psi}_{f}\psi_{f}\right],$ (1)

where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the field strength, $g$ is the gauge coupling, and $\theta$ is the vacuum angle describing the background electric flux. The index $f$ labels the flavor. We set the masses of the two fermions equal to $m$.

### 2.1 Global symmetry and composite operators

In the chiral limit ($m=0$), the $2$-flavor Schwinger model has the chiral symmetry and the $G$-parity symmetry,

$\frac{\mathrm{SU}(2)_{L}\times\mathrm{SU}(2)_{R}}{\mathbb{Z}_{2}}\times(\mathbb{Z}_{2})_{G}\quad(m=0),$ (2)

and the chiral symmetry has an 't Hooft anomaly. The $G$-parity operation is the combination of the charge conjugation with the $\pi$ rotation of $\mathrm{SU}(2)_{V}$, which will be discussed later. We note that the continuous chiral symmetry cannot be spontaneously broken due to the Coleman-Mermin-Wagner theorem, and the anomaly matching condition is satisfied by the $\mathrm{SU}(2)$ level-$1$ Wess-Zumino-Witten ($\mathrm{SU}(2)_{1}$ WZW) conformal field theory. The $\mathrm{SU}(2)_{1}$ WZW model is equivalent to the self-dual compact boson, and one can explicitly derive it from the massless $2$-flavor Schwinger model with the Abelian bosonization Coleman:1976uz . The massive 2-flavor Schwinger model (1) no longer has the chiral symmetry, but it maintains the vector-like symmetry,

$\left\{\begin{array}{cc}[\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}]\times(\mathbb{Z}_{2})_{G}&\quad(\theta\in\pi\mathbb{Z}),\\ \mathrm{SU}(2)_{V}/\mathbb{Z}_{2}&\quad(\text{else}),\end{array}\right.$ (3)

and we call $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$ the isospin symmetry.
The $\mathbb{Z}_{2}$ quotient of $\mathrm{SU}(2)/\mathbb{Z}_{2}$ means that the local operators always have integer isospin quantum numbers, since gauge invariance requires that a local operator must consist of the same number of $\psi$ and $\bar{\psi}$. We define the isospin operators $J_{a}$ as the conserved charges under this symmetry by

$J_{a}=\frac{1}{2}\int dx\,\bar{\psi}\gamma^{0}\tau_{a}\psi,$ (4)

where $\tau_{a}$ represents the Pauli matrices of the isospin space with $a\in\{x,y,z\}$. When $\theta$ takes some special values, e.g. $\theta=0$, the theory enjoys the charge conjugation,

$C:A\to-A,\quad\psi\leftrightarrow\mathsf{C}\overline{\psi}^{t},$ (5)

with a suitable element of the Clifford algebra $\mathsf{C}$. We note that this operation flips the sign of the $\theta$ angle, and thus this symmetry does not exist for generic values of $\theta$. For general numbers of flavors, $C$ acts on $\mathrm{SU}(N_{f})/\mathbb{Z}_{N_{f}}$ as an outer automorphism, i.e. the symmetry group becomes $[\mathrm{SU}(N_{f})/\mathbb{Z}_{N_{f}}]\rtimes(\mathbb{Z}_{2})_{C}$. When $N_{f}=2$, however, $\mathrm{SU}(2)$ does not have nontrivial outer automorphisms, and indeed $C$ just gives the $\pi$ rotation in the isospin space. Thus, it is convenient to introduce the $G$-parity Gparity ,

$G=Ce^{i\pi J_{y}},$ (6)

so that it commutes with the isospin operation and gives a well-defined eigenvalue $\pm 1$. Moreover, the $G$-parity acts trivially on the $\mathrm{SU}(2)_{1}$ WZW theory. Thus, if we find a particle with $G=-1$, we can immediately tell that it remains massive in the chiral limit.
In this paper, we mainly focus on the following composite operators to discuss the meson spectrum:

$\pi_{a}=-i\bar{\psi}\gamma^{5}\tau_{a}\psi\quad(J^{PG}=1^{-+}),$ (7)
$\sigma=\bar{\psi}\psi\quad(J^{PG}=0^{++}),$ (8)
$\eta=-i\bar{\psi}\gamma^{5}\psi\quad(J^{PG}=0^{--}).$ (9)

We call them the pion, sigma, and eta operators, respectively, obviously motivated by the meson spectrum of $4$d QCD. We will often denote $\pi_{3}=\pi$ for simplicity. Here, we have specified their quantum numbers $J^{PG}$, where $J$ denotes the isospin, and $P$ and $G$ denote the parity and the $G$-parity at $\theta=0$, respectively. The $(1+1)$d QED is strongly coupled, and it turns out that the light particles correspond to these operators; this feature is reminiscent of $4$d QCD. Here, we would like to note that $\eta$ has $G=-1$, and thus it remains massive in the chiral limit, which is analogous to the $U(1)_{A}$ problem Coleman:1975pw ; Coleman:1976uz ; Frohlich:1976mt . The other mesons, $\pi$ and $\sigma$, have $G=+1$ and actually become massless in the chiral limit. The massless $\sigma$ particle is an outcome of the absence of chiral symmetry breaking, unlike the $4$d QCD case.

### 2.2 Phase structure

With $m\not=0$, the system is gapped and has the unique ground state at generic values of $\theta$. In $(1+1)$d, there is no stable topologically-ordered state, and the unique gapped ground states are then classified as symmetry-protected topological (SPT) states PhysRevB.83.035107 ; Kapustin:2014gma ; Kapustin:2014tfa . This perspective provides a very powerful tool to understand the phase structure of the $2$-flavor Schwinger model. Let us recall that the massive Schwinger model always has the isospin symmetry, $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$.
We can then calculate the partition function in the presence of the background gauge fields for the isospin symmetry. Compared with the $\mathrm{SU}(2)$ gauge field, the $\mathrm{SU}(2)/\mathbb{Z}_{2}$ gauge field has milder cocycle conditions, which is controlled by the $\mathbb{Z}_{2}$ $2$-form gauge field $w_{2}$ in addition to the familiar $1$-form gauge field. As a result, at generic values of $\theta$, the partition function with the background gauge field is described by the low-energy effective topological action,

$\mathcal{Z}_{\theta}\simeq\exp(i\pi k\int w_{2}),$ (10)

with some $k\sim k+2$. We note that $k$ is a discrete label that distinguishes the SPT states protected by $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$, and thus it cannot be changed under a continuous change of coupling constants unless quantum phase transitions happen. We can prove that the two partition functions at $\theta$ and $\theta+2\pi$ are different in the presence of the background gauge fields. The anomalous relation can be summarized as Misumi:2019dwq

$\mathcal{Z}_{\theta+2\pi}=\exp\left(i\pi\int w_{2}\right)\mathcal{Z}_{\theta}.$ (11)

The label $k$ is changed as $k\mapsto k+1$ as we change the $\theta$ angle from $\theta$ to $\theta+2\pi$, and there must be a phase transition separating the $k=0,1$ ground states. It is somewhat customary to assign the $k=0$ state for $-\pi<\theta<\pi$ and the $k=1$ state for $\pi<\theta<3\pi$, while, precisely, this assignment depends on the UV-regularization scheme. We note that the whole story here is quite parallel to that of the antiferromagnetic Heisenberg chain or the $(1+1)$d $\mathbb{C}P^{1}$ sigma model Haldane:1983ru ; Affleck:1986pq ; Haldane:1988zz ; Affleck:1987vf ; Komargodski:2017dmc ; Komargodski:2017smk ; Lajko:2017wif ; Tanizaki:2018xto (except for the properties at $\theta=\pi$ Coleman:1976uz ; Dempsey:2023gib ).
The distinction between the states at $\theta=0$ and $\theta=2\pi$ becomes more vivid when we take the open boundary condition. Turning on nonzero $\theta$ corresponds to introducing a background electric field with a constant magnitude $\theta/2\pi$. When we increase $\theta$ beyond $\pi$, the background field becomes larger than $1/2$. Then the Dirac fermions with charges $\pm 1$ are excited at the boundaries to cancel the background field as much as possible. As a consequence, these boundary states have isospin $1/2$, which is the projective representation of $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$. This is nothing but the signature of the nontrivial SPT state. If the system size is large enough, the interaction between the boundary states is exponentially suppressed, so the two independent degrees of freedom with isospin $1/2$ yield a $2\times 2$ degeneracy of the ground state. We will use this boundary excitation as a source of iso-triplet particles to determine the mass of the pion from the one-point function in Section 5.2. Note that if we increase $\theta$ further beyond $3\pi$, the boundary excitations become bound states of Dirac fermions with isospin 1, which can be completely screened by the gauge-invariant particles inside the bulk, and the ground state would be unique again with the open boundary condition.

### 2.3 Mass spectrum

In this subsection, we give a relatively detailed review of the analytic predictions about the mass spectrum. There has already been a huge effort in the analytic studies of the multi-flavor Schwinger model, and thus the reader may wonder how one could obtain something new with numerical studies. We would like to clarify what kinds of approximations are used in the previous studies and justify this work on physical grounds. There are two exactly-solvable limits of the multi-flavor Schwinger model:

* In the heavy fermion limit $m\to\infty$, the model becomes the pure $U(1)$ gauge theory.
* In the chiral limit $m=0$, the model becomes the $SU(N_{f})_{1}$ WZW conformal field theory and one massive free boson, and they are completely decoupled.

It is then natural to consider perturbations from these limits in order to investigate the cases of general mass $m$. When $m\gg g$, one can perform systematic perturbations to study the spectrum. In the opposite case $0<m\ll g$, however, the systematic perturbation works only for the $1$-flavor case, and further approximations are necessary for $N_{f}\geq 2$. By applying the Abelian bosonization for $N_{f}=2$, the fermions are mapped to the $2\pi$-periodic scalar fields $\phi_{1},\phi_{2}$. The Lagrangian (1) is then completely equivalent to

$\mathcal{L}=\frac{1}{2g^{2}}F_{01}^{2}+\frac{1}{2\pi}(\phi_{1}+\phi_{2}+\theta)F_{01}+\frac{1}{8\pi}\left((\partial\phi_{1})^{2}+(\partial\phi_{2})^{2}\right)+Cm\rho N_{\rho}[\cos(\phi_{1})+\cos(\phi_{2})],$ (12)

where $C=e^{\gamma}/(2\pi)$ is a numerical constant, and $N_{\rho}$ denotes the normal ordering for the contraction with a free-field propagator of mass $\rho$ Coleman:1974bu . (All the UV divergences from loop diagrams are removed by this prescription, and the theory is independent of the choice of $\rho$; for the free theory with mass $\rho$, $N_{\rho}[\bullet]$ becomes the ordinary normal ordering.) We can rigorously integrate out the gauge fields, as the Lagrangian is quadratic in terms of $F_{01}$. Changing the basis of bosons as $\phi_{1,2}=\sqrt{2\pi}\eta-\frac{\theta}{2}\pm\varphi$, the effective Lagrangian becomes

$\mathcal{L}_{\mathrm{eff}}[\eta,\varphi]=\frac{1}{2}\left[(\partial\eta)^{2}-\mu^{2}\eta^{2}\right]+\frac{1}{4\pi}(\partial\varphi)^{2}+2Cm\rho N_{\rho}\left[\cos\left(\sqrt{2\pi}\eta-\frac{\theta}{2}\right)\cos(\varphi)\right],$ (13)

where $\mu^{2}=2g^{2}/\pi$. We have the $\mathbb{Z}_{2}$ symmetry acting only on $\eta$ when $\theta=0$, and this is the $G$-parity.
When $m=0$, the massive $\eta$ and the massless $\varphi$ decouple, as advocated above. Let us now turn on the small mass, $0<m\ll g$. The $\eta$ particle has the mass $\mu+O(m)$, but it is hard to compute the $O(m)$ correction due to the potential infrared divergence in the loop diagrams with the $\varphi$ fields Coleman:1976uz . Instead, we integrate out $\eta$ at the tree level to discuss the physics of the $\pi$ and $\sigma$ mesons, which gives $\left\langle N_{\rho}\left[\cos\left(\sqrt{2\pi}\eta-\frac{\theta}{2}\right)\right]\right\rangle=\sqrt{\frac{\mu}{\rho}}\cos\frac{\theta}{2}$ for the free massive $\eta$ with mass $\mu$. The effective theory for $\varphi$ becomes the sine-Gordon model,

$\mathcal{L}_{\mathrm{SG}}[\varphi]=\frac{1}{4\pi}(\partial\varphi)^{2}+2Cm\cos\frac{\theta}{2}(\mu\rho)^{1/2}N_{\rho}[\cos\varphi].$ (14)

The isospin $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$ symmetry is not manifest at all in this Lagrangian, but it secretly exists quantum mechanically. In particular, the $z$-component isospin current is given by $J_{z}^{\mu}=\frac{1}{2\pi}\varepsilon^{\mu\nu}\partial_{\nu}\varphi$, and its charge $J_{z}=\int dx\frac{1}{2\pi}\partial_{x}\varphi$ counts the winding number of $\varphi\sim\varphi+2\pi$. We would like to emphasize that the effective theory (14) is strongly coupled, and we cannot solve it in ordinary perturbation theory for small but nonzero $m$. What is actually done in the previous literature is the optimized perturbation; we optimize the renormalization scale $\rho$ so that the coefficient of the $\cos(\varphi)$ potential becomes $O(\rho^{2})$, and we get

$\rho_{\mathrm{optimized}}\sim\left|m\sqrt{\mu}\cos(\theta/2)\right|^{2/3}.$ (15)

This is identified as the mass gap caused by the mass perturbation, and this formula gives the $\theta$-dependence of the lightest meson mass, i.e. $M_{\pi}$. The spectrum of the sine-Gordon model was studied by using the WKB approximation Dashen:1975hd .
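For completeness, the scaling behind (15) follows from a one-line balance of terms, up to $O(1)$ constants: demanding that the coefficient of the $\cos\varphi$ potential in (14) be of order $\rho^{2}$,

```latex
2Cm\cos\tfrac{\theta}{2}\,(\mu\rho)^{1/2}\sim\rho^{2}
\quad\Longrightarrow\quad
\rho^{3/2}\sim m\sqrt{\mu}\,\cos\tfrac{\theta}{2}
\quad\Longrightarrow\quad
\rho_{\mathrm{optimized}}\sim\left|m\sqrt{\mu}\cos(\theta/2)\right|^{2/3}.
```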
Introducing an extra parameter controlling the kinetic term as $\frac{1}{4\pi\beta^{2}}(\partial\varphi)^{2}$, the quantum scaling dimension of $\cos\varphi$ becomes $\Delta=\beta^{2}/2$, so the semiclassical approximation is valid when $\beta^{2}\to 0$. The model has the soliton and antisoliton, and let us denote their mass by $M_{\mathrm{SG}}$. Then, Dashen et al. Dashen:1975hd predicted the masses of soliton-antisoliton bound states as $M_{\mathrm{SG}}^{(n)}=2M_{\mathrm{SG}}\sin\left(\frac{\pi}{2}\frac{n}{4/\beta^{2}-1}\right),$ (16) with $n=1,2,\cdots<(4/\beta^{2}-1)$. Even though it is subtle whether the WKB approximation works at the self-dual point $\beta^{2}=1$, Coleman made an intriguing observation using this semiclassical formula Coleman:1976uz . The nontrivial check of its validity is the recovery of the $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$ symmetry at $\beta^{2}=1$. Substituting $n=1$ with $\beta^{2}=1$ into (16), we get $M_{\mathrm{SG}}^{(1)}=2M_{\mathrm{SG}}\sin\frac{\pi}{6}=M_{\mathrm{SG}}.$ (17) This shows that the lightest soliton-antisoliton bound state has the same mass as the soliton or antisoliton itself. The soliton and antisoliton have $J_{z}=\pm 1$, and the soliton-antisoliton bound state has $J_{z}=0$. The $G$-parity does not act on $\varphi$, so all these states have $G=+1$. Thus, these three states form the isospin triplet $J^{PG}=1^{-+}$ of mass $M_{\mathrm{SG}}$, which is identified as the pion in the Schwinger model, and then $M_{\mathrm{SG}}=M_{\pi}\sim(m\sqrt{\mu}\cos(\theta/2))^{2/3}$. The mass of the second soliton-antisoliton bound state is given by $M_{\mathrm{SG}}^{(2)}=2M_{\mathrm{SG}}\sin\frac{\pi}{3}=\sqrt{3}M_{\mathrm{SG}}.$ (18) This state has $J_{z}=0$, and there is no other state with the same mass. We then identify it as the $\sigma$ meson in the Schwinger model with $J^{PG}=0^{++}$. 
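As a sanity check of these identifications, the WKB formula (16) can be evaluated numerically at the self-dual point $\beta^{2}=1$. Below is a minimal sketch in Python; the function name is ours, and masses are in units of the soliton mass.

```python
import math

def sg_bound_state_mass(n, beta2, M_sol=1.0):
    """WKB mass of the n-th soliton-antisoliton bound state, eq. (16)."""
    return 2.0 * M_sol * math.sin(0.5 * math.pi * n / (4.0 / beta2 - 1.0))

# At the self-dual point beta^2 = 1:
m1 = sg_bound_state_mass(1, 1.0)  # = 2 sin(pi/6) = 1, degenerate with the soliton
m2 = sg_bound_state_mass(2, 1.0)  # = 2 sin(pi/3) = sqrt(3)
print(m1, m2)
```

The first bound state comes out degenerate with the (anti)soliton, completing the isotriplet identified with the pion, while the second reproduces $M_{\mathrm{SG}}^{(2)}=\sqrt{3}M_{\mathrm{SG}}$.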
Thus, the semiclassical method predicts that the masses of the pion and sigma meson satisfy $\displaystyle M_{\sigma}=\sqrt{3}M_{\pi}.$ (19) Importantly, $M_{\sigma}<2M_{\pi}$. Unlike in $4$d QCD, $\sigma$ is a stable particle, not a resonance, because the decay $\sigma\to\pi\pi$ is energetically prohibited. As we discussed above, the low-energy mass spectra can be estimated by bosonization. However, it relies on the optimized perturbation and also on the semiclassical method, and these analyses are not necessarily fully justified. It is still difficult to compute the exact $m$-dependence or to find other states with energies higher than $\mu$. Thus, it is worth studying the mass spectrum of the 2-flavor model by first-principles numerical methods. ## 3 Lattice formulation of the 2-flavor Schwinger model In this section, we explain the Hamiltonian formalism of the 2-flavor Schwinger model and its lattice regularization, generalizing previous research Chakraborty:2020uhf ; Honda:2021aum ; Honda:2021ovk ; Honda:2022edn to $N_{f}=2$. We also define various local and global observables used in the analysis. ### 3.1 Hamiltonian First, we introduce the continuum Hamiltonian of the $N_{f}$-flavor Schwinger model. By introducing a conjugate momentum $\Pi=\frac{1}{g^{2}}\partial_{0}A^{1}+\frac{\theta}{2\pi}$, the Hamiltonian is given by $H=\int dx\,\left\\{\frac{g^{2}}{2}\left(\Pi-\frac{\theta}{2\pi}\right)^{2}+\sum_{f=1}^{N_{f}}\left[-i\bar{\psi}_{f}\gamma^{1}\left(\partial_{1}+iA_{1}\right)\psi_{f}+m\bar{\psi}_{f}\psi_{f}\right]\right\\}.$ (20) In the Hamiltonian formalism, the physical Hilbert space is constrained by the Gauss law condition $\partial_{1}\Pi+\sum_{f=1}^{N_{f}}\psi_{f}^{\dagger}\psi_{f}=0.$ (21) The electric field corresponds to $E:=\dot{A}_{1}=g^{2}(\Pi-\theta/2\pi)$. Thus, the theta angle $\theta$ plays the role of the background electric field. 
In the periodic boundary condition, the Hamiltonians at $\theta$ and $\theta+2\pi$ are unitarily equivalent, $H_{\theta+2\pi}=U^{\dagger}H_{\theta}U$ with $U=\exp(-i\int A_{1}dx)$, which realizes the $2\pi$ periodicity of $\theta$. Next, we consider the lattice regularization of the Hamiltonian. Here we employ the staggered fermion to define fermions on the lattice Kogut:1974ag ; Susskind:1976jm . The staggered fermions $\chi_{f,n}$ with the lattice spacing $a$ represent the discretization of the two-component Dirac fermions $\psi_{f}(x)$ with the lattice spacing $2a$. The single-component fermions $\chi_{f,n}$ at the site $n=0,1,\cdots,N-1$ correspond to each component of $\psi_{f}(x)$ depending on $n$ as (the labels $u$ and $d$ of $\psi_{f}$ denote the upper and lower spinor components, respectively; they have nothing to do with the up and down quarks of QCD) $\psi_{f}(x)=\begin{pmatrix}\psi_{u,f}(x)\\\ \psi_{d,f}(x)\end{pmatrix}\leftrightarrow\frac{1}{\sqrt{2a}}\begin{pmatrix}\chi_{f,2[n/2]}\\\ \chi_{f,2[n/2]+1}\end{pmatrix}.$ (22) The number of staggered fermions for each flavor is equal to $N$, so there are two staggered fermions at each site, one for each flavor. In this work, we set $N$ to be an even number. The gauge field is encoded in the U(1) variables $U_{n}\sim\exp(-iaA^{1}(x))$, defined on the link between the $n$-th and $(n+1)$-th sites, and the conjugate momentum is replaced by $L_{n}\sim-\Pi(x)$, defined on the $n$-th site. The canonical commutation relations are given by $\\{\chi_{f,n}^{\dagger},\,\chi_{f^{\prime},m}\\}=\delta_{ff^{\prime}}\delta_{nm},$ (23) $\\{\chi_{f,n},\,\chi_{f^{\prime},m}\\}=\\{\chi_{f,n}^{\dagger},\,\chi_{f^{\prime},m}^{\dagger}\\}=0,$ (24) $[U_{n},\,L_{m}]=\delta_{nm}U_{n}.$ (25) Note that the roles of the staggered fermion operators depend on the site $n$: $\chi_{f,n}^{\dagger}:\begin{cases}\textrm{creation op. of particle}&n:\textrm{even}\\\ \textrm{annihilation op.
of anti-particle}&n:\textrm{odd}\end{cases},$ (26) $\chi_{f,n}:\begin{cases}\textrm{annihilation op. of particle}&n:\textrm{even}\\\ \textrm{creation op. of anti-particle}&n:\textrm{odd}\end{cases}.$ (27) Thus, the operator $\chi_{f,n}^{\dagger}\chi_{f,n}$ counts the number of particles on the even sites, whereas $\chi_{f,n}\chi_{f,n}^{\dagger}$ counts the number of anti-particles on the odd sites. Considering that the particle has an electric charge of $+1$ and the anti-particle has $-1$, the charge density operator at the site $n$ is given by $\rho_{f,n}=\chi_{f,n}^{\dagger}\chi_{f,n}+\frac{(-1)^{n}-1}{2}=\begin{cases}\chi_{f,n}^{\dagger}\chi_{f,n}&n:\textrm{even}\\\ -\chi_{f,n}\chi_{f,n}^{\dagger}&n:\textrm{odd}\end{cases}.$ (28) In this work, we choose the open boundary condition in order to eliminate the bosonic degrees of freedom having an infinite-dimensional Hilbert space. The Gauss law (21) is also discretized as $L_{n}-L_{n-1}=\sum_{f=1}^{N_{f}}\rho_{f,n},$ (29) where the left-hand side corresponds to the divergence of the electric field and the right-hand side is the charge density (28). We set the explicit form of the (1+1)d gamma matrices as $\gamma^{0}=\sigma^{3}$, $\gamma^{1}=i\sigma^{2}$ and $\gamma^{5}=\gamma^{0}\gamma^{1}=\sigma^{1}$. Using the operators $\chi_{f,n}$, $U_{n}$, and $L_{n}$ introduced above, the lattice Hamiltonian is given by Funcke:2023lli ; Dempsey:2023gib $\displaystyle H$ $\displaystyle=J\sum_{n=0}^{N-2}\left(L_{n}+\frac{\theta}{2\pi}\right)^{2}$ $\displaystyle+\sum_{f=1}^{N_{f}}\left[-iw\sum_{n=0}^{N-2}\left(\chi_{f,n}^{\dagger}U_{n}\chi_{f,n+1}-\chi_{f,n+1}^{\dagger}U_{n}^{\dagger}\chi_{f,n}\right)+m_{\mathrm{lat}}\sum_{n=0}^{N-1}(-1)^{n}\chi_{f,n}^{\dagger}\chi_{f,n}\right],$ (30) where $J=g^{2}a/2$ and $w=1/2a$. Here we replace the mass $m$ of the continuum theory by $m_{\mathrm{lat}}:=m-\frac{N_{f}g^{2}a}{8}$ (31) in the lattice Hamiltonian, following the recent proposal Dempsey:2022nys for eliminating the $O(a)$ correction. 
In the continuum theory, the chiral limit $m=0$ has the continuous chiral symmetry, and it contains $[\mathrm{SU}(N_{f})_{V}/\mathbb{Z}_{N_{f}}]\times(\mathbb{Z}_{N_{f}})_{L}$ as a subgroup. With the above replacement, the lattice theory at $m=0$ maintains the discrete chiral symmetry $\mathbb{Z}_{2}\subset(\mathbb{Z}_{N_{f}})_{L}$ for even $N_{f}$, and this is the point protected by the remnant of the chiral symmetry. By adding up the lattice Gauss law equation (29) from the boundary $n=0$ to the site $n$, we find that $L_{n}$ can be replaced by $\displaystyle L_{n}$ $\displaystyle=L_{-1}+\sum_{f=1}^{N_{f}}\sum_{k=0}^{n}\rho_{f,k}$ $\displaystyle=\sum_{f=1}^{N_{f}}\sum_{k=0}^{n}\chi_{f,k}^{\dagger}\chi_{f,k}+\frac{N_{f}}{2}\left(\frac{(-1)^{n}-1}{2}-n\right),$ (32) where we set $L_{-1}=0$. Furthermore, we can set $U_{n}=1$ since the degrees of freedom of $U_{n}$ can be absorbed by the U(1) phase of $\chi_{f,n}$. Then the lattice Hamiltonian is written only in terms of the fermions as $H=H_{J}+H_{w}+H_{m}$, where the gauge part $H_{J}$ is given by $H_{J}=J\sum_{n=0}^{N-2}\left[\sum_{f=1}^{N_{f}}\sum_{k=0}^{n}\chi_{f,k}^{\dagger}\chi_{f,k}+\frac{N_{f}}{2}\left(\frac{(-1)^{n}-1}{2}-n\right)+\frac{\theta}{2\pi}\right]^{2},$ (33) and the kinetic term $H_{w}$ and the mass term $H_{m}$ of the fermions are $H_{w}=-iw\sum_{f=1}^{N_{f}}\sum_{n=0}^{N-2}\left(\chi_{f,n}^{\dagger}\chi_{f,n+1}-\chi_{f,n+1}^{\dagger}\chi_{f,n}\right),$ (34) $H_{m}=m_{\mathrm{lat}}\sum_{f=1}^{N_{f}}\sum_{n=0}^{N-1}(-1)^{n}\chi_{f,n}^{\dagger}\chi_{f,n}.$ (35) ### 3.2 Map to the spin system Now, we map the Hamiltonian written in terms of the staggered fermions to a spin Hamiltonian. Such a spin Hamiltonian formalism is useful for applying tensor-network methods and quantum computation. The $N_{f}\times N$ degrees of freedom of the staggered fermion $\chi_{f,n}$ can be described by the same number of spin-1/2 degrees of freedom. 
The Hilbert space of such a spin system is given by $\mathcal{H}=\bigotimes_{f=1}^{N_{f}}\bigotimes_{n=0}^{N-1}\mathcal{H}_{f,n},$ (36) where $\mathcal{H}_{f,n}$ is the local Hilbert space of a single spin-1/2 state. A general state $\ket{\Psi}$ in this Hilbert space can be described by a superposition of all possible spin configurations $\bm{s}$, $\ket{\Psi}=\sum_{\bm{s}}\Psi(\bm{s})\ket{\bm{s}},$ (37) $\ket{\bm{s}}\in\left\\{\left.\bigotimes_{f=1}^{N_{f}}\bigotimes_{n=0}^{N-1}\ket{s_{f,n}}_{f,n}\right|\ket{s_{f,n}}_{f,n}=\ket{\uparrow},\ket{\downarrow}\right\\}.$ (38) The spin-up $\ket{\uparrow}$ and spin-down $\ket{\downarrow}$ states are the eigenstates of the Pauli matrix $\sigma^{z}$ with the eigenvalues $+1$ and $-1$, respectively. The map to the spin system can be achieved by the so-called Jordan-Wigner transformation. The fermion operators $\chi_{f,n}$ for the two flavors $f=1,2$ are represented by spin operators as follows: $\displaystyle\chi_{1,n}$ $\displaystyle=\sigma_{1,n}^{-}\prod_{j=0}^{n-1}(-\sigma_{2,j}^{z}\sigma_{1,j}^{z}),$ $\displaystyle\chi_{1,n}^{\dagger}$ $\displaystyle=\sigma_{1,n}^{+}\prod_{j=0}^{n-1}(-\sigma_{2,j}^{z}\sigma_{1,j}^{z}),$ (39) $\displaystyle\chi_{2,n}$ $\displaystyle=\sigma_{2,n}^{-}(-i\sigma_{1,n}^{z})\prod_{j=0}^{n-1}(-\sigma_{2,j}^{z}\sigma_{1,j}^{z}),$ $\displaystyle\chi_{2,n}^{\dagger}$ $\displaystyle=\sigma_{2,n}^{+}(i\sigma_{1,n}^{z})\prod_{j=0}^{n-1}(-\sigma_{2,j}^{z}\sigma_{1,j}^{z}),$ (40) where we define $\sigma_{f,n}^{\pm}=\frac{1}{2}(\sigma_{f,n}^{x}\pm i\sigma_{f,n}^{y}).$ (41) The Pauli matrices $\sigma_{f,n}^{a}$ ($a=x,y,z$) act on the spin $\ket{s_{f,n}}_{f,n}$ at the site $n$ of the flavor $f$. 
They fail to commute only when they act on the same site of the same flavor, so that $\left[\sigma_{f,n}^{a},\,\sigma_{f^{\prime},n^{\prime}}^{b}\right]=2i\delta_{ff^{\prime}}\delta_{nn^{\prime}}\epsilon^{abc}\sigma_{f,n}^{c}.$ (42) We can check that the canonical anti-commutation relations (23) and (24) are satisfied thanks to the properties of the Pauli matrices. Note that this is not the unique translation to a spin system that realizes the anti-commutation relations. Different transformations give different representations of the original Hamiltonian. We choose this transformation since various local operators can be constructed with only a small number of Pauli matrices. The spin representation of the Hamiltonian and the observables defined above are summarized in Appendix A. ### 3.3 Local observables Let us consider the meson operators (7), (9), and (8) on the lattice. Based on the continuum descriptions, it is natural to define the lattice version of these operators as follows: $\pi(n):=PS_{1,n}-PS_{2,n},$ (43) $\eta(n):=PS_{1,n}+PS_{2,n},$ (44) $\sigma(n):=S_{1,n}+S_{2,n}.$ (45) Here $S_{f,n}$ and $PS_{f,n}$ are the scalar and pseudo-scalar operators for the flavor $f=1,2$ on the lattice, respectively. In order to obtain their explicit form, we rewrite the scalar condensate $(\bar{\psi}\psi)_{f}$ in terms of the staggered fermion, so that $\displaystyle(\bar{\psi}\psi)_{f}$ $\displaystyle=(\psi_{u}^{\dagger}\psi_{u}-\psi_{d}^{\dagger}\psi_{d})_{f},$ $\displaystyle=\frac{1}{2a}(-1)^{n}(\chi_{f,n}^{\dagger}\chi_{f,n}-\chi_{f,n+1}^{\dagger}\chi_{f,n+1}).$ (46) Similarly, the pseudo-scalar condensate $(\bar{\psi}\gamma^{5}\psi)_{f}$ is given by $\displaystyle(\bar{\psi}\gamma^{5}\psi)_{f}$ $\displaystyle=(\psi_{u}^{\dagger}\psi_{d}-\psi_{d}^{\dagger}\psi_{u})_{f},$ $\displaystyle=\frac{1}{2a}(-1)^{n}(\chi_{f,n}^{\dagger}\chi_{f,n+1}-\chi_{f,n+1}^{\dagger}\chi_{f,n}).$ (47) These operators have a site-by-site fluctuation due to the staggered fermion. 
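The statement that the Jordan-Wigner map (39)-(40) reproduces the canonical anti-commutation relations (23)-(24) can be verified by brute force on a tiny lattice. The sketch below (Python with NumPy; the helper names are ours) builds the operators as dense matrices for $N=2$ sites and two flavors:

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # sigma^- of eq. (41)

N, NF = 2, 2          # tiny lattice: 2 sites, 2 flavors
M = NF * N            # total spins; spin (f, n) sits at tensor slot 2*n + (f - 1)

def site_op(op, f, n):
    """Embed a single-spin operator at slot (f, n) of the 2^M-dimensional space."""
    mats = [I2] * M
    mats[2 * n + (f - 1)] = op
    out = np.eye(1, dtype=complex)
    for mat in mats:
        out = np.kron(out, mat)
    return out

def chi(f, n):
    """Staggered fermion chi_{f,n} via the Jordan-Wigner map (39)-(40)."""
    string = np.eye(2 ** M, dtype=complex)
    for j in range(n):
        string = string @ (-site_op(sz, 2, j) @ site_op(sz, 1, j))
    if f == 1:
        return site_op(sm, 1, n) @ string
    return site_op(sm, 2, n) @ (-1j * site_op(sz, 1, n)) @ string

# Check eqs. (23) and (24) for all pairs of flavors and sites
for f1 in (1, 2):
    for f2 in (1, 2):
        for n1 in range(N):
            for n2 in range(N):
                a, b = chi(f1, n1), chi(f2, n2)
                delta = float(f1 == f2 and n1 == n2)
                assert np.allclose(a.conj().T @ b + b @ a.conj().T,
                                   delta * np.eye(2 ** M))
                assert np.allclose(a @ b + b @ a, 0)
print("anti-commutation relations verified")
```

Without the extra $\pm i\sigma_{1,n}^{z}$ factors in (40), operators of different flavors on the same site would commute rather than anticommute; these factors are exactly what repairs that.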
Here we define the lattice scalar condensate operator by the two-site average of (46), namely $\displaystyle S_{f}(n)$ $\displaystyle:=\frac{1}{2}\left[(\bar{\psi}\psi)_{f,n-1}+(\bar{\psi}\psi)_{f,n}\right],$ $\displaystyle=\frac{1}{4a}(-1)^{n}(-\chi_{f,n-1}^{\dagger}\chi_{f,n-1}+2\chi_{f,n}^{\dagger}\chi_{f,n}-\chi_{f,n+1}^{\dagger}\chi_{f,n+1}),$ (48) for $n=1,2,\cdots,N-2$. The lattice pseudo-scalar condensate operator is also defined by the two-site average of (47) with a factor $-i$, $\displaystyle PS_{f}(n)$ $\displaystyle:=-\frac{i}{2}\left[(\bar{\psi}\gamma^{5}\psi)_{f,n-1}+(\bar{\psi}\gamma^{5}\psi)_{f,n}\right],$ $\displaystyle=\frac{i}{4a}(-1)^{n}(\chi_{f,n-1}^{\dagger}\chi_{f,n}-\chi_{f,n}^{\dagger}\chi_{f,n-1}-\chi_{f,n}^{\dagger}\chi_{f,n+1}+\chi_{f,n+1}^{\dagger}\chi_{f,n}),$ (49) for $n=1,2,\cdots,N-2$. Note that both $S_{f}(n)$ and $PS_{f}(n)$ are composed of the staggered fermions at the three sites $n$ and $n\pm 1$. ### 3.4 Global observables We will define the quantum-number operators ($J_{z}$, $\bm{J}^{2}$, $C$, and $P$) and the momentum operator, which will be useful to distinguish the eigenstates of the Hamiltonian. These operators can be described by global observables, which act on the whole lattice. First of all, let us focus on the isospin operator (4). We define its lattice version in terms of the staggered fermion. The isospin $J_{z}$ operator counts the number of particles of each flavor with the factor $\pm 1/2$ on even sites and the number of anti-particles with the opposite sign on odd sites. 
Thus, it can be realized by $J_{z}=\frac{1}{2}\sum_{n=0}^{N-1}\left(\chi_{1,n}^{\dagger}\chi_{1,n}-\chi_{2,n}^{\dagger}\chi_{2,n}\right).$ (50) It is convenient to define the isospin $J_{\pm}$ operators by $J_{\pm}=J_{x}\pm iJ_{y}.$ (51) Based on the role of the fermion operators (26) and (27), the $J_{+}$ operator is given by $J_{+}=\sum_{n=0}^{N-1}\chi_{1,n}^{\dagger}\chi_{2,n},$ (52) which transforms an $f=2$ particle to an $f=1$ particle on even sites and an $f=1$ anti-particle to an $f=2$ anti-particle on odd sites. Similarly, the $J_{-}$ operator is given by $J_{-}=\sum_{n=0}^{N-1}\chi_{2,n}^{\dagger}\chi_{1,n},$ (53) which transforms an $f=1$ particle to an $f=2$ particle on even sites and an $f=2$ anti-particle to an $f=1$ anti-particle on odd sites. Then the Casimir operator $\bm{J}^{2}$ can also be defined in terms of the operators above as $\bm{J}^{2}=\frac{1}{2}(J_{+}J_{-}+J_{-}J_{+})+J_{z}^{2}.$ (54) Second, we will consider the charge conjugation and parity operators. For this purpose, let us discuss the description of the particle and anti-particle as a spin state. Applying the Jordan-Wigner transformation, the spin representation of the charge density operator (28) is given by $\rho_{f,n}=\frac{\sigma_{f,n}^{z}+1}{2}+\frac{(-1)^{n}-1}{2}=\begin{cases}(\sigma_{f,n}^{z}+1)/2&n:\textrm{even},\\\ (\sigma_{f,n}^{z}-1)/2&n:\textrm{odd}.\end{cases}$ (55) This operator counts the number of particles with $+1$ on even sites and the number of anti-particles with $-1$ on odd sites. 
We can confirm that the particle is described by the spin-up state $\ket{\uparrow}$ on even sites by taking the expectation value $\bra{\uparrow}\rho_{f,n}\ket{\uparrow}_{f,n}=\begin{cases}1&n:\textrm{even},\\\ 0&n:\textrm{odd}.\end{cases}$ (56) Similarly, we find that the anti-particle is described by the spin-down state $\ket{\downarrow}$ on odd sites as $\bra{\downarrow}\rho_{f,n}\ket{\downarrow}_{f,n}=\begin{cases}0&n:\textrm{even},\\\ -1&n:\textrm{odd}.\end{cases}$ (57) Based on this fact, charge conjugation, namely the exchange of particles and anti-particles, can be performed by exchanging even sites and odd sites. In addition, the spin-up state should be replaced by the spin-down state, and vice versa. These operations can be realized by the 1-site translation of the lattice and the multiplication of $\sigma^{x}$ operators. Thus, we define the charge conjugation operator by Banuls:2013jaa $C:=\prod_{f=1}^{N_{f}}\left(\prod_{n=0}^{N-1}\sigma_{f,n}^{x}\right)\left(\prod_{n=0}^{N-2}(\mathrm{SWAP})_{f;N-2-n,N-1-n}\right),$ (58) where the swap operator is given by $(\mathrm{SWAP})_{f;j,k}=\frac{1}{2}\left(\bm{1}_{f,j}\bm{1}_{f,k}+\sum_{a}\sigma_{f,j}^{a}\sigma_{f,k}^{a}\right),$ (59) using the Pauli matrices. As the name suggests, the swap operator exchanges the states $\ket{s}_{f,j}$ and $\ket{s^{\prime}}_{f,k}$, namely $(\mathrm{SWAP})_{f;j,k}\ket{s}_{f,j}\otimes\ket{s^{\prime}}_{f,k}=\ket{s^{\prime}}_{f,j}\otimes\ket{s}_{f,k}.$ (60) The product of the swap operators in (58) realizes the 1-site translation. The charge conjugation defined in this way satisfies $C^{\dagger}C=1$, but $C^{2}\neq 1$. When we take the periodic boundary condition, $C^{2}=1$ is achieved in the continuum limit, but this is not the case for the open boundary condition. Moreover, the Hamiltonian does not commute with $C$ due to the presence of the boundaries, and we will actually see that the eigenstates of the Hamiltonian give $|\Braket{C}|<1$. 
Therefore, $C$ does not give a good quantum number when we take the staggered-fermion regularization with the open boundary condition. In this study, following the observation of Ref. Banuls:2013jaa , we assume that the sign of $\mathrm{Re}\Braket{C}$ retains the original sign of $C$ for each eigenstate. We discuss this prescription in detail in Appendix B. Next, we define the parity operator. The parity transformation $x\rightarrow-x$ can be achieved by flipping the order of the lattice sites. The site $n\in\\{0,1,\cdots,N-1\\}$ is mapped to the site $n^{\prime}=N-1-n$. However, this operation also exchanges particles and anti-particles since the roles of even sites and odd sites are exchanged when $N$ is even. Thus, an additional operation of 1-site translation is necessary to correct it. We define the parity operator by $\displaystyle P:=\prod_{f=1}^{N_{f}}$ $\displaystyle\left(\prod_{j=0}^{N/2-1}\sigma_{f,2j+1}^{z}\right)$ $\displaystyle\times$ $\displaystyle\left(\prod_{n=0}^{N-2}(\mathrm{SWAP})_{f;N-2-n,N-1-n}\right)\left(\prod_{n=0}^{N/2-1}(\mathrm{SWAP})_{f;n,N-1-n}\right),$ (61) where the products of the swap operators perform the reversal $n\rightarrow n^{\prime}$ and the 1-site translation. (If we implement the reversal $n\rightarrow n^{\prime}$ in this manner, the bond dimension of the MPO grows exponentially with $N$. Thus, in practice, we apply the reversal by transposing all the matrices in the MPS.) The additional factors of $\sigma^{z}$ come from the shift of the staggered phase, which corresponds to $\gamma^{0}$ in the parity transformation of the Dirac fermion $\psi(x)\rightarrow\gamma^{0}\psi(-x)$. As we mentioned for the $C$ operator, the $P$ operator in the open boundary condition does not commute with the Hamiltonian, as it contains the $1$-unit lattice translation. Therefore, we take the same prescription to determine the parity quantum number for each state as in the case of $C$. 
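The building blocks of (58) and (61) are easy to check directly on a few spins. The sketch below (Python with NumPy; a toy 3-spin chain for a single flavor) verifies the swap identity (60) and that a product of nearest-neighbor swaps implements the 1-site translation:

```python
import numpy as np

I2 = np.eye(2)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

# Swap operator on a neighboring pair, eq. (59)
SWAP = 0.5 * (np.kron(I2, I2) + sum(np.kron(s, s) for s in paulis))

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# eq. (60): SWAP |s> x |s'> = |s'> x |s>
assert np.allclose(SWAP @ np.kron(up, down), np.kron(down, up))

def swap_at(j, n_spins):
    """SWAP acting on slots j and j+1 of an n_spins chain."""
    return np.kron(np.kron(np.eye(2 ** j), SWAP), np.eye(2 ** (n_spins - j - 2)))

# Product of swaps as in (58): for 3 spins, SWAP_{1,2} SWAP_{0,1} shifts
# the chain by one site, |s0 s1 s2> -> |s1 s2 s0>
T = swap_at(1, 3) @ swap_at(0, 3)
state = np.kron(np.kron(up, down), down)
assert np.allclose(T @ state, np.kron(np.kron(down, down), up))
print("swap identity and 1-site translation verified")
```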
Finally, the other important quantity is the total momentum operator, which can be used to identify the momentum excitations Banuls:2013jaa . We start with the continuum description of the gauge-invariant operator $K=\sum_{f=1}^{N_{f}}\int dx\,\psi_{f}^{\dagger}(i\partial_{x}-A_{1})\psi_{f},$ (62) which commutes with the continuum Hamiltonian (20) under the periodic boundary condition using the Gauss-law constraint (21). In our case with the open boundary condition, it does not commute with the Hamiltonian since the translational symmetry is explicitly broken. Thus, the expectation value $\Braket{K}$ is no longer a quantum number in the strictest sense. However, we will see that the operator is still useful as an approximate one to investigate the mass spectrum of the model. Let us consider its lattice version. Here we set $A_{1}(x)=0$ since we fix the gauge $U_{n}=1$ in our setup. The combination $\psi_{f}^{\dagger}\partial_{x}\psi_{f}$ of the Dirac fermion corresponds to $\psi_{f}^{\dagger}\partial_{x}\psi_{f}=(\psi_{u}^{\dagger}\partial_{x}\psi_{u}+\psi_{d}^{\dagger}\partial_{x}\psi_{d})_{f}=\frac{1}{2a}\chi_{f,n}^{\dagger}(\chi_{f,n+2}-\chi_{f,n}),$ (63) in terms of the staggered fermion. There is another possible combination $-(\partial_{x}\psi_{f}^{\dagger})\psi_{f}=-\frac{1}{2a}(\chi_{f,n+2}^{\dagger}-\chi_{f,n}^{\dagger})\chi_{f,n},$ (64) given by integration by parts, ignoring the boundary term. 
Then we define the total momentum on the lattice as a Hermitian operator by taking the symmetric combination $\displaystyle K$ $\displaystyle:=\frac{i}{2}\sum_{f=1}^{N_{f}}\sum_{n=0}^{N-3}\frac{1}{2a}\left[\chi_{f,n}^{\dagger}(\chi_{f,n+2}-\chi_{f,n})-(\chi_{f,n+2}^{\dagger}-\chi_{f,n}^{\dagger})\chi_{f,n}\right],$ $\displaystyle=\frac{i}{4a}\sum_{f=1}^{N_{f}}\sum_{n=1}^{N-2}(\chi_{f,n-1}^{\dagger}\chi_{f,n+1}-\chi_{f,n+1}^{\dagger}\chi_{f,n-1}).$ (65) This operator does not commute with the terms $H_{w}$ (34) and $H_{J}$ (33) of the lattice Hamiltonian due to the open boundary. We also note that the latter commutator $[K,H_{J}]$ has an $O(a)$ violation effect even in the periodic boundary condition. ## 4 Calculation method and the simulation setup We employ the density matrix renormalization group (DMRG) White:1992zz ; White:1993zza ; Schollw_ck_2005 ; Schollw_ck_2011 to study the spin Hamiltonian of the 2-flavor Schwinger model after the Jordan-Wigner transformation, whose explicit form is given by (95). The DMRG is known as an efficient method to study (1+1)d gapped spin systems and has been developed mainly in the field of condensed matter physics. We utilized the C++ library ITensor itensor to perform the tensor-network calculations in this work. Let us briefly explain the basic idea of DMRG to obtain the ground state and excited states, and then explain the details of the parameter settings. ### 4.1 Quick review of DMRG In spin systems, any wave function $\ket{\Psi}$ can be expressed in the form of a matrix product state (MPS), $\ket{\Psi}=\sum_{i_{1},\ldots,i_{N}=1}^{2}\tr[A_{1}^{(i_{1})}\cdots A_{N}^{(i_{N})}]\ket{i_{1}\ldots i_{N}},$ (66) by repeating the singular-value decomposition (SVD). Here, $i_{n}=1,2$ denotes the spin degrees of freedom at the $n$-th site, $A_{n}^{(i_{n})}$ denotes a $D\times D$ matrix, and this size $D$ is called the bond dimension. The upper bound for the entanglement entropy of $\ket{\Psi}$ is given by $\ln D$. 
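The repeated-SVD construction behind (66) can be sketched in a few lines (Python with NumPy; a random 6-spin state, kept exact with no truncation of singular values):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                    # number of spins (toy size)
psi = rng.normal(size=2 ** N) + 1j * rng.normal(size=2 ** N)
psi /= np.linalg.norm(psi)

# Split off one site at a time by SVD, keeping all singular values
tensors, rest = [], psi.reshape(1, -1)   # rest: (D_left, 2^{remaining sites})
for n in range(N - 1):
    D_left = rest.shape[0]
    U, S, Vh = np.linalg.svd(rest.reshape(D_left * 2, -1), full_matrices=False)
    tensors.append(U.reshape(D_left, 2, -1))   # site tensor A_n
    rest = np.diag(S) @ Vh
tensors.append(rest.reshape(rest.shape[0], 2, 1))

# Contract the MPS back into a full vector and compare
vec = tensors[0].reshape(2, -1)
for A in tensors[1:]:
    D = A.shape[0]
    vec = vec.reshape(-1, D) @ A.reshape(D, -1)
assert np.allclose(vec.reshape(-1), psi)

# For a generic state the bond dimension grows toward the middle of the chain,
# consistent with the entanglement bound ln D
print("bond dimensions:", [A.shape[0] for A in tensors])
```

Truncating the smaller singular values at each such step is precisely the low-rank approximation that the DMRG cutoff controls.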
Therefore, the MPS provides a useful tool for studying many-body states with low entanglement entropy, such as the ground state of $(1+1)$d gapped systems Stoudenmire_2012 ; Wall_2012 . For the 2-flavor Schwinger model, we arrange the site index $n$ and the flavor index $f$ on the 1d lattice with a single index to apply DMRG. The ordering of the indices is chosen so that the behavior of the entanglement entropy is reproduced appropriately with a reasonable bond dimension. This point is discussed in Appendix C. The DMRG is a variational algorithm based on the MPS. In each step of the algorithm, the matrices are updated to decrease the energy $E=\Braket{\Psi}{H}{\Psi}$ as a cost function. In addition, we perform the low-rank approximation, and thus the smaller singular values are discarded, which amounts to an error $\Delta$. We determine the bond dimension by setting the maximal bond dimension and also the cutoff parameter $\varepsilon$ on the error so that $\Delta\leq\varepsilon$. Smaller $\varepsilon$ gives a better approximation, but it also requires a larger bond dimension and increases the computational costs. We can also efficiently calculate the expectation values or correlation functions of local operators by rewriting those operators in the form of matrix product operators (MPOs) and then taking contractions with the ground state $\ket{\Psi}$. We can use DMRG to obtain the low-energy excited states in a recursive way. Assume that we have already found the energy eigenstates $\ket{\Psi_{k^{\prime}}}$ with $k^{\prime}=0,1,\ldots,k-1$ from below. Then, we apply the same technique to find the $k$-th state $\ket{\Psi_{k}}$ by changing the Hamiltonian for the cost function to $H_{k}=H+W\sum_{k^{\prime}=0}^{k-1}\ket{\Psi_{k^{\prime}}}\bra{\Psi_{k^{\prime}}},$ (67) where $W>0$ is a weight to impose the orthogonality. We can generate the excited states from the ground state to any level step by step. 
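The recursive use of (67) can be illustrated on a small matrix, where the variational minimization can be done exactly (a toy sketch in Python; a random $8\times 8$ symmetric matrix plays the role of $H$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2                 # toy Hermitian "Hamiltonian"
evals, _ = np.linalg.eigh(H)      # reference spectrum

# Repeatedly take the ground state of H_k = H + W sum_{k'<k} |psi_k'><psi_k'|,
# eq. (67); once W exceeds the relevant level spacings, the k-th iteration
# returns the k-th level of the original H
W = 100.0
Hk = H.copy()
found = []
for k in range(3):
    _, v = np.linalg.eigh(Hk)
    gs = v[:, 0]
    found.append(gs @ H @ gs)     # energy measured with the original H
    Hk = Hk + W * np.outer(gs, gs)

assert np.allclose(found, evals[:3])
print("lowest three levels recovered")
```

The penalty shifts each already-found eigenvector up by $W$ while leaving the rest of the spectrum untouched, which is why the orthogonality constraint is effectively enforced.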
### 4.2 Simulation setup Let us explain our parameter setup for the ITensor itensor calculations. The gauge coupling $g$ has mass dimension $1$ in $1+1$d QED, and thus we can measure the energy scale in units of $g$ by setting $g=1$. In this work, we always set $g=1$ and the fermion mass $m=0.1$, so the photon mass is $\mu=\sqrt{\frac{2}{\pi}}\simeq 0.8$. The lattice fermion mass (31) becomes $m_{\mathrm{lat}}=0.1-\frac{a}{4}$. The theta angle is normally set to $\theta=0$, except when measuring the one-point function of the pion at $\theta=2\pi$. For the correlation-function scheme and the one-point-function scheme, we use the lattice size of $N=160$. The lattice spacing is set to $a\approx 0.25$ so that the physical size is $L=a(N-1)=39.8$. The number of DMRG steps, called sweeps, is set to $N_{\mathrm{sweep}}=20$. We generate the ground state for four different values of the cutoff parameter: $\varepsilon=10^{-10}$, $10^{-12}$, $10^{-14}$, and $10^{-16}$. To characterize the bond dimension of the MPS, we focus on the largest number of nonzero singular values, which we call the effective bond dimension, denoted as $D_{\mathrm{eff}}$. In our computations, we set the maximal bond dimension large enough so that $D_{\mathrm{eff}}$ is solely controlled by the cutoff $\varepsilon$ for the above physical setup. We observe $D_{\mathrm{eff}}$ to be about 400, 800, 1600, and 2800 for the respective values of $\varepsilon$ above. For the dispersion-relation scheme, we generate many excited states up to $k=23$, which requires a large computational cost. Therefore, we choose a smaller lattice size of $L=19.8$ with $N=100$ and $a=0.2$. The excited states are generated with a cutoff of $\varepsilon=10^{-10}$ and a weight parameter of $W=10$. To achieve better convergence of the higher states, we increase the number of sweeps to $N_{\mathrm{sweep}}=50$. The bond dimension is about 500 for the ground state, while it is at most 2300 for the excited states. 
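For reference, the derived quantities of this setup follow directly from the parameters quoted above (a minimal sketch; all numbers in units of $g=1$):

```python
import math

g, m, N_f = 1.0, 0.1, 2
N, L = 160, 39.8

a = L / (N - 1)                       # lattice spacing, approximately 0.25
mu = math.sqrt(2.0 * g**2 / math.pi)  # boson mass mu = sqrt(2 g^2 / pi) ~ 0.8
m_lat = m - N_f * g**2 * a / 8.0      # mass shift of eq. (31): 0.1 - a/4

print(round(a, 4), round(mu, 3), round(m_lat, 4))
```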
As an initial state of the DMRG, we choose the Néel state, which is a direct product of the spin-down states on even sites and the spin-up states on odd sites, $\Ket{\mathrm{N\acute{e}el}}=\bigotimes_{f=1}^{N_{f}}\ket{\downarrow}_{f,0}\ket{\uparrow}_{f,1}\cdots\ket{\downarrow}_{f,N-2}\ket{\uparrow}_{f,N-1}.$ (68) Based on (56) and (57), the Néel state is regarded as a zero-particle state. We also impose the charge conservation condition during the DMRG, so that the MPS satisfies the condition $Q=0$, where $Q=\sum_{f=1}^{N_{f}}\sum_{n=0}^{N-1}\rho_{f,n}=\frac{1}{2}\sum_{f=1}^{N_{f}}\sum_{n=0}^{N-1}\sigma_{f,n}^{z}.$ (69) We note that the Gauss law with the usual open boundary on both sides requires $Q=0$ on the physical states. ## 5 Simulation results In this section, we explain our numerical results for the meson spectrum of the $2$-flavor Schwinger model at $\theta=0$. We apply three distinct methods in our computations of the meson spectrum: * • the correlation-function scheme, * • the one-point-function scheme, and * • the dispersion-relation scheme. Each method has its own pros and cons, and we are going to discuss them. We will see that all these schemes give consistent results. ### 5.1 Correlation-function scheme In relativistic quantum field theories, the Hilbert space plays only a secondary role, and we are supposed to reconstruct all the physical information from the correlation functions. In the conventional Euclidean lattice gauge theory, one usually follows this dogma, and the mass spectrum is obtained from the correlation function in the imaginary-time direction. We can take the same approach also in the Hamiltonian formalism by measuring the spatial correlation function at the distance $r=|x-y|$. First, let us work on the pion, and we consider the equal-time spatial correlation function, $C_{\pi}(r)=\Braket{\pi(x)\pi(y)},$ (70) where $\pi(x)$ denotes the operator defined by (43) with $x=na$. 
In order to evade the boundary effect as much as possible, we compute $C_{\pi}(r)$ by changing $x$ and $y$ symmetrically as $x=(L-r)/2$ and $y=(L+r)/2$, and the range of $r$ is restricted to $0\leq r\leq L/2$. The results are shown in the left panel of Fig. 1 in the logarithmic scale, and the pion mass can be extracted from the exponential decay of $C_{\pi}(r)$. Here, the data with different colors represent the different values of the cutoff parameter $\varepsilon$. Figure 1: (Left) The correlation function of the pion $\ln|\Braket{\pi(x)\pi(y)}|$ is plotted against the distance $r=|x-y|$ for various values of $\varepsilon$. The number of lattice sites is $N=160$ and the lattice spacing $a$ is determined so that $L=a(N-1)=39.8$. (Right) The effective mass of the pion $M_{\pi,\mathrm{eff}}(r)$ (3-point average) calculated from the correlation function in the left panel is plotted against $r$. It is convenient to use the so-called effective mass defined by $\tilde{M}_{\pi,\mathrm{eff}}(r)=-\frac{1}{2a}\log\frac{C_{\pi}(r+2a)}{C_{\pi}(r)},$ (71) where $2a$ comes from the step size of changing $r$. We further take the 3-point average of the effective mass $M_{\pi,\mathrm{eff}}(r)=\frac{1}{4}\tilde{M}_{\pi,\mathrm{eff}}(r-2a)+\frac{1}{2}\tilde{M}_{\pi,\mathrm{eff}}(r)+\frac{1}{4}\tilde{M}_{\pi,\mathrm{eff}}(r+2a)$ (72) to suppress the remaining oscillation caused by the staggered fermion. The result for $M_{\pi,\mathrm{eff}}(r)$ is shown in the right panel of Fig. 1. One might be tempted to think that the pion mass corresponds to the plateau value of $M_{\pi,\mathrm{eff}}(r)$, and the result with the DMRG cutoff $\varepsilon=10^{-10}$ indeed seems to become almost exactly constant for $r\gtrsim 10$. However, this is a fake plateau due to the low-rank approximation. 
The point is that the leading asymptotic behavior of the spatial correlator is not purely an exponential decay; it takes the Yukawa-type form asymptotically as $r\to\infty$, $C_{\pi}(r)\sim\frac{1}{r^{\alpha}}\exp(-M_{\pi}r).$ (73) We actually have $\alpha=1/2$ for the $(1+1)$d free massive boson, and we shall discuss the detailed analysis in Appendix D in the case of the $1$-flavor Schwinger model. As a result, the effective mass for the Yukawa-type correlation function is given by $M_{\pi,\mathrm{eff}}(r)=-\frac{d}{dr}\log C_{\pi}(r)\sim\frac{\alpha}{r}+M_{\pi},$ (74) and there must be an additional $O(1/r)$ contribution on top of the actual mass $M_{\pi}$. Figure 2: The effective mass of the pion $M_{\pi,\mathrm{eff}}(r)$ is plotted against $1/r$. The data points for $\varepsilon=10^{-16}$ are fitted by $\alpha/r+M$ inside the region $0.075\leq 1/r\leq 0.15$. The fitting result is depicted by the shaded band with systematic error. Motivated by this fact, we plot $M_{\pi,\mathrm{eff}}(r)$ against $1/r$ in Fig. 2. We can see that the behavior of the effective mass strongly depends on the cutoff $\varepsilon$, especially when $r$ is large. When $\varepsilon$ is not small enough, we observe the saturation of $M_{\pi,\mathrm{eff}}(r)$, and then the $\alpha/r$ term seems to be absent. We note that the low-rank approximation of the DMRG is similar to the approximation of the transfer matrix by a finite matrix. Thus, $C_{\pi}(r)$ in the DMRG is approximated by a sum of purely exponential functions, and we need sufficiently large bond dimensions to reproduce the non-exponential corrections, such as $1/r^{\alpha}$. In fact, we can observe in Fig. 2 the development of the $1/r$ behavior in $M_{\pi,\mathrm{eff}}(r)$ for large $r$ as $\varepsilon$ is made sufficiently small, i.e., as the bond dimension is made sufficiently large. 
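This $\alpha/r$ extrapolation is easy to reproduce on synthetic data. The sketch below (Python with NumPy; the parameter values are invented to mimic the pion, not taken from the actual DMRG data) generates a Yukawa-type correlator (73), computes the effective mass (71), and fits $\alpha/r+M$ in a window of $1/r$:

```python
import numpy as np

# Synthetic Yukawa-type correlator C(r) = exp(-M r) / r^alpha, eq. (73),
# sampled with the step 2a of the staggered lattice
M_true, alpha_true, a = 0.431, 0.5, 0.25
r = np.arange(2.0, 20.0, 2 * a)
C = np.exp(-M_true * r) / r**alpha_true

# Effective mass, eq. (71) (without the 3-point average of eq. (72))
M_eff = -np.log(C[1:] / C[:-1]) / (2 * a)
x = 1.0 / r[:-1]

# Linear fit M_eff ~ alpha/r + M inside a window of 1/r, as for Fig. 2
mask = (x >= 0.05) & (x <= 0.15)
alpha_fit, M_fit = np.polyfit(x[mask], M_eff[mask], 1)
print(round(M_fit, 3), round(alpha_fit, 3))
```

The fitted intercept recovers the input mass at the sub-percent level, while a naive plateau reading of $M_{\mathrm{eff}}$ would overshoot it by the $\alpha/r$ term.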
We estimate the mass $M_{\pi}$ by the linear extrapolation $1/r\rightarrow 0$ of the result for the largest bond dimension with $\varepsilon=10^{-16}$, which is performed by fitting the data points with $\alpha/r+M_{\pi}$. To evaluate the systematic error from the uncertainty of the fitting range, we repeat the fit for many choices of the fitting range inside the region $0.075\leq 1/r\leq 0.15$, and we obtain the probability distribution of the fitting results. The best-fit value and its error are estimated from the position and the width of the peak, respectively. We thus obtain $M_{\pi}=0.431(1),$ (75) with $\alpha=0.477(9)$, and the fitting lines are drawn as the purple shadow in Fig. 2. Figure 3: The effective mass of the sigma meson $M_{\sigma,\mathrm{eff}}(r)$ (left) and of the eta meson $M_{\eta,\mathrm{eff}}(r)$ (right). Next, we perform similar analyses for the sigma meson (45) and the eta meson (44). Since these are isospin singlets, their one-point functions are not zero, and we subtract the disconnected parts from the correlation functions, $\displaystyle C_{\sigma}(r)$ $\displaystyle=\langle\sigma(x)\sigma(y)\rangle-\langle\sigma(x)\rangle\langle\sigma(y)\rangle,$ (76) $\displaystyle C_{\eta}(r)$ $\displaystyle=\langle\eta(x)\eta(y)\rangle-\langle\eta(x)\rangle\langle\eta(y)\rangle,$ (77) with $x=(L-r)/2$, $y=(L+r)/2$. We then compute the 3-point averages of the effective masses, $M_{\sigma,\mathrm{eff}}(r)$ and $M_{\eta,\mathrm{eff}}(r)$, which are shown in Fig. 3. Also in these cases, the asymptotic behavior changes with $\varepsilon$. We plot the effective masses of the sigma and eta mesons against $1/r$ in Fig. 4 to see the asymptotic behavior. As expected, they approach the $\propto 1/r$ behavior for smaller $\varepsilon$. We fit the data for $\varepsilon=10^{-16}$ by $\alpha/r+M$ inside the region $0.075\leq 1/r\leq 0.15$ and estimate the systematic error.
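Since the fit model $\alpha/r+M$ is linear in the variable $u=1/r$, each fit is a straight-line fit, and scanning the fit window is cheap. A sketch of this window-scanning error estimate on synthetic data (the values of $\alpha$, $M$, and the noise level are fabricated; the median and standard deviation below are crude stand-ins for the peak position and width of the distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.linspace(0.075, 0.15, 31)            # u = 1/r inside the allowed region
alpha_true, M_true = 0.5, 0.431             # fabricated "truth"
meff = alpha_true * u + M_true + rng.normal(0.0, 1e-4, u.size)

# Fit every sub-window with at least 10 points and record the extrapolated mass
n_min = 10
masses = []
for i in range(u.size - n_min + 1):
    for j in range(i + n_min, u.size + 1):
        slope, intercept = np.polyfit(u[i:j], meff[i:j], 1)
        masses.append(intercept)            # value at 1/r -> 0, i.e. the mass M

masses = np.asarray(masses)
M_best, M_err = np.median(masses), masses.std()
```

The spread of `masses` across windows quantifies the systematic uncertainty from the choice of fitting range, on top of the statistical error of any single fit.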
We then obtain $M_{\sigma}=0.722(6),$ (78) with $\alpha=0.83(5)$ for the sigma meson, and $M_{\eta}=0.899(2),$ (79) with $\alpha=0.51(2)$ for the eta meson. It is notable that $\alpha_{\sigma}\sim 0.8$ deviates rather strongly from the free-boson result, $\alpha=1/2$, which may suggest that the sigma meson has a nontrivial dispersion relation even at small momentum. Figure 4: The effective masses of the sigma meson $M_{\sigma,\mathrm{eff}}(r)$ (left) and of the eta meson $M_{\eta,\mathrm{eff}}(r)$ (right) are plotted against $1/r$. The data points for $\varepsilon=10^{-16}$ are fitted by $\alpha/r+M$ inside the range $0.075\leq 1/r\leq 0.15$. The fitting results are depicted by the shaded bands with systematic errors. Finally, we summarize the masses of the three mesons measured by the correlation functions: $\begin{array}[]{c|c|c|c}&\text{pion}&\text{sigma}&\text{eta}\\\ \hline\cr\text{mass}/g&\,\,0.431(1)&\,\,0.722(6)&\,\,0.899(2)\end{array}$ (80) The numerical results are qualitatively consistent with the analytic result from bosonization, $M_{\pi}<M_{\sigma}<M_{\eta}$. We also find $M_{\sigma}/M_{\pi}=1.68(2),$ (81) which is close to the prediction of the sine-Gordon model, Eq. (19). ### 5.2 One-point-function scheme We consider an alternative way to obtain the mass spectrum without using the two-point correlation functions. Let us recall that we are taking the open boundary condition, so we can use the boundaries as sources of excitations above the thermodynamic ground state. The boundary effect decays exponentially in gapped systems, and thus the one-point function of a local operator $\mathcal{O}(x)$ should behave as $\langle\mathcal{O}\rangle+C^{\prime}e^{-M_{\mathcal{O}}x}$ as a function of the distance $x=an$ from the boundary. Here, $\langle\mathcal{O}\rangle$ gives the vacuum expectation value in the thermodynamic limit, and $M_{\mathcal{O}}$ in the exponent gives the mass of the lightest particle with the same quantum numbers as $\mathcal{O}(x)$.
In the context of condensed-matter physics, it is known that the correlation function can be obtained from the Friedel oscillation, which is induced by a boundary effect or a local external field PhysRevB.54.13495 ; SHIBATA19971024 . We note that the $x$-dependence in this method takes the purely exponential form $e^{-Mx}$ as the leading behavior for $x\to\infty$. This can be easily understood by considering the path integral and a $\pi/2$ rotation of Euclidean spacetime. Then, the boundary condition sits at a constant imaginary time and defines a state $\ket{\mathrm{Bdry}}$ with zero momentum. Thus, the leading contribution to the imaginary-time correlation function $\bra{\mathrm{Vac}}Oe^{-H|x|}\ket{\mathrm{Bdry}}$ should come from the lightest particle with the zero-momentum projection, giving $e^{-Mx}$. This feature is well suited to the low-rank approximation of the DMRG. #### 5.2.1 The one-point functions of $\sigma$ and $\eta$ at $\theta=0$ At $\theta=0$, the boundary condition turns out to be completely invariant under the isospin rotation, and thus the boundary state $\ket{\mathrm{Bdry}}$ does not produce one-pion states. Therefore, let us here focus on the iso-singlet particles, $\sigma$ and $\eta$; we will come back to the pions later. First, we discuss the eta meson, as it turns out to be the simplest case. Since the $G$ parity is not spontaneously broken, we must have $\langle\eta\rangle=0$ in the thermodynamic limit. However, the staggered fermion realizes the $G$ parity (or charge conjugation) as a one-unit lattice translation, and thus the open boundary condition violates the $G$ parity. Therefore, the boundary state can be a source of the eta meson, and we evaluate the one-point function $\Braket{\eta(x)}$ of the eta meson operator (44) in the range $0<x\leq L/2$. The result is shown in Fig. 5. The cutoff parameter is varied from $\varepsilon=10^{-10}$ to $10^{-16}$. The one-point function decays exponentially with $x$ as expected.
Thus, we fit the data points of $\ln|\Braket{\eta(x)}|$ by $-M_{\eta}x+C$ in the fitting range $7\leq x\leq 13$, and the result is $M_{\eta}=0.9014(1),$ (82) with $C=-1.096(1)$ for the smallest cutoff $\varepsilon=10^{-16}$. The corresponding fitting curve is shown in Fig. 5 with the purple line. In this case, we also find that the results for the other values of $\varepsilon$ are consistent within the fitting error. Thus, unlike the case of the correlation functions, no cutoff dependence appears; we suspect this is because an MPS can efficiently express a purely exponential decay. Figure 5: The one-point function $\ln|\Braket{\eta(x)}|$ of the eta meson is plotted against $x=an$ with $n=1,\cdots,N/2-1$ for various values of $\varepsilon$. The number of lattice sites is $N=160$ and the lattice spacing $a$ is determined so that $L=a(N-1)=39.8$. The result of fitting by $-M_{\eta}x+C$ for $\varepsilon=10^{-16}$ is also plotted by the solid line inside the range and by the broken line outside. Next, we evaluate the one-point function $\Braket{\sigma(x)}$ of the sigma meson (45) for $0<x\leq L/2$. We note that $\sigma$ has the same quantum numbers as the vacuum, so $\Braket{\sigma(x)}$ is nonzero also in the bulk. It behaves as $e^{-Mx+C}+A$ with a constant shift $A$, so we subtract the value $\Braket{\sigma(L/2)}$ at the center $x=L/2$ of the lattice from $\Braket{\sigma(x)}$ to remove the constant. The result is shown in Fig. 6, which indicates the expected exponential decay. We fit the data points of $\ln|\Braket{\sigma(x)-\sigma(L/2)}|$ by $-M_{\sigma}x+C$ in the range $7\leq x\leq 13$, and the best-fit parameters are $M_{\sigma}=0.761(2),$ (83) with $C=-2.71(2)$, which are independent of the value of $\varepsilon$. The result of the fit for $\varepsilon=10^{-16}$ is shown in Fig. 6 with the purple line.
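The extraction from a one-point function is a straight-line fit of the logarithm; the sigma case, including the subtraction of the bulk constant, can be sketched as follows (the decay constants and the bulk value are fabricated numbers, not our measured data):

```python
import numpy as np

a, L = 0.25, 39.8
x = a * np.arange(1, 80)                   # distances from the boundary, x < L/2
M_true, C_true, A = 0.761, -2.71, 0.05     # fabricated decay, offset, bulk value

# One-point function with a constant bulk contribution: <sigma(x)> = e^{-Mx+C} + A
sigma = np.exp(-M_true * x + C_true) + A

# Subtract the value at the center x = L/2 to remove the constant, then fit the log
sigma_sub = sigma - (np.exp(-M_true * (L / 2) + C_true) + A)
mask = (x >= 7) & (x <= 13)                # fitting range used in the text
slope, intercept = np.polyfit(x[mask], np.log(np.abs(sigma_sub[mask])), 1)
M_fit = -slope
```

The subtraction leaves a small contamination $e^{-ML/2+C}$ in place of the true constant, but inside the fit window this shifts the extracted mass only at the per-mille level.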
Figure 6: The one-point function $\ln|\Braket{\sigma(x)-\sigma(L/2)}|$ of the sigma meson is plotted against $x=an$ with $n=1,\cdots,N/2-1$ for various values of $\varepsilon$. The value at $x=L/2$ is subtracted from $\Braket{\sigma(x)}$ to eliminate the constant shift in the bulk. The number of lattice sites is $N=160$ and the lattice spacing $a$ is determined so that $L=a(N-1)=39.8$. The result of fitting by $-M_{\sigma}x+C$ is also plotted by the solid line inside the range and by the broken line outside. #### 5.2.2 The one-point functions of $\pi_{3}$ at $\theta=2\pi$ Let us now come back to the issue of pions. As we have argued, the boundary state at $\theta=0$ is neutral under the isospin rotation; thus it does not produce one-pion states, and we have $\langle\pi(x)\rangle=0$ for all $x$. Therefore, we need to somehow create a boundary state that transforms nontrivially under the isospin rotation to study the pions within the one-point-function scheme. In this study, we decided to use one of the ground states at $\theta=2\pi$ for this purpose. Since the Hamiltonians at $\theta=0$ and $\theta=2\pi$ are unitarily equivalent under the periodic boundary condition, the bulk properties are exactly the same at $\theta=0$ and $2\pi$. As we have discussed in Section 2.2, the ground state at $\theta=2\pi$ is a nontrivial SPT state protected by the isospin $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$ symmetry, and thus the boundary states with the open boundary condition carry isospin $1/2$. This boundary charge can be a source of pions, so that $\Braket{\pi(x)}$ becomes nonzero. Regarding the computational cost, it turns out that the bond dimensions of the MPS are almost the same at $\theta=0$ and $\theta=2\pi$ when the system size is large enough. Therefore, we can obtain the ground state at $\theta=2\pi$ as easily as that at $\theta=0$.
We, however, observe that the bond dimension at $\theta=2\pi$ increases significantly if the system size is not large enough, and we suspect the reason is as follows. At $\theta=2\pi$, there is a $4$-fold degeneracy due to the boundary degrees of freedom, but the states split into the singlet and the triplet with an energy splitting $\sim e^{-M_{\pi}L}$. That is, the true ground state at finite $L$ has an extra Bell pair between the boundary isospin-$1/2$ states, which adds $\ln 2$ to the entanglement entropy. When we cut the system at $x=L/2$, this extra $\ln 2$ must be carried by a large number of small singular values, and thus the bond dimension becomes quite large just to create the Bell pair between the boundaries. If $L$ is large enough, the energy gain from creating the Bell pair becomes negligible, and the DMRG in practice produces one of the ground states with disentangled boundary states. Thus, the computational cost becomes almost the same as that for the trivial state at $\theta=0$. Let us now evaluate the one-point function $\Braket{\pi(x)}$ of the pion (43) for $0<x\leq L/2$ using the ground state at $\theta=2\pi$. The result is shown in Fig. 7. We again find the exponential decay, and thus fit the data points of $\ln|\Braket{\pi(x)}|$ by $-M_{\pi}x+C$ in the range $7\leq x\leq 13$. The result is $M_{\pi}=0.4175(9),$ (84) with $C=0.203(9)$, which do not depend on the cutoff $\varepsilon$, as before. The fitting result for $\varepsilon=10^{-16}$ is shown in Fig. 7 with the purple line. Figure 7: The one-point function $\ln|\Braket{\pi(x)}|$ of the pion is plotted against $x=an$ with $n=1,\cdots,N/2-1$ for various values of $\varepsilon$. The number of lattice sites is $N=160$ and the lattice spacing $a$ is determined so that $L=a(N-1)=39.8$. We set $\theta=2\pi$ in order to induce the boundary charges, which make $\Braket{\pi(x)}$ nonzero.
The result of fitting by $-M_{\pi}x+C$ is also plotted by the solid line inside the range and by the broken line outside. Let us summarize the meson masses obtained by the one-point-function scheme: $\begin{array}[]{c|c|c|c}&\text{pion}&\text{sigma}&\text{eta}\\\ \hline\cr\text{mass}/g&\,\,0.4175(9)&\,\,0.761(2)&\,\,0.9014(1)\end{array}$ (85) The order of the three meson masses is consistent with the analytic prediction. We also find $M_{\sigma}/M_{\pi}\simeq 1.821(6),$ (86) which is still close to the WKB prediction, $\sqrt{3}$, with a 5% deviation. A significant feature of this method is that the results do not depend on the cutoff parameter $\varepsilon$ as long as it is sufficiently small. Therefore, the systematic error from the cutoff is expected to be small enough. We do not need to increase the bond dimension very much, unlike the correlation-function method. ### 5.3 Dispersion-relation scheme So far we have studied the mass spectrum by using local observables of the ground state, and these methods are applicable both in the path-integral and the Hamiltonian formalisms. As the third method for computing the mass spectrum, let us take a different approach that is specific to the Hamiltonian formalism: we compute the excited states as explained in Section 4.1, and then determine the mass spectrum from the dispersion relation. The low-lying excited states correspond to one-particle excitations. For example, the zero-momentum mode of the lightest meson, namely the pion, is expected to appear as the first excited state. We can also obtain states with nonzero momentum $K$, and we can fit the data with the dispersion relation $\Delta E\simeq\sqrt{K^{2}+M_{\pi}^{2}}$ to obtain the pion mass. As we go to higher excited states, we encounter one-particle states of the sigma and eta mesons. They can be distinguished by measuring quantum numbers, such as the isospin and $G$-parity.
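After squaring, the dispersion-relation fit is also linear, since $\Delta E^{2}=b^{2}\Delta K^{2}+M^{2}$; a sketch with fabricated data points:

```python
import numpy as np

M_true, b_true = 0.426, 1.0                  # fabricated mass and "speed of light"
dK2 = np.array([0.5, 1.5, 2.8, 4.1, 5.9])    # toy momentum-squared gaps
dE = np.sqrt(b_true**2 * dK2 + M_true**2)    # exact toy dispersion data

# Straight-line fit of dE^2 against dK^2: slope = b^2, intercept = M^2
slope, intercept = np.polyfit(dK2, dE**2, 1)
M_fit, b_fit = np.sqrt(intercept), np.sqrt(slope)
```

In practice one may instead minimize against $\Delta E$ directly with a nonlinear fitter, but on exact data the two procedures agree.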
Thus, we can compute the mass spectrum from the dispersion relation by generating the excited states. We note that our computation is done on a finite open interval, and thus the momentum is not a good quantum number. Also, there may be a nontrivial contribution to the excitation energy from the boundaries. We neglect those subtleties in this work but, perhaps surprisingly, the numerical results turn out to be almost consistent with those of the previous two methods. We generated the MPS up to the 23rd excited state at $\theta=0$ with the small physical volume $L=19.8$. The energy gap $\Delta E_{k}=E_{k}-E_{0}$ of the $k$-th excited state is shown in the left panel of Fig. 8. We also measured the square of the total momentum $K^{2}$ defined by (65). We note that its ground-state expectation value $\Braket{K^{2}}_{0}\simeq 0.46$ is nonzero because of the boundary effect, and possibly also due to lattice artifacts; we therefore subtract $\Braket{K^{2}}_{0}$ from $\Braket{K^{2}}_{k}$ of the excited states. The result is plotted in the right panel of Fig. 8. From these results, we find many triply degenerate states, which are candidates for pion states. There are a few singlet states as well, which are candidates for the eta and sigma mesons. Figure 8: (Left) The energy gap $\Delta E_{k}=E_{k}-E_{0}$ is plotted against the level of the excited state $k$. (Right) The square of the total momentum $\Delta K_{k}^{2}=\Braket{K^{2}}_{k}-\Braket{K^{2}}_{0}$ is plotted against $k$ after subtracting the result for the ground state. To identify the states, we measure the expectation values of the isospin operators $\bm{J}^{2}$ and $J_{z}$, the parity $P$, and the $G$-parity $G=Ce^{i\pi J_{y}}$ defined in Section 3.4. We note that the DMRG does not produce the states in a diagonal basis for these quantities.
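The diagonalization used here can be illustrated on a toy triplet: the DMRG returns an arbitrary orthonormal basis of the (near-)degenerate subspace, and diagonalizing the matrix of the operator restricted to that subspace recovers the quantum numbers. A minimal sketch, with a random rotation standing in for the DMRG output:

```python
import numpy as np

rng = np.random.default_rng(1)

# J_z restricted to a triplet, eigenvalues +1, 0, -1 in the magnetic basis
Jz = np.diag([1.0, 0.0, -1.0])

# The DMRG states are some arbitrary orthonormal mixture of the three:
# model this by a random orthogonal rotation Q of the basis
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Matrix elements <psi_{k1}| J_z |psi_{k2}> in the mixed basis
Jz_dmrg = Q.T @ Jz @ Q

# Diagonalizing the restricted 3x3 matrix recovers J_z = -1, 0, +1
eigvals = np.sort(np.linalg.eigvalsh(Jz_dmrg))
```

The same trick applies to $\bm{J}^{2}$ and $C$ when triplets and singlets with nearby energies mix.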
We diagonalize the $3\times 3$ matrix $\Braket{\psi_{k_{1}}}{J_{z}}{\psi_{k_{2}}}$ in each triplet to compute expectation values in the $J_{z}$ basis. (In computing the expectation value of the $G$-parity, we find it easier to work in the $J_{y}$ basis instead of the $J_{z}$ basis because $G=Ce^{i\pi J_{y}}$, and we thus performed it in the $J_{y}$ basis.) (It is also possible that triplets and singlets are mixed in the DMRG if their energies are close. In fact, the states for $k=19,\cdots,23$ are nearly degenerate. We separated one triplet and two singlets out of them by diagonalization of $\Braket{\psi_{k_{1}}}{\bm{J}^{2}}{\psi_{k_{2}}}$ and of $\Braket{\psi_{k_{1}}}{C}{\psi_{k_{2}}}$.)

$k$ | $\bm{J}^{2}$ | $J_{z}$ | $G$ | $P$
---|---|---|---|---
1 | 2.00000004 | 0.99999997 | 0.27872443 | -6.819$\times{10}^{-8}$
2 | 2.00000012 | -0.00000000 | 0.27872416 | -6.819$\times{10}^{-8}$
3 | 2.00000004 | -0.99999996 | 0.27872443 | -6.819$\times{10}^{-8}$
4 | 2.00000007 | 0.99999999 | 0.27736066 | 7.850$\times{10}^{-8}$
5 | 2.00000006 | 0.00000000 | 0.27736104 | 7.850$\times{10}^{-8}$
6 | 2.00000009 | -0.99999998 | 0.27736066 | 7.850$\times{10}^{-8}$
7 | 2.00000010 | 1.00000000 | 0.27536687 | -8.838$\times{10}^{-8}$
8 | 2.00000002 | 0.00000000 | 0.27536702 | -8.837$\times{10}^{-8}$
9 | 2.00000007 | -0.99999998 | 0.27536687 | -8.838$\times{10}^{-8}$
10 | 2.00000007 | 0.99999998 | 0.27356274 | 9.856$\times{10}^{-8}$
11 | 2.00000005 | 0.00000001 | 0.27356277 | 9.856$\times{10}^{-8}$
12 | 2.00000007 | -0.99999999 | 0.27356274 | 9.856$\times{10}^{-8}$
15 | 1.99999942 | 0.99999966 | 0.27173470 | -1.077$\times{10}^{-7}$
16 | 2.00000052 | 0.00000000 | 0.27173482 | -1.077$\times{10}^{-7}$
17 | 2.00000015 | -1.00000003 | 0.27173470 | -1.077$\times{10}^{-7}$
19 | 2.00009067 | 1.00004377 | 0.27717104 | -3.022$\times{10}^{-8}$
20 | 2.00002578 | -0.00000004 | 0.27717020 | -3.023$\times{10}^{-8}$
21 | 2.00003465 | -1.00001622 | 0.27717104 | -3.023$\times{10}^{-8}$

Table 1: The quantum numbers of the isospin triplet states. The index $k$ is the level of each state in the original basis. Consecutive groups of three rows form the triplets.

$k$ | $\bm{J}^{2}$ | $J_{z}$ | $G$ | $P$
---|---|---|---|---
0 | 0.00000003 | -0.00000000 | 0.27984227 | 3.896$\times{10}^{-7}$
13 | 0.00000003 | 0.00000000 | 0.27865844 | 1.273$\times{10}^{-7}$
14 | 0.00000003 | 0.00000000 | 0.27508176 | -2.765$\times{10}^{-8}$
18 | 0.00000028 | 0.00000006 | -0.27390909 | -6.372$\times{10}^{-7}$
22 | 0.00001537 | 0.00000115 | 0.26678987 | 7.990$\times{10}^{-8}$
23 | 0.00003607 | -0.00000482 | -0.27664779 | 5.715$\times{10}^{-7}$

Table 2: The quantum numbers of the isospin singlet states.

The expectation values of $\bm{J}^{2}$, $J_{z}$, $G$, and $P$ in the $J_{z}$ basis are listed in Tables 1 and 2 for the iso-triplets and iso-singlets, respectively. The index $k$ is the level of the state in the original basis. We find $|G|\neq 1$ because $|C|\neq 1$ due to the boundary effect. We expect that the sign of $G$ nevertheless remembers the original quantum number Banuls:2013jaa , and, if this is true, we can still identify the $G$-parity. This point is discussed in more detail in Appendix B. We identify the lowest triplet $k=1,2,3$ as the lowest modes of the pions ($\pi^{+}$, $\pi^{0}$, $\pi^{-}$) since they have the quantum numbers consistent with the pion, namely $J^{PG}=1^{-+}$ and $J_{z}=0,\pm 1$. For the iso-singlets shown in Table 2, we find that the $k=13$ state has the quantum numbers consistent with the sigma meson, namely $J^{PG}=0^{++}$ and $J_{z}=0$. The $k=18$ state is consistent with the eta meson with $J^{PG}=0^{--}$ and $J_{z}=0$. We identify these singlets as the lowest modes of the sigma and eta mesons. Figure 9: The energy gap $\Delta E_{k}$ is plotted against the square of the total momentum $\Delta K_{k}^{2}$. The states with the same isospin and $G$-parity are plotted by the same symbol.
Then each state is identified with the pion, sigma, or eta meson. We fit the data for each meson by $\Delta E=\sqrt{b^{2}\Delta K^{2}+M^{2}}$. The results are shown by the broken lines. The values of $M$ for each meson are also plotted as the endpoints of the fitting lines. After identifying the quantum numbers, we plot the energy gap $\Delta E_{k}=E_{k}-E_{0}$ against the momentum square $\Delta K_{k}^{2}=\Braket{K^{2}}_{k}-\Braket{K^{2}}_{0}$ to obtain the dispersion relation, as shown in Fig. 9. The states with the same isospin $\bm{J}^{2}$ and $G$-parity are plotted by the same symbol. (The triplet for $k=19,20,21$ is not shown in this plot since it is not a single-pion state; we expect that this triplet comes from a pion scattering state, which was discussed in Harada:1993va .) We then fit the data points by $\Delta E=\sqrt{b^{2}\Delta K^{2}+M^{2}}$ with fitting parameters $M$ and $b$. The fitted value of $M$ can be regarded as the mass of the corresponding meson, as an extrapolation to $\Delta K^{2}\rightarrow 0$. We obtained $M_{\pi}=0.426(2)$, $b_{\pi}=1.017(4)$ for the pion; and $M_{\sigma}=0.7456(5)$, $b_{\sigma}=1.087(2)$ for the sigma meson. For the eta meson, the fit reduces to solving a system of two equations since there are only two data points; the results are $M_{\eta}=0.904$ and $b_{\eta}=0.962$. We summarize the masses of the mesons determined by the energy gaps of the excited states: $\begin{array}[]{c|c|c|c}&\text{pion}&\text{sigma}&\text{eta}\\\ \hline\cr\text{mass}/g&\,\,0.426(2)&\,\,0.7456(5)&\,\,0.904\end{array}$ (87) We find the mass ratio $M_{\sigma}/M_{\pi}\simeq 1.75(1)$ (88) from this result, which is close to the WKB prediction $\sqrt{3}$. ## 6 Conclusion and Discussion In this paper, we have studied three independent methods to compute the mass spectrum of lattice gauge theories in the Hamiltonian formalism, which are applicable to tensor networks and to quantum computation.
The methods are tested in the massive 2-flavor Schwinger model at $\theta=0$, whose properties are partly analogous to those of $4$d QCD. The two species of fermion play the roles of the up and down quarks, and the composite particles (mesons) appear as triplets or singlets of the $\mathrm{SU}(2)_{V}/\mathbb{Z}_{2}$ isospin symmetry. We used tensor networks, in particular the DMRG, for the numerical simulations. We obtained the masses of the pion, sigma meson, and eta meson by the three methods, and the results are summarized in Fig. 10. We find that the results are roughly consistent with each other, taking into account possible systematic errors for each method, such as the continuum and infinite-volume limits. In addition, all the results show the relation $M_{\pi}<M_{\sigma}<M_{\eta}$, which agrees with the analytic prediction by the bosonization technique. The mass of the eta meson $M_{\eta}\sim 0.9$ is consistent with $M_{\eta}=\mu+O(m)$ since $\mu\sim 0.8$ and $m=0.1$ in the current setup. We also find that the masses of the pion and sigma mesons satisfy the WKB-based formula (19), $M_{\sigma}/M_{\pi}=\sqrt{3}$, within a 5% deviation. It is quite surprising that the semiclassical analysis gives an almost correct answer outside its range of validity, and it would be theoretically interesting to uncover the reason behind its success. Figure 10: The masses of the pion, sigma, and eta meson obtained by the three independent methods are compared. Each result is obtained at the given finite lattice spacing. We also put the error bars of the fitting error for the correlation-function scheme, but they are too small to be seen. Let us discuss the advantages and difficulties of each method and the potential applications to other models. The first one, the correlation-function scheme, is a straightforward generalization of the technique in the Lagrangian formalism. The advantage of this method is its wide range of applicability to various models.
We can obtain the meson masses from correlation functions on a lattice of any dimension, volume, and boundary condition. Furthermore, the correlation function accommodates off-diagonal elements such as $\Braket{\mathcal{O}(x)\mathcal{O}^{\prime}(y)}$. This feature will be useful when we turn on $\theta\neq 0$ in the 2-flavor Schwinger model. The reason is that the meson operators become nontrivial mixtures of $S_{f}(n)$ and $PS_{f}(n)$ depending on $\theta$. In this case, we need to measure the correlation matrix of the operators and diagonalize it to extract the mode of each meson. However, our numerical results suggest that the bond dimension of the MPS has to be sufficiently large to reproduce the correct asymptotic behavior of the correlation function. In particular, the computational cost increases rapidly as the system approaches a gapless phase, for example, $m\sim 0$ or $\theta\sim\pi$. Thus, the tensor network (MPS) is not an efficient approach to computing the mass spectrum from correlation functions. (It is possible that other types of tensor networks, such as the MERA, are still useful in this method.) On the other hand, an ideal quantum computer is free from such a restriction on the bond dimension. Thus, the correlation function may be the first option in the era of practical quantum computation of field theories, though a sizable computer is needed to avoid the finite-volume effect in the two-point function. The second method, the one-point-function scheme, makes good use of the boundary effect rather than eliminating it. The results turn out to be insensitive to the bond dimension, and thus we need to increase neither the lattice size nor the bond dimension very much. Furthermore, the evaluation of a local one-point function is generally easier than that of a long-range correlation function. Thus, this is the most economical of the three methods.
We note, however, that we have to prepare suitable boundary conditions, such as defects, impurities, or external fields, to compute the mass spectrum with this one-point-function scheme, which requires good physical insight into the system of interest. In our case, the open boundary at $\theta=0$ can be regarded as a source of the iso-singlet mesons $\sigma$ and $\eta$, but we have to set $\theta=2\pi$ to induce the boundary excitation as a source of the iso-triplet mesons $\pi_{a}$. We should also note that we cannot obtain information on the off-diagonal correlators in the one-point-function scheme. When $\theta=0$, the off-diagonal correlators are unimportant because $\pi$, $\sigma$, and $\eta$ have different quantum numbers, but they should become important at generic values of $\theta$ because the $G$ parity is no longer a good quantum number. The third method, the dispersion-relation scheme, is a strategy distinctive to the Hamiltonian formalism. We can obtain various states heuristically without knowing in advance what kinds of mesons appear in the spectrum. Indeed, we found a scattering state unexpectedly with this method. Once we generate the excited states, it is straightforward to measure various observables such as the energy, momentum, and quantum numbers. The states are identified by using these pieces of information. Furthermore, we can investigate the wave function to distinguish the $s$-wave and $p$-wave states. In this method, however, it is difficult to increase the system size or the number of spatial dimensions. The reason is that we have to generate an increasing number of states to search for different mesons. For example, in our setup, we encounter the $3\times 4$ pion states before obtaining the sigma meson at $k=13$. The momentum $K$ is discretized as $K\sim 2\pi\kappa/L$ for $\kappa=1,2,\cdots$ in a finite system of size $L$. If $L$ is increased, the number of pion states within a given energy range grows.
Thus, we have to generate more excited states to reach the state of the sigma meson. As for higher dimensions, there are momentum excitations in each spatial direction, which result in an additional degeneracy. We expect that there is a way to avoid this issue by modifying the strategy. For example, if we are interested in a specific meson, it is more efficient to generate excited states with a constraint on the quantum numbers, so as to skip the mesons that are not of interest. In this work, we have computed the mass spectrum at $\theta=0$. We note that we have neglected many systematic errors, and thus there is plenty of room for improvement. On the physics side, extending our investigation to $\theta\neq 0$ should be interesting; there, the sign problem arises in naive applications of Monte Carlo simulations. The presence of $\theta$ introduces some differences compared to the $\theta=0$ case. Firstly, the mass of the pion, which corresponds to the gap of the system, decreases as $\theta\to\pi$. Consequently, we may need to increase the bond dimension of the MPS, leading to higher computational costs. Secondly, the parity and $G$-parity are no longer good quantum numbers for $\theta\not=0$, and the scalar and pseudo-scalar operators mix nontrivially. To handle this situation, we should measure the correlation matrix between these operators and diagonalize it. Although distinguishing the excited states, especially $\sigma$ and $\eta$, seems to become tricky, exploring the changes in the spectrum promises intriguing insights. Needless to say, it is very desirable that future developments of these techniques eventually enable us to compute the hadron spectrum of $4$d strongly coupled gauge theories, which suffer from the sign problem in conventional Monte Carlo methods. ###### Acknowledgements. We would like to thank S. Aoki, M. Honda, T. Nishino, and K. Okunishi for useful discussions.
The numerical calculations were carried out on XC40 at YITP, Kyoto University, and on the PC clusters at RIKEN iTHEMS. The work of A. M. is supported by FY2022 Incentive Research Projects of RIKEN. The work of E. I. is supported by JST PRESTO Grant Number JPMJPR2113, JST Grant Number JPMJPF2221, JSPS KAKENHI (S) Grant number 23H05439, JSPS Grant-in-Aid for Transformative Research Areas (A) JP21H05190, and the “Program for Promoting Researches on the Supercomputer Fugaku” (Simulation for basic science: approaching the new quantum era) Grant number JPMXP1020230411. The work of Y. T. is supported by JSPS KAKENHI Grant number 22H01218. This work is supported by the Center for Gravitational Physics and Quantum Information (CGPQI) at YITP. ## Appendix A Operators in the spin representation In this appendix, we show the spin representations of the Hamiltonian and the operators defined in Section 3 after the Jordan-Wigner transformation (39) and (40). For later convenience, we first show the transformation of some local operators, $\chi_{f,n}^{\dagger}\chi_{f,n}=\sigma_{f,n}^{+}\sigma_{f,n}^{-}=\frac{\sigma_{f,n}^{z}+1}{2},$ (89) $\chi_{1,n}^{\dagger}\chi_{1,n+1}-\chi_{1,n+1}^{\dagger}\chi_{1,n}=\sigma_{1,n}^{+}\sigma_{2,n}^{z}\sigma_{1,n+1}^{-}-\sigma_{1,n}^{-}\sigma_{2,n}^{z}\sigma_{1,n+1}^{+},$ (90) $\chi_{2,n}^{\dagger}\chi_{2,n+1}-\chi_{2,n+1}^{\dagger}\chi_{2,n}=\sigma_{2,n}^{+}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{-}-\sigma_{2,n}^{-}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{+}.$ (91) The products of $\sigma^{z}$ in the Jordan-Wigner transformation mostly cancel in the fermion bilinears. Let us start with the Hamiltonian.
Using the relations above, the gauge part $H_{J}$ (33) is transformed as $H_{J}=\frac{J}{4}\sum_{n=0}^{N-2}\left[\sum_{f=1}^{N_{f}}\sum_{k=0}^{n}\sigma_{f,k}^{z}+N_{f}\frac{(-1)^{n}+1}{2}+\frac{\theta}{\pi}\right]^{2}.$ (92) The fermion kinetic term $H_{w}$ (34) and the mass term $H_{m}$ (35) are given by $\displaystyle H_{w}$ $\displaystyle=-iw\sum_{n=0}^{N-2}\left(\sigma_{1,n}^{+}\sigma_{2,n}^{z}\sigma_{1,n+1}^{-}-\sigma_{1,n}^{-}\sigma_{2,n}^{z}\sigma_{1,n+1}^{+}\right.$ $\displaystyle\hphantom{=-iw\sum_{n=0}^{N-2}}\left.+\sigma_{2,n}^{+}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{-}-\sigma_{2,n}^{-}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{+}\right),$ (93) $H_{m}=\frac{m_{\mathrm{lat}}}{2}\sum_{f=1}^{N_{f}}\sum_{n=0}^{N-1}(-1)^{n}\sigma_{f,n}^{z}+\frac{m_{\mathrm{lat}}}{2}N_{f}\frac{1-(-1)^{N}}{2}.$ (94) Then the total Hamiltonian is the sum of these terms, $H=H_{J}+H_{w}+H_{m}.$ (95) For practical use, $H_{J}$ can be decomposed into a term quadratic in $\sigma^{z}$, a term linear in $\sigma^{z}$, and a constant term by expanding the square. They can be summarized as follows: $H_{J}=H_{J}^{(2)}+H_{J}^{(1)}+H_{J}^{(0)},$ (96) $H_{J}^{(2)}=\frac{J}{2}\sum_{f=1}^{N_{f}}\sum_{j=0}^{N-3}\sum_{k=j+1}^{N-2}(N-k-1)\sigma_{f,j}^{z}\sigma_{f,k}^{z}+\frac{J}{4}\sum_{f\neq f^{\prime}}\sum_{n=0}^{N-2}\sum_{j,k=0}^{n}\sigma_{f,j}^{z}\sigma_{f^{\prime},k}^{z},$ (97) $H_{J}^{(1)}=\frac{J}{2}\sum_{f=1}^{N_{f}}\sum_{k=0}^{N-2}\left[\left(\frac{N_{f}}{2}+\frac{\theta}{\pi}\right)(N-k-1)+\frac{N_{f}}{2}\frac{(-1)^{N}+(-1)^{k}}{2}\right]\sigma_{f,k}^{z},$ (98) $\displaystyle H_{J}^{(0)}$ $\displaystyle=\frac{JN_{f}}{4}\frac{N(N-1)}{2}$ $\displaystyle+\frac{JN_{f}}{2}\left(\frac{N_{f}}{4}+\frac{\theta}{2\pi}\right)\left[\frac{(-1)^{N}-1}{2}+N\right]+J\left(\frac{\theta}{2\pi}\right)^{2}(N-1).$ (99) The spin Hamiltonian contains non-local interactions, which come from the Gauss law. It is not obvious whether the ground state can be described efficiently by an MPS.
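As a consistency check of the decomposition (96)-(99), note that $H_{J}$ is diagonal in the $\sigma^{z}$ basis, so Eq. (92) and the sum $H_{J}^{(2)}+H_{J}^{(1)}+H_{J}^{(0)}$ can be compared directly on classical spin configurations $s_{f,n}=\pm 1$. A small sketch (tiny $N$ and an arbitrary $\theta$ chosen for illustration; this is not the production code):

```python
import itertools
import numpy as np

N, Nf, J, theta = 4, 2, 1.0, 0.6 * np.pi      # tiny illustrative parameters

def hj_direct(s):
    """Eq. (92): J/4 * sum_n [ sum_f sum_{k<=n} s_{f,k} + Nf((-1)^n+1)/2 + theta/pi ]^2."""
    total = 0.0
    for n in range(N - 1):
        q = s[:, : n + 1].sum() + Nf * ((-1) ** n + 1) / 2 + theta / np.pi
        total += q ** 2
    return J / 4 * total

def hj_decomposed(s):
    """Sum of Eqs. (97), (98), and (99)."""
    h2 = 0.0                                   # quadratic terms, Eq. (97)
    for f in range(Nf):
        for j in range(N - 2):
            for k in range(j + 1, N - 1):
                h2 += J / 2 * (N - k - 1) * s[f, j] * s[f, k]
    for f in range(Nf):
        for fp in range(Nf):
            if f == fp:
                continue
            for n in range(N - 1):
                for j in range(n + 1):
                    for k in range(n + 1):
                        h2 += J / 4 * s[f, j] * s[fp, k]
    h1 = 0.0                                   # linear term, Eq. (98)
    for f in range(Nf):
        for k in range(N - 1):
            h1 += J / 2 * ((Nf / 2 + theta / np.pi) * (N - k - 1)
                           + Nf / 2 * ((-1) ** N + (-1) ** k) / 2) * s[f, k]
    h0 = (J * Nf / 4 * N * (N - 1) / 2         # constant term, Eq. (99)
          + J * Nf / 2 * (Nf / 4 + theta / (2 * np.pi)) * (((-1) ** N - 1) / 2 + N)
          + J * (theta / (2 * np.pi)) ** 2 * (N - 1))
    return h2 + h1 + h0

# Compare the two expressions on every spin configuration
max_dev = max(
    abs(hj_direct(s) - hj_decomposed(s))
    for s in (np.array(c, dtype=float).reshape(Nf, N)
              for c in itertools.product([1.0, -1.0], repeat=Nf * N))
)
```

The two expressions agree to machine precision on all $2^{N_{f}N}$ configurations.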
Next, we map the observables by the Jordan-Wigner transformation. The local scalar condensate (48) and the pseudo-scalar condensate (49) are transformed as $S_{f}(n)=\frac{1}{8a}(-1)^{n}(-\sigma_{f,n-1}^{z}+2\sigma_{f,n}^{z}-\sigma_{f,n+1}^{z}),$ (100) $\displaystyle PS_{1}(n)=\frac{i}{4a}(-1)^{n}$ $\displaystyle\left(\sigma_{1,n-1}^{+}\sigma_{2,n-1}^{z}\sigma_{1,n}^{-}-\sigma_{1,n-1}^{-}\sigma_{2,n-1}^{z}\sigma_{1,n}^{+}\right.$ $\displaystyle\left.-\sigma_{1,n}^{+}\sigma_{2,n}^{z}\sigma_{1,n+1}^{-}+\sigma_{1,n}^{-}\sigma_{2,n}^{z}\sigma_{1,n+1}^{+}\right),$ (101) $\displaystyle PS_{2}(n)=\frac{i}{4a}(-1)^{n}$ $\displaystyle\left(\sigma_{2,n-1}^{+}\sigma_{1,n}^{z}\sigma_{2,n}^{-}-\sigma_{2,n-1}^{-}\sigma_{1,n}^{z}\sigma_{2,n}^{+}\right.$ $\displaystyle\left.-\sigma_{2,n}^{+}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{-}+\sigma_{2,n}^{-}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{+}\right).$ (102) We can also map the isospin operators (50), (52), and (53) as follows: $J_{z}=\frac{1}{4}\sum_{n=0}^{N-1}(\sigma_{1,n}^{z}-\sigma_{2,n}^{z}),$ (103) $J_{+}=i\sum_{n=0}^{N-1}\sigma_{1,n}^{+}\sigma_{2,n}^{-},$ (104) $J_{-}=-i\sum_{n=0}^{N-1}\sigma_{2,n}^{+}\sigma_{1,n}^{-}.$ (105) It is then straightforward to construct these operators as MPOs. Finally, we consider the Jordan-Wigner transformation of the total momentum operator (65). 
Each term in the sum is transformed as follows: $\displaystyle\chi_{1,n-1}^{\dagger}\chi_{1,n+1}-\chi_{1,n+1}^{\dagger}\chi_{1,n-1}$ $\displaystyle=-\sigma_{1,n-1}^{+}\sigma_{2,n-1}^{z}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{-}+\sigma_{1,n-1}^{-}\sigma_{2,n-1}^{z}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{+},$ (106) $\displaystyle\chi_{2,n-1}^{\dagger}\chi_{2,n+1}-\chi_{2,n+1}^{\dagger}\chi_{2,n-1}$ $\displaystyle=-\sigma_{2,n-1}^{+}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{-}+\sigma_{2,n-1}^{-}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{+}.$ (107) Thus, the total momentum is given by combinations of products of five Pauli matrices, $\displaystyle K=\frac{i}{4a}\sum_{n=1}^{N-2}$ $\displaystyle\left(\sigma_{1,n-1}^{-}\sigma_{2,n-1}^{z}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{+}-\sigma_{1,n-1}^{+}\sigma_{2,n-1}^{z}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{-}\right.$ $\displaystyle\left.+\sigma_{2,n-1}^{-}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{+}-\sigma_{2,n-1}^{+}\sigma_{1,n}^{z}\sigma_{2,n}^{z}\sigma_{1,n+1}^{z}\sigma_{2,n+1}^{-}\right).$ (108) ## Appendix B Charge conjugation operator in the 1-flavor Schwinger model The charge conjugation operator $C$ defined by (58) does not commute with the Hamiltonian with the open boundary condition. This is because charge conjugation for the staggered fermion must incorporate the one-unit lattice translation, and thus it is not an on-site symmetry in our regularization scheme. As a consequence, the expectation value of $C$ does not become $\pm 1$. However, $C$ is an important quantum number for diagnosing the type of mesons, and we have assumed in Section 5.3 that it can be diagnosed by the sign of $\langle C\rangle$. Although we have no theoretical justification for this prescription, let us test it in the $1$-flavor Schwinger model to give some evidence for its reasonableness. 
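The continuum extrapolation used in this test is a quadratic fit in the lattice spacing, $f(a)=c_{0}+c_{1}a+c_{2}a^{2}$. A minimal sketch with numpy (the numbers below are synthetic placeholders, not the measured data):

```python
import numpy as np

# Hypothetical measured values of <C> at several lattice spacings
# (placeholder numbers, not the paper's data)
a_vals = np.array([0.10, 0.15, 0.20, 0.25])
C_vals = 0.32 + 0.05 * a_vals - 0.10 * a_vals**2  # synthetic data

# Fit f(a) = c0 + c1*a + c2*a^2; polyfit returns highest degree first.
# The continuum limit is the intercept c0.
c2, c1, c0 = np.polyfit(a_vals, C_vals, deg=2)
assert abs(c0 - 0.32) < 1e-8  # continuum value recovered for exact data
```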
Figure 11: The expectation values of the charge conjugation $C$ for the ground state (left) and the first excited state (right) are plotted against the lattice spacing $a$. The number of lattice sites $N$ is chosen to fix the physical length $L=(N-1)a$. Each symbol corresponds to a different value of $L$. We set $\theta=0$ and $m=0.125$ in this analysis. The quadratic fits are also shown as solid lines. First, we investigate the behavior of $C$ in the continuum limit. We generated the MPS of the ground state and the first excited state of the 1-flavor Schwinger model at $\theta=0$ by DMRG. The lattice spacing $a$ is varied over $0.1\lesssim a\lesssim 0.25$. The number of lattice sites $N$ is chosen to fix the physical system size $L=(N-1)a$. We compute the expectation values of $C$ for these MPS. The results are shown in Fig. 11, where the different symbols correspond to the results for different $L$. We fitted the data points for each $L$ by the quadratic function $f(a)=c_{0}+c_{1}a+c_{2}a^{2}$. The fitting results are also plotted in Fig. 11 as solid lines. For $L=49.8$, we obtained the continuum limit $\Braket{C}_{a\rightarrow 0}=0.321(3)$ for the ground state and $-0.320(3)$ for the first excited state. The results for the other values of $L$ agree with these values within the error. Thus, we confirmed that the expectation value of $C$ remains finite in the continuum limit and is not sensitive to $L$. Next, let us discuss the effect of the boundary on $C$. We consider a further simplified model, the free fermion on the periodic lattice. The model is obtained from the 1-flavor Schwinger model with the periodic boundary condition by setting $g=0$ and adding the hopping term between the $n=0$ and $n=N-1$ sites. In fact, it is hard to adopt the periodic boundary condition in the current DMRG method due to the artificial long-range interaction between the two ends of the MPS. Thus, we choose small lattice sizes $N=20$ and $40$ for this analysis. 
The corresponding lattice spacings are $a=0.2$ and $0.1$ for the fixed physical length $L=Na=4$. We generate the ground state and the excited states up to the level $k=4$. The four excited states turned out to be degenerate. Thus, we compute $\Braket{C}_{k,k^{\prime}}$ including the off-diagonal elements, and diagonalize the result as a $4\times 4$ matrix. The eigenvalues are shown in Fig. 12. We found that $\Braket{C}=1$ for the ground state and $\Braket{C}=\pm 1,\alpha\pm i\beta$ for the excited states. These complex values satisfy $|\alpha|^{2}+|\beta|^{2}=1$, as we can see in the plot. The imaginary part $\beta$ becomes smaller as $a$ is decreased, which suggests that we will obtain $\Braket{C}\rightarrow\pm 1$ in the continuum limit. Figure 12: The expectation values of $C$ are plotted on the complex plane. The different symbols represent the results for the different values of the spacing $a$. All the data points turned out to be on the unit circle. ## Appendix C Arrangement of flavors on MPS In the spin representation of the Hamiltonian (95) of the 2-flavor Schwinger model, each spin has the site index $n$ and the flavor index $f$. To apply DMRG, we arrange these spins on the 1d lattice with a single index $(f,n)\rightarrow i$. Although the ordering of the indices does not affect the physics, it can affect the necessary bond dimensions, and thus the calculation cost depends on it. In this work, we assign the index $i$ to $(f,n)$ as $i=N_{f}n+f-1=0,1,\cdots,N_{f}N-1,$ (109) which we call the staggered order in this section. In this arrangement, different flavors at the same physical site are placed close to each other, which is important to control the bond dimension in the computation of DMRG. Let us consider another choice for comparison, $i=n+N(f-1),$ (110) which we name the flavor order here. 
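The two orderings can be written as explicit index maps, which makes the key difference (the separation between the two flavors at the same physical site) easy to see. A small illustrative sketch:

```python
def staggered_index(f, n, Nf=2):
    """Staggered order: flavors at the same physical site are adjacent."""
    return Nf * n + f - 1

def flavor_index(f, n, N):
    """Flavor order: all sites of flavor 1 first, then flavor 2."""
    return n + N * (f - 1)

N, Nf = 4, 2
stag = [staggered_index(f, n) for n in range(N) for f in (1, 2)]
flav = [flavor_index(f, n, N) for f in (1, 2) for n in range(N)]

# Both orderings enumerate the same Nf*N spins exactly once...
assert sorted(stag) == list(range(Nf * N))
assert sorted(flav) == list(range(Nf * N))

# ...but the distance between the two flavors at the same site differs:
assert staggered_index(2, 3) - staggered_index(1, 3) == 1   # always 1
assert flavor_index(2, 3, N) - flavor_index(1, 3, N) == N   # always N
```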
In this case, we first arrange one of the flavors and then the other, so the flavor degrees of freedom at the same physical site are separated by $N$, which clearly violates the criterion above. Figure 13: The effective bond dimension $D_{\mathrm{eff}}$ is plotted against the number of sweeps $N_{\mathrm{sweep}}$ for the flavor order (left) and the staggered order (right). The vertical axis of the left panel is in log scale, whereas the axis of the right panel is in linear scale. The lattice spacing and the fermion mass are set to $a=0.2$ and $m=0.1$. Figure 14: The effective bond dimension $D_{\mathrm{eff}}$ after 20 sweeps is plotted against the system size $N$ in log-log scale. The result grows exponentially with $N$ for the flavor order, whereas it saturates for the staggered order. The lattice spacing and the fermion mass are set to $a=0.2$ and $m=0.1$. For these two orderings, we compare the efficiency of the MPS in representing the ground state in the gapped phase $\theta=0$. We obtain the ground state by DMRG and investigate the largest bond dimension in the MPS, called the effective bond dimension $D_{\mathrm{eff}}$. The results are plotted against the number of sweeps $N_{\mathrm{sweep}}$ in Fig. 13 for various lattice sizes $N$. We found that $D_{\mathrm{eff}}$ converges after $O(10)$ sweeps in both cases. However, the dependence on $N$ is entirely different. For the flavor order, the final value of $D_{\mathrm{eff}}$ increases exponentially with $N$, which is caused by the artificial long-range interaction between the two flavors. On the other hand, for the staggered order, the final value saturates for sufficiently large $N$. To demonstrate these behaviors, we plot the final values of $D_{\mathrm{eff}}$ after 20 sweeps against $N$ in Fig. 14. According to Fig. 14, the bond dimension seems to saturate in the case of the staggered order as $N\to\infty$. 
Since $\ln D_{\mathrm{eff}}$ gives an upper bound on the entanglement entropy, this constant behavior is expected to be optimal for $1+1$d gapped systems. On the other hand, $D_{\mathrm{eff}}$ grows exponentially fast for the flavor order as $N\to\infty$. We suspect that this is because the flavor order puts the entangled flavors in separate locations. If we cut the system into two pieces in terms of $i$ with the flavor ordering, $O(N)$ entangled pairs are cut, and thus the entanglement entropy becomes $O(N)$, which is consistent with the exponential behavior of $D_{\mathrm{eff}}$. Therefore, we adopt the staggered order throughout the analysis of this work. ## Appendix D Correlation function in the 1-flavor Schwinger model We test the validity of the correlation-function scheme of Section 5.1 by examining the correlation function in the 1-flavor Schwinger model. When the fermion is massless, $m=0$, the model is analytically solvable: it is equivalent to the free massive boson with mass $\mu^{\prime}=g/\sqrt{\pi}$. Thus, it provides a good benchmark, and we compare the numerical result of DMRG with the analytical answer. In analogy with the pseudo-scalar meson in the 2-flavor Schwinger model, we consider the pseudo-scalar operator $PS=-i\bar{\psi}\gamma^{5}\psi$. The results for the correlation function $\Braket{PS(x)PS(y)}$ are shown in the left panel of Fig. 15. Here, the data with different colors are obtained with different values of the cutoff parameter $\varepsilon$. The corresponding effective masses (3-point average) are also plotted in the right panel of Fig. 15, where we can see a significant $\varepsilon$ dependence. Figure 15: (Left) The correlation function $\ln\Braket{PS(x)PS(y)}$ is plotted against the distance $r=|x-y|$ for various values of $\varepsilon$. The number of lattice sites is $N=400$ and the lattice spacing $a$ is determined so that $L=a(N-1)=79.8$. 
(Right) The effective mass $M_{\mathrm{eff}}(r)$ (3-point average) calculated from the correlation function in the left panel is plotted against $r$. To see the $1/r$ correction to the effective mass, we plot $M_{\mathrm{eff}}(r)$ against $1/r$ in Fig. 16. We found that the result approaches the expected asymptotic behavior $M_{\mathrm{eff}}(r)\sim\alpha/r+M$ only if the cutoff $\varepsilon$ is sufficiently small. We fitted the data points for $\varepsilon=10^{-16}$ by $\alpha/r+M$ in the range $0.06\leq 1/r\leq 0.2$ and obtained $M=0.5677(5)$ and $\alpha=0.446(4)$. Here the systematic error from the uncertainty of the fitting range is evaluated as explained in Section 5.1. We note that this is the result on a finite lattice before taking the continuum limit, but it turns out to be close to the exact value $M=g/\sqrt{\pi}\approx 0.56419$ of the continuum theory. It is therefore quite important to discuss the cutoff (or bond-dimension) dependence, especially when using the correlation-function scheme. Indeed, if we naively read off the plateau value at $\varepsilon=10^{-10}$, we would get an incorrect answer $M\sim 0.63$, without observing the $1/\sqrt{r}$ contribution of the Yukawa potential at all. Figure 16: The effective mass $M_{\mathrm{eff}}(r)$ is plotted against $1/r$. The data points for $\varepsilon=10^{-16}$ are fitted by $\alpha/r+M$ inside the region $0.06\leq 1/r\leq 0.2$. The fitting result is depicted by the shaded band with systematic error. The exact mass of the pseudo-scalar $g/\sqrt{\pi}$ is also shown by the horizontal broken line. 
# We Need to Talk About Data: The Importance of Data Readiness in Natural Language Processing Fredrik Olsson Gavagai Sweden <EMAIL_ADDRESS> Magnus Sahlgren AI Sweden Sweden <EMAIL_ADDRESS> The lion’s share of the work was carried out while at RISE, Research Institutes of Sweden. ###### Abstract In this paper, we identify the state of data as an important reason for failure in applied Natural Language Processing (NLP) projects. We argue that there is a gap between academic research in NLP and its application to problems outside academia, and that this gap is rooted in poor mutual understanding between academic researchers and their non-academic peers who seek to apply research results to their operations. To foster the transfer of research results from academia to non-academic settings, and the corresponding influx of requirements back to academia, we propose a method for improving the communication between researchers and external stakeholders regarding the accessibility, validity, and utility of data, based on Data Readiness Levels Lawrence (2017). While still in its infancy, the method has been iterated on and applied in multiple innovation and research projects carried out with stakeholders in both the private and public sectors. Finally, we invite researchers and practitioners to share their experiences, and thus contribute to a body of work aimed at raising awareness of the importance of data readiness for NLP. ## 1 Introduction NLP has always been an applied discipline, with inspiration drawn both from basic research in computational linguistics, computer science, and cognitive science, and from business problems and applications in industry and the public sector. While some NLP researchers prefer to work at lower Technology Readiness Levels (TRLs) and others operate at a fairly high TRL range, most of the work in NLP has the potential to climb the TRL scale up to the more practical levels (i.e. 
from TRL level 7 upwards) Banke (2017). For those of us who work with external clients and habitually deliver results in the form of demonstrators and prototypes at TRL 6 or 7, a consistent and significant challenge is the state of the available data. In our experience, challenges regarding data are much more common in client-facing projects than challenges relating to the technical nature of models or algorithms. We argue that the lack of readiness with respect to data has become a serious obstacle when transferring findings from research to an applied setting. Even if the research problem is sufficiently well defined, and the business value of the proposed solution is well described, it is often not clear what type of data is required, whether it is available, or whether it exists at all. The border between academic research in NLP and the application of that research in practical non-academic settings is becoming increasingly blurred with the convergence of NLP research towards more or less production-ready frameworks and implementations. On the one hand, research results have never been more accessible, and it has never been easier to obtain and adjust the architecture of, e.g., a state-of-the-art language model to accommodate a new use case, and to construct a prototype showing the value the model would contribute to an external stakeholder. On the other hand, while the technical maturity of the research community has improved, the understanding of business value and, by extension, of the impact and importance of data is still largely lacking. It is our firm belief that in order for NLP, and in particular the research community that targets the lower TRLs, to become even more relevant and thus benefit from feedback from parties outside the field, we have to take a more holistic approach to the entire life cycle of applied research, with a particular eye on data readiness. 
The intention of this paper is therefore to raise awareness of data readiness for NLP among researchers and practitioners alike, and to initiate and nurture much-needed discussions of the questions that arise when addressing real-world challenges with state-of-the-art academic research. Figure 1: Relative frequency of publications in the ACL Anthology that mention the term “data” in the title for the last 20 years. ## 2 Related work As is evident from Figure 1, which shows the relative frequency of publications in the ACL Anthology (www.aclweb.org/anthology/) that mention the term “data” in the title over the last 20 years, there is an increasing interest in questions relating to data within our field. While there is a lot of activity related to data in the research community, few attempts have been made at fostering a discussion of the whole process, from business problem to data access. Relevant areas of academic research include the following. Access to unlabelled training data. Efforts to collect and distribute text data at scale include corpora originating from CommonCrawl, e.g., Mackenzie et al. (2020); El-Kishky et al. (2020), along with tools to facilitate corpus creation Wenzek et al. (2020), as well as academic initiatives such as ELRA (http://www.elra.info/) and LDC (https://www.ldc.upenn.edu/). Creation of labelled training data. Research in Active Learning, e.g., Settles (2012); Siddhant and Lipton (2018); Liang et al. (2020); Ein-Dor et al. (2020), as well as Zero- and Few-shot Learning Srivastava et al. (2018); Ye et al. (2020); Pelicon et al. (2020), allows for the utilization of pre-compiled knowledge and human-in-the-loop approaches to efficient data labelling. However, these approaches assume that there is a clear objective to address, and that unlabelled data and expertise are available. In our experience, this is rarely the case. Bias, transparency, and fairness are all areas that have bearing on the state of data. 
Much of the research, however, is concerned with situations in which the data has already been collected. The most notable recent efforts include The Dataset Nutrition Label, a diagnostic framework for enabling standardized data analysis Holland et al. (2018); Data Statements for NLP, which allow for addressing exclusion and bias in the field Bender and Friedman (2018); FactSheets, intended to increase consumers’ trust in AI services Arnold et al. (2019); and Datasheets for Datasets, for facilitating better communication between dataset creators and consumers Gebru et al. (2020). Model deployment. Tangential to our efforts to provide stakeholders with prototypes at TRL 6-7 is the work of deploying machine learning models to a production environment. Research in this area that also touches on data readiness in some form includes that of Polyzotis et al. (2018) and Paleyes et al. (2020). Work on data readiness related to modalities other than text includes van Ooijen (2019) and Harvey and Glocker (2019), both of which deal with data quality in medical imaging. Austin (2018) outlines practical solutions to common problems with data readiness when integrating diverse datasets from heterogeneous sources. Afzal et al. (2020) introduce the concept of a Data Readiness Report as a means to document data quality across a range of standardized dimensions. We have not found any work that focuses specifically on data readiness in the context of NLP. Our contribution is therefore a set of questions that we have found valuable to bring up in discussions with new stakeholders in order to allow us, and them, to form an understanding of the state of the data involved in the particular challenge. ## 3 Data Readiness Levels The notion of Data Readiness Levels (DRLs) provides a way of talking about data much in the same way that TRLs facilitate communication regarding the maturity of technology Lawrence (2017). 
DRLs constitute a framework suitable for exchanging information with stakeholders regarding the accessibility, validity, and utility of data. There are three major bands of DRLs, and each band can be thought of as consisting of multiple levels. The state of data usually progresses from Band C towards Band A, with a particular business goal in mind. Figure 2 illustrates the three bands of the Data Readiness Levels. Figure 2: An overview of the different bands of Data Readiness Levels. Band C concerns the accessibility of data. All work at this level serves to grant the team intended to work with the data access to it; once access is provided, the data is considered to be at Band C - Level C-1, and ready to be brought into Band B. Issues that fall under Band C include: the existence of data; format conversion and encoding; legal aspects of accessibility; and programmatic aspects of accessibility. Band B concerns the validity of data. In order to pass Band B, the data has to be valid in the sense that it is representative of the task at hand. Furthermore, the data should be deduplicated, noise should be identified, missing values should be characterized, etc. At the top level of Band B, the data should be suitable for exploratory analysis and the forming of working hypotheses. Band A concerns the utility of data. The utility of the data concerns the way in which the data is intended to be used: Is the data sufficient to solve the task at hand? A project should strive for data readiness at Band A - Level A-1. Note that the Data Readiness Levels should be interpreted with respect to a given task. ## 4 Examples of challenges The following are examples of typical challenges we have encountered, framed as belonging to the different DRLs. (The examples are deliberately kept vague since we do not want to disclose the corresponding external stakeholders.) ### 4.1 DRL Band C – Accessibility Example 1: Data licensing. 
It was assumed that the data to work with in the project was in the public domain and readily available. It turned out the data was a proprietary news feed under license restrictions. The consequences of this were two-fold: not having access to the data generation process meant we could not address one of the stakeholder’s major problems (de-duplication, relevance assessment); and, the license restrictions prevented the project from publishing the dataset along with the research findings.

Example 2: Company culture. The ownership of data was clearly specified, but the staff did not adhere to management’s request to release the data due to the uncertainty of the result of the project. This resulted in delays. The fear of job losses may impact the availability of data – data readiness depends on the overall introduction of data-driven techniques in a new organization.

Example 3: Data format. Raw data was stored as PDF files, generated by different sources. PDF is an output format, not an input format. Projects working with PDF files will always face challenges having to do with data conversion, since there is currently no way of reliably converting an arbitrary PDF file into a format useful for NLP.

### 4.2 DRL Band B – Validity

Example 4: Annotation guidelines. The existing annotation guidelines were elaborate, but fell short in practical applicability. Partly due to the misalignment between the annotation task and the guidelines, it became very time consuming to annotate data, which resulted in a small dataset to work with. In turn, this affected the range of possible NLP techniques applicable to the problem at hand. Annotation guidelines have to be unambiguous, precise, and possible for an annotator to remember.

Example 5: Annotation quality. The data was assumed to be of good quality, but additional investigations revealed a low inter-annotator agreement. The consequence was that the existing data could not be trusted, and the annotation work had to be re-done.
If the definition of a task is too hard for human annotators to agree on, a machine trained on the data will perform poorly.

Example 6: Annotation quality. Existing information produced by the stakeholder was assumed to be useful in creating an annotated dataset for the specific task at hand, but it turned out that the information was incomplete and insufficient. The consequence of not being able to leverage existing data for distant supervision was that the range of applicable techniques for addressing the stakeholder’s problem became severely limited.

### 4.3 DRL Band A – Utility

Example 7: Annotation expectations. It was known that the data to work with was annotated, but the way the annotations had been made had not been communicated. Instead of sequence-level annotations, the data was annotated at the document level. As a consequence, we could not explore the type of information extraction techniques we had expected, but had to resort to document classification instead.

Example 8: Data sparseness. The overall amount and velocity of data were assumed to be of sufficient quantity, but when aligning data availability with use case requirements, it turned out the data was too sparse. The task could not be pursued.

Example 9: Project scope. The stakeholder’s team and the unannotated data they provided to the project were at an exceptionally high DRL, but annotations for training, validation, and testing were very hard to obtain since the project had not planned for annotation work. As a consequence, we implemented a solution based on unsupervised learning instead of a supervised one.

## 5 A method for DRL assessment

We introduce a method for gaining a rapid and rough assessment of the data readiness levels of a given project.
The method consists of a range of questions, intended to fuel the discussions between the stakeholders involved in a project with respect to its means and goals, as well as a simple way of visualizing the responses to the questions in order to bring attention to the areas that need more work. We expect to evolve the method in coming projects. So far, it has helped us to preemptively address some of the issues exemplified in Section 4; the questions are a good starting point in reaching the appropriate data readiness for solving real-world problems related to NLP.

### 5.1 Pre-requisites

The pre-requisites for applying the method are the following: there should be a clear business or research-related objective for the project to achieve; the objective lends itself to a data-driven solution; and, there is data available that presumably is relevant for the task. The method should be scheduled for application at suitable points in time, i.e., anytime the project enters a phase that relies on data and experimentation to make progress in the project plan. We suggest applying the method at the very beginning of the project, as well as (at least) before entering the first round of empirical experiments with respect to the data at hand and the project’s objective.

### 5.2 Post-conditions

The outcome of the method is two-fold: a visual representation of the Data Readiness Levels of the project at a specific point in time (as exemplified in Section 6); and, the insight into the state of data achieved by discussing the questions among the project’s stakeholders.

### 5.3 The questions

The purpose of each question is to draw the stakeholders’ attention to one aspect of the data readiness of the project. However, since not all questions are relevant to all types of projects, some may be omitted depending on the characteristics of the project at hand.
Each of the fifteen questions below can be answered by one of four options: Don’t know, No, Partially, and Yes, where Don’t know is always considered the worst possible answer, and Yes the answer to strive for. The admittedly very coarse-grained answer scale is intended to serve as a guide in assessing the state of the project’s data readiness, rather than as a definitive and elaborate tool for detailed assessment.

#### 5.3.1 Questions related to Band C

Band C, which concerns the accessibility of data, is the band that is least dependent on the actual objective of the project, but clearing it is still required in order to make the project successful.

* Q1 Do you have programmatic access to the data? The data should be made accessible to the people who are going to work with it, in a way that makes their work as easy as possible. This usually means programmatic access via an API, database, or spreadsheet.
* Q2 Are your licenses in order? In case you plan on using data from a third-party provider, either commercial or via open access, ensure that the licenses for the data permit the kind of usage that is needed for the current project. Furthermore, make sure you follow the Terms of Service set out by the provider.
* Q3 Do you have lawful access to the data? Make sure you involve the appropriate legal competence early on in your project. Matters regarding, e.g., personally identifiable information and GDPR have to be handled correctly. Failing to do so may result in a project failure, even though all technical aspects of the project are perfectly sound.
* Q4 Has there been an ethics assessment of the data? In some use cases, such as when dealing with individuals’ medical information, the objectives of the project require an ethics assessment. Such a probe into the data is governed by strict rules, and you should consult appropriate legal advisors to make sure your project adheres to them.
* Q5 Is the data converted to an appropriate format?
Apart from being accessible programmatically, and assessed with respect to licenses, laws, and ethics, the data should also be converted to a format appropriate for the potential technical solutions to the problem at hand. One particular challenge we have encountered numerous times is that the data is in the format of PDF files. PDF is an excellent output format for rendering contents on screen or in print, but it is a terrible input format for data-driven automated processes (see, e.g., Panait (2020) for examples).

#### 5.3.2 Questions related to Band B

Band B concerns the validity of data. In pursuing projects with external parties, we have so far seen fairly few issues having to do with the validity of data. In essence, Band B is about trusting that the data format is what you expect it to be.

* Q6 Are the characteristics of the data known? Are the typical traits and features of the data known? Perform an exploratory data analysis, and run it by all stakeholders in the project. Make sure to exemplify typical and extreme values in the data, and encourage the project participants to manually look into the data.
* Q7 Is the data validated? Ensure that the traits and features of the data make sense, and that, e.g., records are deduplicated, noise is catered for, and null values are taken care of.

#### 5.3.3 Questions related to Band A

Band A concerns the utility of data. As such, it is tightly coupled to the objective of the project. In our experience, this is the most elusive data readiness level in that it requires attention every time the goal of a project changes.

* Q8 Do stakeholders agree on the objective of the current use case? What problem are you trying to solve? The problem formulation should be intimately tied to a tangible business value or research hypothesis. When specifying the problem, make sure to focus on the actual need instead of a potentially interesting technology. The characteristics of the problem dictate the requirements on the data.
Thus, the specification is crucial for understanding the requirements on the data in terms of, e.g., training data, and the need for manual labelling of evaluation or validation data. Only when you know the characteristics of the data will it be possible to come up with a candidate technological approach to solve the problem.

* Q9 Is the purpose of using the data clear to all stakeholders? Ensure that all people involved in the project understand the role and importance of the data to be used. This is to solidify the efforts made by the people responsible for relevant data sources to produce data that is appropriate for the project’s objective and the potential technical solution to address it.
* Q10 Is the data sufficient for the current use case? Given the insight into what data is available, consider the questions: What data is needed to solve the problem? Is that a subset of the data that is already available? If not: is there a way of getting all the data needed? If there is a discrepancy between the data available and the data required to solve the problem, that discrepancy has to be mitigated. If it is not possible to align the data available with what is needed, then this is a cue to go back to the drawing board and either iterate on the problem specification, or collect suitable data.
* Q11 Are the steps required to evaluate a potential solution clear? How do you know if you have succeeded? The type of data required to evaluate a solution is often tightly connected to the way the solution is implemented: if the solution is based on supervised machine learning, i.e., requiring labelled examples, then the evaluation of the solution will also require labelled data. If the solution depends on labelled training data, the process of annotation usually also results in the appropriate evaluation data.
Any annotation effort should take into account the quality of the annotations, e.g., the inter-annotator agreement; temporal aspects of the data characteristics, e.g., information on when we need to obtain newly annotated data to mitigate model drift; and the representativity of the data. Tseng et al. (2020) provide a comprehensive set of best practices for managing annotation projects.

* Q12 Is your organization prepared to handle more data like this beyond the scope of the project? Even if the data processing in your organization is not perfect with respect to the requirements of machine learning, each project you pursue is an opportunity to articulate improvements to your organization’s data storage processes. Ask yourself the questions: How does my organization store incoming data? Is that process a good fit for automatic processing of the data in the context of an NLP project, that is, is the data stored in a format that brings it beyond Band C (accessibility) of the Data Readiness Levels? If not, what changes would need to be made to make the storage better?
* Q13 Is the data secured? Ensure that the data used in the project is secured in such a way that it is only accessible to the right people, and thus not accessible by unauthorized users. Depending on the sensitivity of the project, and thus the data, there might be a need to classify the data according to the security standards of your organization (e.g., ISO 27001), and implement the appropriate mechanisms to protect the data and project outcome.
* Q14 Is it safe to share the data with others? In case the project aims to share its data with others, the risks of leaking sensitive data about, e.g., your organization’s business plans or abilities have to be addressed prior to sharing it.
* Q15 Are you allowed to share the data with others? In case the project wishes to share its data, make sure you are allowed to do so according to the licenses, laws, and ethics previously addressed in the project.
## 6 Example application of the method

For the purpose of exemplifying the use of the method described above, consider the fictitious case of project Project, an undertaking of a large organization with good experience of running conventional ICT projects, but little knowledge about data-driven NLP-based analysis. The actual subject matter and scope of Project is not important.

Figure 3: The figure illustrates the state of data readiness at the beginning of the fictitious project.

### 6.1 First application of the method

When the project is initiated, the project manager involves its members in a session in which they recognize all fifteen questions as relevant for the project’s objectives, then discuss each question and agree on the appropriate answer. When the session is over, the project manager plots the responses in a radar chart, as displayed in Figure 3, in such a way that each of the questions answered is present in the chart, starting with Q1 (Programmatic access to data) at the top, then progressing clock-wise with each question.555The code for generating radar charts as part of assessing the data readiness of your own project is available here: https://github.com/fredriko/draviz The responses are plotted such that Don’t know (the worst answer) is located at the center of the chart, while Yes (the best answer) is closest to the chart’s edge. The aim of the assessment method is for the surface constituted by the enclosed responses to cover as large an area as possible.
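The mapping from answers to chart positions, and the enclosed area that summarizes a project's readiness, can be sketched in plain Python. This is a minimal illustration of the scheme just described, independent of the draviz code; the 0–3 answer scores are an assumed encoding.

```python
import math

# Assumed encoding of the four coarse-grained answers: "Don't know"
# sits at the chart's center (0) and "Yes" at its edge (3).
SCORES = {"Don't know": 0, "No": 1, "Partially": 2, "Yes": 3}

def radar_points(answers):
    """Place one point per question, Q1 at the top, progressing clockwise."""
    n = len(answers)
    points = []
    for i, answer in enumerate(answers):
        angle = math.pi / 2 - 2 * math.pi * i / n  # start at the top, go clockwise
        r = SCORES[answer]
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

def enclosed_area(points):
    """Shoelace formula for the area enclosed by the plotted responses."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

An all-Yes project attains the area of a regular 15-gon of radius 3, while a project full of Don't know answers collapses to zero; plotting the points on, e.g., matplotlib's polar axes reproduces the radar chart itself.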
The reasons are simple: first, all stakeholders, in particular those in executive positions, will gain immediate visual insight into the state of the data for the project and, hopefully, feel the urge to act to increase the area; second, it is easy to visualize the project’s progress at discrete points in time by overlaying two (or more) radar charts.666Overlaying radar charts with the purpose of comparing them only works when a small number of charts are involved; we are experimenting with parallel plots when the number of distinct charts exceeds 3.

From Figure 3, it can be seen that Project, at its inception, is not very mature with respect to data readiness. The area covered by the enclosed responses is small, and the number of unknowns is large. The only certainties resulting from the initial assessment of Project are that: there has been no ethical assessment made of the data; the data has not been converted to a suitable format; no characteristics of the data are known; the data has not been validated; there is not sufficient data for the use case; and the way to evaluate the success of the project has yet to be defined. On the bright side, the stakeholder partially agrees on the objective of the project, and the purpose of the data.

### 6.2 Second application of the method

Fast forward to the second data readiness assessment of Project. In this case, it is scheduled to take place prior to the project embarking on the first rounds of empirical investigations of the state of data in relation to the project’s business objective. The purpose of looking into the data readiness of the project at this stage is to support the project manager in their work regarding prioritization, management of resources, and handling of expectations in relation to the project’s progression and ability to reach its goals.

Figure 4: The figure shows the corresponding state at the time where the project is ready to start making experiments based on the data.
Again, after all stakeholders have agreed on responses to the questions of the method, they are plotted in a radar chart. Figure 4 shows the state of the project after the second data readiness assessment. Progress has been made; the area covered by the responses is larger than it was at the initial assessment (Figure 3). There are no unknowns left among the responses. Data is available and converted to a suitable format, its characteristics are known, and the data format is generally trusted within the project. The fact that licenses, legal aspects of access, and ethics are not quite there yet does not, technically speaking, prohibit the project from moving on with the empirical investigation. However, these issues should be properly addressed before the project results are deployed to a general audience. The stakeholders are still not in full agreement on the project’s business objective, but they are aware of the purpose of the data, which has been deemed sufficient for the use case. Given the uncertainty with respect to the business objective, the steps required to evaluate proposed solutions are also unclear. Beyond the scope of the project, the organization is not yet set up in a way that is required to repeat and reproduce the findings of Project on future data, and data security is still a work in progress. The project is allowed to share the data if it wishes to do so, but since management has decided to play it safe with respect to giving away too much information regarding the organization’s future plans, it has been decided that data should not be shared with external parties.

## 7 Conclusions

Research in NLP has never been more accessible; the impact of new results has the potential to reach far beyond the academic sphere. But with great power comes great responsibility. How can we foster a better uptake of research among public agencies and in industry, and thereby gain valuable insight into the research directions that really matter?
We introduce a method for assessing the Data Readiness Levels of a project, consisting of fifteen questions, and the accompanying means for visualizing the responses. We have utilized the proposed method and visualization technique in several projects with stakeholders in both the private and public sectors, and it has proven to be a very useful tool to improve the potential for successful application of NLP solutions to concrete business problems. The method is a work in progress, and we thus invite researchers and practitioners in the NLP community to share their experience with respect to applied NLP research and data readiness at the following GitHub repository: https://github.com/fredriko/nlp-data-readiness.

## References

* Afzal et al. (2020) Shazia Afzal, Rajmohan C, Manish Kesarwani, Sameep Mehta, and Hima Patel. 2020. Data Readiness Report. _arXiv:2010.07213 [cs]_. ArXiv: 2010.07213.
* Arnold et al. (2019) Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, Karthikeyan Natesan Ramamurthy, Darrell Reimer, Alexandra Olteanu, David Piorkowski, Jason Tsay, and Kush R. Varshney. 2019. FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity. _arXiv:1808.07261 [cs]_. ArXiv: 1808.07261.
* Austin (2018) C. C. Austin. 2018. A Path to Big Data Readiness. In _2018 IEEE International Conference on Big Data (Big Data)_, pages 4844–4853.
* Banke (2017) Jim Banke. 2017. Technology Readiness Levels Demystified. URL: https://www.nasa.gov/topics/aeronautics/features/trl_demystified.html. Accessed: 2021-01-11.
* Bender and Friedman (2018) Emily M. Bender and Batya Friedman. 2018. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. _Transactions of the Association for Computational Linguistics_, 6:587–604.
* Ein-Dor et al.
(2020) Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 7949–7962, Online. Association for Computational Linguistics.
* El-Kishky et al. (2020) Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 5960–5969, Online. Association for Computational Linguistics.
* Gebru et al. (2020) Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2020. Datasheets for Datasets. _arXiv:1803.09010 [cs]_. ArXiv: 1803.09010.
* Harvey and Glocker (2019) Hugh Harvey and Ben Glocker. 2019. A standardised approach for preparing imaging data for machine learning tasks in radiology. In _Artificial Intelligence in Medical Imaging_, pages 61–72. Springer.
* Holland et al. (2018) Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2018. The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards. _arXiv:1805.03677 [cs]_. ArXiv: 1805.03677.
* Lawrence (2017) Neil D Lawrence. 2017. Data readiness levels. _arXiv preprint arXiv:1705.02245_.
* Liang et al. (2020) Weixin Liang, James Zou, and Zhou Yu. 2020. ALICE: Active Learning with Contrastive Natural Language Explanations. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 4380–4391, Online. Association for Computational Linguistics.
* Mackenzie et al. (2020) Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R. Trippas, J. Shane Culpepper, and Alistair Moffat. 2020. CC-news-en: A large english news corpus.
In _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_, CIKM ’20, pages 3077–3084, New York, NY, USA. Association for Computing Machinery.
* Paleyes et al. (2020) Andrei Paleyes, Raoul-Gabriel Urma, and Neil D. Lawrence. 2020. Challenges in Deploying Machine Learning: a Survey of Case Studies. _arXiv:2011.09926 [cs]_. ArXiv: 2011.09926.
* Panait (2020) Bogdan Panait. 2020. What’s so hard about PDF text extraction? URL: https://filingdb.com/b/pdf-text-extraction. Accessed: 2021-05-11.
* Pelicon et al. (2020) Andraž Pelicon, Marko Pranjić, Dragana Miljković, Blaž Škrlj, and Senja Pollak. 2020. Zero-Shot Learning for Cross-Lingual News Sentiment Classification. _Applied Sciences_, 10(17):5993. Number: 17 Publisher: Multidisciplinary Digital Publishing Institute.
* Polyzotis et al. (2018) Neoklis Polyzotis, Sudip Roy, Steven Euijong Whang, and Martin Zinkevich. 2018. Data Lifecycle Challenges in Production Machine Learning: A Survey. _SIGMOD Record_, 47(2):12.
* Settles (2012) Burr Settles. 2012. _Active Learning_, volume 2012. Morgan & Claypool.
* Siddhant and Lipton (2018) Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 2904–2909, Brussels, Belgium. Association for Computational Linguistics.
* Srivastava et al. (2018) Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot Learning of Classifiers from Natural Language Quantification. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 306–316, Melbourne, Australia. Association for Computational Linguistics.
* Tseng et al. (2020) Tina Tseng, Amanda Stent, and Domenic Maida. 2020. Best Practices for Managing Data Annotation Projects. _arXiv:2009.11654 [cs]_. ArXiv: 2009.11654.
* van Ooijen (2019) Peter MA van Ooijen. 2019. Quality and curation of medical images and data. In _Artificial Intelligence in Medical Imaging_, pages 247–255. Springer.
* Wenzek et al. (2020) Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data. In _Proceedings of the 12th Language Resources and Evaluation Conference_, pages 4003–4012, Marseille, France. European Language Resources Association.
* Ye et al. (2020) Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot Text Classification via Reinforced Self-training. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 3014–3024, Online. Association for Computational Linguistics.
# Two-Sample Testing in Reinforcement Learning

Martin Waltz Chair of Econometrics and Statistics, esp. in the Transport Sector, Technische Universität Dresden, 01062 Dresden, Germany

Ostap Okhrin Chair of Econometrics and Statistics, esp. in the Transport Sector, Technische Universität Dresden, 01062 Dresden, Germany

###### Abstract

Value-based reinforcement-learning algorithms have shown strong performances in games, robotics, and other real-world applications. The most popular sample-based method is $Q$-Learning. It iteratively performs updates by adjusting the current $Q$-estimate towards the observed reward and the maximum of the $Q$-estimates of the next state. The procedure introduces maximization bias, which approaches like Double $Q$-Learning address. We frame the bias problem statistically and consider it an instance of estimating the maximum expected value (MEV) of a set of random variables. We propose the $T$-Estimator (TE), based on two-sample testing for the mean, that flexibly interpolates between over- and underestimation by adjusting the significance level of the underlying hypothesis tests. A generalization, termed $K$-Estimator (KE), obeys the same bias and variance bounds as the TE while relying on a nearly arbitrary kernel function. We introduce modifications of $Q$-Learning and the Bootstrapped Deep $Q$-Network (BDQN) using the TE and the KE. Furthermore, we propose an adaptive variant of the TE-based BDQN that dynamically adjusts the significance level to minimize the absolute estimation bias. All proposed estimators and algorithms are thoroughly tested and validated on diverse tasks and environments, illustrating the bias control and performance potential of the TE and KE.
Keywords: maximum expected value, two-sample testing, reinforcement learning, $Q$-learning, estimation bias

## 1 Introduction

Estimating the maximum expected value (MEV) of a set of random variables is a long-standing statistical problem, including early contributions of Blumenthal and Cohen (1968), Dudewicz (1971), and Dhariyal et al. (1985). These works show that for various true underlying distributions an unbiased estimator does not exist. The problem has recently attained increased attention since it also arises in modern machine learning domains, most notably in reinforcement learning (RL). RL aims at finding a policy - a mapping from states to actions - that maximizes a numerical reward signal (Sutton and Barto, 2018). Frequently used approaches define a policy-dependent action-value, also called $Q$-value, for each state-action pair. This value represents the expected sum of discounted rewards when executing the given action in the given state and following a specific policy afterward. In particular, the update rule of the $Q$-Learning (Watkins and Dayan, 1992) algorithm is based on adjusting the $Q$-estimate for a given state-action pair towards the observed reward and the maximum of the estimated $Q$-values of the next state. However, this use of the Maximum Estimator (ME) of the MEV leads to overestimations of action-values, which are transmitted throughout the update routine (Van Hasselt, 2010). These can damage the learning performance or even lead to failure of the algorithm (Thrun and Schwartz, 1993), especially when function approximation is used (Van Hasselt et al., 2016). Van Hasselt (2010) proposed the Double Estimator (DE), which splits the data into independent sets, thereby separating the selection and evaluation of a maximizing value. The corresponding Double $Q$-Learning is a popular choice among practitioners.
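The $Q$-Learning update rule described above can be sketched in a few lines of tabular Python; the dictionary layout, learning rate, and discount factor are illustrative choices, not the paper's implementation.

```python
def q_learning_update(Q, s, a, r, s_next, lr=0.1, gamma=0.99):
    """One tabular Q-Learning step: move Q(s, a) towards the target
    r + gamma * max_a' Q(s', a'). The max over the next state's
    estimates is the Maximum Estimator, the source of overestimation."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += lr * (target - Q[s][a])
    return Q[s][a]

# Tiny two-state example: action 0 in state 0 yields reward 1 and
# lands in state 1, whose best current estimate is 2.0.
Q = {0: {0: 0.0, 1: 0.0}, 1: {0: 1.0, 1: 2.0}}
q_learning_update(Q, s=0, a=0, r=1.0, s_next=1, lr=0.5, gamma=0.5)
```

With these numbers the target is 1 + 0.5 * 2 = 2, so Q(0, 0) moves halfway from 0 towards 2.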
Although the DE introduces underestimation bias, Double $Q$-Learning offers improved robustness and strong performances, especially in highly stochastic environments. Another crucial contribution is D’Eramo et al. (2016), in which the Weighted Estimator (WE) alongside Weighted $Q$-Learning is proposed. From a bias perspective, the estimator builds a compromise between the overestimating ME and the underestimating DE. However, the WE does not offer additional flexibility in selecting the level of bias and is computationally demanding since it requires numerical integration or Monte Carlo approximation. Notably, those estimators led to modifications of the Deep $Q$-Networks (DQN, Mnih et al. 2015), which expanded $Q$-Learning to the deep neural network (DNN) setting and paved the path for the striking success of RL in recent years (Silver et al., 2017; Vinyals et al., 2019). D’Eramo et al. (2016), Lan et al. (2020), among others, have shown that both over- and underestimation of $Q$-values might not always be harmful, depending on, e.g., the stochasticity of the environment, the difference of the action-values, the size of the action space, or the time horizon. We argue that a competitive estimator of the MEV should thus be able to interpolate between over- and underestimation via an interpretable hyperparameter, enabling it to deal with a diverse set of environments. Furthermore, the estimator should obey a variance bound and, for practical application, should be fast and stable to compute. Fulfilling these criteria, we propose an estimator based on two-sample testing for the mean, named $T$-Estimator (TE). The idea is to get a statistically significant statement of whether one mean is truly larger than others. Consequently, the hyperparameter is the level of significance $\alpha$, a familiar quantity for researchers and practitioners from diverse application domains. The ME is shown to be a special case of the TE with $\alpha=0.5$.
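The two-sample test at the heart of the TE compares the sample means of candidate variables. A minimal sketch of the standard Welch statistic, which such a test is typically built on (the full TE construction is not part of this excerpt and is not reproduced here), could look as follows:

```python
import math
import statistics

def welch_statistic(x, y):
    """Standard Welch two-sample t statistic for H0: E(X) = E(Y),
    using unbiased sample means and variances."""
    m_x, m_y = statistics.mean(x), statistics.mean(y)
    v_x, v_y = statistics.variance(x), statistics.variance(y)
    return (m_x - m_y) / math.sqrt(v_x / len(x) + v_y / len(y))
```

Deciding at level $\alpha$ whether one mean is significantly larger than another then amounts to comparing this statistic against the corresponding quantile of the reference distribution.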
Building on the two-sample test statistic, we further consider a generalization termed $K$-Estimator (KE), which is characterized by a suitable kernel function and can smooth the discontinuities around the testing decisions of the TE. We theoretically and empirically analyze the TE and KE regarding their biases and variances, for which general sharp bounds are derived. Using those newly defined estimators, we propose RL algorithms for the table-based case and with DNNs as function approximators. Since the two-sample testing procedure incorporates variance estimates of the involved variables, we employ an online variance update routine (D’Eramo et al., 2019) in the table-based scenario, and the framework of the Bootstrapped DQN (BDQN, Osband et al. 2016) in the DNN setting. The empirical evidence that over- and underestimation of action-values is not necessarily detrimental to learning performance might be explained by the connection of $Q$-estimates to the exploration procedure of algorithms (Fox et al., 2016). However, Fox (2019) and Liang et al. (2021) argue that these topics should be addressed separately by focusing firstly on unbiased value-estimation and secondly on improved exploration schemes. We acknowledge this perspective by additionally proposing an adaptive tuning mechanism for the significance level $\alpha$ of the TE in the DNN setting with the objective of minimizing the absolute estimation bias.111We are thankful to two anonymous reviewers that pushed us to think in this direction with their valuable comments. The approach complements recent proposals of Dorka et al. (2021) and Wang et al. (2021). The dynamic adjustment of $\alpha$ is realized by running partial greedy episodes and comparing $n$-step returns (Sutton and Barto, 2018) with the action-value estimates for the visited state-action pairs. Furthermore, through learning $\alpha$, we avoid the computationally demanding tuning process of this environment-specific hyperparameter.
Finally, we demonstrate the performance potential of all newly proposed estimators and algorithms by extensively testing them in various tasks and environments, with and without function approximation. The paper is organized as follows: Section 2 formalizes the problem of estimating the MEV. Section 3 details the proposed estimators and analyzes them with and without fulfilling the assumption of independently and identically distributed (iid) data. Section 4 introduces the RL setup and presents the new temporal-difference algorithms, while Section 5 details the measurement of estimation bias and introduces the adaptive update mechanism of $\alpha$. The experiments are shown and thoroughly discussed in Section 6, with the code being available at: https://github.com/MarWaltz/TUD_RL. Section 7 reviews further related literature, and Section 8 concludes.

## 2 Estimating the Maximum Expected Value

### 2.1 Problem Definition

Let us consider $M\geq 2$ independent random variables $X_{1},\ldots,X_{M}$ with finite expectations $\mu_{1}=\operatorname{E}(X_{1}),\ldots,\mu_{M}=\operatorname{E}(X_{M})$ and variances $\sigma_{1}^{2}=\operatorname{Var}(X_{1}),\ldots,\sigma_{M}^{2}=\operatorname{Var}(X_{M})$. The corresponding probability density functions (pdfs) and cumulative distribution functions (cdfs) are denoted $f_{X_{1}},\ldots,f_{X_{M}}$ and $F_{X_{1}},\ldots,F_{X_{M}}$, respectively. The quantity of interest is the _maximum expected value_ : $\mu_{*}=\max_{i}\mu_{i}$. Estimation is performed based on samples $S=\\{S_{1},\ldots,S_{M}\\}$ without knowing moments or imposing distributional assumptions. The realizations in a sample $S_{i}$ are assumed to be iid. The unbiased sample mean of $S_{i}$ is denoted $\hat{\mu}_{i}(S_{i})$, while an estimator of the MEV is referred to as $\hat{\mu}_{*}(S)$.
Throughout the paper, we frequently abbreviate these notations via $\hat{\mu}_{i}=\hat{\mu}_{i}(S_{i})$, $\hat{\mu}_{*}=\hat{\mu}_{*}(S)$, and similar, for conciseness. Primary evaluation criteria of an estimator are its bias $\operatorname{Bias}(\hat{\mu}_{*})=\operatorname{E}(\hat{\mu}_{*})-\mu_{*}$, and variance $\operatorname{Var}(\hat{\mu}_{*})=\operatorname{E}\left\\{[\hat{\mu}_{*}-\operatorname{E}(\hat{\mu}_{*})]^{2}\right\\}$. These can be aggregated to the mean squared error $\operatorname{MSE}(\hat{\mu}_{*})=\operatorname{Bias}(\hat{\mu}_{*})^{2}+\operatorname{Var}(\hat{\mu}_{*})$. ### 2.2 Maximum Estimator The ME $\hat{\mu}^{ME}_{*}$ is the classic approach and takes the maximum of unbiased mean estimates: $\hat{\mu}^{ME}_{*}=\max_{i}\hat{\mu}_{i}.$ Denoting the pdf of $\hat{\mu}_{i}$ as $\hat{f}_{i}$ and the corresponding cdf as $\hat{F}_{i}$, it holds: $\operatorname{E}\left(\hat{\mu}^{ME}_{*}\right)=\sum_{i=1}^{M}\int_{-\infty}^{\infty}x\hat{f}_{i}(x)\prod\limits_{\begin{subarray}{c}j=1\\\ j\neq i\end{subarray}}^{M}\hat{F}_{j}(x)dx.$ (1) The ME is positively biased: $\operatorname{E}\left(\hat{\mu}^{ME}_{*}\right)\geq\mu_{*}$, see Van Hasselt (2013). More precisely, following Aven (1985), a general upper bound for the bias can be given: $0\leq\operatorname{Bias}\left(\hat{\mu}^{ME}_{*}\right)\leq\sqrt{\frac{M-1}{M}\sum_{i=1}^{M}\operatorname{Var}\left(\hat{\mu}_{i}\right)}.$ (2) The bias is particularly large when $\mu_{1}\approx\ldots\approx\mu_{M}$. Furthermore, it can be shown that the variance of $\hat{\mu}^{ME}_{*}$ is bounded from above: $\operatorname{Var}\left(\hat{\mu}^{ME}_{*}\right)\leq\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$, where $|S_{i}|$ is the sample size of $S_{i}$, see Van Hasselt (2013). ### 2.3 Double Estimator Van Hasselt (2010) introduced the DE, which is thoroughly analyzed in Van Hasselt (2013). 
The key idea is to separate the selection of the maximizing random variable from the evaluation of its sample mean, which are performed simultaneously in the ME. The DE splits $S$ randomly into disjoint subsets $S^{A}=\\{S^{A}_{1},\ldots,S^{A}_{M}\\}$ and $S^{B}=\\{S^{B}_{1},\ldots,S^{B}_{M}\\}$, guaranteeing that means based on the subsets are still unbiased. Afterwards, one selects an index which maximizes the sample mean in $S^{A}$: $a^{*}\in\\{i\mid\hat{\mu}_{i}(S^{A})=\max_{j}\hat{\mu}_{j}(S^{A})\\}$. The DE is defined by evaluating $a^{*}$ on $S^{B}$: $\hat{\mu}^{DE}_{*}(S)=\hat{\mu}_{a^{*}}(S^{B})$. Consequently, one can perform the same procedure with $S^{A}$ and $S^{B}$ switched to obtain a second DE estimate. Averaging both DE estimates yields the 2-fold Cross-Validation Estimator (CVE) $\hat{\mu}^{CVE}_{*}$, which has a reduced variance in comparison to a single DE estimate. The expectations of the DE and the CVE are equal since both DE estimates (for $S^{A}$ and $S^{B}$) have identical expectations: $\displaystyle\operatorname{E}\left(\hat{\mu}^{CVE}_{*}\right)=\operatorname{E}\left(\hat{\mu}^{DE}_{*}\right)$ $\displaystyle=\sum_{i=1}^{M}\operatorname{E}\left[\hat{\mu}_{i}(S^{B})\right]P(i=a^{*})$ $\displaystyle=\sum_{i=1}^{M}\operatorname{E}\left[\hat{\mu}_{i}(S^{B})\right]\int_{-\infty}^{\infty}\hat{f}_{i}^{A}(x)\prod\limits_{\begin{subarray}{c}j=1\\\ j\neq i\end{subarray}}^{M}\hat{F}_{j}^{A}(x)dx,$ (3) where $\hat{f}_{i}^{A}$ and $\hat{F}_{i}^{A}$ are the pdf and cdf of $\hat{\mu}_{i}(S^{A})$, respectively. Comparing (3) with (1), the key difference is that in (1) the value $x$ is integrated jointly with the selection probability, while the corresponding term $\operatorname{E}[\hat{\mu}_{i}(S^{B})]$ in (3) lies outside the integral, so that selection probability and value are independent. Van Hasselt (2010) showed that the DE is prone to underestimation: $\operatorname{E}\left(\hat{\mu}^{DE}_{*}\right)\leq\mu_{*}$, because it might attribute non-zero selection probability to non-maximum variables.
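As an illustration, the opposite biases of the ME and the DE/CVE can be checked with a short simulation. The following is a minimal sketch under Gaussian sampling; the parameter values, function names, and the number of repetitions are illustrative, not those of the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def me(samples):
    """Maximum Estimator: maximum of the per-variable sample means."""
    return max(s.mean() for s in samples)

def cve(samples, rng):
    """2-fold Cross-Validation Estimator: average of the two DE estimates,
    each selecting the maximizing index on one split and evaluating
    its sample mean on the other split."""
    A, B = [], []
    for s in samples:
        idx = rng.permutation(len(s))
        A.append(s[idx[: len(s) // 2]])
        B.append(s[idx[len(s) // 2:]])
    a_star = int(np.argmax([a.mean() for a in A]))
    b_star = int(np.argmax([b.mean() for b in B]))
    return 0.5 * (B[a_star].mean() + A[b_star].mean())

# M = 4 variables; true MEV is mu_1 = 1, close to the others (a hard case)
mus, sigma, n = [1.0, 0.0, 0.0, 0.0], 10.0, 50
me_est, cve_est = [], []
for _ in range(10_000):
    samples = [rng.normal(m, sigma, size=n) for m in mus]
    me_est.append(me(samples))
    cve_est.append(cve(samples, rng))

me_bias = np.mean(me_est) - 1.0    # positive: the ME overestimates
cve_bias = np.mean(cve_est) - 1.0  # negative: the DE/CVE underestimates
```

In this setting the ME bias stays below the Aven (1985) bound of $\sqrt{\frac{M-1}{M}\sum_{i}\operatorname{Var}(\hat{\mu}_{i})}\approx 2.45$, while the CVE lands below the true MEV.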
Furthermore, Van Hasselt (2013) conjectures the following lower bound for the bias: $-\frac{1}{2}\left(\sqrt{\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}^{A}|}}+\sqrt{\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}^{B}|}}\right)<\operatorname{Bias}(\hat{\mu}^{DE}_{*})\leq 0,$ while the variance of the CVE is shown to be bounded in the same way as that of the ME: $\operatorname{Var}\left(\hat{\mu}^{CVE}_{*}\right)\leq\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$. Note that the variance of the CVE is not necessarily half the variance of the DE, as there is non-zero covariance between the two DE estimates; see the example in Appendix A. Throughout the experiments, we follow D’Eramo et al. (2021) and use the CVE instead of the DE whenever possible.

### 2.4 Weighted Estimator

D’Eramo et al. (2016) and D’Eramo et al. (2021) introduced the Weighted Estimator (WE) for the MEV, which is a weighted mean of all sample averages. Each weight corresponds to the probability of $\hat{\mu}_{i}$ being larger than all other means: $\hat{\mu}^{WE}_{*}=\sum_{i=1}^{M}w_{i}\hat{\mu}_{i}=\sum_{i=1}^{M}P\left(\hat{\mu}_{i}=\max_{j}\hat{\mu}_{j}\right)\hat{\mu}_{i}.$ Since the probabilities depend on the unknown mean distributions $\hat{f}_{i}$, the authors propose a Gaussian approximation based on the central limit theorem: $\hat{\mu}^{WE}_{*}=\sum_{i=1}^{M}\hat{\mu}_{i}\int_{-\infty}^{\infty}\tilde{f}_{i}(x)\prod\limits_{\begin{subarray}{c}j=1\\\ j\neq i\end{subarray}}^{M}\tilde{F}_{j}(x)dx,$ (4) where $\tilde{f}_{i}$ is the Gaussian pdf with mean $\hat{\mu}_{i}$ and variance $\frac{\hat{\sigma}_{i}^{2}}{|S_{i}|}$. The unbiased estimate of ${\sigma}_{i}^{2}$ is denoted $\hat{\sigma}_{i}^{2}$ and $|S_{i}|$ refers to the sample size.
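The weights in (4) can alternatively be approximated by sampling from the Gaussian approximations of the sample means rather than integrating. A minimal Monte Carlo sketch of this idea follows; the function name, the Monte Carlo sample size, and the toy data are illustrative:

```python
import numpy as np

def we_monte_carlo(samples, n_mc=100, rng=None):
    """Weighted Estimator via Monte Carlo: approximate the weights
    w_i = P(mu_hat_i is the maximum) by drawing from the Gaussian
    approximation N(mu_hat_i, sigma_hat_i^2 / |S_i|) of each mean."""
    rng = rng or np.random.default_rng()
    mu = np.array([s.mean() for s in samples])
    se = np.array([np.sqrt(s.var(ddof=1) / len(s)) for s in samples])
    draws = rng.normal(mu, se, size=(n_mc, len(samples)))   # n_mc x M
    # relative frequency of each index being the argmax across draws
    w = np.bincount(draws.argmax(axis=1), minlength=len(samples)) / n_mc
    return float(w @ mu)

rng = np.random.default_rng(1)
samples = [rng.normal(m, 10.0, size=100) for m in (1.0, 0.0, 0.0)]
we = we_monte_carlo(samples, rng=rng)
```

Since the weights sum to one, the resulting estimate is always a convex combination of the sample means.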
Crucially, the bias of the WE is bounded by those of the ME and DE: $\operatorname{Bias}(\hat{\mu}^{DE}_{*})\leq\operatorname{Bias}(\hat{\mu}^{WE}_{*})\leq\operatorname{Bias}(\hat{\mu}^{ME}_{*}),$ while it exhibits the same variance bound: $\operatorname{Var}\left(\hat{\mu}^{WE}_{*}\right)\leq\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$. Thus, the bias of the WE might be positive or negative, depending on the distribution of the random variables. A drawback of this estimator is its increased computation time, since calculating the integrals in (4) is demanding. To tackle this issue, D’Eramo et al. (2021) propose to use Monte Carlo approximations instead, and we follow this approach when computing the WE in the experiments. The Monte Carlo sample sizes in those cases are set to 100.

## 3 Two-Sample Testing-based Estimators

### 3.1 T-Estimator

To create a flexible estimator which

(a) is able to interpolate between over- and underestimation,
(b) obeys a variance bound similar to the ME,
(c) has an interpretable hyperparameter, and
(d) is fast and easy to compute,

we propose a procedure based on one-sided two-sample testing for the mean. Generally, for two random variables $X_{1},X_{2}$, we consider the hypothesis $H_{0}:\mu_{1}\geq\mu_{2}$. The test statistic is constructed as follows (Wackerly et al., 2008): $T=\frac{\hat{\mu}_{1}-\hat{\mu}_{2}}{\sqrt{\frac{\hat{\sigma}^{2}_{1}}{|S_{1}|}+\frac{\hat{\sigma}^{2}_{2}}{|S_{2}|}}},$ where $\hat{\sigma}^{2}_{1}$ and $\hat{\sigma}^{2}_{2}$ are unbiased estimates of the variances $\sigma^{2}_{1}$ and $\sigma^{2}_{2}$ of $X_{1}$ and $X_{2}$, respectively. $|S_{1}|$ and $|S_{2}|$ denote the corresponding sample sizes. If the realization of $T$ is smaller than the $\alpha$-quantile of the standard normal distribution, $z_{\alpha}=\Phi^{-1}(\alpha)$, the hypothesis $H_{0}$ is rejected.
The use of the normal distribution as the asymptotic distribution of the test statistic for $H_{0}$ can be justified via the central limit theorem, since it holds $\sqrt{|S_{i}|}\frac{\hat{\mu}_{i}-\mu_{i}}{\sigma_{i}}\xrightarrow[|S_{i}|\rightarrow\infty]{d}\mathcal{N}(0,1)$ for $i=1,2$, and via Slutsky’s theorem (Casella and Berger, 2002), as $\hat{\sigma}_{i}$ converges almost surely to $\sigma_{i}$. Based on this test, the following procedure for estimating the MEV is proposed: First, we consider the complete set of indices $\mathcal{L}=\\{1,\ldots,M\\}$ and select an index that corresponds to a variable with the value of the ME: $i^{*}\in\\{i\mid\hat{\mu}_{i}=\max_{j}\hat{\mu}_{j}\\}$. Second, we test for all $i\in\mathcal{L}$ the $H_{0}$: $\mu_{i}\geq\mu_{i^{*}}$. If $H_{0}$ is rejected for some $i^{\prime}$, we assert $\mu_{i^{\prime}}<\mu_{i^{*}}$ and remove variable index $i^{\prime}$ from the index set: $\mathcal{L}\leftarrow\mathcal{L}\setminus\\{i^{\prime}\\}$. Third, we average the remaining $\\{\hat{\mu}_{i}\mid i\in\mathcal{L}\\}$. Compactly written: $\hat{\mu}^{TE}_{*}(\alpha)=\left[\sum_{i=1}^{M}\mathcal{I}\left(\frac{\hat{\mu}_{i}-\hat{\mu}^{ME}_{*}}{\sqrt{\frac{\hat{\sigma}^{2}_{i}}{|S_{i}|}+\frac{\hat{\sigma}^{2}_{i^{*}}}{|S_{i^{*}}|}}}\geq z_{\alpha}\right)\right]^{-1}\sum_{i=1}^{M}\mathcal{I}\left(\frac{\hat{\mu}_{i}-\hat{\mu}^{ME}_{*}}{\sqrt{\frac{\hat{\sigma}^{2}_{i}}{|S_{i}|}+\frac{\hat{\sigma}^{2}_{i^{*}}}{|S_{i^{*}}|}}}\geq z_{\alpha}\right)\hat{\mu}_{i},$ (5) where $\mathcal{I}(\cdot)$ is the indicator function. We refer to (5) as _T-Estimator_ (TE). In simple words: the TE averages the sample means of all variables that are not statistically smaller than that of the ME. Consequently, the selection is a binary decision of rejection or non-rejection of the underlying hypothesis. A key aspect of the TE is the consideration of the values of the sample means together with their uncertainties, expressed by variances.
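The TE of (5) can be sketched in a few lines, using the standard-library normal quantile for $z_{\alpha}$. Variable names and the toy data are illustrative:

```python
import numpy as np
from statistics import NormalDist

def t_estimator(samples, alpha=0.10):
    """T-Estimator: average all sample means whose one-sided test
    against the maximum sample mean is not rejected at level alpha."""
    mu = np.array([s.mean() for s in samples])
    var_mean = np.array([s.var(ddof=1) / len(s) for s in samples])
    i_star = int(np.argmax(mu))                    # index of the ME
    z_alpha = NormalDist().inv_cdf(alpha)          # negative for alpha < 0.5
    T = (mu - mu[i_star]) / np.sqrt(var_mean + var_mean[i_star])
    keep = T >= z_alpha                            # non-rejected hypotheses
    return float(mu[keep].mean())

rng = np.random.default_rng(2)
samples = [rng.normal(0.0, 10.0, size=100) for _ in range(5)]
te = t_estimator(samples, alpha=0.10)
me = max(s.mean() for s in samples)
```

For `alpha=0.5` we have $z_{0.5}=0$, so only the maximizing index survives the test and the ME is recovered; for smaller `alpha` more sample means enter the average and the estimate decreases toward the ME or below it.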
Asymptotically, meaning $|S_{i}|\rightarrow\infty$ for $i=1,\ldots,M$, the TE follows a normal distribution since it is an average of asymptotically normally distributed variables. The hyperparameter is the significance level $\alpha$, which is an interpretable quantity for practitioners and researchers, and is naturally restricted to $\alpha\in(0,0.5]$. One can directly determine the extreme case at the upper domain limit: $\hat{\mu}^{TE}_{*}(\alpha=0.5)=\hat{\mu}^{ME}_{*}$. Thus, the ME is a special case of the TE, being prone to overestimation bias with the bounds given in Section 2.2. By reducing $\alpha$, we reduce the bias since we tend to not reject $H_{0}$ for smaller sample means. If one were to consider a significance level of zero, the TE would collapse into the Average Estimator (AE): $\hat{\mu}^{AVG}_{*}=M^{-1}\sum_{i=1}^{M}\hat{\mu}_{i}$. Imagawa and Kaneko (2017) provide a similar definition in a multi-armed bandit context. The AE has low variance: $\operatorname{Var}(\hat{\mu}^{AVG}_{*})=M^{-2}\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$, but severe negative bias: $\operatorname{Bias}(\hat{\mu}^{AVG}_{*})=-M^{-1}\sum_{i=1}^{M}(\max_{j}\mu_{j}-\mu_{i})$. However, we do not include $\alpha=0$ in the definition domain of the TE since (1) it is statistically not reasonable to consider such hypothesis tests and (2) the uncertainties of the sample means, quantified through variances, would no longer enter the estimator. In general, we flexibly interpolate between under- and overestimation by selecting a smaller or larger level of significance. The following lemma contains the sharp bias bounds of the TE.

###### Lemma 1.

For $\alpha\in(0,0.5]$, it holds: $\frac{1}{2}\left[\min_{i}\mu_{i}-\max_{i}\mu_{i}-\sqrt{\frac{M-1}{M}\sum_{i=1}^{M}\mathrm{Var}\left(\hat{\mu}_{i}\right)}\right]\leq\mathrm{Bias}\left[\hat{\mu}^{TE}_{*}(\alpha)\right]\leq\mathrm{Bias}(\hat{\mu}^{ME}_{*}).$

###### Proof.
The upper bound is straightforward since the TE is a weighted average of sample means, while the ME is the extreme case of weighting the maximum sample mean with one. Regarding the lower bound, we use that, by construction: $\hat{\mu}^{TE}_{*}(\alpha)\geq\frac{\max_{i}\hat{\mu}_{i}+\min_{i}\hat{\mu}_{i}}{2}.$ (6) To see this, we first note that the numerator of the test statistic in (5) is always zero for the ME itself, so that the corresponding indicator function equals one for all $\alpha\in(0,0.5]$. However, since the test statistic for index $i$ positively correlates with $\hat{\mu}_{i}$ and $\hat{\sigma}_{i}^{2}$, extreme variance scenarios are possible in which the index with the minimum mean yields the only other non-rejected hypothesis. Taking expectations in (6), we have: $\displaystyle\operatorname{E}\left[\hat{\mu}^{TE}_{*}(\alpha)\right]$ $\displaystyle\geq\frac{1}{2}\left[\operatorname{E}\left(\max_{i}\hat{\mu}_{i}\right)+\operatorname{E}\left(\min_{i}\hat{\mu}_{i}\right)\right]$ $\displaystyle\geq\frac{1}{2}\left[\max_{i}\mu_{i}+\min_{i}\mu_{i}-\sqrt{\frac{M-1}{M}\sum_{i=1}^{M}\mathrm{Var}\left(\hat{\mu}_{i}\right)}\right],$ where the last line uses (2) and the analogous bound of Aven (1985) for the minimum sample average. The bias bound follows immediately. ∎ Moreover, a general expression for the expectation of the TE for arbitrary $M$ can be derived for known true variances.

###### Lemma 2.
The expectation of the TE is: $\displaystyle\operatorname{E}$ $\displaystyle[\hat{\mu}^{TE}_{*}(\alpha)]=\sum_{i=1}^{M}\int_{-\infty}^{\infty}\int_{-\infty}^{x_{i}}\cdots\int_{-\infty}^{x_{i}}$ $\displaystyle\left[\sum_{j=1}^{M}\mathcal{I}\left(\frac{x_{j}-x_{i}}{\theta_{ij}}\geq z_{\alpha}\right)\right]^{-1}\left[\sum_{j=1}^{M}\mathcal{I}\left(\frac{x_{j}-x_{i}}{\theta_{ij}}\geq z_{\alpha}\right)x_{j}\right]\left[\prod_{j=1}^{M}\hat{f}_{j}(x_{j})\right]\left[\prod\limits_{\begin{subarray}{c}j=1\\\ j\neq i\end{subarray}}^{M}dx_{j}\right]dx_{i},$ (7) where $\theta_{ij}=\sqrt{\frac{\sigma_{i}^{2}}{|S_{i}|}+\frac{\sigma_{j}^{2}}{|S_{j}|}}$, $\hat{f}_{j}$ is the (asymptotically normal) pdf of $\hat{\mu}_{j}$, and the $\sigma_{i}^{2}$ are known. This expression follows by generalizing the $M=2$ example detailed in Appendix A to higher dimensions. Furthermore, the TE is consistent for the MEV since, with increasing sample size, each sample mean approaches its population mean, the ME approaches the true MEV, and the variances of the means tend to zero. Consequently, all hypotheses except that of the true MEV variable will be rejected, and only the MEV variable will be left. Note that the TE relies on performing multiple hypothesis tests, which can inflate the type I error. However, for simplicity, we stick to the procedure as presented above and do not pursue multiple testing corrections, e.g., a Bonferroni correction (Armstrong, 2014). Regarding the variance, the TE shares the common, overly pessimistic variance bound; the proof relies on the TE being a weighted average of means and is similar to D’Eramo et al. (2016):

###### Lemma 3.

For $\alpha\in(0,0.5]$, it holds: $\mathrm{Var}\left[\hat{\mu}^{TE}_{*}(\alpha)\right]\leq\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$.

### 3.2 K-Estimator

The TE is a flexible estimator fulfilling our initial requirements, but it can be put in an even broader context.
Crucially, the TE involves indicator functions, whose derivative with respect to $\alpha$ is either zero or does not exist, leading to the binary decision of exclusion or non-exclusion of a particular mean. To avoid this behaviour, we propose to apply the standard Gaussian cdf $\Phi$ directly to the test statistics and use the resulting values as a smoothed weighting: $\hat{\mu}^{\Phi}_{*}=\left[\sum_{i=1}^{M}\Phi\left(T_{i}\right)\right]^{-1}\sum_{i=1}^{M}\Phi\left(T_{i}\right)\hat{\mu}_{i},\quad\text{where}\quad T_{i}=\frac{\hat{\mu}_{i}-\hat{\mu}^{ME}_{*}}{\sqrt{\frac{\hat{\sigma}^{2}_{i}}{|S_{i}|}+\frac{\hat{\sigma}^{2}_{i^{*}}}{|S_{i^{*}}|}}}.$ (8) In fact, it can be generalized even further by considering a weighting kernel $\kappa(\cdot)$: $\hat{\mu}^{KE}_{*}=\left[\sum_{i=1}^{M}\kappa\left(T_{i}\right)\right]^{-1}\sum_{i=1}^{M}\kappa\left(T_{i}\right)\hat{\mu}_{i},$ (9) with $\kappa:\mathbb{R}^{-}\rightarrow\mathbb{R}^{+}$, where $\mathbb{R}^{-}=(-\infty,0]$ and $\mathbb{R}^{+}=[0,\infty)$. We require that $\kappa(\cdot)$ is monotonically increasing to build a reasonable kernel since $T_{i}\leq 0$, $\forall i=1,\ldots,M$, and that $\lim\limits_{T_{i}\rightarrow-\infty}\kappa(T_{i})=0$ for consistency. Similar kernel functions are considered in Mammen (1991) for isotonic regressions. We refer to (9) as the _K-Estimator_ (KE). Crucially, the hyperparameter of the KE is not a fixed scalar anymore, but the specification of $\kappa(\cdot)$. For example, setting $\kappa(T_{i};\alpha)=\mathcal{I}(T_{i}\geq z_{\alpha})$ results in obtaining the TE. Further options for $\kappa(\cdot)$ are listed in Table 1. Additionally, more flexible parametrized specifications are available. Consider, for example, the cdf of the beta distribution $\mathcal{B}_{\mathfrak{a},\mathfrak{b}}$ with two shape parameters $\mathfrak{a}$, $\mathfrak{b}$.
Although the latter is naturally defined on $[0,1]$, one could simply scale and shift it to, e.g., $[-5,0]$ to generate a valid specification. This particular case is used in the example of Section 3.3. Apart from that, we primarily apply the standard Gaussian cdf throughout the paper.

Kernel | $\kappa(T)$ for $T\leq 0$
---|---
cdf: Gaussian $\Phi_{\lambda}$ | $\int_{-\infty}^{T}\frac{1}{\sqrt{2\pi\lambda^{2}}}\exp{\left\\{-\frac{1}{2}\left(\frac{t}{\lambda}\right)^{2}\right\\}}dt$
cdf: $t$-distribution $t_{\nu}$ | $\int_{-\infty}^{T}\frac{\Gamma(\frac{\nu+1}{2})}{\sqrt{\nu\pi}\Gamma(\frac{\nu}{2})}\left(1+\frac{t^{2}}{\nu}\right)^{-\frac{\nu+1}{2}}dt$
Epanechnikov | $\frac{3}{4}(1-T^{2})\mathcal{I}(|T|\leq 1)$
Laplace | $\frac{1}{2}\exp{(-|T|)}$
Triangle | $(1-|T|)\mathcal{I}(|T|\leq 1)$

Table 1: Exemplary kernel functions. $\lambda$ is the standard deviation of the Gaussian cdf, $\nu$ denotes the degrees of freedom of the $t$-distribution, and $\Gamma(x)=\int_{0}^{\infty}t^{x-1}\exp(-t)dt$ denotes the gamma function. We abbreviate the standard Gaussian kernel $\Phi_{\lambda=1}$ with $\Phi$.

Bias and variance of the KE depend on the chosen kernel specification, but the bounds of the TE are still valid as long as the kernel function fulfills the requirements stated above.

###### Corollary 1.

For the KE, it holds: $\frac{1}{2}\left[\min_{i}\mu_{i}-\max_{i}\mu_{i}-\sqrt{\frac{M-1}{M}\sum_{i=1}^{M}\mathrm{Var}\left(\hat{\mu}_{i}\right)}\right]\leq\mathrm{Bias}\left(\hat{\mu}^{KE}_{*}\right)\leq\mathrm{Bias}(\hat{\mu}^{ME}_{*}).$

###### Proof.

The KE cannot exceed the ME, thus the upper bound holds. For the lower bound, we note the same relationship as for the TE: $\hat{\mu}^{KE}_{*}\geq\frac{\max_{i}\hat{\mu}_{i}+\min_{i}\hat{\mu}_{i}}{2}.$ This follows since the sample mean corresponding to the ME is by construction weighted with $\kappa(0)$ (before normalization).
At the same time, in extreme variance scenarios, it is possible that the weight of the minimum sample mean tends to $\kappa(0)$ as well, while the weights of the remaining means tend to $\lim\limits_{T_{i}\rightarrow-\infty}\kappa(T_{i})$, which is required to be zero. The remaining steps are identical to the proof of Lemma 1. ∎

###### Corollary 2.

For the KE, it holds: $\mathrm{Var}\left(\hat{\mu}^{KE}_{*}\right)\leq\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$. The proof is again similar to D’Eramo et al. (2016). Moreover, we can generalize Lemma 2 of the TE to the KE, describing the expectation for known variances.

###### Corollary 3.

The expectation of the KE is: $\displaystyle\operatorname{E}[\hat{\mu}^{KE}_{*}]$ $\displaystyle=\sum_{i=1}^{M}\int_{-\infty}^{\infty}\int_{-\infty}^{x_{i}}\cdots\int_{-\infty}^{x_{i}}$ $\displaystyle\left[\sum_{j=1}^{M}\kappa\left(\frac{x_{j}-x_{i}}{\theta_{ij}}\right)\right]^{-1}\left[\sum_{j=1}^{M}\kappa\left(\frac{x_{j}-x_{i}}{\theta_{ij}}\right)x_{j}\right]\left[\prod_{j=1}^{M}\hat{f}_{j}(x_{j})\right]\left[\prod\limits_{\begin{subarray}{c}j=1\\\ j\neq i\end{subarray}}^{M}dx_{j}\right]dx_{i},$ where $\theta_{ij}=\sqrt{\frac{\sigma_{i}^{2}}{|S_{i}|}+\frac{\sigma_{j}^{2}}{|S_{j}|}}$, $\hat{f}_{j}$ is the (asymptotically normal) pdf of $\hat{\mu}_{j}$, and the $\sigma_{i}^{2}$ are known. Finally, we highlight three main differences between the KE (including the TE as a special case) and the WE of Section 2.4, since both are constructed as a weighted sum of sample means. First, the weights of the WE are probabilities, while the weights of the KE do not have a probabilistic interpretation. Second, while the KE allows for a multitude of specifications, the WE is not tunable and thus cannot be adjusted to a given scenario in a practical problem. Third, the computation of the WE requires integration or Monte Carlo approximation schemes, while the KE’s computation is extremely fast.
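Indeed, the KE reduces to a handful of array operations. A minimal sketch with the standard Gaussian cdf kernel of (8) as default and the Epanechnikov kernel from Table 1 as an alternative; names and the toy data are illustrative:

```python
import numpy as np
from statistics import NormalDist

def k_estimator(samples, kernel=None):
    """K-Estimator: weight each sample mean by kernel(T_i), where T_i is
    the two-sample test statistic against the maximum sample mean."""
    if kernel is None:                     # default: standard Gaussian cdf
        kernel = np.vectorize(NormalDist().cdf)
    mu = np.array([s.mean() for s in samples])
    var_mean = np.array([s.var(ddof=1) / len(s) for s in samples])
    i_star = int(np.argmax(mu))
    T = (mu - mu[i_star]) / np.sqrt(var_mean + var_mean[i_star])  # T_i <= 0
    w = np.asarray(kernel(T), dtype=float)
    return float(np.sum(w * mu) / np.sum(w))

def epanechnikov(T):
    # Epanechnikov kernel from Table 1, restricted to |T| <= 1
    return 0.75 * (1.0 - T**2) * (np.abs(T) <= 1.0)

rng = np.random.default_rng(3)
samples = [rng.normal(0.0, 10.0, size=100) for _ in range(5)]
ke_gauss = k_estimator(samples)
ke_epan = k_estimator(samples, kernel=epanechnikov)
```

Because the weights are non-negative and the maximizing index always receives weight $\kappa(0)>0$, the estimate stays between the smallest and the largest sample mean.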
### 3.3 Example (iid): Bias, Variance, and MSE

To further analyze the behaviour of the proposed estimators, we adopt a setup similar to D’Eramo et al. (2016) with $M=2$. Precisely, we consider Gaussian random variables $X_{1}\sim\mathcal{N}(\mu_{1},\sigma^{2})$, $X_{2}\sim\mathcal{N}(\mu_{2},\sigma^{2})$, where $\sigma^{2}=100$ is the common known variance, and we assume $|S_{1}|=|S_{2}|=100$ observations of each variable. As assumed throughout the prior sections, the realizations in a sample $S_{i}$ are iid. We fix $\mu_{2}=0$ and compute bias, variance, and MSE for different $\mu_{1}\in[0,5]$. For completeness, we report the analytic forms for the expectation and variance of the estimators in this case in Appendix A. For the TE, we select significance levels $\alpha\in\\{0.05,0.10,0.15\\}$, and for the KE, we analyze the standard Gaussian kernel $\Phi$, the Epanechnikov kernel, and the shifted and scaled $\mathcal{B}_{\mathfrak{a},\mathfrak{b}}$ cdf kernel with $\mathfrak{a}=2$, $\mathfrak{b}=0.5$ as described above. Results are displayed in Figures 1 and 2.

Figure 1: Comparison of the ME, DE, and TE with level of significance in parentheses.

Figure 2: Comparison of the ME, DE, and KE with kernel in parentheses.

Regarding the TE, we see how the bias decreases with a smaller significance level. In general, the estimator operates between the biases of the ME and DE, although the bias of the DE is not necessarily a lower bound (see $\alpha=0.05$). In the mean equality case, $\mu_{1}=\mu_{2}$, the TE can avoid the significant overestimation of the ME while having only slightly increased variance. Consequently, the TE outperforms the conventional competitors for all considered significance levels under the MSE criterion for $\mu_{1}=\mu_{2}$. However, if the difference between the true expectations of the random variables is large, all estimators become unbiased. In this scenario, the ME is the best choice due to its low variance.
Regarding the KE in Figure 2, we see that the standard Gaussian and Epanechnikov kernels can achieve a desirable balance between under- and overestimation while maintaining a smaller variance than the considered TE. However, the chosen specification of the beta distribution appears sub-optimal for this particular problem because of its rather strong underestimation for large $\mu_{1}-\mu_{2}$. To find a better-fitting parametrization, we numerically solved the optimization problem of minimizing the squared bias for the depicted range of $\mu_{1}-\mu_{2}$ over the parameters $\mathfrak{a}$, $\mathfrak{b}$ of the $\mathcal{B}_{\mathfrak{a},\mathfrak{b}}$ kernel. To enable comparability between the estimators, we ran the identical optimization for the parameter $\lambda$ of the Gaussian kernel $\Phi_{\lambda}$ (deviating from the unit variance specification) and the level of significance of the TE. The optimized kernel functions alongside the specifications from Figures 1 and 2 are depicted in Figure 3. The functions are normalized to $[0,1]$ by division by $\kappa(0)$ of the respective kernel. Optimizing the standard deviation of the Gaussian cdf yields $\lambda\approx 0.83$, which is close to the unit variance specification. On the other hand, the bias-optimal value for the significance level of the TE is $\approx 0.14$. Regarding the $\mathcal{B}_{\mathfrak{a},\mathfrak{b}}$ specification, one needs to recall that the beta kernel is capable of approximating both the optimized TE and the optimized KE with the (non-standard) Gaussian cdf kernel. Following Figure 3, the optimized TE is favorable in this scenario since the optimized beta cdf yields a non-unique, zero-variance solution, which is in line with the optimized TE. Overall, we have seen through this investigation that both the TE and the KE can achieve flexible trade-offs between bias and variance in the estimation of the MEV.
Considering typical levels of significance like $0.05$, $0.10$, or $0.15$ in the TE yields a robust estimator, which can be further enhanced by accurately specifying a suitable KE.

Figure 3: Original kernel functions and optimized specification for minimizing the squared bias in Figures 1 and 2.

### 3.4 Example (non-iid): Bias and Variance

The estimators of the MEV will be transferred to solving sequential decision problems in Section 4, and thus auto-correlations, time dependencies, and decision-making play an essential role in the algorithms. We illustrate the limitations of the estimators by applying them to auto-correlated data. First, consider the following two processes: $\displaystyle X_{i,t}$ $\displaystyle=(1-\rho)\mu_{i}+\rho X_{i,t-1}+\varepsilon_{i,t},$ (10) $\displaystyle\varepsilon_{i,t}$ $\displaystyle\sim\mathcal{N}\left\\{0,(1-\rho^{2})\sigma^{2}\right\\},$ with $X_{i,0}=\mu_{i}$ for $i=1,2$ and time steps $t=0,1,\ldots,T$. The given specification yields the unconditional moments $\operatorname{E}[X_{i,t}]=\mu_{i}$ and $\operatorname{Var}[X_{i,t}]=\sigma^{2}$. Thus, by setting $\mu_{1}=1$, $\mu_{2}=0$, and $\sigma^{2}=100$, we obtain a special case of the scenarios studied in Section 3.3 while introducing auto-correlation in the samples. The auto-correlation can be tuned via the parameter $\rho\in[0,1]$, and further details on such time series can be found in Tsay (2010). Second, in analogy to the temporal-difference algorithms considered later in Section 4, we compute exponentially weighted mean estimates based on (10): $\hat{\mu}_{i,t}=\hat{\mu}_{i,t-1}+\tau\left(X_{i,t}-\hat{\mu}_{i,t-1}\right),$ (11) where we initialize $\hat{\mu}_{i,0}=\mu_{i}$ for $i=1,2$ and consider time steps $t=0,1,\ldots,T$ with learning rate $\tau$. Transferred to RL, the $X_{i,t}$ are (potentially correlated) returns, while $\hat{\mu}_{i,t}$ corresponds to the estimate of the $Q$-value. Figure 4 illustrates exemplary realizations of (10) and (11).
Figure 4: Realizations of the processes in (10) and (11) for varying $\rho$ and $\tau$. While $\rho=0.0$ represents independent random noise, the dependence structure is visible for larger $\rho$. The parameter $\tau$ controls the smoothness of the mean estimate.

Figure 5: Estimator comparison in the non-iid setting based on kernel-density estimates for $10^{6}$ runs. The black dashed line is the true MEV.

We consider $\rho\in\\{0.0,0.3,0.7,0.95\\}$ and $\tau\in\\{0.1,0.3\\}$ with time horizon $T=100$, and compare the ME, DE, TE, and KE. After simulating (10) and (11), we use the final mean estimates to compute the MEV estimate, where we use the sample variance of the time series $\hat{\mu}_{i,t}$ as the required variance estimate for the TE and KE. Regarding the DE, we simulate for each variable ($i=1,2$) two mean estimate processes with $T=50$, where we use one for index selection and one for evaluation as described in Section 2.3. The simulation and estimation procedure is repeated $10^{6}$ times, and we compute a kernel density estimate (Gaussian kernel, see Sheather 2004) on the MEV estimates. Figure 5 displays the results. We see the behavior of the iid scenario reflected for $\rho=0.0$. The ME overestimates the MEV in all cases (with the peak of the density being slightly to the right), while the DE tends to underestimate it (with the peak being at the left). Similar to Section 3.3, the TE and KE can balance the two conventional approaches, and the relative ordering between the estimators remains with increasing auto-correlation. However, all estimators exhibit large variances for the extreme case $\rho=0.95$, and a general tendency towards positive bias is visible. This observation is an immediate consequence of the construction in (10), since extreme mean estimates, paired with large differences between the two mean processes, are then likely; see column $\rho=0.95$ in Figure 4.
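For reference, the simulation of (10) and (11) reduces to a few lines. A minimal sketch with a scaled-down number of runs; function names and the run count are illustrative:

```python
import numpy as np

def ew_mean_path(mu, sigma2, rho, tau, T, rng):
    """Simulate the AR(1) process of (10) and track the exponentially
    weighted mean estimate of (11); returns the final estimate."""
    x, mu_hat = mu, mu                  # X_{i,0} = mu_hat_{i,0} = mu_i
    noise_sd = np.sqrt((1.0 - rho**2) * sigma2)
    for _ in range(T):
        x = (1.0 - rho) * mu + rho * x + rng.normal(0.0, noise_sd)
        mu_hat += tau * (x - mu_hat)    # exponentially weighted update
    return mu_hat

rng = np.random.default_rng(4)
finals = np.array([ew_mean_path(1.0, 100.0, rho=0.7, tau=0.1, T=100, rng=rng)
                   for _ in range(2_000)])
```

Since $\operatorname{E}[X_{i,t}]=\mu_{i}$ for all $t$ and the estimate is initialized at $\mu_{i}$, the final mean estimate is unbiased for $\mu_{i}$; it is its spread that grows with $\rho$ and $\tau$, which is what drives the large estimator variances for $\rho=0.95$.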
In summary, when moving to the RL scenario in the following section, the principal behavior of the estimators remains. However, one should keep in mind that the MEV estimators are constructed for the ideal iid case, and a violation of this assumption can impact the behavior of the estimators when facing sequential data.

## 4 Application to Reinforcement Learning

### 4.1 Tabular Version

Reinforcement learning describes a collection of learning techniques for sequential decision processes, in which an agent aims to maximize its reward while interacting with an environment, see Sutton and Barto (2018) and Bertsekas (2019). The problem is modeled as a Markov Decision Process (MDP, Puterman 1994), consisting of a state space $\mathcal{S}$, a finite action space $\mathcal{A}$, a state transition probability distribution $\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]$, a bounded reward function $\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$, and a discount factor $\gamma\in[0,1]$. If $\gamma=1$, we assume there is a zero-reward absorbing state and that the probability of reaching this state converges to one as time tends to infinity, see Lan et al. (2020). At each time step $t$, the agent takes an action $a_{t}\in\mathcal{A}$ based on state information $s_{t}\in\mathcal{S}$, receives a reward $r_{t}=\mathcal{R}(s_{t},a_{t})$, and transitions with probability $\mathcal{P}(s_{t+1}\mid s_{t},a_{t})$ to a new state $s_{t+1}\in\mathcal{S}$. The objective is to find a policy $\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$, a mapping from states to distributions over actions, that maximizes the expected return $\operatorname{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\right]$. Value-based methods, which define action-values $Q^{\pi}(s,a)=\operatorname{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\mid s_{0}=s,a_{0}=a\right]$ for a given policy, are common practice.
Thus, $Q^{\pi}(s,a)$ is the expected return when starting in state $s$, executing $a$, and following policy $\pi$ afterwards. There exists an optimal deterministic stationary policy $\pi^{*}(s)=\operatorname*{arg\,max}_{a\in\mathcal{A}}Q^{*}(s,a)$, which is connected with optimal action-values $Q^{*}(s,a)=\max_{\pi}Q^{\pi}(s,a)$ for all $s\in\mathcal{S}$ and $a\in\mathcal{A}$ if the state space is finite or countably infinite (Puterman, 1994, Theorem 6.2.10). To optimize for $Q^{*}(s,a)$, one uses a recursive relationship known as the Bellman (1954) optimality equation: $Q^{*}(s,a)=\mathcal{R}(s,a)+\gamma\sum_{s^{\prime}\in\mathcal{S}}\mathcal{P}(s^{\prime}\mid s,a)\max_{a^{\prime}\in\mathcal{A}}Q^{*}(s^{\prime},a^{\prime}),$ (12) where $s^{\prime}$ is a successor state after performing action $a$ in state $s$. Since $Q^{*}(s^{\prime},a^{\prime})$ is the expected return from executing $a^{\prime}$ in $s^{\prime}$ and following the optimal policy afterward, the problem immediately appears as an instance of the estimation of the MEV of a set of random variables, namely the stochastic returns. Consequently, the methodology of Section 2 applies. $Q$-Learning, from Watkins and Dayan (1992), translates (12) into a sample-based algorithm by using the ME: $\hat{Q}^{*}(s,a)\leftarrow\hat{Q}^{*}(s,a)+\tau\left[y^{Q}-\hat{Q}^{*}(s,a)\right],$ with target $y^{Q}=r+\gamma\max_{a^{\prime}\in\mathcal{A}}\hat{Q}^{*}(s^{\prime},a^{\prime})$ and learning rate $\tau$ (recall Section 3.4). An estimate of $Q^{*}(s,a)$ is denoted $\hat{Q}^{*}(s,a)$. The algorithm is known to converge to the optimal action-values if the conditions of Robbins and Monro (1951) on the learning rate are fulfilled and each state-action pair is visited infinitely often (Tsitsiklis, 1994). However, especially in the early stages of training, when the $Q$-estimates are very imprecise, the algorithm tends to propagate overestimated values.
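For reference, the ME-based update corresponds to a single line per transition. A minimal tabular sketch; the toy table and parameter values are illustrative:

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, tau=0.1, gamma=0.99):
    """Standard Q-Learning update: the target uses the ME over the
    action-values of the successor state."""
    y_Q = r + gamma * np.max(Q[s_next])      # target with the ME
    Q[s, a] += tau * (y_Q - Q[s, a])
    return Q

Q = np.zeros((2, 2))                         # toy 2-state, 2-action table
Q = q_learning_step(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0, 1] is now tau * (1.0 + gamma * 0 - 0) = 0.1
```

Replacing `np.max(Q[s_next])` in the target is exactly where the DE, TE, and KE variants differ from standard $Q$-Learning.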
To overcome this issue, Van Hasselt (2010) uses the DE and stores two separate $Q$-tables with estimates $\hat{Q}^{*}_{A}$ and $\hat{Q}^{*}_{B}$, leading to the target: $y^{DQ}=r+\gamma\hat{Q}^{*}_{B}\left[s^{\prime},\operatorname*{arg\,max}_{a^{\prime}\in\mathcal{A}}\hat{Q}^{*}_{A}(s^{\prime},a^{\prime})\right]$. Finally, to apply the TE and KE in a $Q$-Learning setup, we propose to replace the target by: $y^{KQ}=r+\gamma\left\\{\sum_{a^{\prime}\in\mathcal{A}}\kappa\left[T_{\hat{Q}^{*}}(s^{\prime},a^{\prime})\right]\right\\}^{-1}\sum_{a^{\prime}\in\mathcal{A}}\kappa\left[T_{\hat{Q}^{*}}(s^{\prime},a^{\prime})\right]\hat{Q}^{*}(s^{\prime},a^{\prime}),$ (13) where $T_{\hat{Q}^{*}}(s^{\prime},a^{\prime})=\frac{\hat{Q}^{*}(s^{\prime},a^{\prime})-\max_{a^{\prime\prime}\in\mathcal{A}}\hat{Q}^{*}(s^{\prime},a^{\prime\prime})}{\sqrt{\widehat{\operatorname{Var}}\left[\hat{Q}^{*}(s^{\prime},a^{\prime})\right]+\widehat{\operatorname{Var}}\left[\hat{Q}^{*}(s^{\prime},a^{*})\right]}},$ for a currently maximizing action $a^{*}\in\\{a\in\mathcal{A}\mid\hat{Q}^{*}(s^{\prime},a)=\max_{a^{\prime\prime}\in\mathcal{A}}\hat{Q}^{*}(s^{\prime},a^{\prime\prime})\\}$. The variance estimate of $\hat{Q}^{*}(s^{\prime},a^{*})$ is denoted $\widehat{\operatorname{Var}}\left[\hat{Q}^{*}(s^{\prime},a^{*})\right]$ and will be generated following the proposal of D’Eramo et al. (2019).
First, the variance of the underlying return is estimated via an exponentially-weighted online update: $\widehat{\sigma^{2}}_{\rm process}(s,a)\leftarrow(1-\tau)\left\\{\widehat{\sigma^{2}}_{\rm process}(s,a)+\tau\left[y^{KQ}-\hat{Q}^{*}(s,a)\right]^{2}\right\\}.$ Second, a normalization by the Kish (1965) effective sample size $n_{\rm eff}(s,a)$ is performed: $\widehat{\operatorname{Var}}\left[\hat{Q}^{*}(s,a)\right]=\frac{\widehat{\sigma^{2}}_{\rm process}(s,a)}{n_{\rm eff}(s,a)}.$ The effective sample size weights each sample depending on the learning rate and is computed via $n_{\rm eff}(s,a)=\frac{\omega(s,a)^{2}}{\omega^{2}(s,a)}$, where $\omega^{2}(s,a)$ denotes a separate accumulator rather than the square of $\omega(s,a)$. Numerator and denominator are incrementally updated: $\displaystyle\omega(s,a)$ $\displaystyle\leftarrow(1-\tau)\omega(s,a)+\tau,$ $\displaystyle\omega^{2}(s,a)$ $\displaystyle\leftarrow(1-\tau)^{2}\omega^{2}(s,a)+\tau^{2}.$ With this approach, we introduce TE-$Q$-Learning (TE-$Q$) and KE-$Q$-Learning (KE-$Q$), summarized in Algorithm 1.
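A minimal sketch of this variance bookkeeping together with the KE target (13), using the standard Gaussian cdf as the kernel, might look as follows; all numeric values are hypothetical:

```python
import numpy as np
from math import erf, sqrt

gamma, tau = 0.9, 0.1  # hypothetical discount factor and learning rate

def gaussian_cdf(x):
    """Standard Gaussian cdf, used as the kernel kappa."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def update_variance(sig2, w, w2, td_error):
    """One step of the process-variance update and the Kish
    effective-sample-size bookkeeping; returns Var-hat[Q(s, a)] last."""
    sig2 = (1 - tau) * (sig2 + tau * td_error ** 2)
    w = (1 - tau) * w + tau
    w2 = (1 - tau) ** 2 * w2 + tau ** 2
    n_eff = w ** 2 / w2
    return sig2, w, w2, sig2 / n_eff

def ke_target(r, q_next, var_next):
    """KE target (13): kernel-weighted average of next-state action-values,
    weighted by the studentized gap to the maximizing action."""
    a_star = int(np.argmax(q_next))
    t = (q_next - q_next[a_star]) / np.sqrt(var_next + var_next[a_star])
    kappa = np.array([gaussian_cdf(ti) for ti in t])
    return r + gamma * float(np.sum(kappa * q_next) / np.sum(kappa))

sig2, w, w2, var_q = update_variance(1.0, 0.0, 0.0, 0.5)
y = ke_target(0.0, np.array([1.0, 0.5, 0.0]), np.array([0.04, 0.04, 0.04]))
```

With very large variance estimates the weights flatten and the target approaches the mean over actions, while with tiny variances it approaches the usual maximum; the kernel thus interpolates between the two regimes.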
Initialize $\forall s\in\mathcal{S},a\in\mathcal{A}$: $\hat{Q}^{*}(s,a)=0$, $\widehat{\sigma^{2}}_{\rm process}(s,a)>0$, $\omega(s,a)=0$, $\omega^{2}(s,a)=0$
repeat
  Initialize $s$
  repeat
    Choose action $a$ from state $s$ with policy derived from $\hat{Q}^{*}$ (e.g., $\epsilon$-greedy)
    Take action $a$, observe reward $r$ and next state $s^{\prime}$
    Update effective sample size:
      $\omega(s,a)\leftarrow(1-\tau)\omega(s,a)+\tau$
      $\omega^{2}(s,a)\leftarrow(1-\tau)^{2}\omega^{2}(s,a)+\tau^{2}$
      $n_{\rm eff}(s,a)\leftarrow\frac{\omega(s,a)^{2}}{\omega^{2}(s,a)}$
    Calculate target $y^{KQ}$ via (13)
    Update process variance:
      $\widehat{\sigma^{2}}_{\rm process}(s,a)\leftarrow(1-\tau)\left\\{\widehat{\sigma^{2}}_{\rm process}(s,a)+\tau\left[y^{KQ}-\hat{Q}^{*}(s,a)\right]^{2}\right\\}$
    Update $Q$-estimate:
      $\hat{Q}^{*}(s,a)\leftarrow\hat{Q}^{*}(s,a)+\tau\left[y^{KQ}-\hat{Q}^{*}(s,a)\right]$
    $s\leftarrow s^{\prime}$
  until $s$ is terminal
until training ends

Algorithm 1: TE-$Q$-Learning/KE-$Q$-Learning

### 4.2 Deep Version

Table-based algorithms cannot deal with continuous state spaces, so function approximators such as DNNs are used to parametrize the function $\hat{Q}^{*}(s,a;\theta)$, with $\theta$ being the parameter set of the neural network. The resulting DQN (Mnih et al., 2015) and its extensions (Van Hasselt et al., 2016; Schaul et al., 2016; Hessel et al., 2018) have shown great performance on a variety of challenging tasks. The optimization procedure of these algorithms is still based on the Bellman optimality equation (12), but uses gradient descent to update $\theta$: $\theta\leftarrow\theta+\tau\left[y^{DQN}-\hat{Q}^{*}(s,a;\theta)\right]\nabla_{\theta}\hat{Q}^{*}(s,a;\theta),$ where $y^{DQN}=r+\gamma\max_{a^{\prime}\in\mathcal{A}}\hat{Q}^{*}(s^{\prime},a^{\prime};\theta^{-})$. The set $\theta^{-}$ refers to the parameters of the target network, which is a time-delayed copy of the main network with parameter $\theta$.
Instead of updating fully online, the DQN samples minibatches of past experiences from a replay buffer $D$ to stabilize training. Van Hasselt et al. (2016) proposed the Double DQN (DDQN), which uses the DE to compute the target: $y^{DDQN}=r+\gamma\hat{Q}^{*}[s^{\prime},\operatorname*{arg\,max}_{a^{\prime}\in\mathcal{A}}\hat{Q}^{*}(s^{\prime},a^{\prime};\theta);\theta^{-}]$. The action selection is performed via the main network, while the evaluation uses the target network, as in the regular DQN. To translate the TE and KE to the function approximation case, we require a variance estimate of the $Q$-estimates. We follow D’Eramo et al. (2019) and use the framework of the BDQN (Osband et al., 2016) to accomplish this task. Generally, the bootstrap is a method for computing measures of accuracy for statistical estimates (Efron and Tibshirani, 1994). The method trains different regressors of the target function based on bootstrap samples generated by sampling with replacement from the original dataset. The BDQN transfers this idea to the DQN algorithm by maintaining $K\in\mathbb{N}$ differently initialized $Q$-networks $\hat{Q}^{*}_{k}(s,a;\theta_{k})$ with parameter $\theta_{k}$, $k=1,\ldots,K$, each equipped with its own target network $\hat{Q}^{*}_{k}(s,a;\theta_{k}^{-})$. This DQN modification was proposed to improve over the usual $\epsilon$-greedy exploration strategy. At the beginning of each episode, one $\hat{Q}^{*}_{k}$ is selected randomly, and the agent acts greedily with respect to this $\hat{Q}^{*}_{k}$. At test time, the majority vote of the function approximators is used. Generally, the BDQN can be implemented using $K$ different networks or maintaining a common network body and specifying $K$ different heads. We pursue the latter approach.
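The difference between the DQN and DDQN targets can be sketched as follows; the next-state value vectors are hypothetical and chosen so that action selection and evaluation disagree:

```python
import numpy as np

def dqn_target(r, gamma, q_target_next):
    """Regular DQN target: maximum over the target network's values (ME)."""
    return r + gamma * np.max(q_target_next)

def ddqn_target(r, gamma, q_main_next, q_target_next):
    """Double DQN target: the main network selects the action,
    the target network evaluates it (the double estimator)."""
    a_sel = int(np.argmax(q_main_next))
    return r + gamma * q_target_next[a_sel]

# Hypothetical next-state values where selection and evaluation disagree:
q_main = np.array([2.0, 1.0])
q_tgt = np.array([0.5, 1.5])
y_dqn = dqn_target(0.0, 0.9, q_tgt)
y_ddqn = ddqn_target(0.0, 0.9, q_main, q_tgt)
```

When the two networks disagree on the best action, decoupling selection from evaluation breaks the upward pull of the maximum over noisy estimates.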
Diversification across the heads is achieved by different random initialization of the parameters and random generation of binary masks $m^{1},\ldots,m^{K}\in\\{0,1\\}$, indicating which head should be trained on which experience sample. A Bernoulli distribution with parameter $p$ is a possible choice for the generating distribution $M$. Crucially, through the $K$ heads, we can directly use the sample variance of the $Q$-estimates and transfer (13) to compute the target $y^{KDQN,k}$ for the $k$-th network $\hat{Q}^{*}_{k}$: $y^{KDQN,k}=r+\gamma\left\\{\sum_{a^{\prime}\in\mathcal{A}}\kappa\left[T_{\hat{Q}^{*}_{k}}(s^{\prime},a^{\prime})\right]\right\\}^{-1}\sum_{a^{\prime}\in\mathcal{A}}\kappa\left[T_{\hat{Q}^{*}_{k}}(s^{\prime},a^{\prime})\right]\hat{Q}^{*}_{k}(s^{\prime},a^{\prime};\theta_{k}^{-}),$ (14) where $T_{\hat{Q}^{*}_{k}}(s^{\prime},a^{\prime})=\frac{\hat{Q}^{*}_{k}(s^{\prime},a^{\prime};\theta_{k}^{-})-\max_{a^{\prime\prime}\in\mathcal{A}}\hat{Q}^{*}_{k}(s^{\prime},a^{\prime\prime};\theta_{k}^{-})}{\sqrt{\widehat{\operatorname{Var}}\left[\hat{Q}^{*}_{k}(s^{\prime},a^{\prime};\theta_{k}^{-})\right]+\widehat{\operatorname{Var}}\left[\hat{Q}^{*}_{k}(s^{\prime},a^{*};\theta_{k}^{-})\right]}},$ for a currently maximizing action $a^{*}\in\\{a\in\mathcal{A}\mid\hat{Q}^{*}_{k}(s^{\prime},a;\theta_{k}^{-})=\max_{a^{\prime\prime}\in\mathcal{A}}\hat{Q}^{*}_{k}(s^{\prime},a^{\prime\prime};\theta_{k}^{-})\\}$. The resulting gradient $g_{i}^{k}$ for the $i$-th tuple from the replay buffer is: $g_{i}^{k}=m_{i}^{k}\left[y_{i}^{KDQN,k}-\hat{Q}^{*}_{k}(s_{i},a_{i};\theta_{k})\right]\nabla_{\theta}\hat{Q}^{*}_{k}(s_{i},a_{i};\theta_{k}).$ (15) The full procedures are termed TE-BDQN and KE-BDQN, respectively, and are detailed in Algorithm 2.
Initialize action-value estimate networks with $K$ outputs $\left\\{\hat{Q}^{*}_{k}\right\\}^{K}_{k=1}$, masking distribution $M$, empty replay buffer $D$
repeat
  Initialize $s$
  Pick a value function to act: $k\sim\text{Uniform}\\{1,\ldots,K\\}$
  repeat
    Choose action $a$ from state $s$ with greedy policy derived from $\hat{Q}^{*}_{k}$
    Take action $a$, observe reward $r$ and next state $s^{\prime}$
    Sample bootstrap mask $m\sim M$
    Add $(s,a,r,s^{\prime},m)$ to replay buffer $D$
    Sample random minibatch of transitions $\left\\{(s_{i},a_{i},r_{i},s^{\prime}_{i},m_{i})\right\\}_{i=1}^{B}$ from $D$
    Perform gradient descent step based on (15)
    Every $C$ steps set $\theta_{k}^{-}=\theta_{k}$ for $k=1,\ldots,K$
    $s\leftarrow s^{\prime}$
  until $s$ is terminal
until training ends

Algorithm 2: TE-BDQN/KE-BDQN

## 5 Adaptive Absolute Bias Minimization

While the TE can interpolate between under- and overestimation by selecting a smaller or larger $\alpha$, it is a priori not known which $\alpha$ is adequate for an unknown environment. The method of choice for practitioners in such situations is a grid search to select a suitable value empirically. Beyond the selection issue, a fixed parameter might not suffice to control the estimation bias; for example, an ascending schedule might be preferable. Motivated by these considerations, we propose an adaptive modification of the TE-BDQN that adjusts $\alpha$ with the objective of minimizing the absolute estimation bias during training.

### 5.1 Bias Estimation

Before introducing the adaptive mechanism, we first outline how to estimate the bias of given action-value estimates at a certain point in training. Following Chen et al. (2021a), we consider a current policy $\pi$, whose true action-values are $Q^{\pi}(s,a)$.
The aggregated bias of estimates $\hat{Q}^{\pi}(s,a)$ of $Q^{\pi}(s,a)$ for all $s\in\mathcal{S},a\in\mathcal{A}$ is defined as: $\operatorname{Bias}(\hat{Q}^{\pi},\pi)=\operatorname{E}_{s\sim\rho^{\pi},a\sim\pi}[\hat{Q}^{\pi}(s,a)-Q^{\pi}(s,a)],$ where $\rho^{\pi}$ is the state-visitation distribution of $\pi$. Chen et al. (2021a) proposed to repeatedly run analysis episodes from random initial states while following $\pi$. The observed Monte Carlo return of an encountered state-action pair serves as an unbiased estimate of its true $Q$-value. Averaging over all encountered state-action pairs yields the estimate of the estimation bias: $\widehat{\operatorname{Bias}}(\hat{Q}^{\pi},\pi)=\frac{1}{\lvert\mathcal{T}\rvert}\sum_{(s,a,R)\in\mathcal{T}}[\hat{Q}^{\pi}(s,a)-R],$ (16) where $\mathcal{T}$ is the set of encountered $(s,a,R)$-tuples, with $s$ the state, $a$ the executed action, and $R$ the subsequently observed Monte Carlo return. $\lvert\mathcal{T}\rvert$ is the cardinality of $\mathcal{T}$. Chen et al. (2021a) applied this procedure to the Soft Actor-Critic algorithm (Haarnoja et al., 2018), which uses an actor dictating the policy $\pi$ and a critic providing the estimates $\hat{Q}^{\pi}$. In standard $Q$-Learning, no explicit actor is providing the policy, and the algorithm directly approximates the optimal action-values $Q^{*}(s,a)$, leading to estimates $\hat{Q}^{*}(s,a)$ (Sutton and Barto, 2018). To still generate insights into the action-value estimation accuracy of the algorithm, Van Hasselt et al. (2016) compare the $\hat{Q}^{*}(s,a)$ generated _during_ training with Monte Carlo returns of the final greedy policy _after_ training. Although this approach is certainly instructive, it does not enable an assessment without having a converged baseline. We instead propose to use (16) with $\pi(s)=\operatorname*{arg\,max}_{a}\hat{Q}^{*}(s,a)$ for all $s\in\mathcal{S}$ already _during_ training, being briefly summarized in Algorithm 3.
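In code, the estimator (16) reduces to a simple average over the collected tuples; the toy $Q$-table and returns below are illustrative:

```python
import numpy as np

def estimate_bias(tuples, Q):
    """Monte Carlo bias estimate (16): average of Q-hat(s, a) minus the
    observed return R over all encountered (s, a, R) tuples."""
    diffs = [Q[s, a] - R for (s, a, R) in tuples]
    return float(np.mean(diffs))

Q = np.array([[1.0, 2.0], [0.5, 0.0]])  # hypothetical Q-table
tuples = [(0, 1, 1.5), (1, 0, 0.5)]     # (state, action, MC return)
bias = estimate_bias(tuples, Q)
```

A positive value indicates overestimation of the greedy policy's own action-values, a negative value underestimation.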
The method is justified since $Q$-Learning uses a greedy target policy in its update. Thus, the algorithm _evaluates_ the greedy policy with respect to its own action-value estimates, and we can assess whether the $Q$-estimates are too optimistic or pessimistic. Transferred to the BDQN and its modifications, the procedure can be similarly applied by assessing each head separately since they constitute different estimates $\hat{Q}^{*}_{k}$ for $k=1,\ldots,K$. Thus, we run Algorithm 3 for each head and average the output to obtain an aggregated bias estimate for the bootstrap ensemble.

Input: estimates $\hat{Q}^{*}(s,a)$ for all $s\in\mathcal{S},a\in\mathcal{A}$
Set $\mathcal{T}=\emptyset$ and $\pi(s)=\operatorname*{arg\,max}_{a}\hat{Q}^{*}(s,a)$ for all $s\in\mathcal{S}$
for the desired number of episodes do
  Randomly initialize $s$
  Play an episode following $\pi$ and append encountered state-action-return tuples to $\mathcal{T}$
end for
return $\frac{1}{\lvert\mathcal{T}\rvert}\sum_{(s,a,R)\in\mathcal{T}}[\hat{Q}^{*}(s,a)-R]$

Algorithm 3: Bias estimation for $Q$-Learning-like algorithms

### 5.2 Adaptive TE-BDQN

Algorithm 3 will be used to assess the algorithms in the experiments of Section 6. Furthermore, the approach serves as a basis for dynamically adjusting the $\alpha$ of the TE-BDQN. Intuitively, since a larger $\alpha$ leads to larger $Q$-estimates, $\alpha$ is reduced if the $Q$-estimates are too high. Vice versa, we increase $\alpha$ if the $Q$-estimates are too small. Specifically, we perform the following update: $\alpha\leftarrow\alpha+\frac{\tau_{\rm Ada}}{K}\sum_{k=1}^{K}\sum_{t=1}^{T_{\rm Ada}}\left[R_{k}(s_{t,k},a_{t,k})-\hat{Q}^{*}_{k}(s_{t,k},a_{t,k};\theta_{k})\right],$ (17) with step size $\tau_{\rm Ada}$ and roll-out length $T_{\rm Ada}$.
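A sketch of update (17), with the returns $R_{k}$ computed as the $n$-step returns specified in the next paragraph; all numbers are hypothetical:

```python
def n_step_return(rewards, gamma, q_boot, t):
    """n-step return R(s_t, a_t): discounted rewards from step t to T_Ada
    plus a discounted bootstrap from the current action-value estimate."""
    T = len(rewards)  # rewards r_1, ..., r_{T_Ada} (1-indexed in t)
    disc = sum(gamma ** (i - t) * rewards[i - 1] for i in range(t, T + 1))
    return disc + gamma ** (T - t + 1) * q_boot

def update_alpha(alpha, tau_ada, returns, q_estimates):
    """Adaptive update (17): alpha grows when the value estimates fall
    short of the observed returns and shrinks when they exceed them."""
    K = len(returns)
    delta = sum(R - q for k in range(K)
                for R, q in zip(returns[k], q_estimates[k])) / K
    return alpha + tau_ada * delta

# Hypothetical roll-out for K = 2 heads with a single (t = 1) update each:
R1 = n_step_return([1.0, 1.0, 1.0], gamma=0.5, q_boot=2.0, t=1)
alpha = update_alpha(0.25, 1e-4, [[R1], [R1]], [[1.0], [3.0]])
```

Here the two heads' errors cancel, so $\alpha$ stays put; a systematic under- or overestimation across heads would move it up or down, respectively.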
Importantly, we use $n$-step returns (Sutton and Barto, 2018): $R_{k}(s_{t,k},a_{t,k})=r_{t,k}+\gamma r_{t+1,k}+\gamma^{2}r_{t+2,k}+\ldots+\gamma^{T_{\rm Ada}-t}r_{T_{\rm Ada},k}+\gamma^{T_{\rm Ada}-t+1}\max_{a^{\prime}}\hat{Q}^{*}_{k}(s_{T_{\rm Ada}+1,k},a^{\prime};\theta_{k}),$ where $t=1,\ldots,T_{\rm Ada}$. We denote state and action at time $t$ under head $k$ as $s_{t,k}$ and $a_{t,k}$, respectively. The resulting immediate reward is $r_{t,k}$, and the initial states $s_{1,k}$ for $k=1,\ldots,K$ are randomly sampled from the replay buffer. As motivated in Section 5.1, the actions $a_{t,k}$ under head $k$ are selected by acting greedily with respect to $\hat{Q}^{*}_{k}$. Although $n$-step returns are, unlike complete Monte Carlo roll-outs, generally not an unbiased estimate of the expected return of a policy, we found them empirically much more practicable since they do not require running full episodes while still allowing us to judge the accuracy of the current value estimates. Consequently, this approach can also be applied to non-episodic problems. We update $\alpha$ immediately after the target networks are updated, which avoids instabilities in the learning process. A similar proposal to (17) in an episodic context with continuous action spaces based on full Monte Carlo roll-outs has recently been made by Dorka et al. (2021). Note that it is possible to maintain a separate $\alpha$ for each bootstrap head, enabling a tailored parametrization for the bias of each approximator. However, for simplicity, we consider only one parameter for the whole ensemble in the following. The resulting algorithm is called Ada-TE-BDQN and is shown in Appendix B.

## 6 Experiments

We analyze the proposed estimators of the MEV on a statistically motivated real-world example before considering two tabular environments that serve as a proof-of-concept for TE/KE-$Q$-Learning.
The experiments with function approximation are carried out in the MinAtar environments of Young and Tian (2019), which allow for a thorough algorithmic comparison.

### 6.1 Internet Ads

We consider the internet ad problem previously studied by Van Hasselt (2013), D’Eramo et al. (2021), and Jiang et al. (2021). There are $M$ different ads, and each has the same return per click. Consequently, the click rate is the only quantity of interest and is modeled as a Bernoulli variable (click or no click). The true expectations $\mu_{1},\ldots,\mu_{M}$ of these $M$ variables equal the respective click probabilities and are equally spaced in a specific interval $\mu_{\rm int}$. We consider $N$ customers and assume that the ads are presented equally often, yielding $N/M$ samples per ad. The objective is to estimate the maximum true mean accurately, thus finding the best ad based on the given samples. We compare the TE ($\alpha=0.1$) and the KE (standard Gaussian kernel) with the ME, DE, and WE based on bias, variance, and MSE. Six configurations of the problem are considered by varying the number of customers $N$, the number of ads $M$, or the upper limit of the sampling interval $\mu_{\rm int}$, while the lower limit is always fixed at 0.02. Figure 6 displays the results.

Figure 6: Comparison of ME, DE, WE, TE, and KE on the internet ad problem. Results are averaged over 10 000 runs.

Both TE and KE yield lower MSE than their competitors in most scenarios. We emphasize that $\alpha=0.1$ was not cherry-picked for this problem, and the TE’s performance could thus be further increased by tailoring $\alpha$ for each experiment. In general, all estimators’ MSEs decrease with an increasing number of ads, and they are more accurate with a higher number of customers $N$, as expected. The DE often yields large variances, while the ME provides biased estimates.
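The ME's positive bias in this setting is easy to reproduce in simulation; the parameters below are illustrative and not those of Figure 6:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 10, 1000                   # ads and customers (hypothetical)
n_per_ad = N // M                 # samples per ad
mu = np.linspace(0.02, 0.05, M)   # true click probabilities
runs = 2000

me = np.empty(runs)
for i in range(runs):
    # Sample click rate per ad, then apply the maximum estimator.
    rates = rng.binomial(n_per_ad, mu) / n_per_ad
    me[i] = rates.max()

# The ME systematically overshoots the true maximum expected value.
bias_me = me.mean() - mu.max()
```

Even though no ad has a click probability above 0.05, the maximum of the noisy sample means lands above it on average, which is exactly the MEV bias the TE and KE are designed to temper.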
Despite producing a higher MSE than TE and KE in most cases, the WE outperforms the conventional competitors ME and DE.

### 6.2 Maximization Bias Example

We consider the example in Figure 6.5 of Sutton and Barto (2018). A simple MDP with two non-terminal states A and B is given. The agent starts in A. If it goes 'right' from A, the episode ends, and zero reward is received. If action 'left' is selected in A, the agent deterministically gets to state B and receives zero reward. There are eight actions to choose from B, but all lead to a terminal state and yield a reward sampled from $\mathcal{N}(-0.1,1)$. The parameters are $\gamma=1$, $\epsilon=0.1$, and $\tau=0.1$. Since this is an undiscounted task, the expected return starting with action 'left' is $-0.1$, and the agent should always prefer going 'right.' However, due to the random exploration of the $\epsilon$-greedy strategy, the 'left' action will always be picked at least $5\%$ of the time in expectation. Figure 7 depicts the results. The upper-left part displays the percentage of selecting action 'left' in A; the upper-right plot contains the same percentage after 500 training episodes. The lower-right graph shows the estimate of $Q^{*}$(A, left) over training. Finally, the lower-left plot displays the number of averaged means of TE-$Q$-Learning when updating $\hat{Q}^{*}$(A, left). Note that $Q$-Learning coincides with TE-$Q$-Learning with $\alpha=0.5$ and is only included for comparison. $Q$-Learning initially overestimates the value of the 'left' action in state A and still selects it nearly twice as often as the optimal rate after 500 episodes. Double $Q$-Learning performs better and achieves a final rate of roughly $6\%$. On the other hand, TE-$Q$-Learning can modulate the overestimation bias through its significance level $\alpha$ and reaches a near-optimal selection percentage for $\alpha\leq 0.10$. Interestingly, TE-$Q$ ($\alpha=0.4$) performs worse after 500 episodes than $Q$-Learning.
Although the initial overestimation is not as large as for $\alpha=0.5$, the effect is more persistent, and more interactions are needed to reduce the estimate. For additional insights, we display the number of means averaged by the TE. TE-$Q$-Learning with $\alpha=0.5$ naturally considers only the maximum sample mean, which is non-unique in the first several episodes since all action-value estimates are initialized with zero. The lower the significance level of TE-$Q$ gets, the more means are averaged, until nearly all sample means are considered for $\alpha\leq 0.1$. Furthermore, KE-$Q$-Learning with the standard Gaussian kernel performs reasonably well and achieves a final selection rate below that of Double $Q$-Learning.

Figure 7: Maximization Bias example from Sutton and Barto (2018) with parameters $\gamma=1$, $\epsilon=0.1$, and $\tau=0.1$. Q and DQ refer to $Q$-Learning and Double $Q$-Learning, respectively. Results are averaged over $100\,000$ runs and 95% confidence intervals are included. The action-value estimate in the lower-right graph for DQ is generated by averaging over both $Q$-tables.

### 6.3 Cliff Walking

We examine the Cliff Walking task from Example 6.6 in Sutton and Barto (2018), which is an undiscounted, episodic task with start and goal states. Our environment is a grid of width 10 and height 5. Start state S is the lower-left grid point; goal state G is the lower-right grid point. All transitions are rewarded with $-1$, except those which lead to the grid points directly between S and G. Those are referred to as the 'cliff,' yield reward $-100$, and send the agent back to S. Actions are the four movement directions up, down, right, and left. Performance is measured via the return during an episode. Figure 8 follows the setup of Zhu and Rigotti (2021) and contains results for constant $\epsilon=0.1$ and annealing $\epsilon=1/\sqrt{n(s)}$, where $n(s)$ is the number of times state $s$ has been visited.
The learning rate is $\tau=0.1(100+1)/\left[100+n(s,a)\right]$, with $n(s,a)$ being the number of updates for the state-action pair. In addition to $Q$- and Double $Q$-Learning, we consider Weighted $Q$-Learning (WQ, D’Eramo et al. 2016) and Self-correcting $Q$-Learning (SCQ, Zhu and Rigotti 2021) with $\beta\in\\{2,3\\}$, following the recommendation of the authors. We run each algorithm for 3000 episodes and average the results over 500 independent runs. Additionally, we display the maximum action-value estimate of the start state S over training. For comparison, since at least eleven steps are necessary to walk across our map, it holds for the optimal policy: $\max_{a^{\prime}}Q^{*}(S,a^{\prime})=-11$. We see the strong performance of the newly proposed algorithms for both exploration strategies. Similar to the example in Section 6.2, especially TE-$Q$ with $\alpha=0.05$ and KE-$Q$ are appropriate for this task and achieve the highest returns together with WQ. Furthermore, the higher action-value estimates for $Q$-Learning are apparent, while Double $Q$-Learning leads to severe underestimation. Finally, the returns are higher for all algorithms with $\epsilon=1/\sqrt{n(s)}$ than with a constant exploration rate, which is reasonable since the annealing strategy selects greedily with higher probability in the long term.

Figure 8: Cliff Walking example from Sutton and Barto (2018) with parameters $\gamma=1$, $\tau=0.1(100+1)/\left[100+n(s,a)\right]$, and two different $\epsilon$-greedy strategies. Results are averaged over 500 runs, exponentially smoothed for visualization purposes, and 95% confidence intervals are included. The maximum action-value estimate of the start state for DQ is computed by averaging this quantity over both $Q$-tables.

To investigate the interim and asymptotic behavior of the algorithms for different learning rates, we run an analysis similar to Van Seijen et al. (2009).
Specifically, we consider learning rates in $\left\\{0.1,0.2,\ldots,0.9,1.0\right\\}$ and again employ the two exploration strategies $\epsilon=0.1$ and $\epsilon=1/\sqrt{n(s)}$. For the interim performance, we analyze the average return over the first 100 episodes and average the results over 5000 runs. For the asymptotic scenario, we run each algorithm for $50\,000$ episodes and average the results over 5 runs.

Figure 9: Cliff Walking example adapted from Van Seijen et al. (2009). The algorithms’ interim (dotted lines) and asymptotic (solid lines) return averages are analyzed for different learning rates. The number of episodes is $n$.

The TE-$Q$ and KE-$Q$ algorithms offer the most robust interim progress across learning rates for both exploration strategies, while DQ and SCQ exhibit a severe performance drop at $\tau=1$. This might be because both algorithms rely on two different $Q$-tables, and complete replacement of the entries yields instabilities in this case. Regarding the asymptotic analysis, the SCQ and DQ algorithms improve on $Q$-Learning and are marginally above WQ, TE-$Q$, and KE-$Q$ for $\epsilon=0.1$, while the return differences are close to zero for the annealing exploration strategy due to long-term greedy action selection.

### 6.4 MinAtar

We select the MinAtar (Young and Tian, 2019) environments to test the proposed Deep RL algorithms. MinAtar is a testbed incorporating several Atari games from the Arcade Learning Environment (Bellemare et al., 2013) with reduced state representations. The platform includes sticky actions (Machado et al., 2018) and is designed to enable thorough algorithmic comparisons due to reduced computation times. Following Young and Tian (2019), the network structure consists of a convolutional and a fully-connected layer. The remaining hyperparameters match Young and Tian (2019), except that we use the Adam (Kingma and Ba, 2014) optimizer, which led to much more stable results during our experiments.
Appendix C contains the full list of specifications. The compared algorithms are the DQN, DDQN, Self-Correcting DQN (SCDQN, Zhu and Rigotti 2021), BDQN, TE-BDQN, KE-BDQN (with the standard Gaussian cdf), and Ada-TE-BDQN. For the parametrization of the BDQN and its modifications, we follow Osband et al. (2016) by using $K=10$ bootstrap heads, each corresponding to one fully-connected layer, and setting $p=1$ for the masking Bernoulli distribution. The BDQN uses the target computation of the DDQN, which we consequently adopt. Furthermore, we scale the gradients of the convolutional core part for the bootstrap-based algorithms by $1/K$, which was also recommended by Osband et al. (2016). We consider $\beta\in\\{2,3,4\\}$ for the SCDQN and $\alpha\in\\{0.1,0.2,0.3,0.4\\}$ for the TE-BDQN. The bias parameter of the Ada-TE-BDQN is initialized with $\alpha=0.25$, and we consider two step sizes $\tau_{\rm Ada}\in\\{10^{-4},10^{-5}\\}$ with horizon $T_{\rm Ada}=32$.

Figure 10: Algorithm comparison on Asterix. The top three rows show the training for $\tau=10^{-4.5}$, while the plots in the last row compare the final return over different learning rates. DQN, DDQN, and SCDQN are in the left column; the BDQN-based algorithms are in the right column.

Figure 11: Algorithm comparison on Breakout. The top three rows show the training for $\tau=10^{-4.5}$, while the plots in the last row compare the final return over different learning rates. DQN, DDQN, and SCDQN are in the left column; the BDQN-based algorithms are in the right column.

Figure 12: Algorithm comparison on Freeway. The top three rows show the training for $\tau=10^{-4.5}$, while the plots in the last row compare the final return over different learning rates. DQN, DDQN, and SCDQN are in the left column; the BDQN-based algorithms are in the right column.

Figure 13: Algorithm comparison on Seaquest.
The top three rows show the training for $\tau=10^{-4.5}$, while the plots in the last row compare the final return over different learning rates. DQN, DDQN, and SCDQN are in the left column; the BDQN-based algorithms are in the right column.

Figure 14: Algorithm comparison on SpaceInvaders. The top three rows show the training for $\tau=10^{-4.5}$, while the plots in the last row compare the final return over different learning rates. DQN, DDQN, and SCDQN are in the left column; the BDQN-based algorithms are in the right column.

To check the robustness of the algorithms, we analyze three different learning rates for each environment and algorithm: $\tau\in\\{10^{-5},10^{-4.5},10^{-4}\\}$. Every $10\,000$ steps during an experiment, we average the return of 10 test episodes. For the BDQN and its variants, the majority vote of the ensemble is applied. Additionally, we run bias estimation episodes for all algorithms from random initial states sampled from the replay buffer, following Algorithm 3. The number of those episodes is 10 for the DQN, DDQN, and SCDQN, while we run only 3 episodes for each head of the BDQN-based algorithms due to computation time. We repeat all experiments for ten independent runs, exponentially smooth the results for clarity, and include $95\%$ point-wise confidence intervals over the runs. Figures 10 to 14 depict the results. We show the bias and return training plots for $\tau=10^{-4.5}$ in the upper two rows of each figure, while the last row contains the final return across learning rates. The BDQN-based algorithms generally outperform their competitors, and especially the KE-BDQN and Ada-TE-BDQN show robust performance across environments, although the algorithms’ variances are relatively high in Seaquest. As expected, the DQN is affected by massive overestimations, while the DDQN can reduce the $Q$-estimates in comparison.
Although the DE theoretically underestimates the MEV, the DDQN still offers a positive bias in the given set of experiments. This observation is in line with Van Hasselt et al. (2016) and illustrates the effect of time-dependence and function approximation on the analysis; compare Section 3.4. As theoretically discussed in prior sections, a larger $\alpha$ in the TE-BDQN yields larger $Q$-estimates and, consequently, a larger estimation bias. The adaptive mechanism of the Ada-TE-BDQN, especially for $\tau_{\rm Ada}=10^{-4}$, results in approximately unbiased action-value estimates. Across environments, the $\alpha$ of the Ada-TE-BDQN mostly increases during training but stays at moderate values of approximately $[0.2,0.4]$. Large values with $\alpha\approx 0.5$ are not achieved even in later stages of training, indicating the criticality of the ME combined with function approximation.

### 6.5 Discussion

The experimental results confirm that we can embed statistical thinking in the form of the TE/KE into value-based RL methods to control their estimation bias. In addition, the experiments support the finding of Lan et al. (2020) that unbiased $Q$-estimation does not necessarily translate into the best return performance. For example, in SpaceInvaders, the KE-BDQN achieves the highest return despite its severe negative bias. However, the TE-BDQN ($\alpha=0.1$) offers an even lower bias but cannot match the return of the KE-BDQN. There seems to be a critical level of estimation bias, or path of bias over time, for a given MDP that yields maximum performance. Careful selection of a bias control parameter like $\alpha$ for the TE or the kernel function for the KE thus constitutes a crucial component in designing temporal-difference algorithms. Analyzing the behavior of the algorithms in more detail, we see that the estimated bias _changes over time_.
With random initialization of the networks and a couple of zero-return episodes, all algorithms’ bias is initially approximately zero. As soon as some non-zero rewards are observed, the different target specifications affect the update routine and result in severely different bias plots over time. Besides the Ada-TE-BDQN, each algorithm reveals its tendency towards over- or underestimation, although exceptions are possible. For example, the TE-BDQN ($\alpha=0.3$) offers slight overestimations during the first three million steps in Breakout before shifting towards underestimation. Importantly, none of the non-adaptive algorithms shows reliable convergence to zero bias as training proceeds, which agrees with the observations of Van Hasselt et al. (2016). Finally, we summarize the core findings of our investigation:

1. _Absolute bias minimization does not equal return maximization._ In order to maximize performance in a real application, different bias control configurations should be considered, for which the TE/KE-BDQN provide a flexible framework.
2. _Approximately unbiased estimation offers a robust baseline across tasks._ Although it is not always the return-maximizing choice, using a scheme for approximately unbiased value estimation appears more robust across tasks than fixing a particular bias control parameter. The Ada-TE-BDQN is a powerful candidate for such a scheme.
3. _The compatibility between bias control algorithms and exploration schemes requires systematic analysis._ The impact on exploration most likely constitutes an essential factor in the occasional return improvement of a biased procedure over an unbiased one (Liang et al., 2021). Further studies need to generate insights into how these components interact and, crucially, whether the algorithmic approaches are compatible. Can we use the Ada-TE-BDQN with a modified exploration scheme to boost return performance? Do we maintain unbiased action-value estimation in this process?
Can we achieve, or even improve on, the return peaks of a fine-tuned bias control configuration in this manner? Exploration and action-value estimation are not necessarily orthogonal, even in off-policy RL, and their interplay thus constitutes a crucial path for future research.

## 7 Related Works

Next to Van Hasselt (2010), D’Eramo et al. (2016), and Zhu and Rigotti (2021), several further proposals have been made to tackle the issue of estimation bias in temporal-difference algorithms. Zhang et al. (2017) proposed a hybrid between the ME and the DE called the Weighted Double Estimator. It relies on a hyperparameter on the positive real line, for which the authors propose a heuristic based on empirical experience. Lee et al. (2013) proposed Bias-corrected $Q$-Learning, which incorporates a correction term depending on the reward variance. Ensemble-based methods like Maxmin $Q$-Learning (Lan et al., 2020) and Randomized Ensembled Double $Q$-Learning (REDQ, Chen et al. 2021a) apply a minimization operator over the ensemble or a subset of it. The Action-Candidate based Clipped Double Estimator (Jiang et al., 2021) extends the DE by creating a so-called candidate set of indices, from which the maximizing index is picked. Furthermore, the Clipped Double Estimator of Fujimoto et al. (2018) and the Truncated Quantile Critic (TQC, Kuznetsov et al. 2020) algorithm are relevant contributions to addressing the overestimation issue in actor-critic frameworks. Finally, Lee et al. (2021) pursue a re-weighting strategy of sampled transitions based on uncertainty estimates from an ensemble. Apart from methodological extensions, Chen et al. (2021b) recently reported that a lower learning rate or an adequate schedule could also avoid the massive overestimations of $Q$-Learning. However, lowering the learning rate can come at the expense of impractically slow learning, as seen in our Breakout experiments, and thus does not constitute a practical option to address the issue of action-value estimation.
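The over- and underestimation that these methods trade off can be reproduced in a few lines. The following Monte Carlo sketch is our own illustration (not code from any of the cited works): it contrasts the ME with a cross-validated DE in the spirit of the CVE of Appendix A.2, for two Gaussians with true means 0.1 and 0.0.

```python
import random
import statistics

def me_and_de(samples):
    """Maximum Estimator (ME) and cross-validated Double Estimator (DE/CVE)
    of the maximum expected value, given one list of samples per variable."""
    # ME: maximum of the sample means -- overestimates max_i mu_i on average.
    me = max(statistics.fmean(s) for s in samples)
    # DE/CVE: split each sample in half, pick the argmax index on one half,
    # evaluate its mean on the other half, and average the two directions.
    halves = [(s[: len(s) // 2], s[len(s) // 2:]) for s in samples]

    def one_direction(pick, evaluate):
        i_star = max(range(len(samples)),
                     key=lambda i: statistics.fmean(pick(halves[i])))
        return statistics.fmean(evaluate(halves[i_star]))

    de = 0.5 * (one_direction(lambda h: h[0], lambda h: h[1])
                + one_direction(lambda h: h[1], lambda h: h[0]))
    return me, de

# Two Gaussians with true means 0.1 and 0.0, so the target max_i mu_i is 0.1.
random.seed(0)
mus = [0.1, 0.0]
runs = [me_and_de([[random.gauss(mu, 1.0) for _ in range(20)] for mu in mus])
        for _ in range(4000)]
me_bias = statistics.fmean(r[0] for r in runs) - max(mus)  # positive
de_bias = statistics.fmean(r[1] for r in runs) - max(mus)  # negative
```

With these example parameters, the ME bias comes out positive (roughly $+0.08$) and the DE bias negative (roughly $-0.04$); the TE/KE proposed in this work interpolate between exactly these two regimes.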
Recently, some proposals have been made to minimize the estimation bias of temporal-difference algorithms through online parameter adjustments in the spirit of the Ada-TE-BDQN. Liang et al. (2021) expand the work of Fox et al. (2016) and Fox (2019) by using an ensemble to adjust the temperature parameter in a maximum entropy framework. Kuznetsov et al. (2021) and Dorka et al. (2021) introduce adaptive variants of the TQC by adjusting the number of quantiles to drop based on recent near-on-policy trajectories. Finally, Wang et al. (2021) generalize Maxmin $Q$-Learning and REDQ by changing the size of the subset of the ensemble on which the minimization operator is performed. The metric driving the adjustment is the ensemble's function approximation error, since it is argued that high approximation error is associated with overestimation of action-values.

## 8 Conclusion

Reinforcement learning is a domain of artificial intelligence with significant breakthroughs in a diverse set of real-world applications, particularly in the last decade. A key issue of frequently applied temporal-difference algorithms is the propagation of biased action-value estimates. We address this topic by proposing the $T$-Estimator and the $K$-Estimator for the underlying problem of estimating the maximum expected value of random variables. Both estimators are easy to compute and allow one to interpolate flexibly between over- and underestimation bias, leading to promising modifications of $Q$-Learning and the Bootstrapped DQN algorithm. Coupled with the dynamic selection procedure for the significance level of the TE, our work constitutes an important step towards unbiased estimation of action-values with function approximation. In future research, we will analyze the discussed interplay of action-value estimation and exploration.
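To make the claim that the estimators are easy to compute concrete, here is a minimal sketch of a TE-style estimate. It assumes the form $\kappa(T)=\mathcal{I}(T\geq z_{\alpha})$ from Appendix A.3 with a simple two-sample z-statistic; the paper's actual test procedure may differ in the statistic and quantile used, so this is an illustration rather than a reference implementation.

```python
import statistics
from statistics import NormalDist

def t_estimator(samples, alpha):
    """TE sketch: average the sample means of all variables that a two-sample
    z-test at level alpha cannot distinguish from the largest sample mean.
    alpha = 0.5 keeps only the maximum (recovering the ME); smaller alpha
    averages more variables in and pushes the estimate down,
    cf. kappa(T) = I(T >= z_alpha)."""
    means = [statistics.fmean(s) for s in samples]
    sq_ses = [statistics.variance(s) / len(s) for s in samples]  # squared std. errors
    i_star = max(range(len(means)), key=means.__getitem__)
    z_alpha = NormalDist().inv_cdf(alpha)  # alpha-quantile, <= 0 for alpha <= 0.5
    kept = []
    for j, (m, v) in enumerate(zip(means, sq_ses)):
        theta = (sq_ses[i_star] + v) ** 0.5  # std. error of the mean difference
        t = (m - means[i_star]) / theta if theta > 0 else 0.0
        if t >= z_alpha:  # variable j is not significantly below the maximum
            kept.append(m)
    return statistics.fmean(kept)

# Example: two overlapping samples with means 1.5 and 2.5.
data = [[0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0]]
```

Here `t_estimator(data, 0.5)` returns 2.5 (the ME), while a strict level such as `alpha = 0.01` fails to separate the two means and returns their average, 2.0, illustrating the interpolation between over- and underestimation.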
As methodological extensions, we will investigate possibilities to extend the two-sample testing procedures to continuous action spaces in order to modify policy gradient methods, since the latter constitute an elementary class of methods in several application domains. Furthermore, besides the considered procedure, there are alternative approaches to uncertainty quantification in the neural network scenario. For example, the regularization technique dropout (Srivastava et al., 2014) will be applied, similarly to D’Eramo et al. (2021), to obtain the required variance estimate for the newly proposed algorithms, and the Bootstrapped DQN will be enhanced by adding random prior functions (Osband et al., 2018). Finally, our work focuses on the estimation bias over the whole state-action distribution of a policy, and we aggregate this quantity into one scalar estimate. While we analyze how this scalar changes throughout training, we do not differentiate how the bias is distributed over the state-action space at a particular time in training. We acknowledge that assessing complex MDPs in this fashion might result in an over-simplification. In the future, we will work on more tailored solutions and consider, e.g., an individual significance level of the TE for different regions of the state-action space.

### Acknowledgements

We would like to thank Niklas Paulig for fruitful discussions in the early stages of this work. Furthermore, we want to thank two anonymous reviewers for their constructive feedback, which substantially improved this manuscript.

## References

* Armstrong (2014) Armstrong, R. A. When to use the Bonferroni correction. Ophthalmic and Physiological Optics, 2014, 34, 502–508.
* Aven (1985) Aven, T. Upper (lower) bounds on the mean of the maximum (minimum) of a number of random variables. Journal of Applied Probability, 1985, 22, 723–728.
* Bellemare et al. (2013) Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M.
The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2013, 47, 253–279. * Bellman (1954) Bellman, R. The theory of dynamic programming. Bulletin of the American Mathematical Society, 1954, 60, 503–515. * Bertsekas (2019) Bertsekas, D., Reinforcement Learning and Optimal Control, 2019, Belmont: Athena Scientific. * Blumenthal and Cohen (1968) Blumenthal, S. and Cohen, A. Estimation of the larger of two normal means. Journal of the American Statistical Association, 1968, 63, 861–876. * Casella and Berger (2002) Casella, G. and Berger, R. L., Statistical Inference (2nd ed.), 2002, Belmont, CA: Brooks/Cole Cengage Learning. * Chen et al. (2021a) Chen, X., Wang, C., Zhou, Z., and Ross, K. W. 2021a, Randomized Ensembled Double Q-Learning: Learning Fast Without a Model. In International Conference on Learning Representations. * Chen et al. (2021b) Chen, Y., Schomaker, L., and Wiering, M. A. 2021b, An Investigation Into the Effect of the Learning Rate on Overestimation Bias of Connectionist Q-learning. In International Conference on Agents and Artificial Intelligence, pp. 107–118. * D’Eramo et al. (2021) D’Eramo, C., Cini, A., Nuara, A., Pirotta, M., Alippi, C., Peters, J., and Restelli, M. Gaussian Approximation for Bias Reduction in Q-Learning. Journal of Machine Learning Research, 2021, 22, 1–51. * Dhariyal et al. (1985) Dhariyal, I., Sharma, D., and Krishnamoorthy, K. Non-existence of unbiased estimators of ordered parameters. Statistics: A Journal of Theoretical and Applied Statistics, 1985, 16, 89–95. * Dorka et al. (2021) Dorka, N., Boedecker, J., and Burgard, W. 2021, Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning. In Deep RL Workshop NeurIPS 2021. * Dudewicz (1971) Dudewicz, E. J. Maximum likelihood estimators for ranked means. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 1971, 19, 29–42. * D’Eramo et al. (2019) D’Eramo, C., Cini, A., and Restelli, M. 
2019, Exploiting Action-Value uncertainty to drive exploration in reinforcement learning. In International Joint Conference on Neural Networks, IEEE, pp. 1–8. * D’Eramo et al. (2016) D’Eramo, C., Restelli, M., and Nuara, A. 2016, Estimating maximum expected value through gaussian approximation. In International Conference on Machine Learning, PMLR, pp. 1032–1040. * Efron and Tibshirani (1994) Efron, B. and Tibshirani, R. J., An Introduction to the Bootstrap, 1994, CRC press. * Fox (2019) Fox, R. 2019, Toward provably unbiased temporal-difference value estimation. In Optimization Foundations for Reinforcement Learning Workshop at NeurIPS. * Fox et al. (2016) Fox, R., Pakman, A., and Tishby, N. 2016, Taming the noise in reinforcement learning via soft updates. In Conference on Uncertainty in Artificial Intelligence, pp. 202–211. * Fujimoto et al. (2018) Fujimoto, S., Hoof, H., and Meger, D. 2018, Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, PMLR, pp. 1587–1596. * Haarnoja et al. (2018) Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. 2018, Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, PMLR, pp. 1861–1870. * Hessel et al. (2018) Hessel, M., Modayil, J., Van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. 2018, Rainbow: Combining improvements in deep reinforcement learning. In AAAI Conference on Artificial Intelligence, vol. 32. * Imagaw and Kaneko (2017) Imagaw, T. and Kaneko, T. 2017, Estimating the maximum expected value through upper confidence bound of likelihood. In Conference on Technologies and Applications of Artificial Intelligence, IEEE, pp. 202–207. * Jiang et al. (2021) Jiang, H., Xie, J., and Yang, J. 2021, Action Candidate Based Clipped Double Q-learning for Discrete and Continuous Action Tasks. 
In AAAI Conference on Artificial Intelligence, vol. 35, pp. 7979–7986. * Kingma and Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. * Kish (1965) Kish, L., Survey Sampling, 1965, New York: John Wiley & Sons. * Kuznetsov et al. (2021) Kuznetsov, A., Grishin, A., Tsypin, A., Ashukha, A., and Vetrov, D. Automating Control of Overestimation Bias for Continuous Reinforcement Learning. arXiv preprint arXiv:2110.13523, 2021. * Kuznetsov et al. (2020) Kuznetsov, A., Shvechikov, P., Grishin, A., and Vetrov, D. 2020, Controlling overestimation bias with truncated mixture of continuous distributional quantile critics. In International Conference on Machine Learning, PMLR, pp. 5556–5566. * Lan et al. (2020) Lan, Q., Pan, Y., Fyshe, A., and White, M. 2020, Maxmin Q-learning: Controlling the Estimation Bias of Q-learning. In International Conference on Learning Representations. * Lee et al. (2013) Lee, D., Defourny, B., and Powell, W. B. 2013, Bias-corrected q-learning to control max-operator bias in q-learning. In Symposium on Adaptive Dynamic Programming and Reinforcement Learning, IEEE, pp. 93–99. * Lee et al. (2021) Lee, K., Laskin, M., Srinivas, A., and Abbeel, P. 2021, SUNRISE: A simple unified framework for ensemble learning in deep reinforcement learning. In International Conference on Machine Learning, PMLR, pp. 6131–6141. * Liang et al. (2021) Liang, L., Xu, Y., McAleer, S. M., Hu, D., Ihler, A., Abbeel, P., and Fox, R. 2021, Temporal-Difference Value Estimation via Uncertainty-Guided Soft Updates. In Deep RL Workshop NeurIPS 2021. * Machado et al. (2018) Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M., and Bowling, M. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 2018, 61, 523–562. * Mammen (1991) Mammen, E. Estimating a smooth monotone regression function. 
The Annals of Statistics, 1991, 19, 724–740. * Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature, 2015, 518, 529–533. * Nadarajah and Kotz (2008) Nadarajah, S. and Kotz, S. Exact distribution of the max/min of two Gaussian random variables. IEEE Transactions on very large scale integration systems, 2008, 16, 210–212. * Osband et al. (2018) Osband, I., Aslanides, J., and Cassirer, A. Randomized Prior Functions for Deep Reinforcement Learning. Advances in Neural Information Processing Systems, 2018, 31. * Osband et al. (2016) Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. Deep exploration via bootstrapped DQN. Advances in Neural Information Processing Systems, 2016, 29, 4026–4034. * Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 2019, 32, 8026–8037. * Puterman (1994) Puterman, M. L., Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994, John Wiley & Sons. * Robbins and Monro (1951) Robbins, H. and Monro, S. A stochastic approximation method. The Annals of Mathematical Statistics, 1951, 400–407. * Schaul et al. (2016) Schaul, T., Quan, J., Antonoglou, I., and Silver, D. Prioritized experience replay. International Conference on Learning Representations, 2016. * Sheather (2004) Sheather, S. J. Density estimation. Statistical Science, 2004, 19, 588–597. * Silver et al. (2017) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. 
Mastering the game of go without human knowledge. Nature, 2017, 550, 354–359.
* Srivastava et al. (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014, 15, 1929–1958.
* Sutton and Barto (2018) Sutton, R. S. and Barto, A. G., Reinforcement Learning: An Introduction, 2018, Cambridge: The MIT Press.
* Thrun and Schwartz (1993) Thrun, S. and Schwartz, A. 1993, Issues in using function approximation for reinforcement learning. In Proceedings of the Fourth Connectionist Models Summer School, Hillsdale, NJ, pp. 255–263.
* Tsay (2010) Tsay, R. S., Analysis of Financial Time Series, 2010, New Jersey: John Wiley & Sons.
* Tsitsiklis (1994) Tsitsiklis, J. N. Asynchronous stochastic approximation and Q-learning. Machine learning, 1994, 16, 185–202.
* Van Hasselt (2010) Van Hasselt, H. Double Q-learning. Advances in Neural Information Processing Systems, 2010, 23, 2613–2621.
* Van Hasselt (2013) Van Hasselt, H. Estimating the maximum expected value: an analysis of (nested) cross validation and the maximum sample average. arXiv preprint arXiv:1302.7175, 2013.
* Van Hasselt et al. (2016) Van Hasselt, H., Guez, A., and Silver, D. 2016, Deep reinforcement learning with double Q-learning. In AAAI Conference on Artificial Intelligence, vol. 30.
* Van Seijen et al. (2009) Van Seijen, H., Van Hasselt, H., Whiteson, S., and Wiering, M. 2009, A theoretical and empirical analysis of Expected Sarsa. In Symposium on Adaptive Dynamic Programming and Reinforcement Learning, IEEE, pp. 177–184.
* Vinyals et al. (2019) Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 2019, 575, 350–354.
* Wackerly et al. (2008) Wackerly, D. D., Mendenhall, W., and Scheaffer, R.
L., Mathematical Statistics with Applications (7th ed.), 2008, Belmont, CA: Thomson Brooks/Cole.
* Wang et al. (2021) Wang, H., Lin, S., and Zhang, J. Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback. Advances in Neural Information Processing Systems, 2021, 34.
* Watkins and Dayan (1992) Watkins, C. J. and Dayan, P. Q-learning. Machine learning, 1992, 8, 279–292.
* Young and Tian (2019) Young, K. and Tian, T. Minatar: An atari-inspired testbed for thorough and reproducible reinforcement learning experiments. arXiv preprint arXiv:1903.03176, 2019.
* Zhang et al. (2017) Zhang, Z., Pan, Z., and Kochenderfer, M. J. 2017, Weighted Double Q-learning. In International Joint Conference on Artificial Intelligence, pp. 3455–3461.
* Zhu and Rigotti (2021) Zhu, R. and Rigotti, M. 2021, Self-correcting Q-learning. In AAAI Conference on Artificial Intelligence, vol. 35, pp. 11185–11192.

## Appendix A Analytic forms for Section 3.3

### A.1 Maximum Estimator

Consider the following setup: $M=2$, $X_{1}\sim\mathcal{N}(\mu_{1},\sigma^{2})$, $X_{2}\sim\mathcal{N}(\mu_{2},\sigma^{2})$, and given sample sizes $|S_{1}|$, $|S_{2}|$, from which it follows that $\hat{\mu}_{1}\sim\mathcal{N}(\mu_{1},\frac{\sigma^{2}}{|S_{1}|})$ and $\hat{\mu}_{2}\sim\mathcal{N}(\mu_{2},\frac{\sigma^{2}}{|S_{2}|})$. Regarding the ME, we use the expectation in (1) to compute the bias.
However, we can alternatively use the following closed-form solutions (Nadarajah and Kotz, 2008): $\displaystyle\operatorname{E}\left[\max(\hat{\mu}_{1},\hat{\mu}_{2})\right]$ $\displaystyle=\mu_{1}\Phi\left(\frac{\mu_{1}-\mu_{2}}{\theta}\right)+\mu_{2}\Phi\left(\frac{\mu_{2}-\mu_{1}}{\theta}\right)+\theta\phi\left(\frac{\mu_{1}-\mu_{2}}{\theta}\right),$ $\displaystyle\operatorname{E}\left\\{\left[\max(\hat{\mu}_{1},\hat{\mu}_{2})\right]^{2}\right\\}$ $\displaystyle=\left(\frac{\sigma^{2}}{|S_{1}|}+\mu_{1}^{2}\right)\Phi\left(\frac{\mu_{1}-\mu_{2}}{\theta}\right)+\left(\frac{\sigma^{2}}{|S_{2}|}+\mu_{2}^{2}\right)\Phi\left(\frac{\mu_{2}-\mu_{1}}{\theta}\right)$ $\displaystyle+(\mu_{1}+\mu_{2})\theta\phi\left(\frac{\mu_{1}-\mu_{2}}{\theta}\right),$ where $\phi$ is the standard Gaussian pdf and $\theta=\sqrt{\frac{\sigma^{2}}{|S_{1}|}+\frac{\sigma^{2}}{|S_{2}|}}$. The expectation of the squared ME can be used to compute the variance: $\operatorname{Var}\left[\max(\hat{\mu}_{1},\hat{\mu}_{2})\right]=\operatorname{E}\left\\{\left[\max(\hat{\mu}_{1},\hat{\mu}_{2})\right]^{2}\right\\}-\operatorname{E}\left[\max(\hat{\mu}_{1},\hat{\mu}_{2})\right]^{2}.$

### A.2 Double Estimator

The expectation of the DE is given in (3), which directly yields the bias. As already mentioned, what we refer to as the DE throughout the paper is actually the CVE whenever possible; thus, we compute the variance of the latter for this example. For notation, we use $\hat{\mu}_{i}^{A}=\hat{\mu}_{i}(S^{A})$ and $\hat{f}_{i}^{A}$, $\hat{F}_{i}^{A}$ for the pdf and cdf of $\hat{\mu}_{i}^{A}$, respectively, and similarly for $S^{B}$. We assume that the sample $S$ is split evenly between $S^{A}$ and $S^{B}$, so that the theoretical mean distribution $\hat{f}_{i}^{A}$ equals $\hat{f}_{i}^{B}$. The DE estimate when index selection is performed on subsample $S^{A}$ is denoted by $\hat{\mu}^{DE,A}_{*}$, and when selecting based on $S^{B}$ by $\hat{\mu}^{DE,B}_{*}$.
It follows: $\displaystyle\operatorname{Var}\left(\hat{\mu}^{CVE}_{*}\right)$ $\displaystyle=\operatorname{Var}\left(\frac{\hat{\mu}^{DE,A}_{*}+\hat{\mu}^{DE,B}_{*}}{2}\right)$ $\displaystyle=\frac{1}{4}\operatorname{Var}\left(\hat{\mu}^{DE,A}_{*}\right)+\frac{1}{4}\operatorname{Var}\left(\hat{\mu}^{DE,B}_{*}\right)+\frac{1}{2}\operatorname{Cov}\left(\hat{\mu}^{DE,A}_{*},\hat{\mu}^{DE,B}_{*}\right)$ $\displaystyle=\frac{1}{2}\operatorname{Var}\left(\hat{\mu}^{DE,A}_{*}\right)+\frac{1}{2}\operatorname{Cov}\left(\hat{\mu}^{DE,A}_{*},\hat{\mu}^{DE,B}_{*}\right),$ (18) because $\operatorname{Var}\left(\hat{\mu}^{DE,A}_{*}\right)=\operatorname{Var}\left(\hat{\mu}^{DE,B}_{*}\right)$. Using the definition: $\operatorname{Var}\left(\hat{\mu}^{DE,A}_{*}\right)=\operatorname{E}\left[\left(\hat{\mu}^{DE,A}_{*}\right)^{2}\right]-\operatorname{E}\left[\hat{\mu}^{DE,A}_{*}\right]^{2},$ (19) in which: $\operatorname{E}\left[\left(\hat{\mu}^{DE,A}_{*}\right)^{2}\right]=\operatorname{E}\left[\left(\hat{\mu}^{B}_{1}\right)^{2}\right]\int_{-\infty}^{\infty}\hat{f}_{1}^{A}(x)\hat{F}_{2}^{A}(x)dx+\operatorname{E}\left[\left(\hat{\mu}^{B}_{2}\right)^{2}\right]\int_{-\infty}^{\infty}\hat{f}_{2}^{A}(x)\hat{F}_{1}^{A}(x)dx,$ where we compute: $\operatorname{E}\left[\left(\hat{\mu}^{B}_{1}\right)^{2}\right]=\operatorname{Var}\left(\hat{\mu}^{B}_{1}\right)+\operatorname{E}\left[\hat{\mu}^{B}_{1}\right]^{2}$; and $\operatorname{E}\left[\left(\hat{\mu}^{B}_{2}\right)^{2}\right]$ analogously, so that (19) is complete.
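As an aside, the closed forms of A.1 can be sanity-checked numerically. The sketch below uses arbitrary example values for $\mu_{1}$, $\mu_{2}$, $\sigma$, $|S_{1}|$, $|S_{2}|$ (our choices, purely illustrative) and draws the sample means directly from their Gaussian distributions:

```python
import random
from statistics import NormalDist, fmean

# Arbitrary example values for the two-variable Gaussian setup of A.1.
mu1, mu2, sigma, n1, n2 = 1.0, 0.5, 1.0, 10, 10
theta = (sigma**2 / n1 + sigma**2 / n2) ** 0.5
std = NormalDist()
d = (mu1 - mu2) / theta

# Closed forms of Nadarajah and Kotz (2008) for the first two moments of the ME.
m1 = mu1 * std.cdf(d) + mu2 * std.cdf(-d) + theta * std.pdf(d)
m2 = ((sigma**2 / n1 + mu1**2) * std.cdf(d)
      + (sigma**2 / n2 + mu2**2) * std.cdf(-d)
      + (mu1 + mu2) * theta * std.pdf(d))
var_closed = m2 - m1**2

# Monte Carlo check: sample the two estimated means and take their maximum.
random.seed(1)
draws = [max(random.gauss(mu1, sigma / n1**0.5), random.gauss(mu2, sigma / n2**0.5))
         for _ in range(200_000)]
mc_mean = fmean(draws)
mc_var = fmean(x * x for x in draws) - mc_mean**2
```

With 200,000 draws the Monte Carlo error is of order $\theta/\sqrt{200{,}000}\approx 10^{-3}$, so `mc_mean` and `mc_var` agree with `m1` and `var_closed` to about two decimal places, and `m1` exceeds $\max(\mu_{1},\mu_{2})$, confirming the positive bias of the ME.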
To compute the covariance in (A.2), we have: $\operatorname{Cov}\left(\hat{\mu}^{DE,A}_{*},\hat{\mu}^{DE,B}_{*}\right)=\operatorname{E}\left[\hat{\mu}^{DE,A}_{*}\hat{\mu}^{DE,B}_{*}\right]-\operatorname{E}\left[\hat{\mu}^{DE,A}_{*}\right]\operatorname{E}\left[\hat{\mu}^{DE,B}_{*}\right],$ with the expectation of the product being $\begin{aligned} \operatorname{E}\left[\hat{\mu}^{DE,A}_{*}\hat{\mu}^{DE,B}_{*}\right]&=\operatorname{E}\left\\{\left[\mathcal{I}\left(\hat{\mu}_{1}^{A}>\hat{\mu}_{2}^{A}\right)\hat{\mu}_{1}^{B}+\mathcal{I}\left(\hat{\mu}_{1}^{A}\leq\hat{\mu}_{2}^{A}\right)\hat{\mu}_{2}^{B}\right]\left[\mathcal{I}\left(\hat{\mu}_{1}^{B}>\hat{\mu}_{2}^{B}\right)\hat{\mu}_{1}^{A}+\mathcal{I}\left(\hat{\mu}_{1}^{B}\leq\hat{\mu}_{2}^{B}\right)\hat{\mu}_{2}^{A}\right]\right\\}\\\ &=\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{A}>\hat{\mu}_{2}^{A}\right)\hat{\mu}_{1}^{A}\right]\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{B}>\hat{\mu}_{2}^{B}\right)\hat{\mu}_{1}^{B}\right]+\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{A}>\hat{\mu}_{2}^{A}\right)\hat{\mu}_{2}^{A}\right]\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{B}\leq\hat{\mu}_{2}^{B}\right)\hat{\mu}_{1}^{B}\right]\\\ &+\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{A}\leq\hat{\mu}_{2}^{A}\right)\hat{\mu}_{1}^{A}\right]\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{B}>\hat{\mu}_{2}^{B}\right)\hat{\mu}_{2}^{B}\right]+\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{A}\leq\hat{\mu}_{2}^{A}\right)\hat{\mu}_{2}^{A}\right]\operatorname{E}\left[\mathcal{I}\left(\hat{\mu}_{1}^{B}\leq\hat{\mu}_{2}^{B}\right)\hat{\mu}_{2}^{B}\right].\end{aligned}$ This expression is simplified using $\mathcal{I}\left(\hat{\mu}_{1}^{A}>\hat{\mu}_{2}^{A}\right)=1-\mathcal{I}\left(\hat{\mu}_{1}^{A}\leq\hat{\mu}_{2}^{A}\right)$ to get $\operatorname{E}\left[\hat{\mu}^{DE,A}_{*}\hat{\mu}^{DE,B}_{*}\right]=\mu_{1}^{2}+2I_{1}(\mu_{2}-\mu_{1})+(I_{1}-I_{2})^{2},$ where $\displaystyle I_{1}$ 
$\displaystyle=\operatorname{E}\left[\mathcal{I}(\hat{\mu}_{1}^{A}\leq\hat{\mu}_{2}^{A})\hat{\mu}_{1}^{A}\right]=\mu_{1}-\int_{-\infty}^{\infty}x\hat{f}_{1}^{A}(x)\hat{F}_{2}^{A}(x)dx,$ $\displaystyle I_{2}$ $\displaystyle=\operatorname{E}\left[\mathcal{I}(\hat{\mu}_{1}^{A}\leq\hat{\mu}_{2}^{A})\hat{\mu}_{2}^{A}\right]=\int_{-\infty}^{\infty}x\hat{f}_{2}^{A}(x)\hat{F}_{1}^{A}(x)dx.$

### A.3 T-Estimator and K-Estimator

Regarding the expectation of the KE, we have: $\displaystyle\operatorname{E}\left[\hat{\mu}^{KE}_{*}\right]$ $\displaystyle=\operatorname{E}\left[\hat{\mu}^{KE}_{*}\mathcal{I}(\hat{\mu}_{1}>\hat{\mu}_{2})\right]+\operatorname{E}\left[\hat{\mu}^{KE}_{*}\mathcal{I}(\hat{\mu}_{1}\leq\hat{\mu}_{2})\right]$ $\displaystyle=\operatorname{E}\left[\left\\{\left[\sum_{j=1}^{2}\kappa\left(\frac{\hat{\mu}_{j}-\hat{\mu}_{1}}{\theta_{1j}}\right)\right]^{-1}\sum_{j=1}^{2}\kappa\left(\frac{\hat{\mu}_{j}-\hat{\mu}_{1}}{\theta_{1j}}\right)\hat{\mu}_{j}\right\\}\mathcal{I}\left(\hat{\mu}_{1}>\hat{\mu}_{2}\right)\right]$ $\displaystyle+\operatorname{E}\left[\left\\{\left[\sum_{j=1}^{2}\kappa\left(\frac{\hat{\mu}_{j}-\hat{\mu}_{2}}{\theta_{2j}}\right)\right]^{-1}\sum_{j=1}^{2}\kappa\left(\frac{\hat{\mu}_{j}-\hat{\mu}_{2}}{\theta_{2j}}\right)\hat{\mu}_{j}\right\\}\mathcal{I}\left(\hat{\mu}_{1}\leq\hat{\mu}_{2}\right)\right]$ $\displaystyle=\operatorname{E}\left\\{\frac{1}{\kappa(0)+\kappa\left(\frac{\hat{\mu}_{2}-\hat{\mu}_{1}}{\theta_{12}}\right)}\left[\kappa(0)\hat{\mu}_{1}+\kappa\left(\frac{\hat{\mu}_{2}-\hat{\mu}_{1}}{\theta_{12}}\right)\hat{\mu}_{2}\right]\mathcal{I}(\hat{\mu}_{1}>\hat{\mu}_{2})\right\\}$ $\displaystyle+\operatorname{E}\left\\{\frac{1}{\kappa\left(\frac{\hat{\mu}_{1}-\hat{\mu}_{2}}{\theta_{21}}\right)+\kappa(0)}\left[\kappa\left(\frac{\hat{\mu}_{1}-\hat{\mu}_{2}}{\theta_{21}}\right)\hat{\mu}_{1}+\kappa(0)\hat{\mu}_{2}\right]\mathcal{I}(\hat{\mu}_{1}\leq\hat{\mu}_{2})\right\\}$
$\displaystyle=\int_{-\infty}^{\infty}\int_{-\infty}^{x_{1}}\frac{1}{\kappa(0)+\kappa\left(\frac{x_{2}-x_{1}}{\theta_{12}}\right)}\left[\kappa(0)x_{1}+\kappa\left(\frac{x_{2}-x_{1}}{\theta_{12}}\right)x_{2}\right]\hat{f}_{1}(x_{1})\hat{f}_{2}(x_{2})dx_{2}dx_{1}$ $\displaystyle+\int_{-\infty}^{\infty}\int_{-\infty}^{x_{2}}\frac{1}{\kappa\left(\frac{x_{1}-x_{2}}{\theta_{21}}\right)+\kappa(0)}\left[\kappa\left(\frac{x_{1}-x_{2}}{\theta_{21}}\right)x_{1}+\kappa(0)x_{2}\right]\hat{f}_{1}(x_{1})\hat{f}_{2}(x_{2})dx_{1}dx_{2},$ where $\theta_{ij}=\sqrt{\frac{\sigma^{2}}{|S_{i}|}+\frac{\sigma^{2}}{|S_{j}|}}$ and $\hat{f}_{i}$ is the pdf of $\hat{\mu}_{i}$. For the variance, we can compute $\operatorname{E}\left\\{[\hat{\mu}^{KE}_{*}]^{2}\right\\}$ analogously. Since the TE is a special case of the KE with $\kappa(T)=\mathcal{I}(T\geq z_{\alpha})$, the above formula is also applicable to the TE.

## Appendix B Adaptive TE-BDQN Algorithm

**Algorithm 4: Ada-TE-BDQN**

* **Initialize** action-value estimate networks with $K$ outputs $\left\\{\hat{Q}^{*}_{k}\right\\}^{K}_{k=1}$, masking distribution $M$, and an empty replay buffer $D$
* **repeat**
  * Initialize $s$
  * Pick a value function to act: $k\sim\text{Uniform}\\{1,\ldots,K\\}$
  * **repeat**
    * Choose action $a$ from state $s$ with the greedy policy derived from $\hat{Q}^{*}_{k}$
    * Take action $a$, observe reward $r$ and next state $s^{\prime}$
    * Sample bootstrap mask $m\sim M$
    * Add $(s,a,r,s^{\prime},m)$ to replay buffer $D$
    * Sample a random minibatch of transitions $\left\\{(s_{i},a_{i},s^{\prime}_{i},r_{i},m_{i})\right\\}_{i=1}^{B}$ from $D$
    * Perform a gradient descent step based on (15)
    * Every $C$ steps: reset $\theta_{k}=\theta_{k}^{-}$ for $k=1,\ldots,K$, and run partial episodes to update $\alpha$ via $\alpha\leftarrow\alpha+\frac{\tau_{\rm Ada}}{K}\sum_{k=1}^{K}\sum_{t=1}^{T_{\rm Ada}}\left[R_{k}(s_{t,k},a_{t,k})-\hat{Q}^{*}_{k}(s_{t,k},a_{t,k};\theta_{k})\right]$
    * $s\leftarrow s^{\prime}$
  * **until** $s$ is terminal
* **until** __

## Appendix C Hyperparameters in MinAtar
Hyperparameter | Value
---|---
Batch size ($B$) | 32
Discount factor ($\gamma$) | 0.99
Loss function | MSE
Min. replay buffer size | $5\,000$
Max. replay buffer size | $100\,000$
Optimizer | Adam
Target network update frequency ($C$) | $1\,000$
Initial exploration rate* ($\epsilon_{\rm initial}$) | 1.0
Final exploration rate* ($\epsilon_{\rm final}$) | 0.1
Test exploration rate* ($\epsilon_{\rm test}$) | 0.0
Exploration steps* | $100\,000$
Bernoulli mask probability† ($p$) | 1.0
Number of bootstrap heads† ($K$) | 10
Initial bias parameter‡ ($\alpha$) | 0.25
Time horizon‡ ($T_{\rm Ada}$) | 32

Table 2: List of hyperparameters used in the MinAtar experiments. Parameters marked with * are used by DQN, DDQN, and SCDQN, while those marked with † are relevant for BDQN, TE-BDQN, KE-BDQN, and Ada-TE-BDQN. The ‡ refers exclusively to Ada-TE-BDQN.

Table 2 details the settings for the experiments in MinAtar (Young and Tian, 2019). All algorithms were implemented using PyTorch (Paszke et al., 2019), and the computation was performed on Intel(R) Xeon(R) E5-2680 v3 CPUs (12 cores) @ 2.50GHz. The source code is available at: https://github.com/MarWaltz/TUD_RL. Note that we replaced some extreme outlier seeds for the bootstrap-based algorithms in the Breakout and Seaquest environments. For example, the TE-BDQN achieved an exceptional peak performance with a test return of over 200 in one Breakout run, while on rare occasions it got stuck on Seaquest. Including these exceptions would paint an unrealistic picture of the algorithm's actual capabilities. Similar behavior was observed for all bootstrap-based algorithms, and we argue that these rare instabilities are due to the algorithms' dependence on the initialization of the bootstrap heads.
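To close, the $\alpha$-adaptation rule of Algorithm 4 (Appendix B) can be isolated as a stand-alone update. The following is our flattened sketch: the sums over the $K$ heads and $T_{\rm Ada}$ time steps are collapsed into one list of (observed return, $Q$-estimate) pairs, the $1/K$ factor is assumed to be absorbed into $\tau_{\rm Ada}$, and the clamping to a valid significance-level range is our addition rather than part of the published algorithm.

```python
def ada_alpha_update(alpha, observed_returns, q_estimates, tau=1e-4):
    """One adaptation step of the TE significance level alpha, cf. Algorithm 4:
    alpha <- alpha + tau * sum_i [R_i - Q_i] over the collected pairs.
    Overestimation (Q > R on average) decreases alpha, pulling subsequent
    Q-targets down; underestimation raises alpha towards the ME-like regime."""
    residual_sum = sum(r - q for r, q in zip(observed_returns, q_estimates))
    alpha = alpha + tau * residual_sum
    # Clamp to a valid significance level (our assumption, not from the paper).
    return min(max(alpha, 1e-6), 0.5)
```

Starting from the initial value $\alpha=0.25$ of Table 2, consistently overestimated returns push $\alpha$ down and consistently underestimated returns push it up, matching the negative-feedback behavior of the Ada-TE-BDQN described above.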